Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems
Peter Dayan and L.F. Abbott

Chapter 1 - Neural Encoding I: Firing Rates and Spike Statistics
        Properties of Neurons
        Recording Neuronal Responses
        From Stimulus to Response
      Spike Trains and Firing Rates
        Measuring Firing Rates
        Tuning Curves
        Spike-Count Variability
      What Makes a Neuron Fire?
        Describing the Stimulus
        The Spike-Triggered Average
        White-Noise Stimuli
        Multiple-Spike-Triggered Averages and Spike-Triggered Correlations
      Spike Train Statistics
        The Homogeneous Poisson Process
        The Spike-Train Autocorrelation Function
        The Inhomogeneous Poisson Process
        The Poisson Spike Generator
        Comparison with Data
      The Neural Code
        Independent-Spike, Independent-Neuron, and Correlation Codes
        Temporal Codes
      Chapter Summary
        A) The Power Spectrum of White Noise
        B) Moments of the Poisson Distribution
        C) Inhomogeneous Poisson Statistics
      Annotated Bibliography
Chapter 2 - Neural Encoding II: Reverse Correlation and Receptive Fields
      Estimating Firing Rates
        The Most Effective Stimulus
        Static Nonlinearities
      Introduction to the Early Visual System
        The Retinotopic Map
        Visual Stimuli
        The Nyquist Frequency
      Reverse Correlation Methods - Simple Cells
        Spatial Receptive Fields
        Temporal Receptive Fields
        Response of a Simple Cell to a Counterphase Grating
        Space-Time Receptive Fields
        Nonseparable Receptive Fields
        Static Nonlinearities - Simple Cells
      Static Nonlinearities - Complex Cells
      Receptive Fields in the Retina and LGN
      Constructing V1 Receptive Fields
      Chapter Summary
        A) The Optimal Kernel
        B) The Most Effective Stimulus
        C) Bussgang's Theorem
      Annotated Bibliography
Chapter 3 - Neural Decoding
      Encoding and Decoding
        ROC Curves
        ROC Analysis of Motion Discrimination
        The Likelihood Ratio Test
      Population Decoding
        Encoding and Decoding Direction
        Optimal Decoding Methods
        Fisher Information
          Optimal Discrimination
      Spike Train Decoding
      Chapter Summary
        A) The Neyman-Pearson Lemma
        B) The Cramér-Rao Bound
        C) The Optimal Spike-Decoding Filter
      Annotated Bibliography
Chapter 4 - Information Theory
      Entropy and Mutual Information
        Mutual Information
        Entropy and Mutual Information for Continuous Variables
      Information and Entropy Maximization
        Entropy Maximization for a Single Neuron
        Populations of Neurons
        Application to Retinal Ganglion Cell Receptive Fields
          The Whitening Filter
          Filtering Input Noise
        Temporal Processing in the LGN
        Cortical Coding
      Entropy and Information for Spike Trains
      Chapter Summary
        Positivity of the Kullback-Leibler Divergence
      Annotated Bibliography
Chapter 5 - Model Neurons I: Neuroelectronics
      Levels of Neuron Modeling
      Electrical Properties of Neurons
        Intracellular Resistance
        Membrane Capacitance and Resistance
        Equilibrium and Reversal Potentials
        The Membrane Current
      Single-Compartment Models
        Integrate-and-Fire Models
          Spike-Rate Adaptation and Refractoriness
      Voltage-Dependent Conductances
        Persistent Conductances
        Transient Conductances
        Hyperpolarization-Activated Conductances
      The Hodgkin-Huxley Model
      Modeling Channels
      Synaptic Conductances
        The Postsynaptic Conductance
        Release Probability and Short-Term Plasticity
      Synapses on Integrate-and-Fire Neurons
        Regular and Irregular Firing Modes
      Chapter Summary
        A) Integrating the Membrane Potential
        B) Integrating the Gating Variables
      Annotated Bibliography
Chapter 6 - Model Neurons II: Conductances and Morphology
      Levels of Neuron Modeling
      Conductance-Based Models
        The Connor-Stevens Model
        Postinhibitory Rebound and Bursting
      The Cable Equation
        Linear Cable Theory
          An Infinite Cable
          An Isolated Branching Node
        The Rall Model
        The Morphoelectrotonic Transform
      Multi-Compartment Models
        Action Potential Propagation Along an Unmyelinated Axon
        Propagation Along a Myelinated Axon
      Chapter Summary
        A) Gating Functions for Conductance-Based Models
          Connor-Stevens Model
          Transient Ca2+ Conductances
          Ca2+-Dependent K+ Conductances
        B) Integrating Multi-Compartment Models
      Annotated Bibliography
Chapter 7 - Network Models
      Firing-Rate Models
        Feedforward and Recurrent Networks
        Continuously Labeled Networks
      Feedforward Networks
        Neural Coordinate Transformations
      Recurrent Networks
        Linear Recurrent Networks
          Selective Amplification
          Input Integration
          Continuous Linear Recurrent Networks
        Nonlinear Recurrent Networks
          Nonlinear Amplification
          A Recurrent Model of Simple Cells in Primary Visual Cortex
          A Recurrent Model of Complex Cells in Primary Visual Cortex
          Winner-Take-All Input Selection
          Gain Modulation
          Sustained Activity
          Maximum Likelihood and Network Recoding
        Network Stability
        Associative Memory
      Excitatory-Inhibitory Networks
        Homogeneous Excitatory and Inhibitory Populations
                Phase-Plane Methods and Stability Analysis
        The Olfactory Bulb
        Oscillatory Amplification
      Stochastic Networks
      Chapter Summary
        Lyapunov Function for the Boltzmann Machine
      Annotated Bibliography
Chapter 8 - Plasticity and Learning
        Stability and Competition
      Synaptic Plasticity Rules
        The Basic Hebb Rule
        The Covariance Rule
        The BCM Rule
        Synaptic Normalization
          Subtractive Normalization
          Multiplicative Normalization and the Oja Rule
        Timing-Based Rules
      Unsupervised Learning
        Single Postsynaptic Neuron
          Principal Component Projection
          Hebbian Development and Ocular Dominance
          Hebbian Development of Orientation Selectivity
          Temporal Hebbian Rules and Trace Learning
        Multiple Postsynaptic Neurons
          Fixed Linear Recurrent Connections
          Competitive Hebbian Learning
            Feature-Based Models
          Anti-Hebbian Modification
          Timing-Based Plasticity and Prediction
      Supervised Learning
        Supervised Hebbian Learning
          Classification and the Perceptron
          Function Approximation
        Supervised Error-Correcting Rules
          The Perceptron Learning Rule
          The Delta Rule
          Contrastive Hebbian Learning
      Chapter Summary
                  Convergence of the Perceptron Learning Rule
      Annotated Bibliography
Chapter 9 - Classical Conditioning and Reinforcement Learning
      Classical Conditioning
        Predicting Reward - The Rescorla-Wagner Rule
        Predicting Reward Timing - Temporal-Difference Learning
        Dopamine and Prediction of Reward
      Static Action Choice
        The Indirect Actor
        The Direct Actor
      Sequential Action Choice
        The Maze Task
          Policy Evaluation
          Policy Improvement
        Generalizations of Actor-Critic Learning
        Learning the Water Maze
      Chapter Summary
      Appendix - Markov Decision Problems
        The Bellman Equation
        Policy Iteration
      Annotated Bibliography
Chapter 10 - Representational Learning
      Density Estimation
      Factor Analysis
      Principal Components Analysis
      Sparse Coding
      Independent Components Analysis
      Multi-Resolution and Wavelet Models
      The Helmholtz Machine
      Chapter Summary
      Annotated Bibliography
Appendix - Mathematical Methods
Linear Algebra
Differential Equations
Probability Theory
Fourier Transforms
Electrical Circuits
The δ Function
Lagrange Multipliers
Annotated Bibliography