Computational Neuroscience Study Guide
📖 Core Concepts
Computational Neuroscience – Uses mathematics, computer simulation, and theory to explain how biologically realistic neurons and networks generate brain function.
Model Abstraction – Level of detail is chosen to match the research question:
Biophysical models capture membrane currents, ion‑channel kinetics, and dendritic morphology.
Abstract models (e.g., integrate‑and‑fire) capture population‑level phenomena such as memory or behavior.
Integrate‑and‑Fire (Lapicque, 1907) – A neuron integrates incoming currents until a voltage threshold is reached, then emits a spike and resets.
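The integrate‑and‑fire idea can be sketched in a few lines of Python using forward‑Euler integration; all parameter values below are illustrative, not fitted to any real cell:

```python
# Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler integration.
# Parameter values are illustrative (mV, ms, nA, MOhm), not fitted to data.

def simulate_lif(i_ext=2.0, t_max=100.0, dt=0.1,
                 tau_m=10.0, e_l=-70.0, r_m=10.0,
                 v_th=-55.0, v_reset=-75.0):
    """Return spike times (ms) for a constant input current i_ext."""
    v = e_l
    spikes = []
    for step in range(int(t_max / dt)):
        # Subthreshold dynamics: tau_m dV/dt = -(V - E_L) + R_m * I
        v += dt * (-(v - e_l) + r_m * i_ext) / tau_m
        if v >= v_th:            # threshold crossed: emit spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

spikes = simulate_lif()
print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms")
```

With this drive the steady-state voltage (−50 mV) sits above threshold, so the neuron fires regularly; the first spike lands near 10·ln 4 ≈ 13.9 ms, as the closed-form RC charging solution predicts.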
Hodgkin–Huxley (HH) Model – Describes the membrane potential \(V\) with differential equations for sodium (\(I_{\text{Na}}\)) and potassium (\(I_{\text{K}}\)) currents (plus leak).
\[
C_m \frac{dV}{dt} = -\bigl( I_{\text{Na}} + I_{\text{K}} + I_{\text{L}} \bigr) + I_{\text{ext}}
\]
Efficient Coding (Barlow) – Sensory systems encode inputs using the fewest spikes possible while preserving information; manifests as center‑surround receptive fields, sparse firing, and decorrelated outputs.
Bayesian Inference – The brain combines prior expectations with noisy sensory evidence to form a posterior belief: \(\text{Posterior} \propto \text{Prior} \times \text{Likelihood}\).
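The posterior update can be made concrete with a toy discrete example; the stimulus values and probabilities below are made up purely for illustration:

```python
# Discrete Bayesian inference: posterior ∝ prior × likelihood.
# Stimulus values and probabilities are hypothetical, for illustration only.

stimuli = [-1.0, 0.0, 1.0]       # candidate stimulus values s
prior = [0.2, 0.6, 0.2]          # p(s): expectation centered on s = 0
likelihood = [0.1, 0.3, 0.6]     # p(r | s) for the observed noisy response r

unnorm = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]       # normalize by p(r)

map_estimate = stimuli[posterior.index(max(posterior))]
print(posterior, map_estimate)   # prior pulls the MAP estimate toward 0
```

Even though the likelihood favors s = 1, the strong prior at s = 0 wins: the posterior peaks at 0, which is exactly the "prior-bias" effect listed under Patterns to Recognize.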
Hebbian Learning & Hopfield Networks – Synaptic strengthening when pre‑ and post‑neurons fire together; Hopfield networks store patterns as attractor states (energy minima).
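A toy Hopfield demonstration of Hebbian storage and attractor recall; the pattern size, corruption level, and sweep count are arbitrary choices for illustration:

```python
import numpy as np

# Toy Hopfield network: Hebbian (outer-product) storage of one binary
# pattern, then recall from a corrupted cue via asynchronous updates.
# Network size and corruption level are illustrative.

rng = np.random.default_rng(0)
n = 50
pattern = rng.choice([-1, 1], size=n)

# Hebbian rule: w_ij ∝ x_i x_j; zero diagonal (no self-connections)
w = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(w, 0.0)

# Corrupt 10 of the 50 bits, then let the dynamics settle
state = pattern.copy()
state[:10] *= -1
for _ in range(5):                       # a few asynchronous sweeps
    for i in range(n):
        state[i] = 1 if w[i] @ state >= 0 else -1

print("recovered:", np.array_equal(state, pattern))
```

The corrupted cue sits inside the stored pattern's basin of attraction, so the dynamics descend the energy landscape to the original pattern, the "energy minimum" picture from the Mental Models section.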
Mean‑Field Theory – Replaces many‑body interactions with an average “field” acting on each neuron, yielding tractable population‑rate equations.
Neuromorphic Computing – Silicon hardware that implements neurons and synapses directly in analog or event‑driven digital circuits for low‑power, real‑time neural computation (e.g., SpiNNaker, BrainScaleS).
---
📌 Must Remember
Scope – Focuses on biologically realistic neurons, not abstract connectionist AI models.
Key Historical Milestones
Lapicque → integrate‑and‑fire (1907)
Hodgkin & Huxley → voltage‑clamp, first HH model (1952)
Rall → multicompartment cable theory (1960s)
HH Currents – Fast Na⁺ influx (depolarizing) and delayed K⁺ efflux (repolarizing) are the core drivers; many additional voltage‑gated currents exist in modern extensions.
Efficient Coding Goal – Minimize spike count while preserving the statistical structure of the stimulus.
Bayesian Optimality – Many perceptual tasks can be explained as approximate Bayesian inference.
Hopfield Capacity – Roughly \(0.14N\) random patterns can be stored in an \(N\)-neuron fully connected network.
Mean‑Field Approximation – Works best when networks are large, sparse, and connections are weakly correlated.
Neuromorphic Advantage – Orders of magnitude lower energy per spike compared with CPU/GPU simulation.
---
🔄 Key Processes
| Process | Steps (bullet form) |
|---------|---------------------|
| Building a Biophysical Neuron Model | 1. Obtain reconstructed morphology (axon/dendrites). <br>2. Assign membrane properties (capacitance \(C_m\), leak conductance). <br>3. Insert ion‑channel mechanisms (Na, K, Ca, H‑current, etc.) with kinetic parameters. <br>4. Set up extracellular ion concentrations (e.g., \([K^+]_o\)). <br>5. Calibrate against experimental voltage‑clamp data (fit HH parameters). <br>6. Run simulations in NEURON/GENESIS and verify action‑potential shape, firing rate vs. current (F‑I curve). |
| Deriving an Integrate‑and‑Fire Approximation | 1. Start from HH equations. <br>2. Identify dominant time constants (fast Na activation, slower K). <br>3. Linearize subthreshold dynamics → membrane RC circuit: \(\tau_m \frac{dV}{dt}= -(V - E_L) + R_m I_{\text{syn}}\). <br>4. Define spike threshold \(V_{th}\) and reset voltage \(V_{reset}\). <br>5. Validate by comparing spike times to full HH model under identical inputs. |
| Applying Bayesian Perception | 1. Define prior distribution \(p(s)\) over stimulus \(s\). <br>2. Model sensory likelihood \(p(r|s)\) where \(r\) is noisy response. <br>3. Compute posterior: \(p(s|r) = \frac{p(r|s)p(s)}{p(r)}\). <br>4. Choose estimate (e.g., MAP or mean of posterior). <br>5. Relate posterior statistics to neural firing rates or population codes. |
| Running a Network Simulation in Brian | 1. Write Python script importing brian2. <br>2. Define neuron model equations (e.g., LIF). <br>3. Create neuron group, set parameters, and initial conditions. <br>4. Define synaptic connections and plasticity rules. <br>5. Set monitors (spike, state). <br>6. Execute run(duration) and analyze output. |
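The Brian workflow above can be mirrored in a dependency‑free NumPy sketch; in actual Brian code, `NeuronGroup`, `Synapses`, and `SpikeMonitor` replace the hand‑rolled loop below, and every name and parameter value here is illustrative:

```python
import numpy as np

# Dependency-free sketch of the network-simulation steps above: define LIF
# dynamics, create a neuron group, wire random synapses, monitor spikes, run.
# Parameters are illustrative; this is not the brian2 API.

rng = np.random.default_rng(1)
n, dt, t_max = 100, 0.1, 200.0               # neurons, step (ms), duration (ms)
tau_m, e_l, v_th, v_reset = 10.0, -70.0, -55.0, -75.0   # mV, ms
w_syn = 0.5                                  # synaptic kick per spike (mV)

conn = (rng.random((n, n)) < 0.1).astype(float)   # 10% random connectivity
v = e_l + 5.0 * rng.random(n)                # heterogeneous initial voltages
spike_times, spike_ids = [], []              # hand-rolled "spike monitor"

for step in range(int(t_max / dt)):
    drive = 18.0 + rng.normal(0.0, 2.0, n)   # noisy external input (mV-equiv.)
    v += dt * (-(v - e_l) + drive) / tau_m   # leaky integration
    fired = v >= v_th
    if fired.any():
        t = step * dt
        for i in np.flatnonzero(fired):
            spike_times.append(t)
            spike_ids.append(int(i))
        v[fired] = v_reset                   # reset after spike
        v += w_syn * (conn @ fired.astype(float))   # deliver spikes

print(f"{len(spike_times)} spikes from {len(set(spike_ids))} distinct neurons")
```

The structure (model equations → group → connections → monitors → run) maps one-to-one onto the table's steps, which is why swapping in brian2 later is mostly a matter of syntax.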
---
🔍 Key Comparisons
Biophysical vs. Abstract Models
Biophysical: detailed ion channels, dendritic trees, high computational cost.
Abstract: reduced equations (e.g., LIF), capture firing statistics, scalable to large networks.
Hodgkin–Huxley vs. Integrate‑and‑Fire
HH: continuous dynamics, explicit channel kinetics, accurate spike shape.
IF: threshold‑based spike generation, ignores channel dynamics, fast to simulate.
Hebbian Learning vs. Hopfield Networks
Hebbian: local weight update rule \(\Delta w_{ij} \propto x_i x_j\).
Hopfield: global energy function, symmetric weights, retrieves stored patterns via dynamics.
Mean‑Field Theory vs. Pairwise Interaction Models
Mean‑Field: assumes each neuron feels the average activity of the population.
Pairwise: captures specific correlations between neuron pairs (Ising‑like).
Neuromorphic Hardware vs. Software Simulation
Neuromorphic: analog/specialized digital circuits, real‑time, low power, limited precision.
Software: flexible, high precision, slower, consumes more energy.
---
⚠️ Common Misunderstandings
“Computational neuroscience = AI” – It is not about deep learning architectures that ignore biological constraints.
All models are “realistic” – Abstract models sacrifice biophysical detail for tractability; realism is a continuum.
HH includes every ion current – Original HH had only Na⁺ and K⁺; modern extensions add many more (Ca²⁺, H‑current, etc.).
Efficient coding means “no spikes” – It means minimizing spikes while preserving essential information, often via sparse, decorrelated firing.
Bayesian inference implies conscious reasoning – The brain can approximate Bayesian updates through neural population codes without explicit computation.
Neuromorphic chips are just faster CPUs – They use fundamentally different hardware (analog neurons, event‑driven communication) and are optimized for low‑power, real‑time operation.
---
🧠 Mental Models / Intuition
Neuron as an RC Circuit – Membrane behaves like a resistor (leak) and capacitor (capacitance); input current charges the capacitor until threshold is reached.
Ion Channels as Gates – Think of Na⁺ channels as “fast opening doors” that let charge rush in; K⁺ channels are “slow closing doors” that let charge out.
Efficient Coding = Data Compression – The brain removes redundancy (e.g., center‑surround filtering) just like JPEG removes predictable pixel patterns.
Bayesian Updating = Belief Revision – Prior belief is your starting guess; new sensory evidence nudges the belief toward the most likely world state.
Mean‑Field = Crowd Average – Each person (neuron) follows the average opinion (field) of the crowd rather than tracking every individual conversation.
---
🚩 Exceptions & Edge Cases
Dendritic Computation – Simplified point‑neuron models miss location‑dependent integration; detailed multicompartment models are required for synaptic clustering effects.
Mean‑Field Breakdown – In strongly correlated or highly synchronized networks (e.g., epileptic bursts), the average field no longer predicts dynamics.
Bayesian Optimality Limits – Real neural circuits may use heuristics that approximate Bayes but are constrained by metabolic cost or wiring.
Neuromorphic Noise – Analog variability can introduce drift; careful calibration is needed for quantitative predictions.
---
📍 When to Use Which
| Situation | Recommended Model / Tool |
|-----------|---------------------------|
| Investigating ion‑channel pharmacology or subthreshold dynamics | Full Hodgkin–Huxley or multicompartment cable model (NEURON, GENESIS) |
| Studying large‑scale network oscillations or population firing rates | Integrate‑and‑Fire or Mean‑Field models (Brian, NEST) |
| Exploring memory storage and retrieval | Hopfield network or Hebbian weight matrices |
| Modeling perception under uncertainty | Bayesian framework (probabilistic population codes) |
| Real‑time closed‑loop brain‑machine interface | Neuromorphic hardware (SpiNNaker, BrainScaleS) |
| Rapid prototyping of spiking network topology | Brian or NEST (Python‑friendly) |
| Comparing hypotheses with experimental data | Efficient coding analyses (information‑theoretic metrics) |
---
👀 Patterns to Recognize
Spike‑Timing Dependent Plasticity (STDP) – Pre before post → potentiation; post before pre → depression.
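The classic STDP window is a pair of exponentials; amplitudes and time constants below are typical illustrative values, not measurements:

```python
import math

# Exponential STDP window: pre-before-post potentiates (LTP),
# post-before-pre depresses (LTD). Amplitudes/time constants illustrative.

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for spike-time difference dt_ms = t_post - t_pre."""
    if dt_ms > 0:     # pre fired first -> potentiation
        return a_plus * math.exp(-dt_ms / tau)
    else:             # post fired first -> depression
        return -a_minus * math.exp(dt_ms / tau)

print(stdp(10.0), stdp(-10.0))   # positive LTP, negative LTD
```

Note the asymmetry: a slightly larger depression amplitude (here 0.012 vs. 0.01) is a common modeling choice that keeps runaway potentiation in check.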
Center‑Surround Receptive Fields – A hallmark of efficient coding in the early visual system (retina and LGN).
Prior‑Bias Effects – Under ambiguous stimuli, responses skew toward the prior (e.g., visual “lightness” illusion).
Energy Landscape Minima – In Hopfield nets, stable memory patterns correspond to local minima of the energy function.
Excitation‑Inhibition Balance – Mean‑field models often reveal a tight balance that maintains stable firing rates.
---
🗂️ Exam Traps
Confusing Na⁺ vs. K⁺ Current Directions – Na⁺ flows inward during depolarization; K⁺ flows outward during repolarization.
Assuming Efficient Coding = Sparse Coding – Efficient coding minimizes redundancy; sparsity is one possible implementation but not the definition.
Labeling a Hopfield Network as “Feedforward” – Hopfield nets are recurrent with symmetric connections; feedforward networks lack feedback loops.
Treating Mean‑Field Solutions as Exact – They are approximations; deviations grow with strong pairwise correlations.
Believing Neuromorphic Chips are Digital GPUs – They use event‑driven, often analog computation; performance metrics differ (energy per spike, latency).
Mixing Up Integrate‑and‑Fire Threshold vs. Reset Values – Threshold triggers a spike; reset sets the post‑spike membrane potential (often \(V_{reset} < V_{th}\)).
---