Learning
- The system learns by changing the weights of the synapses.
- Hebbian learning. If one neuron frequently causes another to fire,
the weight of the synapse between them is increased (a minimal sketch
follows this list).
- That's the way learning is really done.
- It's entirely local: the update depends only on the activity of the
two neurons the synapse connects.
- The current popular algorithm is spike-timing-dependent plasticity (STDP)
(Bi and Poo). If the presynaptic neuron fires right before the
postsynaptic neuron, the weight is increased; if the order is reversed,
the weight is decreased (sketch after the list).
- We've also experimented with rules based on the total weight leaving
or entering a neuron. This compensatory learning rule keeps those
totals near a target value (sketch after the list).
- (You can also learn by adding new synapses, or removing them, or
by adding or removing neurons along with their synapses. This isn't
done very often; a toy sketch follows the list.)
- Synaptic growth and death
- Neural growth and death
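
A minimal sketch of the Hebbian idea in Python. The rate variables,
the learning rate eta, and the weight cap w_max are illustrative
assumptions, not anything from the notes above:

    def hebbian_update(w, pre_rate, post_rate, eta=0.01, w_max=1.0):
        # Plain Hebbian rule: the change depends only on the activity of
        # the two neurons the synapse connects (so it is entirely local).
        # eta and w_max are assumed values, not from the notes.
        w = w + eta * pre_rate * post_rate
        return min(w, w_max)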
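
A sketch of a pair-based STDP update. The exponential windows and the
particular amplitudes and time constants (a_plus, a_minus, tau_plus,
tau_minus) are illustrative choices, not the Bi and Poo measurements:

    import math

    def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                     tau_plus=20.0, tau_minus=20.0):
        # Pair-based STDP: the sign of the change depends on spike order,
        # the size on how close together the spikes are.
        dt = t_post - t_pre          # positive: pre fired before post
        if dt > 0:
            return a_plus * math.exp(-dt / tau_plus)    # potentiate
        if dt < 0:
            return -a_minus * math.exp(dt / tau_minus)  # depress
        return 0.0

For example, stdp_delta_w(10.0, 15.0) is positive (pre fires 5 ms before
post), while stdp_delta_w(15.0, 10.0) is negative.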
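
One way the compensatory rule could look. Multiplicative rescaling and
the target_total value are assumptions; the notes only say the totals
are kept near a target:

    def normalize_incoming(weights, target_total=1.0):
        # Compensatory rule: rescale all incoming weights of one neuron so
        # that their sum stays at a target value (assumed mechanism).
        total = sum(weights)
        if total == 0:
            return list(weights)
        return [w * target_total / total for w in weights]

The same scaling can be applied to the weights leaving a neuron instead.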
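
A toy sketch of synaptic growth and death. The pruning threshold, growth
probability, and initial weight are invented for illustration:

    import random

    def prune_and_grow(weights, prune_below=0.01, grow_prob=0.001,
                       init_weight=0.05):
        # Structural plasticity on a dict {(pre, post): weight}.
        # Synaptic death: drop synapses whose weight has decayed to
        # near zero.
        surviving = {pair: w for pair, w in weights.items()
                     if w >= prune_below}
        # Synaptic growth: occasionally add a weak new synapse between
        # a random pair of existing neurons.
        neurons = list({n for pair in weights for n in pair})
        if len(neurons) >= 2 and random.random() < grow_prob:
            pre, post = random.sample(neurons, 2)
            surviving.setdefault((pre, post), init_weight)
        return surviving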