The Mule Problem
- If a neuron gets into a CA that it shouldn't be in, that
spurious neuron will be activated whenever the CA is
activated.
- This means that the synaptic weights between it and the
rest of the CA will remain high, so it will remain in the CA.
- It's the mule problem, because if a network has a CA for
horse and a mule is presented, the horse CA will be activated.
- This is as it should be, but eventually we'd like to be able
to have a separate mule CA.
- As it is, the horse CA will just broaden to include mules.
- My new PhD student Hina Ghalib started on this problem, and
we came up with a solution pretty quickly.
- The key comes from making the neurons that are not externally
stimulated fire at a lower rate than the externally stimulated
neurons.
- One way to do this is to increase the decay, so that less
activation is retained.
- Unfortunately, this reduces persistence.
- The solution we settled on was to modify the fatigue
mechanism.
- Our earlier model increased the fatigue by a constant each
time the neuron fired, and decreased it by a different constant
when it didn't fire.
- The firing threshold was increased by the fatigue, making
fatigued neurons more difficult to fire.
- The new mechanism instead decreases the fatigue in proportion
to the existing fatigue when the neuron does not fire, so the
more fatigued a neuron is, the larger each recovery step; a
sketch of both update rules appears after this list.
- We put this in the Neural Computation and Psychology
Workshop 2004.
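
Below is a minimal sketch, in Python, of the mechanisms discussed above. It is not the actual model: the parameter names and values, the reset of activation after a spike, and the single-unit framing are all illustrative assumptions. It contrasts the old constant fatigue recovery with the new proportional recovery, with fatigue added to the firing threshold. Note that keeping less activation (a smaller DECAY here) would also slow the unstimulated neurons, but at the cost of the loss of persistence mentioned above.

```python
# Sketch of the two fatigue rules. All names and numbers are
# illustrative assumptions, not the parameters of the actual model.

DECAY = 0.5          # fraction of activation retained each cycle
BASE_THRESHOLD = 1.0
FATIGUE_UP = 0.3     # added to fatigue on each firing (both rules)
FATIGUE_DOWN = 0.1   # constant recovery per silent cycle (old rule)
FATIGUE_RATE = 0.5   # proportional recovery per silent cycle (new rule)

def step(activation, fatigue, external_input, recurrent_input, rule="new"):
    """One cycle of a leaky, fatiguing unit.

    Returns (fired, activation, fatigue) for the next cycle.
    """
    activation = activation * DECAY + external_input + recurrent_input
    fired = activation > BASE_THRESHOLD + fatigue  # fatigue raises the threshold

    if fired:
        activation = 0.0                 # assumption: activation is spent on the spike
        fatigue += FATIGUE_UP
    elif rule == "old":
        fatigue = max(0.0, fatigue - FATIGUE_DOWN)  # constant recovery
    else:
        fatigue -= FATIGUE_RATE * fatigue           # recovery proportional to fatigue

    return fired, activation, fatigue

# Externally stimulated neurons get extra input every cycle, so they can keep
# firing despite accumulating fatigue; a neuron driven only by recurrent CA
# input builds up a higher effective threshold and drops to a lower rate.
for label, ext in (("stimulated", 1.2), ("unstimulated", 0.0)):
    act, fat, spikes = 0.0, 0.0, 0
    for _ in range(20):
        fired, act, fat = step(act, fat, ext, recurrent_input=0.8)
        spikes += int(fired)
    print(label, spikes)
```

With these illustrative parameters the externally stimulated unit fires on most cycles, while the unit driven only by recurrent input settles to a noticeably lower rate, which is the differential firing the modified fatigue mechanism is meant to produce.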