Move to a Neural Model
- The Hopfield net is fully connected, and its synapses
are symmetric
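As a concrete reference point, a minimal Hopfield net fits in a few lines. This is an illustrative Python sketch (the patterns, sizes, and step counts are made up), showing the symmetric Hebbian weights and asynchronous settling the bullet refers to:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian outer-product rule: the weights come out symmetric, zero diagonal."""
    W = patterns.T @ patterns / patterns.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=100):
    """Asynchronous updates: pick a unit at random, set it to the sign of its input."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = patterns[0].copy()
noisy[0] = -1                  # corrupt one bit of the first stored pattern
print(recall(W, noisy))        # the net settles back into the stored attractor
```

Because the weights are symmetric, each asynchronous flip can only lower the net's energy, which is why it must settle into some stable state.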
- The brain has something like 100 billion neurons, each with on the
order of 1000 synapses to and from it, and connections are not symmetric.
- Extensions of the theory of attractor nets show that it is
quite robust, but such nets still differ significantly from our
CA model. For instance, due to fatigue it's easy for a CA net
to move to a new pseudo-stable state.
- So, I think our CA model is not a typical attractor net.
- We have discrete time steps, neurons are fatiguing leaky integrators
that obey Dale's Law, and learning is Hebbian.
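A single such unit might look like the toy Python sketch below, showing just the leak and fatigue dynamics (not Dale's Law or the Hebbian learning rule). All parameter names and values here are my own illustrative assumptions, not the actual CA model's:

```python
DECAY = 0.5          # leak: fraction of activation carried to the next step
THRESHOLD = 1.0      # base firing threshold
FATIGUE_UP = 0.25    # threshold increase each step the neuron fires
FATIGUE_DOWN = 0.125 # threshold recovery each step it stays silent

def step(activation, fatigue, input_current):
    """One discrete time step: leaky integration, then a fatigued threshold test."""
    activation = activation * DECAY + input_current
    fired = activation > THRESHOLD + fatigue
    if fired:
        activation = 0.0              # reset after firing
        fatigue += FATIGUE_UP
    else:
        fatigue = max(0.0, fatigue - FATIGUE_DOWN)
    return activation, fatigue, fired

# Under constant drive the unit fires regularly at first; as fatigue
# accumulates, the gaps between spikes lengthen.
a, f = 0.0, 0.0
spikes = []
for _ in range(16):
    a, f, fired = step(a, f, 1.25)
    spikes.append(fired)
print(spikes)
```

This is also a small illustration of the earlier point: a fatigue variable that rises with activity is exactly what pushes a settled group of neurons off to a new pseudo-stable state.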
- Unlike in Hopfield nets, a stable state is a set of neurons that
tend to be active.
- It's not clear how long stable states persist, but in the
absence of external stimulation, ours tend to last indefinitely.
- Currently, our simulations are about categorising input.
- A stimulus is presented, and the net runs to a pseudo-stable
state.
- Depending on the state it ends up in, the stimulus is categorised.
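That settle-then-read-off loop can be sketched as follows. Since our CA model is not a typical attractor net, this uses toy Hopfield-style dynamics purely as a stand-in for the CA simulation; the prototypes, sizes, and parameters are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

prototypes = {                        # one stored attractor per category
    "A": np.array([1, 1, 1, -1, -1, -1]),
    "B": np.array([1, -1, 1, -1, 1, -1]),
}
P = np.stack(list(prototypes.values()))
W = P.T @ P / P.shape[1]              # Hebbian weights, zero diagonal
np.fill_diagonal(W, 0.0)

def categorise(stimulus, steps=200):
    """Run the net to a (pseudo-)stable state, then read off the category."""
    state = stimulus.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1
    # the category is whichever prototype the settled state matches best
    return max(prototypes, key=lambda k: prototypes[k] @ state)

noisy_a = np.array([-1, 1, 1, -1, -1, -1])   # a corrupted "A" stimulus
print(categorise(noisy_a))
```

The key design point carries over to the CA model regardless of the dynamics used: the classifier is the basin of attraction, so nearby stimuli fall into the same pseudo-stable state and hence the same category.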