Processing with fLIF Neurons
- You'll note that CAs (cell assemblies) are all about stable states.
- This is great for attractor nets (e.g. Hopfield), but it's
lousy for cognition.
- That is, for cognition you have to be able to move on to new states.
- We thought we could do this, and have now actually done it.
- Sequencing is done by having CA1 excite CA2, and CA2 inhibit CA1.
- We have a little code that parameterises this by decay rate,
so that you can get a range of processing speeds.
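A minimal sketch of this hand-off, where a single scalar activation stands in for each whole CA (fatigue omitted, and all weights and names here are illustrative assumptions, not the authors' parameters):

```python
def run_sequence(steps=12, decay=2.0, theta=4.0,
                 w_self=5.0, w_fwd=5.0, w_inh=-10.0, ignite=5.0):
    """Illustrative sketch: CA1 excites CA2, CA2 inhibits CA1.

    Each "CA" is one leaky unit: activation leaks by `decay` each step,
    sums its inputs, and is "on" when it reaches threshold `theta`.
    Self-excitation (w_self) stands in for the CA's reverberation.
    """
    a1 = a2 = 0.0          # activations
    s1 = s2 = False        # was each CA on at the previous step?
    trace = []
    for t in range(steps):
        # CA1: ignited externally at t=0, sustains itself, inhibited by CA2
        in1 = (ignite if t == 0 else 0.0) \
              + (w_self if s1 else 0.0) + (w_inh if s2 else 0.0)
        # CA2: driven forward by CA1, then sustains itself
        in2 = (w_fwd if s1 else 0.0) + (w_self if s2 else 0.0)
        a1 = max(a1 / decay + in1, 0.0)   # leak, integrate, clamp at 0
        a2 = max(a2 / decay + in2, 0.0)
        s1, s2 = a1 >= theta, a2 >= theta
        trace.append((s1, s2))
    return trace
```

Running it shows the intended sequence: CA1 comes on, ignites CA2, and CA2's reverse inhibition then switches CA1 off while CA2 persists. A smaller decay value leaves more residual activation per step, so CAs ignite sooner, which is the knob behind the "range of processing speeds".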
- Conditional changes of state can be done by having CA3 come on
only when CA1 and CA2 both come on.
- Let's use neuron-to-neuron connectivity with reverse inhibition,
decay D=2 and threshold theta=4. If the forward connections are
1.5, then CA3 fires iff CA1 and CA2 are both up: with D=2, sustained
input from one CA sums to at most 1.5 + 0.75 + ... = 3 < theta, while
input from both sums towards 6 >= theta.
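The arithmetic can be checked with a one-function leaky integrate-and-fire sketch (fatigue omitted; the function name and reset-to-zero convention are my assumptions, not the authors' code):

```python
def step(activation, external_input, decay=2.0, theta=4.0):
    """One LIF update: leak the old activation by the decay factor,
    integrate the new input, and fire if threshold theta is reached."""
    activation = activation / decay + external_input
    fired = activation >= theta
    if fired:
        activation = 0.0  # reset after a spike
    return activation, fired
```

Driving a CA3 neuron with 1.5 per step (only CA1 up) leaves its activation converging to 3, below theta, so it never fires; driving it with 3.0 per step (CA1 and CA2 both up, 1.5 each) crosses theta on the second step, so CA3 acts as an AND of its two inputs.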
- We did this with "CAs" of 6 neurons, and showed that any FSA can
be built with fLIF neurons.
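To see why the conjunction trick above suffices for FSAs, here is a hedged sketch that keeps only the thresholds and drops the temporal dynamics: each (state, symbol) pair gets a "transition unit" that fires only when its state unit and its symbol unit are both active. The parity machine, weights, and names are my illustrative choices, not the authors' construction:

```python
W = 1.5       # weight from a state unit or symbol unit to a transition unit
THETA = 2.5   # a transition unit needs both of its inputs to fire

# A 2-state parity FSA: (current state, input symbol) -> next state.
TRANSITIONS = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd',  '0'): 'odd',  ('odd',  '1'): 'even',
}

def run_fsa(bits, start='even'):
    """Run the FSA, resolving each step via threshold units."""
    state = start
    for bit in bits:
        for (s, sym), nxt in TRANSITIONS.items():
            # The transition unit sums weighted input from its state unit
            # and its symbol unit; one match gives 1.5 < THETA, both give
            # 3.0 >= THETA, so exactly one unit fires per step.
            drive = W * (state == s) + W * (bit == sym)
            if drive >= THETA:
                next_state = nxt
        state = next_state
    return state
```

Since exactly one transition unit crosses threshold per input symbol, the network tracks the FSA state; the full fLIF construction realises each state and transition unit as a small CA.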
- These things are Turing complete. (I implemented a stack, so
I could implement 2 stacks, and 2 stacks + an FSA make the system
Turing complete.)
- That's not too surprising. The continuous version is
super-Turing (Siegelmann; I don't fully understand it).
- So, you can process with fLIF neurons. The question now is:
how do you learn to process?