More mileage from Hopfield Nets
- I'm really happy with how Hopfield Nets solve this particular problem
- In general Hopfield Nets should help us think about how overlapping CAs
can be formed
- However, the Hopfield Net seems capable of much more than this
- Aside from finishing this work (not an inconsiderable task), can
we get something else out of Hopfield Nets?
- Our initial hypothesis was that the Energy calculation could
help us
- Unfortunately, we haven't yet gotten to a place where
stable states attract similar states (nearby in Hamming distance)
- Further exploration (and reading) may lead us to something here
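For reference, the textbook behavior we're hoping for can be sketched in a few lines. This is a minimal NumPy sketch, assuming the standard Hebbian outer-product storage rule and the usual energy E = -1/2 sᵀWs; with a single stored pattern, a state a few bits away (small Hamming distance) falls back into the stored attractor, and the stored state sits at lower energy:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hebbian(patterns):
    """Store +/-1 patterns via the outer-product (Hebbian) rule."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / n

def energy(W, s):
    """Standard Hopfield energy: E = -1/2 * s^T W s."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=5):
    """Synchronous updates until a fixed point or the step limit."""
    for _ in range(steps):
        new = np.sign(W @ s)
        new[new == 0] = 1
        if np.array_equal(new, s):
            break
        s = new
    return s

pattern = rng.choice([-1, 1], size=64)
W = train_hebbian(pattern[None, :])

noisy = pattern.copy()
noisy[:5] *= -1                     # Hamming distance 5 from the stored state
recovered = recall(W, noisy)
print(np.sum(recovered != pattern))             # 0: the noisy state fell into the basin
print(energy(W, pattern) < energy(W, noisy))    # True: stored state has lower energy
```

If our nets don't show this basin-of-attraction behavior, comparing their weight matrices and energies against a sketch like this might localize where they diverge from the standard model.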
- We might be able to build certain nets for associative memory
- We might be able to build certain nets for hierarchies
- We might be able to build certain nets for datamining
- We might be able to build certain nets for sequences
- This is trickier because we have to leave one stable state and go
to another
- We might be able to integrate fatigue more effectively so that
one CA fatigues out, but activates another
- This may also relate to Pattern Generators
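One toy way to sketch fatigue-driven escape from a stable state: give each unit a slow adaptation variable that builds up while the unit is active and is subtracted from its field. This is an assumption-laden sketch, not our model; here the fatigued pattern gives way to its anti-pattern, and routing to a *specific* successor CA would additionally need something like asymmetric pattern-to-pattern couplings:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 32
p1 = rng.choice([-1, 1], size=n)
W = np.outer(p1, p1) / n            # single stored pattern, Hebbian rule
np.fill_diagonal(W, 0)

s = p1.astype(float)                # start in the stable state
a = np.zeros(n)                     # per-unit fatigue (adaptation) level
overlaps = []
for t in range(40):
    a = 0.9 * a + 0.1 * s           # fatigue slowly tracks the active state
    h = W @ s - 1.5 * a             # fatigue pushes against the current field
    s = np.where(h >= 0, 1.0, -1.0)
    overlaps.append(s @ p1 / n)     # +1 while p1 is active, -1 once it fatigues out

print(overlaps[0], min(overlaps))   # starts at 1.0, eventually hits -1.0
```

The overlap trace shows the state holding at p1 for several steps, then flipping once accumulated fatigue overwhelms the recurrent field, which is roughly the "one CA fatigues out, another takes over" dynamic we want.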
- In general, it might be useful to derive stable nets from Hopfield
originals
- Then use these as a basis to find training patterns, modify the
learning algorithm, or modify the topology.
- We might even be able to derive a learning rule from Hopfield
learning rules (like Widrow-Hoff)
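One candidate direction: replace the one-shot Hebbian outer product with an error-driven, Widrow-Hoff-style iterative rule, where weights change only at units whose local field disagrees with the target pattern. This is a speculative sketch of that variant (the hypothetical `delta_rule_train` below is ours, not a standard library routine):

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_rule_train(patterns, epochs=100, lr=0.1):
    """Error-driven storage: nudge W only where a unit's output
    disagrees with the target, rather than one-shot Hebbian learning."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        for p in patterns:
            err = p - np.sign(W @ p)    # nonzero only at misbehaving units
            W += lr * np.outer(err, p)  # delta-rule style correction
            np.fill_diagonal(W, 0)      # keep self-connections at zero
    return W

patterns = rng.choice([-1, 1], size=(3, 32))
W = delta_rule_train(patterns)

# Each trained pattern should now be a fixed point of the update.
stable = all(np.array_equal(np.sign(W @ p), p) for p in patterns)
print(stable)
```

A rule like this is interesting for us because it suggests a family of Hopfield-derived rules (vary the error term, the rate, the stopping criterion) we could then test against our CA formation results.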
- We've used synchronous CAs and Hopfield Nets: do asynchronous
CAs behave differently, and do Hopfield Nets
inform this?
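Hopfield theory does say something here: with symmetric weights, asynchronous (one-unit-at-a-time) updates never increase the energy and must settle, while synchronous updates can oscillate. A minimal two-unit sketch of the difference, assuming mutual inhibition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two mutually inhibitory units: the smallest case where the
# update scheme changes the outcome.
W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])

def sync_step(W, s):
    """Update every unit at once from the same previous state."""
    out = np.sign(W @ s)
    out[out == 0] = 1
    return out

def async_step(W, s, rng):
    """Update one randomly chosen unit; with symmetric W the
    energy never increases, so the net must settle."""
    s = s.copy()
    i = rng.integers(len(s))
    s[i] = 1 if W[i] @ s >= 0 else -1
    return s

s0 = np.array([1, 1])
s1 = sync_step(W, s0)           # [-1, -1]
s2 = sync_step(W, s1)           # back to [1, 1]: a period-2 oscillation

a = np.array([1, 1])
for _ in range(10):
    a = async_step(W, a, rng)   # settles into [1, -1] or [-1, 1]
```

If our synchronous CAs show oscillatory behavior that asynchronous runs don't, this distinction from the Hopfield literature may be exactly what's informing it.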
- Do you have any ideas?