CA Persistence
- One of the curious things about simulated cell assemblies (CAs) is their persistence.
- Ignited CAs represent psychological concepts in working memory.
- There is a great deal of psychological research on how long short-term memories (STMs) persist.
- However, simulated CAs tend either to persist indefinitely or to die out
quite quickly.
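This bistability shows up even in a one-unit rate model: with strong recurrent weight, activity saturates and persists indefinitely; with weak weight, it decays to zero. A minimal sketch (all parameter values here are illustrative assumptions, not taken from any CA simulation in the literature):

```python
# One-unit rate model of CA persistence (illustrative parameters).
# dx/dt = -x + w * f(x): strong recurrence -> activity persists,
# weak recurrence -> activity dies out quickly.

def f(x):
    # Saturating (clamped linear) activation function.
    return max(0.0, min(1.0, x))

def simulate(w, x0=1.0, dt=0.1, steps=500):
    # Forward-Euler integration of the rate equation.
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * f(x))
    return x

persistent = simulate(w=2.0)  # strong recurrence: activity persists
dying = simulate(w=0.5)       # weak recurrence: activity fades toward 0
print(persistent, dying)
```

With `w=2.0` the unit settles at a stable non-zero fixed point; with `w=0.5` the activity decays exponentially. Real CA simulations use many spiking neurons, but the two regimes are analogous.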
- What we'd like is a system of learned CAs that persist the way
concepts do.
- This is one of the questions I'd like to work on, and more recently
I've used small-world topologies to approximate this behaviour
(unpublished).
- Note that the CABot1 parser used a stack, and that did not handle
timing well.
- Following Rick Lewis, I used an activation-based parser.
- So I needed roughly correct persistence in the parser, since relative
activation encoded order.
- So I programmed a neural circuit that, once activated, decayed at a
regular rate.
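The idea behind such a circuit can be sketched abstractly as follows. The decay rate, threshold, and class interface are illustrative assumptions; the actual circuit was built from simulated neurons, not a scalar variable.

```python
# Sketch of an activation that, once ignited, decays at a regular rate.
# Items ignited earlier have lower current activation, so relative
# activation recovers ignition order (as in an activation-based parser).

class DecayingUnit:
    def __init__(self, decay=0.05, threshold=0.1):
        self.decay = decay          # fixed activation lost per time step
        self.threshold = threshold  # below this, the unit counts as off
        self.activation = 0.0

    def ignite(self):
        self.activation = 1.0

    def step(self):
        if self.activation > 0.0:
            self.activation = max(0.0, self.activation - self.decay)

    def active(self):
        return self.activation >= self.threshold

# Two items ignited at different times: the older one has decayed more,
# so comparing activations gives the order of ignition.
a, b = DecayingUnit(), DecayingUnit()
a.ignite()
for _ in range(5):
    a.step()
    b.step()
b.ignite()
print(a.activation, b.activation)
```

After five steps, `a` sits below `b`, so the parser-style ordering "`a` came first" can be read off the activations alone, with no stack.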
- It's a neural solution, but it seems quite unlikely that the brain
actually works this way.