Cell Assemblies (CAs) are relevant to all of the Areas:
they either address each area directly or raise interesting questions relating to it.
- Robustness: like most ANN architectures, CAs are robust to noise and damage.
Loss of neurons might even strengthen a given CA.
- Modular Construction: CAs span brain areas, yet limit activation.
They allow cross-modular cooperation alongside intra-modular specialisation.
A working CA model could explore many questions here.
- Learning in Context: a primary appeal of CAs is that their learning is
unsupervised; all learning happens in context.
- Synchronisation: unlike most neural models, timing is essential for
CAs, especially when they work together (e.g. for variable binding and
sequence processing). Again, a working basic CA model would raise many
questions about synchronisation.
- Timing: CAs are a temporal model: each cycle corresponds to roughly
10 ms of biological time. Unlike, for instance, backpropagation models,
CAs actually have things to say about timing.
- Processing Speed: I'll focus here.
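The robustness and timing points above can be made concrete with a minimal sketch. The following is a hypothetical binary-threshold model, not any specific published CA implementation: all-to-all excitatory weights, discrete cycles standing in for ~10 ms each, an external stimulus that ignites the assembly for a few cycles, and an optional lesion (the `drop` parameter) that silences some neurons to probe robustness. All parameter values are illustrative assumptions.

```python
import numpy as np

def run_ca(n=50, cycles=30, stim_cycles=3, drop=0, seed=0):
    """Minimal binary cell-assembly sketch (illustrative parameters).

    n neurons with all-to-all excitatory weights; each cycle (standing
    in for ~10 ms) a neuron fires if its recurrent input exceeds a
    threshold. An external stimulus drives the assembly for the first
    `stim_cycles` cycles; if reverberation works, activity persists
    after the stimulus ends. `drop` neurons are lesioned to probe
    robustness. Returns the number of active neurons per cycle.
    """
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.4, 0.6, size=(n, n))   # recurrent excitatory weights
    np.fill_diagonal(w, 0.0)                 # no self-connections
    alive = np.ones(n, dtype=bool)
    alive[rng.choice(n, size=drop, replace=False)] = False  # lesion
    theta = 0.15 * n                         # firing threshold (assumed)

    active = np.zeros(n, dtype=bool)
    history = []
    for t in range(cycles):
        drive = w @ (active & alive)         # recurrent input per neuron
        if t < stim_cycles:
            drive += 1.5 * theta             # external ignition stimulus
        active = (drive > theta) & alive
        history.append(int(active.sum()))
    return history
```

With these toy numbers, the assembly keeps reverberating after the stimulus ends, still ignites after losing a fifth of its neurons, and never ignites spontaneously without a stimulus, which is the behaviour the Robustness and Timing bullets describe.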