Symbolic semantic nets (dating from Quillian, 1967) underpin just about
every type of knowledge representation language.
Neurally, they have also been modelled for a long time.
The Willshaw model (1969), Kohonen's Linear Associator (1977),
and even modified Hopfield nets can be used to associate one
concept with another.
This shows that simulated neurons (or connectionist systems) can be
used to implement associations.
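As a concrete illustration, the sketch below implements a tiny Willshaw-style
binary associative memory: a pair of sparse binary patterns is stored by
clipping (OR-ing) the outer product of target and cue, and the target is then
recalled from the cue by thresholding the weighted sum. The pattern size,
contents, and threshold are illustrative assumptions, not parameters taken
from any of the cited models.

```python
import numpy as np

# Minimal sketch of a Willshaw-style binary associative memory.
# Pattern pairs are stored by OR-ing (clipping) the outer products of
# target and cue patterns; recall thresholds the weighted sum of a cue.

def train(pairs, n_in, n_out):
    """Store (cue, target) pairs in a clipped binary weight matrix."""
    W = np.zeros((n_out, n_in), dtype=np.uint8)
    for x, y in pairs:
        W |= np.outer(y, x).astype(np.uint8)   # clipped Hebbian learning
    return W

def recall(W, x):
    """Recall the associated pattern, thresholding at the cue's activity."""
    return (W @ x >= x.sum()).astype(np.uint8)

n = 16
x = np.zeros(n, dtype=np.uint8); x[[1, 4, 9]] = 1    # "concept A" (illustrative)
y = np.zeros(n, dtype=np.uint8); y[[0, 7, 13]] = 1   # "concept B" (illustrative)

W = train([(x, y)], n, n)
assert np.array_equal(recall(W, x), y)               # cueing A recalls B
```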
Levy and Horn (1999) provide some evidence that associative
memories are best stored in multiple modules.
Unfortunately, all of these systems use unrealistic topologies
with a high degree of well-connectedness (each neuron is connected
to every other neuron).
Well-connectedness gives some nice mathematical properties, but neurons
in the brain (and even within cortical columns) are not well-connected.
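A rough sketch of why this matters, assuming a random connection probability
of p = 0.1 (a purely illustrative figure, not a biological estimate): when the
same one-pair associator is restricted to a fixed sparse topology, most of the
synapses the learning rule relies on simply do not exist, and thresholded
recall no longer reliably reproduces the stored pattern.

```python
import numpy as np

# The same one-pair binary associator as above, but with a fixed sparse
# random topology: each output unit receives input from only a fraction
# p of the input units.  p = 0.1 and the patterns are illustrative
# assumptions, not biological figures.

rng = np.random.default_rng(0)
n, p = 16, 0.1

x = np.zeros(n, dtype=np.uint8); x[[1, 4, 9]] = 1    # cue pattern
y = np.zeros(n, dtype=np.uint8); y[[0, 7, 13]] = 1   # target pattern

mask = rng.random((n, n)) < p                # which synapses exist at all
W = np.outer(y, x).astype(np.uint8) * mask   # learning only on existing synapses

s = W @ x                                    # most target units now lack the
recalled = (s >= x.sum()).astype(np.uint8)   # synapses needed to reach threshold
print("recalled:", recalled)
print("stored:  ", y)
```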
We are looking for how the brain stores associative memories, that is,
for associative CAs.