A Cell Assembly (CA) is a set of neurons with high mutual synaptic
strength; it forms a recurrent network that can remain active even after
stimulus from outside the CA ceases. CAs can account for a wide range
of psychological data, and thus for much of human intelligence. The
computational theory behind CAs is extremely complex and not well understood.
CAs are a powerful computational device, but the research community in
general does not understand how to use this power to model cognitive
phenomena, or to solve real-world problems such as data mining. We
are working on CAs with the purpose of understanding their
computational power, and of using that power to solve problems.
The CANT model
The research community has a great deal of knowledge about neural function
and brain topology. This knowledge is by no means complete, but much of
it is quite solid and is generally accepted as true. We try to take advantage
of this knowledge as the basis of our models. For example, it is known
that the brain is made up of neurons and these neurons connect to other
neurons at places called synapses. (They may connect at other places, but
most connections are synaptic.) When a pre-synaptic neuron fires, it sends
activation (or inhibition) across the synapse to the post-synaptic neuron.
If the post-synaptic neuron collects enough energy it will fire.
We have developed a computational version of our model, called CANT, in
Java; it is available on this site. The current version of the model is based
on these neuro-physiological principles:
Neurons are leaky integrators
Neurons fatigue
All synapses from a neuron are either inhibitory or excitatory
(Dale's Law)
All learning is local (Hebbian). That is, synaptic change is
based only on properties of the pre- and post-synaptic neurons.
Neurons are sparsely connected and these connections are usually
distance-biased. (A neuron is likely to have a synapse to a particular
nearby neuron, but not a particular distant one.)
We also endeavour to make this model computationally efficient so that we
can model large numbers of neurons. We have consciously decided to use
discrete time instead of continuous time. This hides many of the subtle
timing behaviours that may be useful for, for example, synchronised neural
firing. We currently use 10-millisecond cycles; this is
based on the typical refractory period of a neuron. It is rare for a neuron
to fire more than once in 10 milliseconds. This enables us to simulate
thousands of neurons in real time on relatively inexpensive hardware.
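To make these assumptions concrete, here is a minimal Java sketch of how a
single neuron might be updated once per 10-millisecond cycle as a leaky
integrator with fatigue. The class name, fields, and constants are our own
illustrative choices; they are not taken from the actual CANT source code.

    // A minimal sketch of one CANT-style neuron, updated once per discrete
    // 10 ms cycle. All names and constants are illustrative assumptions.
    public class SketchNeuron {
        static final double DECAY = 0.5;             // leak: activation retained each cycle
        static final double THRESHOLD = 1.0;          // base firing threshold
        static final double FATIGUE_STEP = 0.2;       // fatigue gained on each firing
        static final double FATIGUE_RECOVERY = 0.05;  // fatigue recovered each quiet cycle

        double activation = 0.0;
        double fatigue = 0.0;
        boolean fired = false;

        // One cycle: leak, integrate this cycle's synaptic input, then fire
        // if activation exceeds the threshold raised by accumulated fatigue.
        void update(double inputThisCycle) {
            activation = DECAY * activation + inputThisCycle;
            fired = activation > THRESHOLD + fatigue;
            if (fired) {
                activation = 0.0;                          // reset after firing
                fatigue += FATIGUE_STEP;                   // neurons fatigue
            } else {
                fatigue = Math.max(0.0, fatigue - FATIGUE_RECOVERY);
            }
        }
    }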
Categorisation
To a large extent, a CA categorises input. The network is presented
with input from the environment leading to neural activation. Neurons spread
activation to other neurons and may ignite one or several CAs. The CAs that
are activated categorise the input as an instance of the concept that the
CA represents.
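The following small Java sketch illustrates this ignition-as-categorisation
idea on a toy network of binary threshold neurons. The sizes, weights, and
the two hand-wired CAs are illustrative assumptions, not the CANT
implementation; the point is only that a brief external input ignites one CA,
which then stays on and thereby categorises the input.

    // Sketch: a tiny recurrent network in which igniting a CA categorises
    // the input. Sizes, weights, and names are illustrative assumptions.
    import java.util.Arrays;

    public class IgnitionSketch {
        public static void main(String[] args) {
            int n = 6;
            // Neurons 0-2 form one CA, neurons 3-5 another: strong mutual
            // weights within a CA, none between the two CAs.
            double[][] w = new double[n][n];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    if (i != j) { w[i][j] = 0.6; w[i + 3][j + 3] = 0.6; }

            boolean[] active = new boolean[n];
            active[0] = true;            // brief external input to one neuron of CA 0

            for (int cycle = 0; cycle < 5; cycle++) {
                boolean[] next = new boolean[n];
                for (int i = 0; i < n; i++) {
                    double input = 0.0;
                    for (int j = 0; j < n; j++)
                        if (active[j]) input += w[j][i];
                    next[i] = input >= 0.5;      // fire if enough activation arrives
                }
                active = next;
                System.out.println("cycle " + cycle + ": " + Arrays.toString(active));
            }
            // After a few cycles all of CA 0 is on and CA 1 is silent, even though
            // the external input has ceased: the input has been categorised as an
            // instance of the concept CA 0 represents.
        }
    }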
(As a digression, we have thought of CAs as the basis for concepts. This
simplifies thinking about CAs and writing about them. When a dog is
presented to a network, the dog CA is activated. This need not be the
case, and CAs may also be associated with unnamed and subconscious phenomena
such as simple vertical fields, or complex cognitive maps. There is clearly a
wide range of CAs, and we have not begun to categorise them.)
We have done a range of work on the use of CAs for categorisation. This has
included simple categories of neural inputs where each neuron is in only
one category. More recently we have looked at having neurons participate
in multiple categories.
This work has been productive, and we are currently looking at extending the
range of categories that can be learned and at linking the input to real
environments. We could expand the range of patterns by supporting hierarchical
categories (see below), or by extending the time of development to allow
CAs to break into sub-CAs (fractionation) and to recruit new neurons.
We could link the CANT system to real environments by building sensory systems
(e.g. vision), or by linking it to data for data-mining applications.
Theoretical Underpinnings
To get more mileage from our research on CAs, we really need to understand
what they do. The computational properties of highly recurrent neural networks
are quite complex. CAs also have dual dynamics: the short-term dynamics of
CA ignition, and the long-term dynamics of CA formation and evolution via
Hebbian learning. We feel that these two systems interact to develop powerful
computational devices. However, we need to develop a greater understanding
of these properties to exploit the system more fully.
CAs are quite close to stable states. If fatigue is ignored, a CA will
ignite and then stay on forever. This means that a stable group of neurons
will stay on, or cyclically activate. So, a few neurons in the CA are
activated by something external to the CA; these activate other neurons
in the CA; eventually all of the neurons in the CA are on.
Hebbian learning implies that the connection strength between two neurons
is based on how frequently they fire simultaneously. We worked out a rule
that enables the synaptic weight to roughly approximate the percentage of
time a postsynaptic neuron fires when the presynaptic neuron fires. However,
the synapse is more than a correlator; it also enables the presynaptic
neuron to influence the firing of the postsynaptic neuron via spread of
activation. Furthermore, mere correlation only allows a relatively small
range of patterns to be recognised, because only a small range of patterns
will have enough correlation between neurons to form synapses that are
strong enough to support recurrent activation. We have developed
a compensatory learning mechanism to address this.
The compensatory learning rule modifies the correlation based on the
total synaptic strength of the pre- and postsynaptic neurons. A neuron
that already has a lot of synaptic strength will not gain as much from
a particular learning episode, while one with little total synaptic strength
will gain more. This encourages the recruitment of lightly used neurons into
CAs, and allows neurons to participate in multiple CAs without the CAs
combining.
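As a sketch of the flavour of this rule, the Java fragment below nudges a
weight toward the observed co-firing rate whenever the presynaptic neuron
fires, and damps the increase when either neuron already carries a lot of
total synaptic strength. The exact CANT rule differs; the constants, names,
and the particular compensation factor here are illustrative assumptions.

    // Sketch of a compensatory Hebbian update on one synapse. The weight
    // drifts toward the fraction of presynaptic firings on which the
    // postsynaptic neuron also fired; increases are damped when either
    // neuron already has high total synaptic strength. Not the CANT rule.
    public class CompensatoryHebb {
        static final double RATE = 0.1;   // base learning rate
        static final double W_BAR = 5.0;  // target total synaptic strength per neuron

        // weight: current synaptic weight (0..1)
        // preTotal, postTotal: total synaptic strength of the two neurons
        static double update(double weight, boolean preFired, boolean postFired,
                             double preTotal, double postTotal) {
            if (!preFired) return weight;          // Hebbian: only learn when pre fires
            if (postFired) {
                // Compensation: neurons with little total strength gain more.
                double comp = Math.max(0.0, 1.0 - (preTotal + postTotal) / (2 * W_BAR));
                return weight + RATE * comp * (1.0 - weight);   // move toward 1
            } else {
                return weight - RATE * weight;                  // move toward 0
            }
        }

        public static void main(String[] args) {
            double w = 0.2;
            w = update(w, true, true, 1.0, 1.0);   // lightly loaded neurons: larger increase
            System.out.println(w);
            w = update(w, true, true, 4.5, 4.8);   // heavily loaded neurons: smaller increase
            System.out.println(w);
        }
    }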
The mathematics of CAs is quite complex; however, the CANT network
is quite similar to other, simpler systems such as Hopfield networks.
We are actively using other models to help us understand our own.
We have recently solved one of our long-term problems by starting with
a Hopfield net. We wanted to construct (not learn) a network
with 10 CAs in which each neuron participates in (at least) 2 CAs.
We started with a Hopfield net that did this, then modified it in a
stepwise fashion into CAs with the same property. The next step is
to enable the CAs to be learned.
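One simple way to build such a pattern set is to dedicate one neuron to each
pair of CAs, so that every neuron belongs to exactly two patterns, and then
to form the weights with the standard Hopfield outer-product rule. The Java
sketch below shows this construction only as an illustration of the kind of
starting point described above; it is not the network referred to in the text.

    // Sketch: 10 overlapping patterns in which every neuron belongs to
    // exactly 2 of them (one neuron per pair of CAs), with a Hopfield-style
    // weight matrix built by the usual outer-product rule. Illustrative only.
    public class OverlappingPatterns {
        public static void main(String[] args) {
            int cas = 10;
            int n = cas * (cas - 1) / 2;           // one neuron per pair of CAs: 45 neurons
            int[][] pattern = new int[cas][n];     // pattern[c][i] = 1 if neuron i is in CA c

            int i = 0;
            for (int a = 0; a < cas; a++)
                for (int b = a + 1; b < cas; b++) {
                    pattern[a][i] = 1;             // neuron i belongs to CAs a and b only
                    pattern[b][i] = 1;
                    i++;
                }

            // Standard Hopfield outer-product rule over +/-1 versions of the patterns.
            double[][] w = new double[n][n];
            for (int c = 0; c < cas; c++)
                for (int p = 0; p < n; p++)
                    for (int q = 0; q < n; q++)
                        if (p != q)
                            w[p][q] += (2 * pattern[c][p] - 1) * (2 * pattern[c][q] - 1) / (double) n;

            System.out.println("neurons: " + n + ", CAs per neuron: 2");
        }
    }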
We have also taken inspiration from the brain and then looked for the
computational reasons behind its mechanisms. In the brain, neurons
spontaneously activate; with no external input a neuron will occasionally
fire anyway. Along with the compensatory learning mechanism, this allows
neurons that participate in no CAs and are never activated externally to be
recruited into CAs.
Hierarchy
CAs are the basis of concepts, but the same network should be able to
hold both CAs and relationships between CAs. One type of relationship that
we are currently looking into is the use of CAs as elements in a
hierarchy.
A CA is a pseudo-stable state of active neurons. Related CAs will
contain some of the same neurons. Using a Hopfield net as a basis, we
have constructed a network that has several CAs. There are 4 major
category CAs and 6 property CAs. Three category CAs are
subcategories, and one is the supercategory. The supercategory has
two properties associated with it, each subcategory has one, and one
subcategory has a property that overrides the supercategory's value.
When a subcategory is presented, the supercategory is activated, along
with all the relevant properties. In the case where the property is
overridden, the default value is not on. Thus, a hierarchy has been
constructed that allows default values to be inherited and overridden.
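For clarity, the behaviour the network reproduces can be stated symbolically,
as in the Java sketch below: each subcategory activates its own property plus
the supercategory's defaults, except for the one default it overrides. The
category and property names are invented for illustration, and the sketch
says nothing about how the CAs themselves implement this behaviour.

    // The inheritance-with-override behaviour described above, stated
    // symbolically rather than with neurons. Names are illustrative.
    import java.util.*;

    public class DefaultHierarchy {
        public static void main(String[] args) {
            Set<String> superDefaults = Set.of("propA", "propB");      // supercategory's two properties
            Map<String, String> ownProperty = Map.of(                  // each subcategory's own property
                "sub1", "prop1", "sub2", "prop2", "sub3", "prop3");
            Map<String, String> overrides = Map.of("sub3", "propB");   // sub3's property overrides propB

            for (String s : ownProperty.keySet()) {
                Set<String> active = new TreeSet<>();
                active.add(ownProperty.get(s));
                for (String d : superDefaults)
                    if (!d.equals(overrides.get(s)))    // inherit each default unless overridden
                        active.add(d);
                System.out.println(s + " activates the supercategory and " + active);
            }
        }
    }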
This work is in its infancy. The next step is to see if this network
can be learned. This will be followed by a range of simulations that
use larger hierarchies.
Psychological Modelling
We are interested in using CAs for psychological modelling. One of
the major reasons is that there is a wide range of connectionist
modelling techniques and few means of comparing them. The
generality of CAs should enable a wide range of tasks to be modelled.
Moreover, these tasks can be modelled using a biologically plausible
system.
We are currently looking into describing priming phenomena. Our hypothesis
is that an activated CA spreads activation to related CAs. This activation
allows the related CA to ignite more easily and more rapidly.
Cognitive Architecture
Eventually, we would like to use CAs as the basis of a cognitive architecture.
The long-term goal of this work is to develop complex AI systems. CAs can
solve the symbol grounding problem, but that does not make them a
Turing-complete system. One means of making them Turing complete is to enable
them to form complex structures and to implement a rule-based system.
Complex structures such as semantic nets are the problem we are currently
working on. We are confident that these structures can be implemented.
We are optimistic that a rule-based system can also be developed. The main
difficulty in developing such a system is solving the variable binding
problem. It is not entirely clear that we can solve this problem, but we
may be able to do so via activation, which would manifest itself as synchronous
firing. Once this is solved, rules can be formed, since rules merely
bind variables to concrete values.
There is clearly a long way to go with CA work. However, progress is
being made, and this progress solves real-world problems. We see no
reason why the ultimate goal of a cognitive architecture and a more
powerful AI system cannot be achieved.