Results on Theory
- There is not yet a well-founded theory of how CAs work, though
  attractor nets are a good start.
- We're hoping that a better theory of CAs will help us solve our
  current problems and, eventually, a host of others.
- One area that seems wide open is the learning rule. We're
  wedded to Hebbian learning, but there is a range of possible
  rules. Our work on compensatory learning leads to a range of
  desirable behaviors, such as CAs spreading into unstimulated
  areas and improved attractor dynamics (a sketch of a
  compensatory rule follows this list).
- We also have a proof that the capacity is greater than that
  given by the standard proof for attractor nets. We're working
  on a manuscript now.
- We've recently solved a problem of CA fractionation that we
  call the Mule Problem, by changing our neural fatigue model
  (a sketch of a fatiguing neuron follows this list). We've
  submitted this work to NCPW, but will probably do a better
  experiment and submit it to Neural Computation.
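
A minimal sketch of one possible compensatory variant of a Hebbian update, to illustrate the learning-rule point above. It assumes compensation is driven by each pre-synaptic neuron's total outgoing weight relative to a target; the function name, the target_total parameter, and all constants are illustrative assumptions, not the rule used in the work described above.

```python
import numpy as np

def compensatory_hebbian_update(weights, pre_active, post_active,
                                learning_rate=0.1, target_total=1.0):
    """Hebbian update scaled by a compensatory factor.

    weights      -- 2D array, weights[i, j] is the synapse from pre i to post j
    pre_active   -- boolean array, which pre-synaptic neurons fired
    post_active  -- boolean array, which post-synaptic neurons fired
    target_total -- desired total outgoing weight per pre-synaptic neuron
                    (illustrative value, not taken from the work above)
    """
    # Total outgoing weight of each pre-synaptic neuron.
    totals = weights.sum(axis=1, keepdims=True)

    # Neurons below the target total learn more strongly; neurons above
    # it learn weakly or even lose weight on coincident firing.
    compensation = 1.0 - totals / target_total

    # Standard Hebbian coincidence term: 1 where both pre and post fired.
    hebb = np.outer(pre_active.astype(float), post_active.astype(float))

    return weights + learning_rate * compensation * hebb
```

Under this assumption, neurons that have accumulated little weight learn quickly, which is one way a CA could recruit, and so spread into, unstimulated areas.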
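
A minimal sketch of a fatiguing leaky integrate-and-fire step, to illustrate what a neural fatigue model involves. The mechanism here (fatigue raising the firing threshold while a neuron keeps firing and recovering while it is silent), along with all names and constants, is an assumption for illustration, not the specific change that solved the Mule Problem.

```python
import numpy as np

def flif_step(activation, fatigue, net_input,
              decay=0.5, threshold=4.0,
              fatigue_increment=0.5, fatigue_recovery=0.25):
    """One update of a fatiguing leaky integrate-and-fire population.

    activation -- current membrane activation of each neuron
    fatigue    -- current fatigue of each neuron (raises its threshold)
    net_input  -- summed weighted input arriving this step
    All constants are illustrative, not values from our model.
    """
    # Leaky integration of the incoming activation.
    activation = decay * activation + net_input

    # A neuron fires when activation exceeds its fatigue-raised threshold.
    fired = activation > (threshold + fatigue)

    # Firing neurons reset and become more fatigued; silent neurons recover.
    activation = np.where(fired, 0.0, activation)
    fatigue = np.where(fired,
                       fatigue + fatigue_increment,
                       np.maximum(0.0, fatigue - fatigue_recovery))

    return activation, fatigue, fired
```

Under this assumption, how quickly fatigue accumulates and recovers controls how long a CA can remain active, which is the kind of behavior a change to the fatigue model would adjust.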