Evidence for these things
- Neurons are a great place to start modelling AI because your
model can be tied to real biological neural behaviour.
- Evidence for fLIF neurons:
- It seems pretty clear that neurons fire.
- Integration is also quite clear.
  - Leak is a little less clear, but there is widespread evidence
    for it.
- Fatigue also has some evidence but is less clear.
  - The nice thing about all of this is that you can take a neuron
    out of a body, send spikes to it, and measure its response.
- Clearly, fLIF is a simplification, but it should be
evaluated on the quality of fit to neural data
and the efficiency of simulation.
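The four fLIF components above (integration, leak, firing, fatigue) can be sketched as a discrete-time simulation. The update rule and all parameter values below are illustrative assumptions for exposition, not the model as fitted to neural data.

```python
# Minimal discrete-time fLIF (fatiguing leaky integrate-and-fire) neuron.
# Parameter values here are illustrative, not fitted to biology.

def simulate_flif(inputs, leak=0.9, threshold=1.0,
                  fatigue_step=0.3, fatigue_decay=0.95):
    """Return the list of time steps at which the neuron spikes."""
    activation = 0.0   # integrated membrane activation
    fatigue = 0.0      # fatigue raises the effective firing threshold
    spikes = []
    for t, x in enumerate(inputs):
        activation = leak * activation + x   # leaky integration of input
        if activation >= threshold + fatigue:
            spikes.append(t)                 # fire
            activation = 0.0                 # reset after a spike
            fatigue += fatigue_step          # firing accumulates fatigue
        else:
            fatigue *= fatigue_decay         # fatigue recovers when silent
    return spikes
```

With a constant input, fatigue progressively spreads the spikes out; setting `fatigue_step=0` recovers a plain LIF neuron that fires at a steady rate.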
- Evidence for CAs:
  - Abeles' work on precise firing patterns in behaving monkeys is
    good evidence; see also Eckhorn et al. 1988.
- Gelbard-Sagiv et al 2008
  - High activity (measured with fMRI) in different brain areas for
    different kinds of words (Pulvermuller 1999).
  - Really solid evidence for CAs is harder to find because you
    cannot record from large numbers of behaving neurons at once.
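The core CA claim can be illustrated with a toy ignition model: binary neurons joined by recurrent excitatory weights, where stimulating part of the assembly activates the rest, and the reverberating activity then sustains itself. The all-to-all weight and threshold values are assumptions chosen to make the example work, not measured quantities.

```python
# Toy cell assembly (CA) ignition with uniform all-to-all excitation.
# Weight and threshold values are illustrative assumptions.

def step(active, n_neurons, weight=0.4, threshold=1.0):
    """One synchronous update: neurons fire if recurrent input from
    the currently active assembly members reaches threshold."""
    drive = len(active) * weight       # total excitation each neuron gets
    if drive >= threshold:
        return set(range(n_neurons))   # the whole assembly ignites
    return set()                       # activity dies out

# Stimulating 3 of 5 neurons gives drive 3 * 0.4 = 1.2 >= 1.0, so the
# full assembly ignites and then sustains itself (5 * 0.4 = 2.0).
state = {0, 1, 2}
for _ in range(3):
    state = step(state, n_neurons=5)
```

Stimulating only 2 neurons gives drive 0.8, below threshold, so the activity dies out: ignition needs a critical fraction of the assembly.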
- Evidence for Hebbian Learning:
- This is also nice because you only need two neurons to
test it.
- However, the effects may be quite long term.
- LTP: Marr 1969; Hubel and Wiesel 1965; Bliss and Lomo 1973 etc.
- STP: Hempel et al 2000; Buonomano 1999.
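Since two neurons suffice to test Hebbian learning, the rule itself fits in a few lines: the weight between a pre- and postsynaptic neuron grows when both are active together and decays slowly otherwise. The learning and decay rates below are illustrative assumptions, not values fitted to the LTP/STP data cited above.

```python
# Two-neuron Hebbian learning sketch: co-activity potentiates the
# synapse (bounded at 1.0); without co-activity it slowly decays.
# Rates are illustrative assumptions, not fitted to LTP/STP data.

def hebbian_update(w, pre_active, post_active, rate=0.1, decay=0.01):
    """Return the updated synaptic weight for one time step."""
    if pre_active and post_active:
        return w + rate * (1.0 - w)   # potentiation, saturating at 1.0
    return w * (1.0 - decay)          # slow passive decay

w = 0.2
for _ in range(20):                   # repeated co-activation
    w = hebbian_update(w, True, True)
```

After repeated co-activation the weight approaches its ceiling; a single step without co-activity only nudges it down, matching the idea that potentiation builds quickly and fades slowly.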
- There is also an emergent level of evidence.
  - That is, systems built from fLIF neurons, Hebbian learning, and CAs
    should behave the way people do.
  - So, fit to psychological behaviour is another form of evidence
    that should be used.