Learning in Hot Coffee
- In particular, the big problem is learning.
- We added learning to the hot coffee experiment by presenting
triples with no associated synapses.
- We collected firing data and did the learning off NEST.
- That is, we called out to Python to calculate new weights.
- We used compensatory Hebbian learning rules.
- With the initial static weights, convergence took a long time
for big ranges: 13920 ms for the largest, hot coke.
- The learned weights, however, all converged within 300 ms.
- That's a psychologically realistic time.
- I think the variance in the learned weights lets the edge
conditions resolve more quickly.
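
The offline update described above (collect firing data, compute new weights in Python) can be sketched roughly as follows. This is a hypothetical illustration, not the experiment's actual rule: the function name, the learning rate `eta`, and the normalization scheme are all assumptions. The Hebbian term strengthens co-active pre/post pairs, and a compensatory rescaling keeps each post-synaptic neuron's total incoming weight constant, so potentiation of one synapse is paid for by depression of the others.

```python
import numpy as np

def compensatory_hebbian_update(weights, pre_rates, post_rates, eta=0.1):
    """One offline weight update from recorded firing rates.

    weights    : (n_pre, n_post) synaptic weight matrix
    pre_rates  : (n_pre,) firing rates of presynaptic neurons
    post_rates : (n_post,) firing rates of postsynaptic neurons

    Hypothetical sketch of a compensatory Hebbian rule, not the
    exact rule used in the experiment.
    """
    # Hebbian growth: co-active pairs get stronger.
    w = weights + eta * np.outer(pre_rates, post_rates)
    # Compensation: rescale each column so the summed incoming
    # weight per postsynaptic neuron is unchanged.
    target = weights.sum(axis=0)
    current = w.sum(axis=0)
    current[current == 0] = 1.0  # avoid division by zero
    return w * (target / current)

# Toy usage: two presynaptic and two postsynaptic neurons.
w0 = np.array([[0.5, 0.5],
               [0.5, 0.5]])
pre = np.array([1.0, 0.0])   # only the first pre neuron fired
post = np.array([1.0, 0.0])  # only the first post neuron fired
w1 = compensatory_hebbian_update(w0, pre, post)
```

In this toy run the active synapse (pre 0 to post 0) is potentiated while the other synapse onto the same postsynaptic neuron is depressed, and the total incoming weight per neuron stays fixed, which is the compensatory part of the rule.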