Natural Kinds
- All of those things worked with nets where the neurons were
activated by the environment.
- If we're really going to get neurons to learn the way we do,
the neurons have to fire on their own.
- We changed our model so that hypo-fatigued neurons, those whose fatigue had fallen below a floor, fired spontaneously.
- We could then work with neurons that weren't externally stimulated.
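The hypo-fatigue idea can be illustrated with a minimal sketch. This is not the published model; the leak, fatigue increments and threshold are all illustrative constants. The only point it shows is that when fatigue decays below zero, the neuron fires with no external input at all.

```python
# Minimal sketch of a fatigue-driven spontaneously firing neuron
# (illustrative constants, not the actual model parameters).
FATIGUE_UP, FATIGUE_DOWN, THRESHOLD = 1.0, 0.25, 2.0

def step(activation, fatigue, external_input):
    """One time step: leaky integration plus a fatigue check."""
    activation = 0.5 * activation + external_input
    # Fire on strong activation, or spontaneously when hypo-fatigued.
    fired = (activation - fatigue) >= THRESHOLD or fatigue < 0.0
    if fired:
        activation = 0.0
        fatigue += FATIGUE_UP     # firing builds fatigue back up
    else:
        fatigue -= FATIGUE_DOWN   # idling drains fatigue
    return activation, fatigue, fired

# With zero external input, fatigue drains until the neuron fires on its own.
a, f = 0.0, 1.0
spikes = []
for t in range(20):
    a, f, fired = step(a, f, 0.0)
    spikes.append(fired)
print(sum(spikes))  # the neuron fires periodically with no stimulation
```

The fatigue variable acts as a slow oscillator: each spontaneous spike raises it, and the steady decay brings the neuron back to firing a few steps later.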
- We could then use topologies like:
  [figure: network topology]
- We used one like this and one with four subnets to categorise the Iris, Car and Yeast datasets, again from the UCI repository.
- We stimulated the input neurons and the correct category during training.
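One way a training presentation might look is sketched below. This is our guess at the setup, not the published code: each continuous feature (scaled to [0, 1]) is rate-coded onto a pool of input neurons, and the answer neurons for the correct category are driven for the whole presentation. The pool size, step count and `present` helper are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def present(features, label, n_per_feature=10, n_categories=3, steps=50):
    """Build external spike trains of shape (steps, n_inputs + n_answers)."""
    feats = np.asarray(features, dtype=float)   # features scaled to [0, 1]
    rates = np.repeat(feats, n_per_feature)     # one neuron pool per feature
    inputs = rng.random((steps, rates.size)) < rates  # Bernoulli rate coding
    answers = np.zeros((steps, n_categories), dtype=bool)
    answers[:, label] = True                    # drive the correct category
    return np.hstack([inputs, answers])

# A four-feature example (Iris-like), correct category 1.
trains = present([0.2, 0.9, 0.5, 0.1], label=1)
print(trains.shape)  # 4 features x 10 neurons + 3 answer neurons
```

Rate coding is only one choice here; any encoding that turns feature magnitudes into firing probabilities would fit the same training scheme.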
- We measured performance either by counting the spikes in the answer neurons or by comparing the firing behaviour of trained nets to test nets using Pearson correlation.
- Pearson did better, but both performed reasonably compared to standard algorithms.
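The Pearson-based measurement can be sketched as follows. This is a simplified illustration, not the actual evaluation code: record a per-neuron spike-count vector for each trained category, then assign a test net's spike-count vector to the category whose training vector it correlates with most strongly.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two spike-count vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.corrcoef(a, b)[0, 1]

def classify(test_counts, training_counts_by_category):
    """Pick the category whose training counts best match the test counts."""
    scores = {cat: pearson(test_counts, counts)
              for cat, counts in training_counts_by_category.items()}
    return max(scores, key=scores.get)

# Toy spike-count vectors (made up for illustration).
training = {
    "setosa":     [9, 1, 0, 8, 2],
    "versicolor": [2, 7, 6, 1, 9],
}
print(classify([8, 2, 1, 7, 3], training))  # correlates best with "setosa"
```

Correlating whole firing patterns, rather than just counting answer-neuron spikes, uses information from every neuron, which may explain why Pearson did better.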
- Note that all of these tasks are natural kinds; they don't involve Boolean operations.
- The XOR problem still pops up. We can solve it, but we have to throw a lot of neurons at it, and we more or less need to know it's coming.