Single Net Categorisation Results
- Ten years or so ago, we used single net topologies to learn categories
(Information Retrieval and the congressional voting task).
This had the advantage that all the relevant neurons were stimulated
directly by the environment.
- So each congressman was represented by a binary feature for the way they
voted on each bill, plus their party.
- We represented each of these feature values by 10 neurons; if a congressman
abstained, those neurons were simply not turned on (a sketch of this
encoding follows the list).
- The net learned, and new items were then tested by turning on only the
neurons for the way they voted on the bills.
- It all worked pretty well (93% accuracy).
- We could also represent hierarchical categories.
- The columns represent features: Cat is 0-3-4-5-8, Dog is 1-3-4-6-9,
Rat is 2-3-4-7-9, and Mammal is 3-4-9, so all three animals share
features 3 and 4 with Mammal (see the feature-set sketch after the list).
- The compensatory mechanism prevented Mammal from taking over (the last
sketch after the list illustrates the general idea).
- The problem was that activation had no way to spread beyond the neurons
that were directly stimulated.
- We tried randomly stimulating neurons, but that didn't work particularly
well.
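
A minimal sketch of the input encoding described above, assuming one 10-neuron
group per feature value (a yes group and a no group per bill, plus a group per
party). The group size of 10, the handling of abstentions, and turning the
party off at test time come from the notes; the function names and data layout
are illustrative assumptions.

```python
import numpy as np

NEURONS_PER_VALUE = 10  # each feature value is represented by 10 neurons


def encode_vote(vote):
    """Encode one bill vote as a yes-group and a no-group of 10 neurons each.

    'y' turns on the yes group, 'n' the no group; an abstention ('?')
    leaves both groups off, as described above.
    """
    yes = np.full(NEURONS_PER_VALUE, 1.0 if vote == 'y' else 0.0)
    no = np.full(NEURONS_PER_VALUE, 1.0 if vote == 'n' else 0.0)
    return np.concatenate([yes, no])


def encode_congressman(votes, party=None):
    """Encode a congressman's bill votes plus, during training, their party."""
    groups = [encode_vote(v) for v in votes]
    for p in ('democrat', 'republican'):
        groups.append(np.full(NEURONS_PER_VALUE, 1.0 if party == p else 0.0))
    return np.concatenate(groups)


# A training item with the party neurons on, and a test item with them off,
# so at test time the net only sees the way they voted on the bills.
votes = ['y', 'n', '?', 'y']                      # '?' = abstained, nothing turned on
train_pattern = encode_congressman(votes, party='democrat')
test_pattern = encode_congressman(votes)          # party neurons stay off
```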
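
A small sketch of the hierarchical feature sets listed above, checking how much
each animal category overlaps with Mammal. The feature columns are exactly the
ones given in the notes; the 10-neurons-per-feature layout is carried over from
the voting encoding, and the helper name is an assumption.

```python
import numpy as np

# Feature columns for each category, exactly as listed above.
CATEGORY_FEATURES = {
    'Cat':    {0, 3, 4, 5, 8},
    'Dog':    {1, 3, 4, 6, 9},
    'Rat':    {2, 3, 4, 7, 9},
    'Mammal': {3, 4, 9},
}


def feature_pattern(features, n_features=10, neurons_per_feature=10):
    """Turn a set of feature columns into a flat binary neuron pattern,
    assuming 10 neurons per feature as in the voting encoding."""
    pattern = np.zeros((n_features, neurons_per_feature))
    pattern[sorted(features)] = 1.0      # turn on every neuron in each active column
    return pattern.ravel()


for name, feats in CATEGORY_FEATURES.items():
    shared = feats & CATEGORY_FEATURES['Mammal']
    print(f"{name}: features {sorted(feats)}, shared with Mammal: {sorted(shared)}")
```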
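
The notes do not spell out the compensatory mechanism, so the following is only
an illustration of the general idea under an assumed rule: a Hebbian increase
that is scaled down as a neuron's total outgoing strength approaches a target,
so neurons active in many patterns (such as the shared Mammal features) stop
accumulating weight. The function name, the target-strength parameter, and the
exact form of the scaling are assumptions, not the rule used in these
experiments.

```python
import numpy as np


def compensatory_hebbian_update(weights, pre, post,
                                learning_rate=0.1, target_strength=5.0):
    """One compensatory Hebbian step (an assumed, illustrative rule).

    The Hebbian increase for each synapse is scaled by how much headroom the
    pre-synaptic neuron has below a target total outgoing strength, so neurons
    that already carry a lot of weight gain little more.
    """
    total_out = weights.sum(axis=1, keepdims=True)                 # per-neuron outgoing strength
    headroom = np.clip(1.0 - total_out / target_strength, 0.0, 1.0)
    hebb = np.outer(pre, post)                                     # co-activation term
    return weights + learning_rate * headroom * hebb


# Neuron 0 already has most of its target strength (like a shared Mammal
# feature neuron); neuron 1 has none (like a category-specific neuron).
w = np.zeros((4, 4))
w[0] = np.array([2.0, 1.0, 1.0, 0.5])          # total outgoing strength 4.5 of 5.0
pre = np.array([1.0, 1.0, 0.0, 0.0])
post = np.array([0.0, 0.0, 1.0, 1.0])

w_new = compensatory_hebbian_update(w, pre, post)
print(w_new[0] - w[0])   # small increase: neuron 0 is nearly at the target
print(w_new[1] - w[1])   # full increase: neuron 1 still has headroom
```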