Deep Belief Nets
- As you may know, the deep belief net is currently very popular.
- They are being applied to all kinds of machine learning
tasks, for example speech recognition.
- There are some differences between these nets and ours.
- Boltzmann machine units aren't FLIF neurons (but they're pretty
close): a unit switches on stochastically, with probability given
by a sigmoid of its total input (see the first sketch after this
list).
- The training algorithm is different.
- They train greedily, one layer at a time, typically using
contrastive divergence (see the training sketch below).
- Still, it seems this is a good way forward.
- Also, instead of training for one task, we might be able to
train for a wide range of tasks.
- In essence, this is a large part of the problem: how do you
train a whole brain to learn semantics?
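
The Boltzmann machine point above deserves a concrete picture. A
minimal sketch of the unit model, with illustrative function names of
my own (not any particular library's API): each unit switches on with
probability sigmoid(bias + weighted input), keeping no membrane
potential, leak, or fatigue between steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, W, b_h):
    # Each hidden unit turns on with probability
    # sigmoid(b_j + sum_i v_i * W[i, j]); the unit carries no
    # internal state between steps, unlike a FLIF neuron.
    p_on = sigmoid(b_h + v @ W)
    return (rng.random(p_on.shape) < p_on).astype(float)
```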
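
And a sketch of the greedy, layer-at-a-time training mentioned above,
reusing sigmoid and rng from the previous block. It takes one step of
contrastive divergence (CD-1) per example; the learning rate, epoch
count, and overall structure are illustrative assumptions, not the
exact recipe from any specific paper.

```python
def sample(p):
    # Bernoulli draw from unit-wise on-probabilities.
    return (rng.random(p.shape) < p).astype(float)

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    # Train one restricted Boltzmann machine with CD-1.
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)  # visible biases
    b_h = np.zeros(n_hidden)   # hidden biases
    for _ in range(epochs):
        for v0 in data:
            # Positive phase: drive the hidden units from the data.
            ph0 = sigmoid(b_h + v0 @ W)
            h0 = sample(ph0)
            # Negative phase: one Gibbs step back down and up.
            pv1 = sigmoid(b_v + h0 @ W.T)
            v1 = sample(pv1)
            ph1 = sigmoid(b_h + v1 @ W)
            # Update: data correlations minus reconstruction
            # correlations.
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
            b_v += lr * (v0 - v1)
            b_h += lr * (ph0 - ph1)
    return W, b_v, b_h

def train_dbn(data, layer_sizes):
    # Greedy layer-at-a-time training: each RBM learns to model
    # the hidden activities of the layer below it.
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, b_v, b_h = train_rbm(x, n_hidden)
        layers.append((W, b_v, b_h))
        x = sigmoid(b_h + x @ W)  # inputs for the next layer up
    return layers
```

For example, train_dbn(some_binary_data, [256, 64]) would stack two
RBMs, each trained only after the one below it has finished.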