Multi-Layer Perceptrons
- Multi-Layer Perceptrons (MLPs) were a big innovation in the 80s.
- A perceptron just takes the inputs, multiplies each one by a weight,
sums them, passes the sum through an activation function, and sends out
the result as its output.
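
- As a rough sketch (the input values, weights, and bias below are made
up purely for illustration), a single perceptron with a step activation
might look like this in Python:

```python
# Minimal perceptron sketch: weighted sum of inputs, then an activation.
def step(z):
    return 1 if z >= 0 else 0

def perceptron(inputs, weights, bias):
    # Multiply each input by its weight, sum, and apply the activation.
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step(weighted_sum)

# Example call with made-up numbers: 0.4*1 - 0.2*0 + 0.7*1 - 0.5 = 0.6 -> 1
print(perceptron(inputs=[1, 0, 1], weights=[0.4, -0.2, 0.7], bias=-0.5))
```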

- So, they're computationally really simple, and they're closely
related to McCulloch-Pitts neurons.
- You can use any activation function, though a sigmoid is common. The unit
behaves much more like a discrete McCulloch-Pitts neuron if you use a step
function, as the comparison below shows.
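
- A quick sketch comparing the two activation choices mentioned above,
the smooth sigmoid and the hard step (the sample z values are arbitrary):

```python
import math

def sigmoid(z):
    # Smooth, differentiable squashing of z into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def step(z):
    # Hard threshold, as in a discrete McCulloch-Pitts style unit.
    return 1 if z >= 0 else 0

for z in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"z={z:+.1f}  sigmoid={sigmoid(z):.3f}  step={step(z)}")
```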

- You can have a layer of these units, all receiving the same inputs but
with different weights. A single layer still doesn't give you much more
power, though: it can only separate classes with a linear boundary.
- If you have multiple layers (typically two, i.e. one hidden layer plus
an output layer), you get a multi-layer perceptron. This can approximate
a huge range of functions, any continuous function on a compact domain,
to an arbitrary degree of precision.
- The cool thing is that, using backpropagation of error (gradient descent
on the output error), you can learn the weights from training data; see
the sketch after this list.
- If you do it right (e.g. you avoid overfitting), the learned weights also
generalise to unseen test data.
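
- A minimal end-to-end sketch, not from the notes themselves: a two-layer
MLP (one hidden layer) trained with backpropagation on a made-up
"inside the unit circle" dataset, then checked on held-out test points.
The layer sizes, learning rate, and dataset are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: label a 2-D point 1 if it lies inside the unit circle, else 0.
X = rng.uniform(-1.5, 1.5, size=(400, 2))
y = (np.sum(X**2, axis=1) < 1.0).astype(float).reshape(-1, 1)
X_train, y_train = X[:300], y[:300]   # training data
X_test, y_test = X[300:], y[300:]     # held-out test data

# One hidden layer of 8 units and one output unit (sizes chosen arbitrarily).
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 1.0
for epoch in range(5000):
    # Forward pass: weighted sums pushed through the sigmoid at each layer.
    h = sigmoid(X_train @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output

    # Backward pass: propagate the output error back through the layers
    # (gradients of the mean squared error w.r.t. each layer's weights).
    err_out = (out - y_train) * out * (1 - out)
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ err_out / len(X_train); b2 -= lr * err_out.mean(axis=0)
    W1 -= lr * X_train.T @ err_hid / len(X_train); b1 -= lr * err_hid.mean(axis=0)

# If training worked, the same weights should classify unseen test points well.
test_pred = sigmoid(sigmoid(X_test @ W1 + b1) @ W2 + b2) > 0.5
print("test accuracy:", (test_pred == (y_test > 0.5)).mean())
```

- Note the single layer of weights W1 alone could not learn this boundary,
since "inside the circle" is not linearly separable; the hidden layer is
what lets the network approximate the curved boundary.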