Conclusion
- This lecture has illustrated support vectors in two dimensions; everything
carries over to higher dimensions.
- Take-Home Points
- Learning algorithms can be parametric or non-parametric.
- Support Vector Machines make use of support vectors
to generate maximum margin separators.
    - If the data admit no good linear separator, the kernel trick
    implicitly projects them into a higher-dimensional space where
    a linear separator may exist.
- SVMs are a framework using linear separators and kernels.
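The kernel trick mentioned above can be made concrete with a small sketch. The example below (my own illustration, not from the lecture) shows the key identity behind it: a degree-2 polynomial kernel evaluated on 1-D inputs equals the dot product of explicit 3-D feature vectors, so an SVM can work in the higher-dimensional space without ever constructing it.

```python
import math

def poly_kernel(x, z):
    # Degree-2 polynomial kernel: k(x, z) = (x*z + 1)**2,
    # computed directly on the original 1-D inputs.
    return (x * z + 1) ** 2

def feature_map(x):
    # Explicit projection phi(x) = (x^2, sqrt(2)*x, 1).
    # Dot products of these 3-D vectors equal the kernel value,
    # which is what lets SVMs avoid forming phi(x) at all.
    return (x * x, math.sqrt(2) * x, 1.0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Verify k(x, z) == phi(x) . phi(z) on a few points.
for x, z in [(0.5, 2.0), (-1.0, 3.0), (0.0, 4.0)]:
    assert abs(poly_kernel(x, z) - dot(feature_map(x), feature_map(z))) < 1e-9
```

Because the optimisation in an SVM touches the data only through such inner products, swapping the dot product for a kernel function gives a linear separator in the projected space at no extra dimensional cost.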
- Reading: for this week, read Russell and Norvig's "Learning from Examples"
chapter, Section 9 (pp. 755-758).
- Reading: for next week, read the Deep Learning Wiki.