Wednesday, March 10, 2010

Useful mathy stuff?

I ran across this bit in Wikipedia just now, and it looks like it might be useful for the dimensional-extrapolation thing I was speculating about recently (this post is more for my benefit than yours).

"In machine learning, the kernel trick is a method for using a linear classifier algorithm to solve a non-linear problem by mapping the original non-linear observations into a higher-dimensional space, where the linear classifier is subsequently used; this makes a linear classification in the new space equivalent to non-linear classification in the original space.

This is done using Mercer's theorem, which states that any continuous, symmetric, positive semi-definite kernel function K(x, y) can be expressed as a dot product in a high-dimensional space."
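To make that last sentence concrete for myself (this example is mine, not from the article): the degree-2 polynomial kernel K(x, y) = (x·y)² on 2-D inputs is exactly the ordinary dot product of the points after mapping them into 3-D with φ(x) = (x₁², √2·x₁x₂, x₂²). A quick sketch in pure Python:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def phi(x):
    # Explicit feature map into 3-D for the degree-2 polynomial kernel:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def poly_kernel(x, y):
    # Kernel computed directly in the original 2-D space
    return dot(x, y) ** 2

x, y = (1.0, 2.0), (3.0, 4.0)

# Both routes give the same number: the kernel in the low-dimensional
# space equals the dot product in the high-dimensional feature space.
print(poly_kernel(x, y))        # 121.0
print(dot(phi(x), phi(y)))      # 121.0
```

So a linear classifier working with dot products of φ(x) never has to compute φ explicitly; it just calls the kernel on the original points, which is the whole trick.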
