


Approximation methods in statistical learning theory

Belabbas, A (Harvard)
Tuesday 05 February 2008, 11:00-12:00

Seminar Room 2, Newton Institute Gatehouse


Spectral methods are of fundamental importance in statistical learning, as they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. Using traditional methods, such an approximation can be obtained with computational complexity that scales as the cube of the number of training examples. For the growing number of applications dealing with very large or high-dimensional data sets, however, these techniques are too costly. A known alternative is the Nyström extension from finite element methods. While its application to machine learning has previously been suggested in the literature, we introduce here what is, to the best of our knowledge, the first randomized algorithm of this type to yield a relative approximation error bound. Our results follow from a new class of algorithms for the approximation of matrix products, which reveal connections between classical linear algebraic quantities such as Schur complements and techniques from theoretical computer science such as the notion of volume sampling.
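The core object of the abstract — a low-rank approximation of a positive-definite kernel via the Nyström extension — can be sketched as follows. This is the classical Nyström construction with uniform landmark sampling, shown only for illustration; it is not the speaker's randomized relative-error algorithm (which uses a more refined sampling scheme), and all function and variable names here are illustrative.

```python
import numpy as np

def nystrom_approx(K, m, rng=None):
    """Rank-<=m Nystrom approximation of a PSD matrix K.

    Samples m landmark columns uniformly at random and returns
    C @ pinv(W) @ C.T, where C = K[:, idx] and W = K[idx][:, idx].
    Cost is O(n * m^2 + m^3) instead of the O(n^3) of a full
    eigendecomposition. The residual K - C @ pinv(W) @ C.T is the
    Schur complement of W in K (cf. the Schur-complement connection
    mentioned in the abstract).
    """
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)  # uniform landmark sampling
    C = K[:, idx]      # n x m block of sampled columns
    W = C[idx, :]      # m x m intersection block K[idx][:, idx]
    return C @ np.linalg.pinv(W) @ C.T

# Usage: approximate an RBF kernel on random points.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq / 2.0)                       # 200 x 200 PSD kernel matrix
K_hat = nystrom_approx(K, 50, rng=1)        # rank-<=50 approximation
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)  # relative Frobenius error
```

Because the residual is a PSD Schur complement, the relative Frobenius error is always at most 1, and taking all n columns as landmarks recovers K exactly; the quality of the bound for small m is exactly what sampling schemes such as volume sampling are designed to improve.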


