
Seminar

Optimal prediction from relevant components

Helland, IS (Oslo)
Friday 27 June 2008, 09:00-09:20

Seminar Room 1, Newton Institute

Abstract

In Helland (1990), the partial least squares (PLS) regression model was formulated in terms of an algorithm on the parameters of the model. A version of this parametric algorithm has recently been used by several authors in connection with determining the central subspace and the central mean subspace of sufficient model reduction, as a method that avoids matrix inversion. A crucial feature of the parametric PLS model is that the algorithm stops after m steps, where m is the number of relevant components. The corresponding sample algorithm will not usually stop after m steps, implying that the ordinary PLS estimates fall outside the parameter space and thus cannot be maximally efficient.
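As background, the following is a minimal numerical sketch of the ordinary sample PLS1 estimator in its Krylov-space form, which is the form in which only a small m-by-m matrix is inverted rather than the full p-by-p X-covariance matrix. It is written in Python with NumPy; the function name pls_krylov and all coding details are illustrative assumptions, not taken from the talk, and this is the ordinary sample estimator, not the parametric algorithm discussed above.

    import numpy as np

    def pls_krylov(X, y, m):
        # Sample PLS1 regression coefficients with m components, in the
        # Krylov-space form b_m = R_m (R_m' S R_m)^{-1} R_m' s, where
        # S is the sample X-covariance, s the sample X-y covariance and
        # R_m = [s, S s, ..., S^{m-1} s]. Only the small m x m matrix
        # R_m' S R_m is solved; S itself is never inverted.
        n, p = X.shape
        Xc = X - X.mean(axis=0)        # centre the predictors
        yc = y - y.mean()              # centre the response
        S = Xc.T @ Xc / n
        s = Xc.T @ yc / n
        R = np.empty((p, m))
        v = s.copy()
        for j in range(m):             # build the Krylov matrix column by column
            R[:, j] = v                # (for large m these columns should be
            v = S @ v                  # orthogonalised; omitted for clarity)
        M = R.T @ S @ R                # m x m, the only system that is solved
        return R @ np.linalg.solve(M, R.T @ s)

With X of shape (n, p) and response y, b = pls_krylov(X, y, m=2) gives the fitted coefficient vector, and predictions at new points are (Xnew - X.mean(0)) @ b + y.mean().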

We approach this problem using group theory. The X-covariance matrix is endowed with a rotation group, and in addition the regression coefficients upon the X-principal components are endowed with scale groups. This gives a transitive group on each subspace corresponding to m relevant components; more precisely, these subspaces are the orbits of the group. The ordinary PLS predictor is equivariant under this group. It is a known fact that in such situations the best equivariant estimator equals the Bayes estimator when the prior is taken as the invariant measure of the group. This Bayes estimator is found by an MCMC method and is verified to be better than the ordinary PLS predictor.
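For reference, the known fact appealed to here is a standard result from invariant decision theory (the notation below is ours, not the speaker's): when a group G acts transitively on the parameter space and the loss is invariant, the minimum-risk equivariant estimator coincides with the generalized Bayes estimator under the invariant (Haar) measure ν of G as prior. Under squared-error loss this Bayes estimator is the posterior mean,

$$ \hat\beta_{\mathrm{eq}}(x) \;=\; \hat\beta_{\mathrm{Bayes}}(x) \;=\; \frac{\int \beta \, f(x \mid \beta)\, d\nu(\beta)}{\int f(x \mid \beta)\, d\nu(\beta)}, $$

and it is this posterior mean that the MCMC method approximates by averaging draws from the posterior.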

Presentation

[pdf]

Audio

MP3

Video

