Information-based methods in dynamic learning
Seminar Room 1, Newton Institute
Abstract: The history of information/entropy in learning due to Blackwell, Rényi, Lindley and others is sketched. Using results of DeGroot, with new proofs, we arrive at a general class of information functions which gives "expected" learning in the Bayes sense. It is shown how this is intimately connected with the theory of majorization: learning means a more peaked distribution in a majorization sense. Counter-examples show that in some real situations it is possible to un-learn, in the sense of having a less peaked posterior than prior. This does not happen in the standard Gaussian case, but it does in cases such as the Beta-mixed binomial. Applications are made to experimental design. For designs for non-linear and dynamic systems, an idea of "local learning" is defined, in which the above theory is applied locally. Some connections with ideas of "active learning" in machine learning are drawn.
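The "un-learning" phenomenon mentioned in the abstract can be illustrated with a minimal sketch. The parameters below, and the use of variance as a proxy for peakedness, are my own illustrative choices, not the speaker's; the point is only that in the conjugate Beta-binomial model a surprising observation can leave the posterior more spread out than the prior.

```python
# Illustrative sketch (my own example, not from the talk): in the
# Beta-binomial model, one surprising observation can produce a
# posterior with *larger* variance than the prior -- "un-learning"
# in the variance sense of peakedness.

def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# A prior concentrated near p = 0 (parameters chosen for illustration).
a, b = 0.5, 5.0
prior_var = beta_var(a, b)

# Observe one success (k = 1 out of n = 1): conjugate update a -> a + 1.
post_var = beta_var(a + 1.0, b)

print(f"prior var = {prior_var:.5f}, posterior var = {post_var:.5f}")
# The posterior is more spread out than the prior: the unexpected
# success pulls mass away from the prior's sharp peak near zero.
assert post_var > prior_var
```

In the standard Gaussian case with known variance, by contrast, the posterior variance shrinks deterministically regardless of the data, so this effect cannot occur there.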