EBTL: Effective Bayesian Transfer Learning

A project awarded under DARPA IPTO BAA 05-29, 9/1/05-8/31/08.

Investigators

UC Berkeley
Stuart Russell (PI)
Peter Bartlett
Michael Jordan

MIT
Leslie Kaelbling (Subcontract PI)
Tommi Jaakkola
Tomas Lozano-Perez

Oregon State
Alan Fern (Subcontract PI)
Tom Dietterich
Prasad Tadepalli

Stanford
Andrew Ng (Subcontract PI)
Daphne Koller
Sebastian Thrun


Project Summary

Transfer learning is what happens when someone finds it much easier to learn to play chess having already learned to play checkers; or to recognize tables having already learned to recognize chairs; or to learn Spanish having already learned Italian. Achieving significant levels of transfer learning across tasks -- that is, achieving cumulative learning -- is the central problem facing machine learning.

The EBTL project pursues a technical unification of two previously disjoint areas of research: knowledge-intensive learning in the logical tradition and hierarchical Bayesian learning in the probabilistic tradition. The unification applies Bayesian learning methods with strong prior knowledge represented in an expressive first-order probabilistic language.
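
As a minimal sketch of the hierarchical Bayesian mechanism that makes such transfer possible (the notation below is illustrative, not the project's own): related tasks share a hyperparameter, so data from source tasks sharpens the prior for a new task.

    % Hierarchical Bayesian transfer: K related tasks share a hyperparameter alpha.
    \begin{align*}
      \alpha &\sim p(\alpha)
        && \text{shared hyperprior}\\
      \theta_k \mid \alpha &\sim p(\theta \mid \alpha), \qquad k = 1, \dots, K
        && \text{per-task parameters}\\
      D_k \mid \theta_k &\sim p(D_k \mid \theta_k)
        && \text{per-task data}\\
      p(\theta_K \mid D_{1:K-1}) &= \int p(\theta_K \mid \alpha)\, p(\alpha \mid D_{1:K-1})\, d\alpha
        && \text{transferred prior for task } K
    \end{align*}

The last line is the transfer step: experience with the source tasks is compressed into a posterior over the shared hyperparameter, which then serves as an informed prior for the new task.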

The approach applies not just to learning declarative knowledge, but also to learning decision-related quantities such as reward functions, value functions, policies, and task hierarchies. This allows the same powerful transfer methods to generalize from one task environment to task instances with different initial states, objects, goals, and physical laws.
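
For concreteness, here is a minimal sketch of how a hierarchical prior of this kind might transfer a reward function across related tasks, assuming linear rewards with Gaussian priors; the function names and data are hypothetical illustrations, not the project's code or results.

    # Minimal sketch (not the project's code): hierarchical Bayesian transfer
    # of a linear reward function R(s) = w . phi(s) across related tasks.
    # Assumes per-task weights w_k ~ Normal(mu, diag(tau^2)) with a shared
    # hyperprior fitted empirically from the source tasks.
    import numpy as np

    def fit_hyperprior(source_weights):
        """Empirical-Bayes estimate of the shared hyperprior (mu, tau)
        from reward weights already learned on source tasks."""
        W = np.stack(source_weights)      # shape: (num_tasks, num_features)
        mu = W.mean(axis=0)               # shared mean reward weights
        tau = W.std(axis=0) + 1e-6        # shared per-feature spread
        return mu, tau

    def posterior_weights(mu, tau, Phi, returns, noise=1.0):
        """Posterior mean of the new task's reward weights: Bayesian linear
        regression combining the transferred prior N(mu, diag(tau^2)) with
        the new task's observed (features, return) pairs."""
        prior_prec = np.diag(1.0 / tau**2)
        A = prior_prec + Phi.T @ Phi / noise**2
        b = prior_prec @ mu + Phi.T @ returns / noise**2
        return np.linalg.solve(A, b)

    # Usage: weights from three source tasks inform a data-poor new task.
    rng = np.random.default_rng(0)
    source = [rng.normal([1.0, -0.5], 0.1) for _ in range(3)]
    mu, tau = fit_hyperprior(source)
    Phi = rng.normal(size=(5, 2))         # a few observations in the new task
    returns = Phi @ np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=5)
    w_new = posterior_weights(mu, tau, Phi, returns)

The point of the design is that the source tasks never share data with the target task directly; everything they contribute is summarized in the hyperprior (mu, tau).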

Our theory of transfer learning will be tested on real-time strategy games and on simulated object manipulation and perception.

