MIT: Leslie Kaelbling (Subcontract PI), Tommi Jaakkola, Tomas Lozano-Perez
Oregon State: Alan Fern (Subcontract PI), Tom Dietterich, Prasad Tadepalli
Stanford: Andrew Ng (Subcontract PI), Daphne Koller, Sebastian Thrun
The EBTL project involves a technical unification of two previously disjoint areas of research: knowledge-intensive learning in the logical tradition and hierarchical Bayesian learning in the probabilistic tradition. The unification applies Bayesian learning methods with strong prior knowledge represented in an expressive first-order probabilistic language.
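As a generic schematic (an illustration of the hierarchical Bayesian view of transfer, not the project's specific model), task-specific parameters are drawn from a shared hyperprior; evidence from previously solved tasks sharpens the posterior over the hyperparameters, which then serves as an informed prior for a new task:

$$
\alpha \sim P(\alpha), \qquad
\theta_i \sim P(\theta \mid \alpha), \qquad
D_i \sim P(D \mid \theta_i), \quad i = 1, \dots, n
$$
$$
P(\theta_{\text{new}} \mid D_{1:n}) \;=\; \int P(\theta_{\text{new}} \mid \alpha)\, P(\alpha \mid D_{1:n})\, d\alpha .
$$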
The approach applies not just to learning declarative knowledge, but also to learning decision-related quantities such as reward functions, value functions, policies, and task hierarchies. This allows the same powerful transfer methods to generalize from one task environment to other task instances with different initial states, objects, goals, and physical laws.
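As a minimal sketch of this idea (hypothetical code, not drawn from the project; all names are illustrative), the snippet below shows how linear reward-function weights fit on several source tasks can define a Gaussian prior that regularizes the estimate for a new task with only a few observations:

```python
import numpy as np

def fit_task_weights(features, rewards, prior_mean, prior_var, noise_var=1.0):
    """MAP estimate of linear reward weights under a Gaussian prior."""
    d = features.shape[1]
    A = features.T @ features / noise_var + np.eye(d) / prior_var
    b = features.T @ rewards / noise_var + prior_mean / prior_var
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
true_mean = np.array([1.0, -2.0, 0.5])   # structure shared across tasks
d = true_mean.size

# Source tasks: ample data, weak (broad, zero-mean) prior.
source_weights = []
for _ in range(5):
    w_task = true_mean + 0.1 * rng.normal(size=d)    # task-specific variation
    X = rng.normal(size=(200, d))
    y = X @ w_task + 0.1 * rng.normal(size=200)
    source_weights.append(fit_task_weights(X, y, np.zeros(d), prior_var=100.0))

# Transfer: the empirical mean/spread of the source-task weights becomes the
# prior for a new task instance observed only a handful of times.
prior_mean = np.mean(source_weights, axis=0)
prior_var = np.var(source_weights, axis=0).mean() + 1e-3

w_new = true_mean + 0.1 * rng.normal(size=d)
X_new = rng.normal(size=(5, d))
y_new = X_new @ w_new + 0.1 * rng.normal(size=5)

w_transfer = fit_task_weights(X_new, y_new, prior_mean, prior_var)
w_scratch = fit_task_weights(X_new, y_new, np.zeros(d), prior_var=100.0)
print("with transfer prior:", np.round(w_transfer, 2))
print("from scratch:       ", np.round(w_scratch, 2))
```

With only five observations, the transfer-informed estimate stays close to the shared structure while the from-scratch estimate is far noisier; the same mechanism applies, in principle, to value functions, policies, or task-hierarchy parameters.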
Our theory of transfer learning will be tested on real-time strategy games and on simulated object manipulation and perception.