I am interested in developing analytical and computational tools to safely deploy robotic and AI systems in the physical world. My goal is to ensure that autonomous systems such as self-driving cars, delivery drones, or home robots can operate and learn in the open while satisfying safety constraints at all times. This requires a rapprochement between the powerful data-driven methods in machine learning and the strong analytic foundations of control theory. Further, these systems will often need to reason about people's beliefs and intentions, acknowledging and leveraging human-machine interaction to guarantee safety.
Specifically, my research focuses on safe learning for robotics, safety in multi-agent systems, and safe human-robot interaction.
Bio in a nutshell
Before coming to Berkeley I received my B.S./M.S. degree in Electrical Engineering at the Universidad Politécnica de Madrid in Spain and a Master's in Aeronautics at Cranfield University in the UK. I also spent one year working in UAV system design at Aerialtronics. I was then awarded the La Caixa Foundation fellowship to pursue a graduate degree in the United States. At the midpoint of my PhD, I spent six months doing R&D work at Apple.
Here are the main projects comprising my PhD research.
Safe Learning for Robotics
Safety in Multi-Agent Systems
Safe Human-Robot Interaction
Rapid advances in learning-based control techniques such as deep reinforcement learning are opening exciting opportunities for robots to teach themselves to become better and smarter by interacting with their environments. However, unlike simulated agents, robots often cannot afford the luxury of extensive trial and error, since certain "errors" (say, a collision) can be unacceptably costly. In other words, robots are safety-critical systems.
In my work, I have looked into principled ways to augment robotic learning with a safety-preserving controller. This controller builds an increasingly reliable predictive model of the robot and its environment and uses it to determine which actions may result in unacceptable outcomes. It can be wrapped around any learning algorithm, giving the learner as much freedom to explore as possible, but overriding it when safety is on the line or when uncertainty grows due to unexpected environment behavior.
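The wrapping idea above can be sketched in a few lines. This is a minimal, illustrative least-restrictive safety filter, not the actual implementation (which in practice relies on reachability analysis and learned dynamics models); the names `SafetyFilter`, `safety_value_fn`, and `safe_controller` are assumptions made for the example.

```python
import numpy as np

class SafetyFilter:
    """Illustrative least-restrictive safety filter.

    Wraps any learning policy: the learner acts freely while the state is
    comfortably inside the safe set; near the boundary, a precomputed safe
    controller overrides it.
    """
    def __init__(self, safety_value_fn, safe_controller, margin=0.0):
        self.safety_value_fn = safety_value_fn   # V(x) > 0 means safe
        self.safe_controller = safe_controller   # fallback control law
        self.margin = margin                     # extra conservatism

    def filter(self, state, proposed_action):
        # Override only when safety is at stake (least-restrictive).
        if self.safety_value_fn(state) > self.margin:
            return proposed_action, False        # learner keeps control
        return self.safe_controller(state), True # safety override


# Toy 1-D example: keep the state within |x| <= 1.
value = lambda x: 1.0 - abs(x)      # signed distance to the boundary
safe_ctrl = lambda x: -np.sign(x)   # steer back toward the interior
filt = SafetyFilter(value, safe_ctrl, margin=0.1)

# Near the boundary, the learner's action is overridden.
action, overridden = filt.filter(0.95, proposed_action=+1.0)
```

The key design choice is that the filter is agnostic to the learner: any exploration strategy can run inside it, and the override triggers only on the safety value, not on what the learner proposes.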
A central challenge in many fields of decision theory (including control theory and artificial intelligence) is analyzing and designing complex large-scale systems with many interconnected elements or agents; applications range from energy distribution networks and air traffic management to systems biology and cancer research. The combinatorial explosion in the complexity of these systems makes their study intractable in many cases; in others, however, structure can be either found or imposed to allow properties and guarantees to be computed.
With the concrete application of unmanned aircraft traffic management (UTM) in mind, I have worked on a number of theoretical tools and algorithmic approaches to compute tractable strategies for large-scale cooperative multi-agent systems. In particular, autonomous aerial vehicles can travel in platoons along automatically emerging air highways, and can determine near-optimal collision-free trajectories to their destinations by sequentially reserving space-time regions. Part of this work is in collaboration with NASA's UTM project.
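The sequential reservation scheme can be illustrated with a toy grid version: vehicles plan one at a time in priority order, each searching in space-time while treating the cells already reserved by earlier vehicles as obstacles. This is a simplified sketch (vertex conflicts only, unit-speed moves on a grid); the function names and grid abstraction are assumptions for the example, not the actual UTM algorithms.

```python
from collections import deque

def plan_spacetime(grid_size, start, goal, reserved, max_t=50):
    """BFS over (cell, time) pairs, avoiding reserved space-time cells."""
    W, H = grid_size
    frontier = deque([(start, 0, [start])])
    visited = {(start, 0)}
    while frontier:
        (x, y), t, path = frontier.popleft()
        if (x, y) == goal:
            return path                      # list of cells, one per time step
        if t >= max_t:
            continue
        # A vehicle may wait in place or move to a 4-connected neighbor.
        for dx, dy in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < W and 0 <= nxt[1] < H
                    and (nxt, t + 1) not in reserved
                    and (nxt, t + 1) not in visited):
                visited.add((nxt, t + 1))
                frontier.append((nxt, t + 1, path + [nxt]))
    return None                              # no conflict-free route found

def sequential_reservation(grid_size, requests):
    """Vehicles plan in priority order; each reserves its space-time cells."""
    reserved, routes = set(), []
    for start, goal in requests:
        path = plan_spacetime(grid_size, start, goal, reserved)
        routes.append(path)
        if path:
            for t, cell in enumerate(path):
                reserved.add((cell, t))
    return routes

# Two crossing vehicles on a 3x3 grid: the second must route around the
# space-time cells reserved by the first.
routes = sequential_reservation((3, 3), [((0, 1), (2, 1)), ((1, 0), (1, 2))])
```

Because each vehicle plans against a fixed set of reservations, the per-vehicle computation stays tractable as the fleet grows, at the cost of priority ordering affecting solution quality.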
The role of robotics has already begun its expansion from industrial production into the service sector. As robots step out of factory floors and into our homes, workplaces, cities, and roads, performance and safety will no longer be one-sided problems, but will heavily depend on how these machines interact with people. Human-automation systems will become increasingly ubiquitous and complex over the next few decades, and understanding them will be one of the keys to technological and social progress in the 21st Century.
In recent years, cognitive science has advanced our understanding of human inference and decision making through the introduction of mathematical models with remarkable predictive power. Still, these models inevitably make simplifying assumptions, which can lead to inaccurate predictions at critical times. To make robots more resilient to model limitations, I have explored the notion of confidence-aware predictions, which quickly adapt to become more conservative when the human's behavior departs from the model's structure.
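The confidence-adaptation idea can be sketched with a Boltzmann-rational human model whose "model confidence" parameter is inferred online: when observed actions are well explained by the model, predictions stay sharp; when they are not, belief shifts toward low confidence and predictions flatten toward uniform. This is a simplified illustration under assumed names (`ConfidenceAwarePredictor`, a discrete grid of `betas`), not the exact formulation used in my work.

```python
import numpy as np

def action_likelihoods(q_values, beta):
    """Boltzmann-rational model: P(a) proportional to exp(beta * Q(a))."""
    logits = beta * q_values
    p = np.exp(logits - logits.max())        # subtract max for stability
    return p / p.sum()

class ConfidenceAwarePredictor:
    """Bayesian belief over a model-confidence parameter beta.

    High beta: the human closely follows the modeled objective (peaked
    predictions). Low beta: near-uniform, conservative predictions.
    """
    def __init__(self, betas=(0.1, 1.0, 10.0)):
        self.betas = np.array(betas)
        self.belief = np.ones(len(betas)) / len(betas)

    def observe(self, q_values, action_idx):
        # Bayesian update: how well does each beta explain the action?
        likes = np.array([action_likelihoods(q_values, b)[action_idx]
                          for b in self.betas])
        self.belief = self.belief * likes
        self.belief /= self.belief.sum()

    def predict(self, q_values):
        # Mixture over beta hypotheses: unexplained behavior shifts mass
        # toward low beta, flattening the prediction.
        probs = np.array([action_likelihoods(q_values, b) for b in self.betas])
        return self.belief @ probs

# The human repeatedly picks the action the model considers suboptimal,
# so confidence drops and predictions become more conservative.
pred = ConfidenceAwarePredictor()
q = np.array([1.0, 0.0])
p_before = pred.predict(q)
for _ in range(5):
    pred.observe(q, action_idx=1)
p_after = pred.predict(q)
```

The robot can then plan against the flattened prediction, hedging against all plausible human motions rather than trusting a model the human is visibly not following.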