Our fascination with human intelligence has historically driven AI research to directly build autonomous agents that solve intellectually challenging problems such as chess and Go. The same philosophy of direct optimization has percolated into the design of systems for image and speech recognition and language translation. But today's AI systems are brittle and very different from humans in how they solve problems, as evidenced by their severely limited ability to adapt or generalize. Evolution took a very long time to develop the sensorimotor skills of an ape (approx. 3.5 billion years) and a comparatively short time to develop apes into present-day humans (approx. 18 million years) who can reason and use language. There is probably a lesson to be learned here: by the time organisms with simple sensorimotor skills had evolved, they had likely also developed the apparatus that could later support more complex forms of intelligence. In other words, by spending a long time solving simple problems, evolution prepared agents for more complex ones. I believe the same principle is at play when humans rely on what they already know to find solutions to new challenges. The principle of incrementally increasing complexity, as evidenced in evolution, child development, and the way humans learn, might therefore be vital to building human-like intelligence.

My research philosophy is to start with simple but complete sensorimotor systems deployed in the real world and let them gradually increase their complexity by experimenting with and exploring their environment. The agent's exploration will inevitably be limited by its hardware, guided by the richness of the environment, biased by the presence of other agents such as humans, and shaped by the prior knowledge it bootstraps itself from. Eventually, we would want the agent to be capable of performing goal-directed tasks. All of these considerations have guided my research. I believe that answers to intelligence will not be found by agents in lab environments or simulation, but by robots that explore and conduct experiments in the real world.

My research takes a holistic approach to understanding intelligence and spans robotics, computer vision, deep learning, reinforcement learning, neuroscience, and cognitive science. I am particularly drawn to the work of Bruno Olshausen, Jack Gallant, Alison Gopnik, Alexei Efros, Jitendra Malik, Geoff Hinton, Yann LeCun, Josh Tenenbaum, Ken Goldberg, Pieter Abbeel, and Abhinav Gupta, whose research provides diverse and unique insights into understanding and building intelligent systems.