Kumar Krishna Agrawal

I am a graduate student in EECS at UC Berkeley. My research is motivated by the need to design fast algorithms that are statistically and computationally efficient. Broadly, I am interested in the foundations of machine learning and reinforcement learning. I enjoy bringing together insights from fundamental research and algorithm design to build systems that work in the real world.

Previously, I was a researcher at Google Brain as part of the Google Brain Residency, after graduating from the Indian Institute of Technology Kharagpur, where I majored in Mathematics and Computing. In the past, I've been fortunate to work under the guidance of Prof. Yoshua Bengio, Prof. Raman Arora, and Prof. B. Sury.

select publications

Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits
Wenshuo Guo, Kumar Krishna Agrawal, Aditya Grover, Vidya Muthukumar, Ashwin Pananjady
in submission
Workshop on Theory of Reinforcement Learning, ICML 2021
Workshop on Human-AI Collaboration in Sequential Decision-Making, ICML 2021 (spotlight)

Discrete Flows: Invertible Generative Models for Discrete Data
Dustin Tran, Keyon Vafa, Kumar Krishna Agrawal, Laurent Dinh, Ben Poole
Neural Information Processing Systems (NeurIPS), 2019

GANSynth: Adversarial Neural Audio Synthesis
Jesse Engel, Kumar Krishna Agrawal, Shuo Chen, Ishaan Gulrajani, Chris Donahue, Adam Roberts
International Conference on Learning Representations (ICLR), 2019
arXiv / Magenta blog / samples

Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning
Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, Jonathan Tompson
International Conference on Learning Representations (ICLR), 2019

Towards Mixed Optimization for Reinforcement Learning with Program Synthesis
Surya Bhupatiraju*, Kumar Krishna Agrawal*, Rishabh Singh
Workshop on Neural Abstract Machines and Program Induction, ICML 2018
