I am a Ph.D. student in the Computer Science Department at the University of Southern California (USC), where I work with Prof. Yan Liu in the Melady Lab. My work spans (a) learning to predict and (b) learnable control in multi-agent settings with interactions among agents. Effective prediction in such settings requires incorporating inductive biases from physics and graph-based models, while learning to control often draws on tools from game theory and Nash equilibria to design deep architectures that can be trained toward optimal prediction performance or control policies. Working across these domains has given me a strong foundation in deep learning, reinforcement learning, game theory, and graph-based learning. I also collaborate with Prof. Milind Tambe (Teamcore group, Harvard University) and Prof. Fei Fang (CMU) on finding optimal strategies for Stackelberg security games with deep reinforcement learning. Previously, I worked with Prof. Nora Ayanian in the ACT Lab at USC on multi-robot coordination and on developing planning algorithms for efficient resource delivery.
Before this, I attended the Indian Institute of Technology, Delhi, where I earned my undergraduate degree in Electrical Engineering. My primary focus was on control theory and signal processing, and I was advised by Prof. Shouribrata Chatterjee. I also served as Technical Secretary of the Electrical Engineering Society at IIT Delhi (Aug 2012 - Aug 2013) and as General Secretary of the Electronics Club (Aug 2013 - May 2014) during my final year.
I am originally from New Delhi, India. I started my Ph.D. in Spring 2015 and currently live in Los Angeles, California.
My broad interest lies in understanding "understanding" itself. As an ambitious long-term goal, I want to figure out how the human mind works and to develop architectures and algorithms that would one day allow artificial agents to achieve at least a human level of understanding. Consequently, I work on reinforcement learning and deep learning to design agents capable of autonomous planning and learning in multi-agent settings. My research interests span deep reinforcement learning, continual learning, multi-agent learning, language understanding, and robotics.