Collaborative Intelligent Agents

Existing analyses of coordination between automated agents -- whether to share a resource or to accomplish a task together -- assume that the agents are commonly designed or share some common, if decentralized, protocol. The problem of how intelligent, automated agents should collaborate takes on a new twist when they are separately designed and cannot communicate a priori. My interest in these problems was motivated by dynamic sharing of spectrum, which I view as an economic resource. I am primarily interested in the fundamental limits of collaborative intelligence through the lens of game theory and learning.

Leader-follower games versus simultaneous games

Joint work with Anant Sahai

The Stackelberg leader-follower game, in which a leader agent commits to a strategy before the other players move, is a commonly used solution concept for non-cooperative game-theoretic models in engineering settings, and it is often socially advantageous over simultaneous play. We investigate the benefits of Stackelberg play over simultaneous play in the context of regulatory spectrum games, and show that the leader agent (the primary user) enjoys a significant first-mover advantage. We introduce a novel information-theoretic notion of partial commitment, by which the leader agent commits only to a partial range of strategies. We show that an appropriately defined partial commitment solution concept forms a continuum between the Stackelberg equilibrium and the simultaneous equilibrium.
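As a toy illustration of the first-mover advantage (the payoff numbers below are made up for exposition, not taken from the spectrum game in the paper), one can compute pure-strategy equilibria of a 2x2 bimatrix game by brute force:

```python
import itertools

# Payoffs (leader, follower) for an illustrative 2x2 game;
# rows = leader's strategies, columns = follower's strategies.
leader = [[2, 4], [1, 3]]
follower = [[1, 0], [0, 1]]

def pure_nash(leader, follower):
    """All pure-strategy Nash equilibria of the simultaneous-move game."""
    eqs = []
    for i, j in itertools.product(range(2), range(2)):
        best_row = all(leader[i][j] >= leader[k][j] for k in range(2))
        best_col = all(follower[i][j] >= follower[i][l] for l in range(2))
        if best_row and best_col:
            eqs.append((i, j))
    return eqs

def stackelberg(leader, follower):
    """Leader commits to a pure row; follower best-responds; the leader
    picks the row that maximizes her payoff under that response."""
    best = None
    for i in range(2):
        j = max(range(2), key=lambda l: follower[i][l])  # follower's best response
        if best is None or leader[i][j] > leader[best[0]][best[1]]:
            best = (i, j)
    return best

nash = pure_nash(leader, follower)     # unique pure Nash: leader payoff 2
i, j = stackelberg(leader, follower)   # commitment outcome: leader payoff 3
print("Nash:", nash, "Stackelberg leader payoff:", leader[i][j])
```

In this instance the simultaneous game has a unique pure Nash equilibrium giving the leader a payoff of 2, while committing first lets her steer the follower's best response and secure 3.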

This work was submitted to IEEE International Symposium on Information Theory, Aachen, Germany, 2017. In future work, we want to leverage the social advantage of the leader-follower game and understand it as a method of implicit collaboration and learning.

Ex-post enforcement for dynamic spectrum sharing

Joint work with Anant Sahai


The current paradigm of ex-ante enforcement is easy to implement, but defines conservative exclusion zones for incumbent users of spectrum. This leads to inefficient use of spectrum and sidesteps the crucial problem of sharing spectrum on a temporal basis. The case for an ex-post enforcement model, in which unlicensed users of spectrum are deterred from causing harm through a "spectrum jail" system, has been argued in previous work. We delineate rights that can be guaranteed to a primary user of spectrum in terms of retained performance, and show that compatible secondary users can still use the spectrum productively.
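A back-of-the-envelope calculation conveys the deterrence logic of a spectrum jail. The renewal model and all parameters below are illustrative assumptions of mine, not the model from the prior work:

```python
def longrun_rate(gain, p_catch, jail_len):
    """Renewal-reward sketch: one active slot (payoff `gain`) is followed,
    in expectation, by p_catch * jail_len zero-payoff jail slots."""
    return gain / (1 + p_catch * jail_len)

g_cheat, g_comply = 1.0, 0.6   # illustrative per-slot throughputs
p_catch = 0.2                  # probability a harmful slot leads to conviction

for k in (1, 2, 5, 10):
    cheat = longrun_rate(g_cheat, p_catch, k)
    print(f"jail={k:2d}: cheating rate {cheat:.3f} vs compliant {g_comply}")
# Deterrence kicks in once p_catch * jail_len > g_cheat / g_comply - 1
# (here, once the jail sentence exceeds 10/3 slots).
```

The point of the sketch: even a modest catching probability deters harmful behavior provided the jail sentence is long enough, which is why the identity system discussed below matters.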

This work will be presented at IEEE Symposium on Dynamic Spectrum Access Networks, Baltimore, 2017. Our enforcement model builds on the traditional ex-post enforcement frameworks explored in economic analysis of criminal law, and we believe that our model is more broadly applicable to enforcement in automated, artificially intelligent systems.

Information-theoretic limits on regulatory systems

Joint work with Anant Sahai


Our analysis of ex-post enforcement systems shows that secondaries require a reliable identity system: one that ensures their conviction when they cause harm and avoids false convictions. This problem can be formulated as a degraded broadcast channel with reliable transmission to the nearer receiver and reliable identification at the farther receiver.

We aim to study the information-theoretic limits on this problem considering finite block length messages, and establish a tradeoff between the secondary's gap to transmission capacity and identification error rate.
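As a classical reference point for this formulation (the asymptotic transmission problem, not the finite-blocklength identification question we are after), the capacity region of the degraded binary symmetric broadcast channel can be traced out numerically from the standard superposition-coding expressions:

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conv(a, b):
    """Binary convolution a * b = a(1 - b) + b(1 - a)."""
    return a * (1 - b) + b * (1 - a)

def bsc_bc_region(p_near, p_far, n=6):
    """Boundary points (R_near, R_far) of the degraded BSC broadcast
    channel capacity region, swept over the superposition parameter beta."""
    pts = []
    for i in range(n + 1):
        beta = 0.5 * i / n
        r_near = h(conv(beta, p_near)) - h(p_near)
        r_far = 1 - h(conv(beta, p_far))
        pts.append((r_near, r_far))
    return pts

pts = bsc_bc_region(0.05, 0.15)
# beta = 0 recovers the far user's point-to-point capacity 1 - h(0.15);
# beta = 1/2 gives the near user 1 - h(0.05) while the far user's rate is 0.
```

The finite-blocklength identification variant we are interested in replaces the far receiver's rate with an identification error requirement, so this region is only the natural asymptotic benchmark.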

TV Whitespace in Canada and Australia

Joint work with Kate Harrison and Anant Sahai


My previous work focused on data-driven, quantitative analyses of existing unlicensed spectrum. The TV whitespaces, the incarnation du jour of dynamic spectrum access, are gaining attention worldwide. Until recently, there was no easy way to carry out data-driven calculations to estimate the whitespace opportunity under different spectrum allocation scenarios -- existing tools were closed source and not flexible enough to allow this.

Whitespace Evaluation SofTware (WEST), a modular open-source tool designed and developed by Kate Harrison, a former EECS graduate student at UC Berkeley, supports whitespace availability calculations under highly versatile conditions. Use of the TV whitespaces was legalized in Canada at the beginning of 2015. Weeks after this development, we were able to use WEST to obtain novel results for whitespace availability in Canada and Australia under the FCC's and Industry Canada's regulatory rulesets for whitespace devices. We also compared the older FCC ruleset with the newer Industry Canada ruleset in the USA, Canada, and Australia, and studied their differences in detail.

This work has been published and was presented at IEEE Symposium on Dynamic Spectrum Access Networks, Stockholm, 2015.

Effect of incentive auction on TV Whitespaces

Joint work with Angel Daruna, Vijay Kamble, Kate Harrison, and Anant Sahai


Privatization of spectrum allocation through auctions has long been advocated as a way to increase the efficiency of spectrum as an economic resource. The most recent manifestation of this is the FCC's ongoing incentive auction, which aims to clear up to 144 MHz of TV spectrum for LTE usage.

We analyzed optimally efficient re-allocations of TV stations for various spectrum clearing targets using satisfiability solvers, and the effect of such desirable auction outcomes on TV as well as TV whitespaces. This work has been published and was presented at IEEE International Conference on Communications, 2015.
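At its core, repacking is a list-coloring feasibility problem: interfering stations must receive distinct channels drawn from their individually allowed sets. The sketch below uses a toy instance and brute force; the actual analysis used satisfiability solvers on real interference data:

```python
import itertools

# Toy repacking instance (illustrative, not FCC data): stations that
# interfere must not share a channel, and each station may only use the
# channels that survive the clearing target.
stations = ["A", "B", "C", "D"]
interferes = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}
allowed = {"A": [1, 2], "B": [1, 2], "C": [2, 3], "D": [1, 3]}

def feasible_repack(stations, interferes, allowed):
    """Exhaustively search for a conflict-free channel assignment.
    Auction-scale instances need a SAT solver instead of brute force."""
    for choice in itertools.product(*(allowed[s] for s in stations)):
        assign = dict(zip(stations, choice))
        if all(assign[u] != assign[v] for u, v in interferes):
            return assign
    return None

print(feasible_repack(stations, interferes, allowed))
```

Shrinking a station's allowed list (a tighter clearing target) can flip the instance from feasible to infeasible, which is exactly the tradeoff the auction analysis quantifies.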

I'm also a member of the UC Berkeley team for the upcoming DARPA Spectrum Collaboration Challenge, which aims to facilitate coordination and collaboration between separately designed systems to access spectrum productively.

Stochastic Decoding of LDPC Codes: Undergraduate Senior Thesis Project

Joint work with Andrew Thangaraj

Low-density parity-check (LDPC) codes are important in applications requiring reliable and highly efficient information transfer. A communication system using LDPC codes consists of an encoder and a decoder, with decoding performed using the message-passing algorithm. We studied the performance of a stochastic decoder for LDPC codes, which has much lower computational complexity. We also developed density evolution equations for the stochastic decoder. Finally, we implemented modified versions of the stochastic decoder for short, optimally performing LDPC codes over GF(q) and interpreted the results.
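For intuition, here is density evolution in its simplest setting -- standard belief propagation on a regular LDPC ensemble over the binary erasure channel (not the stochastic-decoder equations from the thesis):

```python
def density_evolution(eps, dv, dc, iters=200):
    """Erasure probability of a variable-to-check message after `iters`
    iterations of belief propagation on a (dv, dc)-regular LDPC ensemble
    over the BEC with channel erasure probability eps."""
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

# (3,6)-regular ensemble: the BP threshold is roughly eps ~ 0.429.
print(density_evolution(0.40, 3, 6))   # below threshold: decays toward 0
print(density_evolution(0.45, 3, 6))   # above threshold: nonzero fixed point
```

The same fixed-point analysis style carries over to other decoders: one tracks a one-dimensional statistic of the message distribution and asks whether the recursion converges to the zero-error fixed point.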

Constructing Low Coherence Matrices Using SL2(q)

Joint work with Matthew Thill and Babak Hassibi

Matrices with low coherence are important in compressed sensing, sphere decoding, and quantum measurements, among other applications. Random constructions are known to achieve low coherence with high probability. We constructed deterministic low-coherence matrices using concepts from group theory and representation theory; in particular, we considered Abelian groups and the special linear group SL2(q).
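A minimal Abelian-group example (a classical cyclic-group construction, not our SL2(q) one): selecting the rows of the 7-point DFT indexed by the (7,3,1) difference set {1, 2, 4} -- the quadratic residues mod 7 -- yields columns whose coherence meets the Welch lower bound exactly:

```python
import numpy as np

def coherence(A):
    """mu(A) = max |<a_i, a_j>| over distinct unit-norm columns of A."""
    G = np.abs(A.conj().T @ A)   # Gram matrix magnitudes
    np.fill_diagonal(G, 0)       # ignore self inner products
    return G.max()

# 3x7 matrix: characters of Z/7Z restricted to the difference set {1,2,4},
# with columns normalized to unit norm.
n, D = 7, [1, 2, 4]
A = np.exp(2j * np.pi * np.outer(D, np.arange(n)) / n) / np.sqrt(len(D))

mu = coherence(A)
welch = np.sqrt((n - len(D)) / (len(D) * (n - 1)))  # Welch lower bound
print(mu, welch)   # both ~ 0.4714: an equiangular tight frame
```

Difference-set structure is what makes every off-diagonal Gram entry have the same magnitude, which is exactly the equiangularity needed to achieve the Welch bound.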

This work has been published and was presented at IEEE International Conference on Acoustics, Speech, and Signal Processing, Florence, 2014.