Here is a brief introduction to my research projects.

Load Balancing in Hypergraphs

We address the load balancing problem in networks by considering a hypergraph in which each edge carries one unit of load to be distributed among its endpoints. An allocation is a way of distributing the load of each edge of the hypergraph among its vertices, and we are interested in the properties of balanced allocations in such networks. We extend the concept of balancedness from finite hypergraphs to their local weak limits. In the process, we define a notion of unimodularity for hypergraphs, which can be viewed both as an extension of unimodularity in graphs and as an illustration of the objective method described by Aldous and Steele. In the special case of unimodular Galton-Watson processes, we characterize the load distribution. This settles a conjecture of Hajek and extends the work of Anantharam and Salez. This is an ongoing project under the supervision of Professor Anantharam.

High Probability Guarantees in Repeated Games

In joint work with Amin Gohari and Mohammad Akbarpour, we introduce a "high probability" framework for repeated games with incomplete information. In our non-equilibrium setting, players aim to guarantee a certain payoff with high probability rather than in expectation. We provide a high probability counterpart of the classical result of Mertens and Zamir for zero-sum repeated games. Any payoff that can be guaranteed with high probability can also be guaranteed in expectation, but the converse does not hold. Hence, unlike the average-payoff case, where the payoff guaranteed by each player is the negative of the payoff guaranteed by the other, the two guaranteed payoffs can differ in the high probability framework. One motivation for this framework comes from information transmission systems, where it is customary to formulate problems in terms of asymptotically vanishing probability of error.
An application of our results to a class of compound arbitrarily varying channels is given.
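As a toy illustration of the gap between the two guarantee notions (a hypothetical example, not taken from the paper): a payoff of 2 with probability 1/2 and 0 otherwise is worth 1 in expectation, yet no positive payoff is guaranteed with probability 0.9. A short simulation sketch:

```python
import random

random.seed(0)

def play_once():
    # Toy payoff: 2 with probability 1/2, else 0 (hypothetical, for illustration).
    return 2 if random.random() < 0.5 else 0

n = 100_000
payoffs = sorted(play_once() for _ in range(n))

expected = sum(payoffs) / n            # ~1.0: guaranteed in expectation
# Largest v with P(payoff >= v) >= 0.9: the 10th percentile of outcomes.
whp_guarantee = payoffs[int(0.1 * n)]  # 0: nothing positive is guaranteed w.p. 0.9

print(expected, whp_guarantee)
```

About half of the outcomes are 0, so the 10th percentile is 0 even though the mean is close to 1, which is exactly why a high-probability guarantee can be strictly weaker than an expectation guarantee.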
Quantum Local State Transformation

Local state transformation is the problem of transforming an arbitrary number of copies of a bipartite resource state into one copy of a bipartite target state under local operations. That is, given two bipartite states, is it possible to transform arbitrarily many copies of one of them into a single copy of the other using local operations only? The problem is hard in general precisely because the number of copies of the resource state may be arbitrarily large. In this work we prove bounds on this problem using hypercontractivity properties of certain superoperators associated with bipartite states. We measure hypercontractivity in terms of both the usual superoperator norms and completely bounded norms.
Comparison Based Search

This work addresses the problem of finding the nearest neighbor (or one of the nearest neighbors) of a query object in a database of objects when we only have access to a comparison oracle. Given two reference objects and a query object, the oracle returns the reference object more similar to the query. The main problem we study is how to search the database for the nearest neighbor (NN) of a query while minimizing the number of questions asked. The difficulty of this problem depends on properties of the underlying database. We show the importance of a characterization called combinatorial disorder, which defines approximate triangle inequalities on ranks. We prove a lower bound on the average number of questions asked in the search phase by any randomized algorithm, demonstrating the fundamental role of the disorder constant in worst-case behavior. We develop a randomized scheme for NN retrieval and bound the number of questions it asks in the search phase, as well as the number of questions asked and the number of bits of storage required in the learning phase.
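The oracle model above can be sketched in a few lines. The following is a minimal illustration, not the paper's scheme: it finds the exact NN by a naive linear scan using n - 1 oracle questions, ignoring the disorder structure that allows one to do better. The toy data and all names are hypothetical:

```python
def make_oracle(dist, query):
    """Comparison oracle: given two reference objects, return the one
    closer to the query. Distances are hidden from the search algorithm."""
    def oracle(a, b):
        return a if dist(a, query) <= dist(b, query) else b
    return oracle

def nn_search(database, oracle):
    """Find the NN using only oracle calls: a naive linear scan that
    keeps the closer of the current best and each new candidate."""
    best = database[0]
    for candidate in database[1:]:
        best = oracle(best, candidate)
    return best

# Toy 1-D database with absolute-difference distance (hypothetical data).
db = [3.0, 7.5, 1.2, 9.9, 4.4]
query = 5.0
oracle = make_oracle(lambda a, q: abs(a - q), query)
print(nn_search(db, oracle))  # 4.4, the point closest to 5.0
```

Note that the search algorithm never sees a distance value, only the oracle's answers, which is what makes rank-based characterizations such as combinatorial disorder the right notion of difficulty here.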
The Role of Information in Fair Division Problems

Cutting a cake is a metaphor for the problem of dividing a resource (the cake) among several agents. The problem becomes nontrivial when the agents have different valuations for different parts of the cake (e.g., one agent may like chocolate while another likes cream). A fair division of the cake is one that takes the individual valuations of the agents into account and partitions the cake according to some fairness criterion. Fair division may be carried out in a distributed or a centralized way. Due to its natural and practical appeal, it has long been studied in economics under the topic of "fair division". To the best of our knowledge, however, the role of partial information in fair division has not been studied from an information-theoretic perspective. In this work we study two important fair division algorithms for two agents, "divide and choose" and "adjusted winner". We quantify the benefit of negotiation in the divide and choose algorithm and its use in tricking the adjusted winner algorithm. Lastly, we consider a centralized algorithm that maximizes the overall welfare of the agents under the Nash collective utility function (CUF); this corresponds to a clustering problem. Drawing a conceptual link between this problem and the portfolio selection problem in stock markets, we prove an upper bound on the increase of the Nash CUF under a clustering refinement.
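For concreteness, the divide and choose step for two agents can be sketched as follows. This is a minimal sketch with hypothetical valuations, assuming the cake is discretized into slices with each agent's value spread uniformly inside a slice and no negotiation: the divider cuts where the two pieces are worth exactly half to her, and the chooser takes the piece she prefers.

```python
def cut_point(values, target):
    """Leftmost position x (in slice units) where the running value
    reaches target, interpolating uniformly within a slice."""
    acc = 0.0
    for i, v in enumerate(values):
        if acc + v >= target:
            return i + (target - acc) / v
        acc += v
    return float(len(values))

def piece_value(values, x):
    """An agent's value for the piece [0, x] of the cake."""
    i = int(x)
    return sum(values[:i]) + (values[i] * (x - i) if i < len(values) else 0.0)

def divide_and_choose(v1, v2):
    """v1 cuts so both pieces are worth half to her; v2 picks first."""
    x = cut_point(v1, sum(v1) / 2)        # divider is indifferent at x
    left2 = piece_value(v2, x)
    right2 = sum(v2) - left2
    chooser_piece = "left" if left2 >= right2 else "right"
    divider_piece = "right" if chooser_piece == "left" else "left"
    return divider_piece, chooser_piece, x

# Hypothetical valuations: agent 1 prefers the left end, agent 2 the right.
print(divide_and_choose([4, 3, 2, 1], [1, 2, 3, 4]))
# Divider keeps the left piece, chooser takes the right; cut near slice 4/3.
```

Both agents end up with at least half of the cake by their own valuation, which is the proportionality guarantee of divide and choose; what the divider cannot control, and what negotiation or partial information about the other's valuation changes, is how much more than half she can secure.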
