Program

Location: The workshop takes place at the Toyota Technological Institute at Chicago (TTIC) on August 20-22, 2018.

  • 9:30 - 9:50
    Aug 20
    Breakfast: Provided by TTIC
  • 9:50 - 10:00
    Aug 20
    Welcome
  • 10:00 - 10:45
    Aug 20
    Talk by Ariel Procaccia: Automating Ethical Decisions Using Social Choice and Machine Learning
    I present a new approach to automated decision making, which I've been referring to as virtual democracy. In a nutshell, the idea is to collect people's preferences on dilemmas in a specific domain, learn models of those preferences, and then, at runtime, aggregate the predicted preferences on the dilemma at hand to reach a socially desirable decision. This approach allows us to decide thorny dilemmas that were previously beyond the reach of AI. I illustrate the approach in two domains: 1. Autonomous vehicles: The goal is to decide trolley-problem-type ethical dilemmas. I describe a system that automates these decisions, using preference data collected from 1.3 million people through the Moral Machine website. 2. Food banks: This is ongoing work with a food bank in Pittsburgh, with the goal of helping them decide how to allocate incoming food donations to recipient organizations.
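The aggregation step described above can be sketched concretely. As a hedged illustration (the function, variable names, and voting rule choice here are hypothetical, not from the talk), suppose the learned per-voter models output predicted utilities for each alternative; a pairwise-majority (Copeland) rule then selects the alternative that wins the most head-to-head comparisons among the predicted preferences:

```python
def copeland_winner(predicted_utils):
    """predicted_utils[v][a] is the model-predicted utility of alternative a
    for voter v. Aggregate by pairwise majority (Copeland): the alternative
    winning the most head-to-head majority comparisons is selected."""
    n_voters = len(predicted_utils)
    n_alts = len(predicted_utils[0])
    scores = [0] * n_alts
    for a in range(n_alts):
        for b in range(n_alts):
            if a == b:
                continue
            # count voters whose predicted utility ranks a above b
            prefer_a = sum(1 for u in predicted_utils if u[a] > u[b])
            if prefer_a > n_voters / 2:
                scores[a] += 1
    return max(range(n_alts), key=lambda a: scores[a])

# three hypothetical voters' predicted utilities over three alternatives
winner = copeland_winner([[3, 1, 2], [2, 3, 1], [3, 2, 1]])  # → 0
```

Alternative 0 beats both others in pairwise majorities here, so it is selected even though no single voter's model needs to be consulted at decision time beyond its predictions.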
  • 10:45 - 11:30
    Aug 20
    Talk by Annie Liang: Overabundant Information and Learning Traps
    We develop a model of social learning from overabundant information: Short-lived agents sequentially choose from a large set of (flexibly correlated) information sources for prediction of an unknown state. Signal realizations are public. We demonstrate two starkly different long-run outcomes: (1) efficient information aggregation, where the community eventually learns as fast as possible; (2) "learning traps," where the community gets stuck observing suboptimal sources and learns inefficiently. Our main results identify a simple property of the signal correlation structure that separates these outcomes. In both regimes, we characterize which sources are observed in the long run and how often.
  • 11:30 - 12:00
    Aug 20
    Talk by Jake Abernethy: Building Algorithms by Playing Games
    A very popular trick for solving certain types of optimization problems is this: write your objective as the solution of a two-player zero-sum game, endow both players with an appropriate learning algorithm, watch how the opponents compete, and extract an (approximate) solution from the actions/decisions taken by the players throughout the process. This approach is very generic and provides a natural template to produce new and interesting algorithms. I will describe this framework and show how it applies in several scenarios, and describe recent work that draws a connection to the Frank-Wolfe algorithm and Nesterov's Accelerated Gradient Descent.
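A minimal sketch of this template for a zero-sum matrix game (step size, round count, and variable names are illustrative assumptions, not taken from the talk): the row player runs a multiplicative-weights learner, the column player best-responds each round, and the players' time-averaged strategies form an approximate minimax equilibrium.

```python
import numpy as np

def solve_zero_sum(A, rounds=2000, eta=0.1):
    """Approximate an equilibrium of the zero-sum game with payoff matrix A
    (row player maximizes x @ A @ y) by pitting a multiplicative-weights
    learner (rows) against a best-responding opponent (columns)."""
    n, m = A.shape
    w = np.ones(n)                # row player's weights
    x_sum = np.zeros(n)
    y_sum = np.zeros(m)
    for _ in range(rounds):
        x = w / w.sum()           # row player's current mixed strategy
        j = int(np.argmin(x @ A)) # column player best-responds
        y = np.zeros(m)
        y[j] = 1.0
        w *= np.exp(eta * A[:, j])  # reward rows that do well vs. column j
        x_sum += x
        y_sum += y
    # the average strategies approximate a minimax equilibrium
    return x_sum / rounds, y_sum / rounds

# Matching pennies: value 0, unique equilibrium ((1/2, 1/2), (1/2, 1/2))
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_avg, y_avg = solve_zero_sum(A)
```

The point of the template is that neither player "solves" anything globally; the approximate solution is extracted from the history of play.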
  • 12:00 - 1:45
    Aug 20
    Lunch: On your own, see some local options.
  • 1:45 - 2:30
    Aug 20
    Talk by Nina Balcan: Sample Complexity of Multi-item Profit Maximization
We study profit maximization in the setting where the mechanism designer has samples from the distribution over buyers' values, not an explicit description thereof. We uncover a general structural property that we call linear delineability, shared by a myriad of classes of pricing and auction mechanisms, that allows us to prove strong sample complexity or generalization guarantees. Roughly speaking, a mechanism is linearly delineable if for any set of buyers' values, the profit function is piecewise linear in the mechanism's parameters. We provide uniform convergence guarantees for linearly delineable mechanisms, showing how many samples are sufficient to ensure that uniformly for all mechanisms in a linearly delineable class, their empirical profit over the sample is close to the expected profit over the underlying distribution. We show numerous applications of our results to both pricing mechanisms (including posted-price mechanisms and multi-part tariffs) and auctions (including second price auctions with reserves and classes of affine maximizer auctions). This is based on joint work with Tuomas Sandholm and Ellen Vitercik.
  • 2:30 - 2:45
    Aug 20
    Short talk by Alex Peysakovich: Towards agents that can learn social conventions
    TBA
  • 2:45 - 3:00
    Aug 20
    Short talk by Aleck Johnsen: Dashboard Mechanisms for Online Marketplaces
This paper gives a theoretical model for the design and analysis of mechanisms for online marketplaces, where a bidding dashboard enables the bid optimization of long-lived agents. A key step in the mechanisms suggested by the paper is that the response of each agent can be inverted to ascertain the agent's value, and from these values a nearly efficient outcome can be implemented. For any monotone and strictly continuous social choice rule, we give a dashboard mechanism and prove that the behavior of agents optimizing according to the dashboard in an iterated environment converges to a Nash equilibrium. Joint with Denis Nekipelov, Zihe Wang, and Onno Zoeter.
  • 3:00 - 3:30
    Aug 20
    Break: Refreshments provided by TTIC
  • 3:30 - 4:00
    Aug 20
Discussion: This is an opportunity to pitch ideas and bring up open problems before going into the collaboration time.
  • 4:00 - 5:00
    Aug 20
    Collaboration time
  • 5:00 - 5:45
    Aug 20
    Travel time to downtown
  • 6:00-9:00
    Aug 20
Dinner: There will be an optional and informal dinner at Niu B. We have made a tentative group reservation. If you are interested in attending, please e-mail Nika Haghtalab to confirm your attendance.
  • 9:30 - 10:00
    Aug 21
    Breakfast: Provided by TTIC
  • 10:00 - 10:45
    Aug 21
    Talk by Costis Daskalakis: Improving Generative Adversarial Networks using Game Theory and Statistics
    TBA
  • 10:45 - 11:30
    Aug 21
    Talk by Aleksander Madry: Towards ML You Can Rely On
Machine learning has made significant progress over the last decade. In fact, many now believe that ML techniques are a "silver bullet", capable of making progress on any real-world problem they are applied to. But can we truly rely on this toolkit? In this talk, I will discuss one of the key challenges to making ML dependable and secure: the widespread vulnerability of state-of-the-art classifiers to adversarial misclassification (aka adversarial examples). I will then describe a framework that enables us to reason about this vulnerability in a principled manner, as well as develop methods for alleviating the problem it poses.
  • 11:30 - 11:45
    Aug 21
    Short talk by Gauthier Gidel: Negative Momentum for Improved Game Dynamics
Games generalize the optimization paradigm by introducing different objective functions for different optimizing agents, known as players. Generative Adversarial Networks (GANs) are arguably the most popular game formulation in recent machine learning literature. GANs achieve great results on generating realistic natural images; however, they are known for being difficult to train. Training them involves finding a Nash equilibrium, typically performed using gradient descent on the two players' objectives. Game dynamics can induce rotations that slow down convergence to a Nash equilibrium, or prevent it altogether. We provide a theoretical analysis of the game dynamics and show experimentally that gradient descent with a negative momentum term can improve the convergence properties of some GANs.
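The rotation phenomenon can be seen on the simplest adversarial objective, the scalar bilinear game min_x max_y xy: simultaneous gradient steps spiral outward, while alternating steps with a negative heavy-ball momentum term pull the iterates back in. This is only a hedged illustration of the idea in the abstract; the step size (0.2) and momentum coefficient (-0.5) below are illustrative choices, not values prescribed by the talk.

```python
def simultaneous_gd(steps=2000, eta=0.2):
    """Plain simultaneous gradient steps on min_x max_y x*y.
    Each step multiplies the norm by sqrt(1 + eta^2), so iterates spiral out."""
    x, y = 1.0, 1.0
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x  # both players use the old iterates
    return x, y

def alternating_neg_momentum(steps=2000, eta=0.2, beta=-0.5):
    """Alternating gradient steps with a negative heavy-ball momentum term:
    the y-player reacts to the fresh x, and beta < 0 damps the rotation."""
    x, y = 1.0, 1.0
    x_prev, y_prev = x, y
    for _ in range(steps):
        x_new = x - eta * y + beta * (x - x_prev)
        y_new = y + eta * x_new + beta * (y - y_prev)
        x_prev, y_prev, x, y = x, y, x_new, y_new
    return x, y
```

Running both from the same starting point, the simultaneous iterates blow up while the negative-momentum iterates contract toward the equilibrium (0, 0), which is the qualitative effect the talk's analysis explains.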
  • 11:45 - 12:00
    Aug 21
    Short Talk by Jennifer Wortman Vaughan: The Disparate Impact of Strategic Manipulation
    TBA
  • 12:00 - 1:30
    Aug 21
    Lunch (on site): Lunch will be provided by TTIC.
  • 1:30 - 2:15
    Aug 21
    Talk by Zico Kolter: Provably robust deep learning
In this talk I'll discuss our recent work on training deep classifiers that are provably robust to certain classes of norm-bounded perturbation attacks. Our methods work by considering a convex relaxation of the "adversarial polytope", the set of last-layer activations achievable under some norm-bounded perturbation of the input, and using these to derive very efficient methods for computing (and then minimizing) upper bounds on the adversarial loss that can be suffered under such attacks. The method leads to some of the largest verified networks of which we are currently aware, including a convolutional MNIST classifier with a provable bound of 3.7% error under L_infinity perturbations of size epsilon=0.1. I'll relate our work to similar ongoing directions, and also discuss the main challenges that it faces: the task of scaling to significantly larger networks, e.g. at ImageNet scale, and the task of better characterizing the "correct" set of perturbations that we would like to be robust to.
  • 2:15 - 3:00
    Aug 21
    Talk by Thodoris Lykouris: Stochastic bandits robust to adversarial corruptions
We introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews, and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. In our model, the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary. We provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected in the data, measured by the sum across rounds of the biggest alteration the adversary made in the data in that round; this total corruption is denoted by C. Our algorithm retains the optimal guarantee (up to a logarithmic term) if the input is stochastic, and its performance degrades linearly in the amount of corruption C, while crucially being agnostic to it. We also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves optimal performance in the stochastic setting (the lower bound works even for a known amount of corruption, a special case in which our algorithm achieves optimal performance without the extra logarithm). Based on joint work with Vahab Mirrokni and Renato Paes Leme that appeared at STOC 2018.
  • 3:00 - 3:15
    Aug 21
    Short talk by Alex Slivkins: Bandits and Agents: How to Incentivize Exploration
    TBA
  • 3:15 - 3:30
    Aug 21
    Short talk by Jieming Mao: Multi-armed Bandit Problems with Strategic Arms
    We study a strategic version of the multi-armed bandit problem, where each arm is an individual strategic agent and we, the principal, pull one arm each round. When pulled, the arm receives some private reward v_a and can choose an amount x_a to pass on to the principal (keeping v_a-x_a for itself). All non-pulled arms get reward 0. Each strategic arm tries to maximize its own utility over the course of T rounds. Our goal is to design an algorithm for the principal incentivizing these arms to pass on as much of their private rewards as possible. When private rewards are stochastically drawn each round (v_a^t \leftarrow D_a), we show that: 1) Algorithms that perform well in the classic adversarial multi-armed bandit setting necessarily perform poorly: For all algorithms that guarantee low regret in an adversarial setting, there exist distributions D_1,...,D_k and an o(T)-approximate Nash equilibrium for the arms where the principal receives reward o(T). 2) There exists an algorithm for the principal that induces a game among the arms where each arm has a dominant strategy. Moreover, for every o(T)-approximate Nash equilibrium, the principal receives expected reward \mu'T - o(T), where \mu' is the second-largest of the means E[D_a]. This algorithm maintains its guarantee if the arms are non-strategic (x_a = v_a), and also if there is a mix of strategic and non-strategic arms.
  • 3:30 - 4:00
    Aug 21
    Break: Refreshments will be provided by TTIC
  • 4:00 - 5:00
    Aug 21
    Discussion on AGT and GANs
  • 6:00 - 9:00
    Aug 21
Informal social outings: Depending on the weather, some groups may decide to go for a walk by the river, go to a rooftop bar, or watch a movie at Millennium Park. See other fun options.
  • 9:30 - 10:00
    Aug 22
    Breakfast: Provided by TTIC
  • 10:00 - 10:45
    Aug 22
    Talk by Nihar Shah: Battling Demons in Peer Review
    Peer review is the backbone of scholarly research. It is however faced with a number of challenges (or demons) such as bias and miscalibration, subjectivity, strategic behavior, and noise. This talk will present some principled and practical approaches to battle these demons in peer review.
  • 10:45 - 11:30
    Aug 22
    Talk by Nikhil Devanur: Truth and Regret in Online Scheduling
    We consider a scheduling problem where a cloud service provider has multiple units of a resource available over time. Selfish clients submit jobs, each with an arrival time, deadline, length, and value. The service provider's goal is to implement a truthful online mechanism for scheduling jobs so as to maximize the social welfare of the schedule. Recent work shows that under a stochastic assumption on job arrivals, there is a single-parameter family of mechanisms that achieves near-optimal social welfare. We show that given any such family of near-optimal online mechanisms, there exists an online mechanism that in the worst case performs nearly as well as the best of the given mechanisms. Our mechanism is truthful whenever the mechanisms in the given family are truthful and prompt, and achieves optimal (within constant factors) regret. Joint work with Shuchi Chawla, Janardhan Kulkarni and Rad Niazadeh.
  • 11:30 - 11:45
    Aug 22
Short Talk by Bo Waggoner: Market-Based Mechanisms for Acquiring and Aggregating Data
    TBA
  • 11:45 - 12:00
    Aug 22
    Short talk by Jason Hartline: Inference from Auction Outcomes
Econometric inference allows an analyst to back out the values of agents in a mechanism from the rules of the mechanism and bids of the agents. This paper proposes the problem of inferring the values of agents in a mechanism from the social choice function implemented by the mechanism and the mechanism's outcome (specifically: per-unit prices) when the actions are not observed. For single-dimensional agents, this inference problem is a multi-dimensional inversion of the payment identity and is feasible only if the payment identity is uniquely invertible. The inversion is unique for single- and multi-unit proportional weights social choice functions (common, for example, in bandwidth allocation), and its inverse can be found efficiently. This inversion is not unique for social choice functions that exhibit complementarities. Joint with Aleck Johnsen, Denis Nekipelov, and Zihe Wang.
  • 12:00 - 2:00
    Aug 22
    Lunch: On your own, see some local options.
  • 2:00 - 3:00
    Aug 22
    Impromptu Talks/Time for collaboration: To give an impromptu talk, check in with Nika Haghtalab, preferably before lunch on August 22.
  • 3:00 - 3:30
    Aug 22
    Break: Refreshments will be provided by TTIC.
  • 3:30 - 4:50
    Aug 22
    Time for collaboration
  • 4:50 - 5:00
    Aug 22
    Closing Remarks

Organizing Committee

Jenn Wortman Vaughan

Microsoft Research, New York City


Nika Haghtalab

Computer Science Department, Carnegie Mellon University


Yishay Mansour

School of Computer Science, Tel Aviv University


Tim Roughgarden

Computer Science, Stanford University


Vasilis Syrgkanis

Microsoft Research, New England
