Workshop on Individualized Decision-Making

UC Berkeley, July 18–19, 2024


All meetings will be held in Soda Hall 510. The conference room is off the main corridor on the 5th floor.


Agenda

Thursday, July 18

Friday, July 19


Participants


Sessions

Implementation Challenges of Individualized Decisions. Most research in the human-facing sciences is evaluated through randomized trials or big-data benchmarking, but these tools seem ill-suited to decisions about individuals. Some relevant questions include but are not limited to: What have been your experiences in evaluating, implementing, and deploying projects aimed at empowering individualized decisions without these statistical evaluations? Is there a path to dataset benchmarking and frictionless reproducibility when studying non-statistical problems? Since data about individuals is usually subject to privacy restrictions, how do we share results with other researchers? What are the best ways to get these tools into the hands of the people who need them (through healthcare systems, directly to care providers, as standalone apps, etc.)? How can we build a broader, multidisciplinary research community on individualized decision-making?

Is There Such a Thing as an Individualized Decision? All decisions are informed by past experience, so in what sense are decisions made about people individualized? This session explores what we might want from individualized decisions and how to think about them in the broader context of populations of individuals. Some relevant questions include but are not limited to: What is the appropriate reference class for evaluating the quality of an individual decision? What does it mean to say there is an imperative to treat people “as individuals” in the context of anti-discrimination inquiries? What do fairness and justice mean when deciding about an individual person? Is normativity inherently statistical?

Frameworks for Evaluating Individual Decisions. One of the trickier parts of decision-making about individuals is that we usually evaluate these decisions on average over some population of individuals. This inherently statistical evaluation biases our algorithmic decisions. This session explores alternative means of evaluating decisions and the algorithmic and statistical consequences of such choices. Some relevant questions include but are not limited to: How can we evaluate decision systems beyond statistical measures of average return or regret? How can uncertainty quantification be evaluated and incorporated into the evaluation of decisions? How does changing the evaluation criteria qualitatively change the resulting policies? For which evaluation criteria can optimal policies be algorithmically computed?

Multiple Measurements, Multiple People, Multiple Modalities. For individualized decisions, we still want to learn from other people’s experiences. This session investigates how to use such information to inform decisions about particular individuals. Some relevant questions include but are not limited to: Can multiple past n-of-1 trials help inform how to run the next n-of-1 trial? Can sequence prediction techniques help to learn individualized models of human behavior? Conversely, can n-of-1 studies resolve questions about confounding in other observational studies?
