Xin Lyu

xinlyu (at) berkeley (dot) edu

634 Soda Hall, Berkeley, CA

About Me

I am a final-year (2021-2026) Ph.D. student in the CS Theory Group of the EECS Department at UC Berkeley, where I am fortunate to be advised by Avishay Tal and Jelani Nelson. The last two years of my PhD study (2024-2026) are gratefully supported by a Google PhD Fellowship.

View my CV here.

Research Interests

I mainly work on the theory of Privacy-Preserving Machine Learning (PPML) and Data Analysis, mostly through the lens of Differential Privacy. My research covers both private algorithm design and lower bounds, drawing motivation from both theory and practice. I also explore the connections between privacy and other considerations in machine learning, such as robustness, generalization, and memorization.

I also maintain broad interests in CS theory: I have worked on a diverse set of topics, including pseudorandomness, computational learning theory, communication complexity, and combinatorics.

Experiences

Previously, I did my undergrad at the Institute for Interdisciplinary Information Sciences (a.k.a. Yao Class) at Tsinghua University. During the Spring and Summer of 2020, I was lucky to visit MIT and work on complexity theory with Lijie Chen and Ryan Williams.

From Summer 2022 to May 2024, I was a student researcher at Google Research (Mountain View), where I worked under the guidance of Edith Cohen, Jelani Nelson, Tamás Sarlós and Uri Stemmer.

In the Summers of 2024 and 2025, I was a machine learning research intern at Apple Machine Learning Research (MLR), where I worked under the mentorship of Kunal Talwar and Vitaly Feldman.

I am on the 2025-2026 job market!


Manuscripts

Several ongoing projects. Stay tuned!

Publications

Private Learning of Littlestone Classes, Revisited [arXiv]
Xin Lyu
To appear in STOC 2026

Hot PATE: Private Aggregation of Distributions for Diverse Tasks [arXiv]
Edith Cohen, Benjamin Cohen-Wang, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer
To appear in ICLR 2026

Cell-Probe Lower Bounds via Semi-Random CSP Refutation: Simplified and the Odd-Locality Case [arXiv]
Venkatesan Guruswami, Xin Lyu, Weiqiang Yuan
SODA 2026

More efficient sifting for grid norms, and applications to multiparty communication complexity [arXiv]
Zander Kelley, Xin Lyu
FOCS 2025

Trade-Offs in Data Memorization via Strong Data Processing Inequalities [arXiv]
Guy Kornowski, Vitaly Feldman, Xin Lyu
COLT 2025 (also Spotlight Talk at TPDP 2025, and Best Paper Award at MemFM Workshop@ICML2025)

Fingerprinting Codes Meet Geometry: Improved Lower Bounds for Private Query Release and Adaptive Data Analysis [arXiv]
Xin Lyu, Kunal Talwar
STOC 2025 (also Spotlight Talk at TPDP 2025, and Non-Archival Talk at FORC 2025)

Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries [arXiv]
Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer
COLT 2024 (also Spotlight Talk at TPDP 2024)

The Cost of Parallelizing Boosting [arXiv]
Xin Lyu, Hongxun Wu, Junzhao Yang
SODA 2024

The Target-Charging Technique for Privacy Analysis Across Interaction Computations [arXiv]
Edith Cohen, Xin Lyu
NeurIPS 2023

Tight Time-Space Lower Bounds for Constant-Pass Learning [arXiv]
Xin Lyu, Avishay Tal, Hongxun Wu, Junzhao Yang
FOCS 2023

Weighted Pseudorandom Generators via Inverse Analysis of Random Walks and Shortcutting [ECCC]
Lijie Chen, William Hoza, Xin Lyu, Avishay Tal, Hongxun Wu
FOCS 2023

New PRGs for Unbounded-width/Adaptive-order Read-Once Branching Programs [DROPS]
Lijie Chen, Xin Lyu, Avishay Tal, Hongxun Wu
ICALP 2023

Õptimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization [arXiv]
Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer
STOC 2023

Generalized Private Selection and Testing with High Confidence [arXiv]
Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer
ITCS 2023

Time-Space Tradeoffs for Element Distinctness and Set Intersection via Pseudorandomness [arXiv]
Xin Lyu, Weihao Zhu
SODA 2023

Composition Theorems for Interactive Differential Privacy [arXiv]
Xin Lyu
NeurIPS 2022
(Note: See also this independent and concurrent work by Vadhan and Zhang.)

Range Avoidance for Low-depth Circuits and Connections to Pseudorandomness [ECCC]
Venkatesan Guruswami, Xin Lyu, Xiuhan Wang
RANDOM 2022

On the Robustness of CountSketch to Adaptive Inputs [arXiv]
Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Moshe Shechner, Uri Stemmer
ICML 2022

Improved Pseudorandom Generators for AC^0 Circuits [pdf] [ECCC]
Xin Lyu
CCC 2022 (co-winner of best student paper; invited to the ToC special issue)

Majority vs. Approximate Linear Sum and Average-Case Complexity Below NC^1 [pdf] [ECCC] [DROPS] [My virtual talk at ICALP]
Lijie Chen, Zhenjian Lu, Xin Lyu, Igor C. Oliveira
ICALP 2021

Inverse-Exponential Correlation Bounds and Extremely Rigid Matrices from a New Derandomized XOR Lemma [pdf] [ECCC] [My virtual talk at STOC]
Lijie Chen, Xin Lyu
STOC 2021

Almost-Everywhere Circuit Lower Bounds from Non-Trivial Derandomization [pdf] [ECCC] [My virtual talk at FOCS]
Lijie Chen, Xin Lyu, Ryan Williams
FOCS 2020


Personal

When I am not working, I usually go cycling on a pretty Specialized bike (Tarmac SL8). Check out my unimpressive Strava page. Even more personal: I am happily married to Xinwei Li, and we have a lovely child.

This page was last updated on Feb 4, 2026.