Publications
2024

Contract design with safety inspections.
A. Fallah and M. I. Jordan.
ACM Conference on Economics and Computation (EC), New Haven, CT, 2024.

Privacy can arise endogenously in an economic system with learning agents.
T. Ding, N. Ananthakrishnan, M. Werner, P. Karimireddy, and M. I. Jordan.
Symposium on Foundations of Responsible Computing (FORC), 2024.

Chatbot Arena: An open platform for evaluating LLMs by human preference.
W.L. Chiang, L. Zheng, Y. Sheng, A. N. Angelopoulos, T. Li, D. Li, H. Zhang, B. Zhu, M. I. Jordan, J. Gonzalez, and I. Stoica.
In A. Weller, K. Heller, N. Oliver and Z. Kolter (Eds.),
International Conference on Machine Learning (ICML), 2024.

Incentivized learning in principal-agent bandit games.
A. Scheid, D. Tiapkin, E. Boursier, A. Capitaine, E. M. El Mhamdi, E. Moulines,
M. I. Jordan, and A. Durmus.
In A. Weller, K. Heller, N. Oliver and Z. Kolter (Eds.),
International Conference on Machine Learning (ICML), 2024.

Iterative data smoothing: Mitigating reward overfitting and overoptimization in RLHF.
B. Zhu, M. I. Jordan, and J. Jiao.
In A. Weller, K. Heller, N. Oliver and Z. Kolter (Eds.),
International Conference on Machine Learning (ICML), 2024.

Collaborative heterogeneous causal inference beyond meta-analysis.
T. Guo, P. Karimireddy, and M. I. Jordan.
In A. Weller, K. Heller, N. Oliver and Z. Kolter (Eds.),
International Conference on Machine Learning (ICML), 2024.

AutoEval done right: Using synthetic data for model evaluation.
P. Boyeau, A. N. Angelopoulos, N. Yosef, J. Malik, and M. I. Jordan.
arxiv.org/abs/2403.07008, 2024.

Data acquisition via experimental design for decentralized data markets.
C. Lu, B. Huang, S. P. Karimireddy, P. Vepakomma, M. I. Jordan, and R. Raskar.
arxiv.org/abs/2403.13893, 2024.

On three-layer data markets.
A. Fallah, M. I. Jordan, A. Makhdoumi, and A. Malekian.
arxiv.org/abs/2402.09697, 2024.

Information elicitation in agency games.
S. Wang, M. I. Jordan, K. Ligett, and R. P. McAfee.
arxiv.org/abs/2402.14005, 2024.

The limits of price discrimination under privacy constraints.
A. Fallah, M. I. Jordan, A. Makhdoumi, and A. Malekian.
arxiv.org/abs/2402.08223, 2024.

Conformal triage for medical imaging AI deployment.
A. Angelopoulos, S. Pomerantz, S. Do, S. Bates, C. Bridge, D. Elton, M. Lev, R. G. Gonzalez,
M. I. Jordan, and J. Malik.
medrxiv.org/content/10.1101/2024.02.09.24302543v1, 2024.

Perseus: A simple high-order regularization method for variational inequalities.
T. Lin and M. I. Jordan.
Mathematical Programming, https://doi.org/10.1007/s10107-024-02075-2, 2024.

On learning rates and Schrödinger operators.
B. Shi, W. Su, and M. I. Jordan.
Journal of Machine Learning Research, 24, 1–53, 2024.

Instance-dependent confidence and early stopping for reinforcement learning.
K. Khamaru, E. Xia, M. Wainwright, and M. I. Jordan.
Journal of Machine Learning Research, 24, 1–43, 2024.

Conformal decision theory: Safe autonomous decisions from imperfect predictions.
J. Lekeufack, A. Angelopoulos, A. Bajcsy, M. I. Jordan, and J. Malik.
IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024.

Towards optimal statistical watermarking.
B. Huang, H. Zhu, B. Zhu, K. Ramchandran, M. I. Jordan, J. Lee, and J. Jiao.
arxiv.org/abs/2312.07930, 2024.

Operationalizing counterfactual metrics: Incentives, ranking, and information asymmetry.
S. Wang, S. Bates, P. M. Aronow, and M. I. Jordan.
Proceedings of the Twenty-Seventh Conference on Artificial Intelligence and
Statistics (AISTATS), 2024.

Classifier calibration with ROC-regularized isotonic regression.
E. Berta, F. Bach, and M. I. Jordan.
Proceedings of the Twenty-Seventh Conference on Artificial Intelligence and
Statistics (AISTATS), 2024.

Delegating data collection in decentralized machine learning.
N. Ananthakrishnan, S. Bates, M. I. Jordan, and N. Haghtalab.
Proceedings of the Twenty-Seventh Conference on Artificial Intelligence and
Statistics (AISTATS), 2024.

A specialized semismooth Newton method for kernel-based optimal transport.
T. Lin, M. Cuturi, and M. I. Jordan.
Proceedings of the Twenty-Seventh Conference on Artificial Intelligence and
Statistics (AISTATS), 2024.

A primal-dual method for solving variational inequalities with general constraints.
T. Chavdarova, M. Pagliardini, T. Yang, and M. I. Jordan.
International Conference on Learning Representations (ICLR), 2024.

A continuous-time perspective on optimal methods for monotone equation problems.
T. Lin and M. I. Jordan.
Communications in Optimization Theory, to appear.

A diffusion process perspective on posterior contraction rates for parameters.
W. Mou, N. Ho, M. Wainwright, P. Bartlett, and M. I. Jordan.
SIAM Journal on the Mathematics of Data Science, to appear.

Adaptive, doubly optimal no-regret learning in games with gradient feedback.
M. I. Jordan, T. Lin, and Z. Zhou.
Operations Research, to appear.

Reinforcement learning with heterogeneous data: Estimation and inference.
E. Chen, R. Song, and M. I. Jordan.
Journal of the American Statistical Association, to appear.
2023

A quadratic speedup in finding Nash equilibria of quantum zero-sum games.
F. Vasconcelos, E.-V. Vlatakis-Gkaragkounis, P. Mertikopoulos, G. Piliouras, and M. I. Jordan.
arxiv.org/abs/2311.10859, 2023.

Prediction-powered inference.
A. Angelopoulos, S. Bates, C. Fannjiang, M. I. Jordan, and T. Zrnic.
Science, 382, 669–674, 2023.

Post-selection inference via algorithmic stability.
T. Zrnic and M. I. Jordan.
Annals of Statistics, 51, 1666–1691, 2023.

A unifying perspective on multicalibration: Game dynamics for multi-objective learning.
N. Haghtalab, M. I. Jordan, and E. Zhao.
In A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 36, 2023.

Improved Bayes risk can yield reduced social welfare under competition.
M. Jagadeesan, M. I. Jordan, J. Steinhardt, and N. Haghtalab.
In A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 36, 2023.

Class-conditional conformal prediction with many classes.
T. Ding, A. Angelopoulos, S. Bates, M. I. Jordan, and R. Tibshirani.
In A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 36, 2023.

Optimal extragradient-based algorithms for stochastic variational inequalities
with separable structure.
A. Yuan, J. Li, G. Gidel, M. I. Jordan, Q. Gu, and S. Du.
In A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 36, 2023.

On learning necessary and sufficient causal graphs.
H. Cai, Y. Wang, M. I. Jordan, and R. Song.
In A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 36, 2023.

Doubly robust self-training.
B. Zhu, M. Ding, P. Jacobson, M. Wu, W. Zhan, M. I. Jordan, and J. Jiao.
In A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 36, 2023.

On optimal caching and model multiplexing for large model inference.
B. Zhu, Y. Sheng, L. Zheng, C. Barrett, M. I. Jordan, and J. Jiao.
In A. Globerson, K. Saenko, M. Hardt, and S. Levine (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 36, 2023.

Skilful nowcasting of extreme precipitation with NowcastNet.
Y. Zhang, M. Long, K. Chen, L. Xing, R. Jin, M. I. Jordan, and J. Wang.
Nature, 619(7970), 526–532, 2023.

A gentle introduction to gradient-based optimization and variational inequalities for
machine learning.
N. Wadia, Y. Dandi, and M. I. Jordan.
arxiv.org/abs/2309.04877, 2023.

Incentive-theoretic Bayesian inference for collaborative science.
S. Bates, M. I. Jordan, M. Sklar, and J. A. Soloff.
arxiv.org/abs/2307.03748, 2023.

Scaff-PD: Communication-efficient fair and robust federated learning.
Y. Yu, S. P. Karimireddy, Y. Ma, and M. I. Jordan.
arXiv:2307.13381, 2023.

Curvature-independent last-iterate convergence for games on Riemannian manifolds.
Y. Cai, M. I. Jordan, T. Lin, A. Oikonomou, and E.-V. Vlatakis-Gkaragkounis.
arXiv:2306.16617, 2023.

Accelerating inexact hypergradient descent for bilevel optimization.
H. Yang, L. Luo, J. Li, and M. I. Jordan.
arXiv:2307.00126, 2023.

Provably personalized and robust federated learning.
M. Werner, L. He, S. P. Karimireddy, M. I. Jordan, and M. Jaggi.
Transactions of Machine Learning Research, 2023.

Evaluating and incentivizing diverse data contributions in collaborative learning.
B. Huang, S. P. Karimireddy, and M. I. Jordan.
arXiv:2306.05592, 2023.

Incentivizing high-quality content in online recommender systems.
X. Hu, M. Jagadeesan, M. I. Jordan, and J. Steinhardt.
arXiv:2306.07479, 2023.

Fine-tuning language models with advantage-induced policy alignment.
B. Zhu, H. Sharma, F. Vieira Frujeri, S. Ding, C. Zhu, M. I. Jordan, and J. Jiao.
arXiv:2306.02231, 2023.

On optimal caching and model multiplexing for large model inference.
B. Zhu, Y. Sheng, L. Zheng, C. Barrett, M. I. Jordan, and J. Jiao.
arXiv:2306.02003, 2023.

Deterministic nonsmooth nonconvex optimization.
M. I. Jordan, G. Kornowski, T. Lin, O. Shamir, and E. Zampetakis.
In G. Neu and L. Rosasco (Eds.),
Proceedings of the Thirty-Sixth Conference on Learning Theory (COLT),
Bengaluru, India, 2023.

Online learning in a creator economy.
B. Zhu, S. P. Karimireddy, J. Jiao, and M. I. Jordan.
arXiv:2305.11381, 2023.

The sample complexity of online contract design.
B. Zhu, S. Bates, Z. Yang, Y. Wang, J. Jiao, and M. I. Jordan.
In J. Hartline and L. Samuelson (Eds.),
ACM Conference on Economics and Computation (EC), London, UK, 2023.

Last-iterate convergence of saddle point optimizers via high-resolution differential equations.
T. Chavdarova, M. I. Jordan, and E. Zampetakis.
Minimax Theory and its Applications, 8, 333–380, 2023.

Bayesian robustness: A nonasymptotic viewpoint.
K. Bhatia, Y.-A. Ma, A. Dragan, P. Bartlett, and M. I. Jordan.
Journal of the American Statistical Association,
doi.org/10.1080/01621459.2023.2174121, 2023.

Recommendation systems with distribution-free reliability guarantees.
A. Angelopoulos, K. Krauth, S. Bates, Y. Wang, and M. I. Jordan.
In H. Papadopoulos and K. An (Eds.),
12th Symposium on Conformal and Probabilistic Prediction with Applications (COPA),
Limassol, Cyprus, 2023. [Alexey Chervonenkis Best Paper Award].

Federated conformal predictors for distributed uncertainty quantification.
C. Lu, Y. Yu, S. P. Karimireddy, M. I. Jordan, and R. Raskar.
In B. Engelhardt, E. Brunskill, and K. Cho (Eds.),
International Conference on Machine Learning (ICML), 2023.

Principled reinforcement learning with human feedback from pairwise or K-wise comparisons.
B. Zhu, J. Jiao, and M. I. Jordan.
In B. Engelhardt, E. Brunskill, and K. Cho (Eds.),
International Conference on Machine Learning (ICML), 2023.

Nesterov meets optimism: Rate-optimal optimistic-gradient-based method for
stochastic bilinearly-coupled minimax optimization.
J. Li, A. Yuan, G. Gidel, and M. I. Jordan.
In B. Engelhardt, E. Brunskill, and K. Cho (Eds.),
International Conference on Machine Learning (ICML), 2023.

Nonconvex stochastic scaled-gradient descent and generalized eigenvector problems.
J. Li and M. I. Jordan.
In R. Evans and I. Shpitser (Eds.), Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Thirty-Ninth Conference, Pittsburgh, PA, 2023.

Cilantro: A framework for performance-aware resource allocation for general
objectives via online feedback.
R. Bhardwaj, K. Kandasamy, A. Biswal, W. Guo, B. Hindman, J. Gonzalez, M. I. Jordan, and I. Stoica.
17th USENIX Symposium on Operating Systems Design and Implementation (OSDI),
Boston, MA, 2023.

MultiVI: deep generative model for the integration of multimodal data.
T. Ashuach, M. Gabitto, R. Koodli, M. I. Jordan, G.A. Saldi, and N. Yosef.
Nature Methods, 20, 1222–1231, 2023.

Accelerated first-order optimization under nonlinear constraints.
M. Muehlebach and M. I. Jordan.
arXiv:2302.00316, 2023.

An empirical Bayes method for differential expression analysis of
single cells with deep generative models.
P. Boyeau, J. Regier, A. Gayoso, M. I. Jordan, R. Lopez, and N. Yosef.
Proceedings of the National Academy of Sciences, 10.1073/pnas.2209124120, 2023.

VCG mechanism design with unknown agent values under stochastic bandit feedback.
K. Kandasamy, J. Gonzalez, M. I. Jordan, and I. Stoica.
Journal of Machine Learning Research, 24, 1–45, 2023.

A Bayesian perspective on convolutional neural networks through a
deconvolutional generative model.
T. Nguyen, N. Ho, A. Patel, A. Anandkumar, M. I. Jordan, and R. G. Baraniuk.
Journal of Machine Learning Research, to appear.

Online learning in Stackelberg games with an omniscient follower.
G. Zhao, B. Zhu, J. Jiao, and M. I. Jordan.
arXiv:2301.11518, 2023.

Neural dependencies emerging from learning massive categories.
R. Feng, K. Zheng, K. Zhu, Y. Shen, J. Zhao, Y. Huang, D. Zhao, J. Zhou, M. I. Jordan, and Z.-J. Zha.
arXiv:2301.12339, 2023.

On the complexity of deterministic nonsmooth and nonconvex optimization.
M. I. Jordan, T. Lin, and E. Zampetakis.
arXiv:2301.12463, 2023.

Competition, alignment, and equilibria in digital marketplaces.
M. Jagadeesan, M. I. Jordan, and N. Haghtalab.
Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI-23), 2023.

First-order algorithms for nonlinear generalized Nash equilibrium problems.
M. I. Jordan, T. Lin, and E. Zampetakis.
Journal of Machine Learning Research, 24, 1–46, 2023.

Learning equilibria in matching markets with bandit feedback.
M. Jagadeesan, A. Wei, Y. Wang, M. I. Jordan, and J. Steinhardt.
Journal of the ACM, https://doi.org/10.1145/3583681, 2023.

Solving constrained variational inequalities via a first-order interior point-based method.
T. Yang, M. I. Jordan, and T. Chavdarova.
International Conference on Learning Representations (ICLR), 2023.

A general framework for sample-efficient function approximation in reinforcement learning.
Z. Chen, J. Li, A. Yuan, Q. Gu, and M. I. Jordan.
International Conference on Learning Representations (ICLR), 2023.

Modeling content creator incentives on algorithm-curated platforms.
J. Hron, K. Krauth, N. Kilbertus, M. I. Jordan, and S. Dean.
International Conference on Learning Representations (ICLR), 2023.

Byzantine-robust federated learning with optimal statistical rates and privacy guarantees.
B. Zhu, L. Wang, Q. Pang, J. Jiao, D. Song, and M. I. Jordan.
Proceedings of the Twenty-Sixth Conference on Artificial Intelligence and
Statistics (AISTATS), 2023.

A statistical analysis of Polyak-Ruppert-averaged Q-learning.
X. Li, W. Yang, X. Liang, Z. Zhang, and M. I. Jordan.
Proceedings of the Twenty-Sixth Conference on Artificial Intelligence and
Statistics (AISTATS), 2023.

Finding regularized competitive equilibria of heterogeneous agent macroeconomic
models via reinforcement learning.
R. Xu, Y. Min, T. Wang, M. I. Jordan, Z. Wang, and Z. Yang.
Proceedings of the Twenty-Sixth Conference on Artificial Intelligence and
Statistics (AISTATS), 2023.

An instance-dependent analysis for the cooperative multi-player multi-armed bandit.
A. Pacchiano, P. Bartlett, and M. I. Jordan.
Algorithmic Learning Theory (ALT), 2023.

Evaluating sensitivity to the stick-breaking prior in Bayesian nonparametrics.
R. Giordano, R. Liu, M. I. Jordan, and T. Broderick.
Bayesian Analysis, 18, 287–366, 2023.

Can reinforcement learning find Stackelberg-Nash equilibria in general-sum
Markov games with myopic followers?
H. Zhong, Z. Yang, Z. Wang, and M. I. Jordan.
Journal of Machine Learning Research, to appear.

Monotone inclusions, acceleration and closed-loop control.
T. Lin and M. I. Jordan.
Mathematics of Operations Research, https://doi.org/10.1287/moor.2022.1343, 2023.

Provably efficient reinforcement learning with linear function approximation.
C. Jin, Z. Yang, Z. Wang, and M. I. Jordan.
Mathematics of Operations Research, https://doi.org/10.1287/moor.2022.1309, 2023.

Local exchangeability.
T. Campbell, S. Syed, C.Y. Yang, M. I. Jordan, and T. Broderick.
Bernoulli, 29, 2084–2100, 2023.
2022

Incentive-aware recommender systems in two-sided markets.
X. Dai, Y. Qi, and M. I. Jordan.
arXiv:2211.15381, 2022.

Valid inference after causal discovery.
P. Gradu, T. Zrnic, Y. Wang, and M. I. Jordan.
arxiv.org/abs/2208.05949, 2022.

Empirical Gateaux derivatives for causal inference.
M. I. Jordan, Y. Wang, and A. Zhou.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.

On-demand sampling: Learning optimally from multiple distributions.
N. Haghtalab, M. I. Jordan, and E. Zhao.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.
[Outstanding Paper Award].

Learning two-player Markov games: Neural function approximation and correlated equilibrium.
J. Li, D. Zhou, Q. Gu, and M. I. Jordan.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.

Off-policy evaluation with policy-dependent optimization response.
W. Guo, M. I. Jordan, and A. Zhou.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.

Learn to match with no regret: Reinforcement learning in Markov matching markets.
Y. Min, T. Wang, R. Xu, Z. Wang, M. I. Jordan, and Y. Wang.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.

First-order algorithms for min-max optimization in geodesic metric spaces.
M. I. Jordan, T. Lin, and E.-V. Vlatakis-Gkaragkounis.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.

Gradient-free methods for deterministic and stochastic nonsmooth nonconvex optimization.
T. Lin, Z. Zheng, and M. I. Jordan.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.

TCT: Convexifying federated learning using bootstrapped neural tangent kernels.
Y. Yu, A. Wei, S. P. Karimireddy, Y. Ma, and M. I. Jordan.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.

Robust calibration with multi-domain temperature scaling.
Y. Yu, S. Bates, Y. Ma, and M. I. Jordan.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.

Rank diminishing in deep neural networks.
R. Feng, K. Zheng, Y. Huang, D. Zhao, M. I. Jordan, and Z.-J. Zha.
In A. Agarwal, A. Oh, D. Belgrave, and K. Cho (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 35, 2022.

Deep generative modeling for quantifying sample-level heterogeneity in single-cell omics.
P. Boyeau, J. Hong, A. Gayoso, M. I. Jordan, E. Azizi, and N. Yosef.
bioRxiv, 2022.

Conformal prediction under feedback covariate shift for biomolecular design.
C. Fannjiang, S. Bates, A. Angelopoulos, J. Listgarten, and M. I. Jordan.
Proceedings of the National Academy of Sciences,
https://doi.org/10.1073/pnas.2204569119, 2022.

Explicit second-order min-max optimization methods with optimal convergence guarantee.
T. Lin, P. Mertikopoulos, and M. I. Jordan.
arxiv.org/abs/2210.12860, 2022.

Multi-source causal inference using control variates.
W. Guo, S. Wang, P. Ding, Y. Wang, and M. I. Jordan.
Transactions on Machine Learning Research, https://openreview.net/forum?id=CrimIjBa64, 2022.

On constraints in first-order optimization: A view from nonsmooth dynamical systems.
M. Muehlebach and M. I. Jordan.
Journal of Machine Learning Research, 23, 1–47, 2022.

Instability, computational efficiency and statistical accuracy.
N. Ho, K. Khamaru, R. Dwivedi, M. Wainwright, M. I. Jordan, and B. Yu.
Journal of Machine Learning Research, 23, 1–81, 2022.

A nonasymptotic analysis of gradient descent ascent for nonconvex-concave minimax problems.
T. Lin, C. Jin, and M. I. Jordan.
https://ssrn.com/abstract=4181867, 2022.

Learning two-player mixture Markov games: Kernel function approximation
and correlated equilibrium.
J. Li, D. Zhou, Q. Gu, and M. I. Jordan.
arxiv.org/abs/2208.05363, 2022.

A reinforcement learning approach in multi-phase second-price auction design.
R. Ai, B. Lyu, Y. Wang, Z. Yang, and M. I. Jordan.
arxiv.org/abs/2210.10278, 2022.

Principal-agent hypothesis testing.
S. Bates, M. I. Jordan, M. Sklar, and J. A. Soloff.
arxiv.org/abs/2205.06812, 2022.

Optimal extragradient-based bilinearly-coupled saddle-point optimization.
S. Du, G. Gidel, M. I. Jordan, and J. Li.
arxiv.org/abs/2206.08573, 2022.

Mechanisms that incentivize data sharing in federated learning.
S. P. Karimireddy, W. Guo, and M. I. Jordan.
arxiv.org/abs/2207.04557, 2022.
[Outstanding Paper Award].

Continuous-time analysis for variational inequalities: An overview and desiderata.
T. Chavdarova, Y.-P. Hsieh, and M. I. Jordan.
arxiv.org/abs/2207.07105, 2022.

Breaking feedback loops in recommender systems with causal inference.
K. Krauth, Y. Wang, and M. I. Jordan.
arxiv.org/abs/2207.01616, 2022.

NumS: Scalable array programming for the cloud.
M. Elibol, V. Benara, S. Yagati, L. Zheng, A. Cheung, M. I. Jordan, and I. Stoica.
arxiv.org/abs/2206.14276, 2022.

Online nonsubmodular minimization with delayed costs: From full information to bandit feedback.
T. Lin, A. Pacchiano, Y. Yu, and M. I. Jordan.
arxiv.org/abs/2205.07217, 2022.

The sky above the clouds.
S. Chasins, A. Cheung, N. Crooks, A. Ghodsi, K. Goldberg, J. E. Gonzalez, J. M. Hellerstein, M. I. Jordan, A. D. Joseph, M. Mahoney, A. Parameswaran, D. Patterson, R. A. Popa, K. Sen, S. Shenker, D. Song, and I. Stoica.
arxiv.org/abs/2205.07147, 2022.

Optimal mean estimation without a variance.
Y. Cherapanamjeri, N. Tripuraneni, P. Bartlett, and M. I. Jordan.
Proceedings of the Thirty-Fifth Conference on Learning Theory (COLT), 2022.

ROOT-SGD: Sharp nonasymptotics and asymptotic efficiency in a single algorithm.
J. Li, W. Mou, M. Wainwright, and M. I. Jordan.
Proceedings of the Thirty-Fifth Conference on Learning Theory (COLT), 2022.

Image-to-image regression with distribution-free uncertainty quantification and
applications in imaging.
A. Angelopoulos, A. Kohli, S. Bates, M. I. Jordan, J. Malik, T. Alshaabi,
S. Upadhyayula, and Y. Romano.
In C. Szepesvari, L. Song and S. Jegelka (Eds.),
International Conference on Machine Learning (ICML), 2022.

No-regret learning in partially-informed auctions.
W. Guo, M. I. Jordan, and E. Vitercik.
In C. Szepesvari, L. Song and S. Jegelka (Eds.),
International Conference on Machine Learning (ICML), 2022.

Online nonsubmodular minimization with delayed costs: From full information to bandit feedback.
T. Lin, A. Pacchiano, Y. Yu, and M. I. Jordan.
In C. Szepesvari, L. Song and S. Jegelka (Eds.),
International Conference on Machine Learning (ICML), 2022.

Welfare maximization in competitive equilibrium: Reinforcement learning for
Markov exchange economy.
Z. Liu, M. Lu, Z. Wang, M. I. Jordan, and Z. Yang.
In C. Szepesvari, L. Song and S. Jegelka (Eds.),
International Conference on Machine Learning (ICML), 2022.

Markov persuasion processes and reinforcement learning.
J. Wu, Z. Zhang, Z. Feng, Z. Wang, Z. Yang, M. I. Jordan, and H. Xu.
ACM Conference on Economics and Computation (EC), Boulder, CO, 2022.

On the complexity of approximating multimarginal optimal transport.
T. Lin, N. Ho, M. Cuturi, and M. I. Jordan.
Journal of Machine Learning Research, 23, 1–43, 2022.

SOUL: An energy-efficient unsupervised online learning seizure detection classifier.
A. Chua, M. I. Jordan, and R. Muller.
IEEE Journal of Solid-State Circuits, 57, 2532–2544, 2022.

On the efficiency of entropic regularized algorithms for optimal transport.
T. Lin, N. Ho, and M. I. Jordan.
Journal of Machine Learning Research, 23, 1–42, 2022.

Learning dynamic mechanisms in unknown environments: A reinforcement learning approach.
B. Lyu, Q. Meng, S. Qiu, Z. Wang, Z. Yang, and M. I. Jordan.
arxiv.org/abs/2202.12797, 2022.

Partial identification with noisy covariates: A robust optimization approach.
W. Guo, M. Yin, Y. Wang, and M. I. Jordan.
arxiv.org/abs/2202.10665, 2022.

Geometric methods for sampling, optimisation, inference and adaptive agents.
A. Barp, L. Da Costa, G. França, K. Friston, M. Girolami, M. I. Jordan, and G. A. Pavliotis.
In F. Nielsen, A. Srinivasa Rao, and C. R. Rao (Eds.),
Geometry and Statistics, Academic Press, 2022.

Improving generalization via uncertainty driven perturbations.
M. Pagliardini, G. Manunza, M. Jaggi, M. I. Jordan, and T. Chavdarova.
arxiv.org/abs/2202.05737, 2022.

Robust estimation for nonparametric families via generative adversarial networks.
B. Zhu, J. Jiao, and M. I. Jordan.
International Symposium on Information Theory (ISIT), Espoo, Finland, 2022.

Multi-resolution deconvolution of spatial transcriptomics data reveals continuous
patterns of inflammation.
R. Lopez, B. Li, H. Keren-Shaul, P. Boyeau, M. Kedmi, D. Pilzer, A. Jelinski,
E. David, A. Wagner, Y. Addad, M. I. Jordan, I. Amit, and N. Yosef.
Nature Biotechnology, 40, 1360–1369, 2022.

Transferred Q-learning.
E. Chen, M. I. Jordan, and S. Li.
arxiv.org/abs/2202.04709, 2022.

Online active learning with dynamic marginal gain thresholding.
E. Chen, R. Song, and M. I. Jordan.
arxiv.org/abs/2201.08536, 2022.

Optimal variance-reduced stochastic approximation in Banach spaces.
W. Mou, K. Khamaru, M. Wainwright, P. Bartlett, and M. I. Jordan.
arxiv.org/abs/2201.08518, 2022.

Private prediction sets.
A. Angelopoulos, S. Bates, T. Zrnic, and M. I. Jordan.
Harvard Data Science Review, https://doi.org/10.1162/99608f92.16c71dad, 2022.

Adaptivity of stochastic gradient methods for nonconvex optimization.
S. Horváth, L. Lei, P. Richtárik, and M. I. Jordan.
SIAM Journal on the Mathematics of Data Science, 4, 634–648, 2022.

scvi-tools: A library for deep probabilistic analysis of single-cell omics data.
A. Gayoso, R. Lopez, G. Xing, P. Boyeau, K. Wu, M. Jayasuriya, E. Melhman,
M. Langevin, Y. Liu, J. Samaran, G. Misrachi, A. Nazaret, O. Clivio,
C. Xu, T. Ashuach, M. Lotfollahi, V. Svensson, E. Da Veiga Beltrame, C. Talavera-López,
L. Pachter, F. Theis, A. Streets, M. I. Jordan, J. Regier, and N. Yosef.
Nature Biotechnology, 40, 163–166, 2022.

Ranking and tuning pre-trained models: A new paradigm of exploiting model hubs.
K. You, Y. Liu, J. Wang, M. I. Jordan, and M. Long.
Journal of Machine Learning Research, 23, 1–47, 2022.

Active learning for nonlinear system identification with guarantees.
H. Mania, M. I. Jordan, and B. Recht.
Journal of Machine Learning Research, 23, 1–30, 2022.

First-order constrained optimization: Nonsmooth dynamical system viewpoint.
S. Schechtman, D. Tiapkin, E. Moulines, M. I. Jordan, and M. Muehlebach.
18th IFAC Workshop on Control Applications of Optimization, Gif-sur-Yvette, France, 2022.

On the convergence of stochastic extragradient for bilinear games
with restarted iteration averaging.
J. Li, Y. Yu, N. Loizou, G. Gidel, Y. Ma, N. Le Roux, and M. I. Jordan.
Proceedings of the Twenty-Fifth Conference on Artificial Intelligence and
Statistics (AISTATS), 2022.

Online learning of competitive equilibria in exchange economies.
W. Guo, K. Kandasamy, J. Gonzalez, M. I. Jordan, and I. Stoica.
Proceedings of the Twenty-Fifth Conference on Artificial Intelligence and
Statistics (AISTATS), 2022.

Fast distributionally robust learning with variance-reduced min-max optimization.
Y. Yu, T. Lin, E. Mazumdar, and M. I. Jordan.
Proceedings of the Twenty-Fifth Conference on Artificial Intelligence and
Statistics (AISTATS), 2022.

On structured filtering-clustering: Global error bound and optimal first-order algorithms.
T. Lin, N. Ho, and M. I. Jordan.
Proceedings of the Twenty-Fifth Conference on Artificial Intelligence and
Statistics (AISTATS), 2022.

Partial identification with noisy covariates: A robust optimization approach.
W. Guo, M. Yin, Y. Wang, and M. I. Jordan.
1st Conference on Causal Learning and Reasoning (CLeaR), 2022.

Identifying systematic variation at the single-cell level by leveraging
low-resolution population-level data.
E. Rahmani, M. I. Jordan, and N. Yosef.
26th Annual International Conference on Research in Computational Molecular
Biology (RECOMB), 2022.
2021

Distribution-free, risk-controlling prediction sets.
A. Angelopoulos, S. Bates, J. Malik, and M. I. Jordan.
Journal of the ACM, 68, 1–34, 2021.

A control-theoretic perspective on optimal high-order optimization.
T. Lin and M. I. Jordan.
Mathematical Programming, 195, 929–975, 2021.

Is temporal difference learning optimal? An instance-dependent analysis.
K. Khamaru, A. Pananjady, F. Ruan, M. Wainwright, and M. I. Jordan.
SIAM Journal on the Mathematics of Data Science, 3, https://doi.org/10.1137/20M1331524, 2021.

Assessment of treatment effect estimators for heavy-tailed data.
N. Tripuraneni, D. Madeka, D. Foster, D. PerraultJoncas, and M. I. Jordan.
arxiv.org/abs/2112.07602, 2021.

Multi-stage decentralized matching markets: Uncertain preferences and strategic behaviors.
X. Dai and M. I. Jordan.
Journal of Machine Learning Research, 22, 1–50, 2021.

A Bayesian nonparametric approach to super-resolution single-molecule localization.
M. Gabitto, H. Marie-Nelly, A. Pakman, A. Pataki, X. Darzacq, and M. I. Jordan.
Annals of Applied Statistics, 15, 1742–1766, 2021.

How AI fails us.
D. Siddarth, D. Acemoglu, D. Allen, K. Crawford, J. Evans, M. I. Jordan, and G. Weyl.
Edmond J. Safra Center for Ethics, 2021.

The Turing Test is bad for business: Technology should focus on the complementarity game,
not the imitation game.
D. Acemoglu, M. I. Jordan, and E. Glen Weyl.
WIRED Magazine, 2021.

On the self-penalization phenomenon in feature selection.
M. I. Jordan, K. Liu, and F. Ruan.
arxiv.org/abs/2110.05852, 2021.

Learn then test: Calibrating predictive algorithms to achieve risk control.
A. Angelopoulos, S. Bates, E. Candès, M. I. Jordan, and L. Lei.
arxiv.org/abs/2110.01052, 2021.

Desiderata for representation learning: A causal perspective.
Y. Wang and M. I. Jordan.
arxiv.org/abs/2109.03795, 2021.

Who leads and who follows in strategic classification?
T. Zrnic, E. Mazumdar, S. S. Sastry, and M. I. Jordan.
In M. Ranzato, A. Beygelzimer, P. Liang, J. Wortman Vaughan, and Y. Dauphin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

Learning equilibria in matching markets from bandit feedback.
M. Jagadeesan, A. Wei, M. I. Jordan, and J. Steinhardt.
In M. Ranzato, A. Beygelzimer, P. Liang, J. Wortman Vaughan, and Y. Dauphin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

On the theory of reinforcement learning with once-per-episode feedback.
N. Chatterji, A. Pacchiano, P. Bartlett, and M. I. Jordan.
In M. Ranzato, A. Beygelzimer, P. Liang, J. Wortman Vaughan, and Y. Dauphin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

Robust learning of optimal auctions.
W. Guo, M. I. Jordan, and E. Zampetakis.
In M. Ranzato, A. Beygelzimer, P. Liang, J. Wortman Vaughan, and Y. Dauphin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

Wasserstein flow meets replicator dynamics: A mean-field analysis of
representation learning in actor-critic.
Y. Zhang, S. Chen, Z. Yang, M. I. Jordan, and Z. Wang.
In M. Ranzato, A. Beygelzimer, P. Liang, J. Wortman Vaughan, and Y. Dauphin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

On component interactions in two-stage recommender systems.
J. Hron, K. Krauth, M. I. Jordan, and N. Kilbertus.
In M. Ranzato, A. Beygelzimer, P. Liang, J. Wortman Vaughan, and Y. Dauphin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

Test-time collective prediction.
C. MendlerDünner, W. Guo, S. Bates, and M. I. Jordan.
In M. Ranzato, A. Beygelzimer, P. Liang, J. Wortman Vaughan, and Y. Dauphin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

Learning in multi-stage decentralized matching markets.
X. Dai and M. I. Jordan.
In M. Ranzato, A. Beygelzimer, P. Liang, J. Wortman Vaughan, and Y. Dauphin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

Tactical optimism and pessimism for deep reinforcement learning.
T. Moskovitz, J. Parker-Holder, A. Pacchiano, M. Arbel, and M. I. Jordan.
In M. Ranzato, A. Beygelzimer, P. Liang, J. Wortman Vaughan, and Y. Dauphin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 34, 2021.

Optimization on manifolds: A symplectic approach.
G. França, A. Barp, M. Girolami, and M. I. Jordan.
arxiv.org/abs/2107.11231, 2021.

Bandit learning in decentralized matching markets.
L. Liu, F. Ruan, H. Mania, and M. I. Jordan.
Journal of Machine Learning Research, 22, 1–34, 2021.

On nonconvex optimization for machine learning: Gradients, stochasticity,
and saddle points.
C. Jin, P. Netrapalli, R. Ge, S. Kakade, and M. I. Jordan.
Journal of the ACM, 68, doi.org/10.1145/3418526, 2021.

Elastic hyperparameter tuning on the cloud.
L. Dunlap, K. Kandasamy, U. Misra, R. Liaw, J. Gonzalez, I. Stoica, and M. I. Jordan.
ACM Symposium on Cloud Computing (SoCC), Seattle, WA, 2021.

The stereotyping problem in collaboratively filtered recommender systems.
W. Guo, K. Krauth, M. I. Jordan, and N. Garg.
Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), 2021.

A variational inequality approach to Bayesian regression games.
W. Guo, M. I. Jordan, and T. Lin.
Proceedings of the 60th IEEE Conference on Decision and Control (CDC), Austin, TX, 2021.

On the stability of nonlinear receding horizon control: A geometric perspective.
T. Westenbroek, M. Simchowitz, M. I. Jordan, and S. S. Sastry.
Proceedings of the 60th IEEE Conference on Decision and Control (CDC), Austin, TX, 2021.

Data sharing markets.
M. Rasouli and M. I. Jordan.
arxiv.org/abs/2107.08630, 2021.

Instance-optimality in optimal value estimation: Adaptivity via variance-reduced Q-learning.
K. Khamaru, E. Xia, M. Wainwright, and M. I. Jordan.
arxiv.org/abs/2106.14352, 2021.

Cluster-and-conquer: A framework for time-series forecasting.
R. Pathak, R. Sen, N. Rao, N. B. Erichson, M. I. Jordan, and I. Dhillon.
arxiv.org/abs/2110.14011, 2021.

Taming nonconvexity in kernel feature selection—Favorable properties of the Laplace kernel.
F. Ruan, K. Liu, and M. I. Jordan.
arxiv.org/abs/2106.09387, 2021.

Parallelizing contextual linear bandits.
J. Chan, A. Pacchiano, N. Tripuraneni, Y. Song, P. Bartlett, and M. I. Jordan.
arxiv.org/abs/2105.10590, 2021.

Stochastic approximation for online tensorial independent component analysis.
J. Li and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT), Boulder, CO, 2021.

Reconstructing unobserved cellular states from paired single-cell lineage tracing
and transcriptomics data.
K. Ouardini, R. Lopez, M. G. Jones, S. Prillo, R. Zhang, M. I. Jordan, and N. Yosef.
www.biorxiv.org/content/10.1101/2021.05.28.446021v1, 2021.

Provable meta-learning of linear representations.
N. Tripuraneni, C. Jin, and M. I. Jordan.
In M. Meila and T. Zhang (Eds.), International Conference on Machine Learning (ICML), 2021.

Resource allocation in multi-armed bandit exploration: Overcoming nonlinear
scaling with adaptive parallelism.
B. Thananjeyan, K. Kandasamy, I. Stoica, M. I. Jordan, K. Goldberg, and J. Gonzalez.
In M. Meila and T. Zhang (Eds.), International Conference on Machine Learning (ICML), 2021.

Representation matters: Assessing the importance of subgroup allocations in training data.
E. Rolf, T. Worledge, B. Recht, and M. I. Jordan.
In M. Meila and T. Zhang (Eds.), International Conference on Machine Learning (ICML), 2021.

Variational refinement for importance sampling using the forward Kullback-Leibler divergence.
G. Jerfel, S. Wang, C. Fannjiang, K. Heller, Y. Ma, and M. I. Jordan.
In C. de Campos and M. Maathuis (Eds.), Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Thirty-Seventh Conference, 2021.

A Lyapunov analysis of momentum methods in optimization.
A. Wilson, B. Recht and M. I. Jordan.
Journal of Machine Learning Research, 22, 1–34, 2021.

Is there an analog of Nesterov acceleration for MCMC?
Y.A. Ma, N. Chatterji, X. Cheng, N. Flammarion, P. Bartlett, and M. I. Jordan.
Bernoulli, 27, 1942–1992, 2021.

Learning strategies in decentralized matching markets under uncertain preferences.
X. Dai and M. I. Jordan.
Journal of Machine Learning Research, 22, 1–50, 2021.

Optimization with momentum: Dynamical, control-theoretic, and symplectic perspectives.
M. Muehlebach and M. I. Jordan.
Journal of Machine Learning Research, 22, 1–50, 2021.

PAC best arm identification under a deadline.
B. Thananjeyan, K. Kandasamy, I. Stoica, M. I. Jordan, K. Goldberg, and J. Gonzalez.
arxiv.org/abs/2106.03221, 2021.

Deep generative models for detecting differential expression in single cells.
P. Boyeau, R. Lopez, J. Regier, A. Gayoso, M. I. Jordan, and N. Yosef.
www.biorxiv.org/content/10.1101/794289v1, 2021.

On dissipative symplectic integration with applications to gradient-based optimization.
G. França, M. I. Jordan, and R. Vidal.
Journal of Statistical Mechanics: Theory and Experiment, 043402, 2021.

Generalized momentum-based methods: A Hamiltonian perspective.
J. Diakonikolas and M. I. Jordan.
SIAM Journal on Optimization, 31, 915–944, 2021.

Understanding the acceleration phenomenon via high-resolution differential equations.
B. Shi, S. Du, M. I. Jordan, and W. Su.
Mathematical Programming, doi.org/10.1007/s10107-021-01681-8, 2021.

Interleaving computational and inferential thinking: Data Science
for undergraduates at Berkeley.
A. Adhikari, J. DeNero, and M. I. Jordan.
Harvard Data Science Review, doi.org/10.1162/99608f92.cb0fa8d2, 2021.

Asynchronous online testing of multiple hypotheses.
T. Zrnic, A. Ramdas, and M. I. Jordan.
Journal of Machine Learning Research, 22, 1–39, 2021.

High-order Langevin diffusion yields an accelerated MCMC algorithm.
W. Mou, Y.A. Ma, M. Wainwright, P. Bartlett, and M. I. Jordan.
Journal of Machine Learning Research, 22, 1–41, 2021.

Unsupervised online learning classifier for seizure detection.
A. Chua, M. I. Jordan, and R. Muller.
Symposium on VLSI Circuits, Kyoto, Japan, 2021.

Efficient methods for structured nonconvex-nonconcave min-max optimization.
J. Diakonikolas, C. Daskalakis, and M. I. Jordan.
Proceedings of the Twenty-Fourth Conference on Artificial Intelligence and
Statistics (AISTATS), 2021.

On projection robust optimal transport: Sample complexity and model misspecification.
T. Lin, Z. Zheng, E. Chen, M. Cuturi, and M. I. Jordan.
Proceedings of the Twenty-Fourth Conference on Artificial Intelligence and
Statistics (AISTATS), 2021.

Uncertainty sets for image classifiers using conformal prediction.
A. Angelopoulos, S. Bates, J. Malik, and M. I. Jordan.
International Conference on Learning Representations (ICLR), 2021.

Learning from eXtreme bandit feedback.
R. Lopez, I. Dhillon, and M. I. Jordan.
Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021.
[Best Paper Award Honorable Mention].

Robustness guarantees for mode estimation with an application to bandits.
A. Pacchiano, H. Jiang, and M. I. Jordan.
Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021.

Probabilistic harmonization and annotation of single-cell transcriptomics data with
deep generative models.
X. Xu, R. Lopez, E. Mehlman, J. Regier, M. I. Jordan, and N. Yosef.
Molecular Systems Biology, 17, e9620, 2021.
2020

On the adaptivity of stochastic gradientbased optimization.
L. Lei and M. I. Jordan.
SIAM Journal on Optimization, 30, 1473–1500, 2020.

Fundamental limits of detection in the spiked Wigner model.
A. El Alaoui, F. Krzakala, and M. I. Jordan.
Annals of Statistics, 48, 863–885, 2020.

On identifying and mitigating bias in the estimation of the COVID-19 case fatality rate.
A. Angelopoulos, R. Pathak, R. Varma, and M. I. Jordan.
Harvard Data Science Review, Special Issue 1, 2020.

Optimal rates and trade-offs in multiple testing.
M. Rabinovich, A. Ramdas, M. I. Jordan, and M. Wainwright.
Statistica Sinica, 30, 741–762, 2020.

Function-specific mixing times and concentration away from equilibrium.
M. Rabinovich, A. Ramdas, M. I. Jordan, and M. Wainwright.
Bayesian Analysis, 15, 505–532, 2020.

Greedy Attack and Gumbel Attack: Generating adversarial examples for
discrete data.
P. Yang, J. Chen, C.J. Hsieh, J.L. Wang, and M. I. Jordan.
Journal of Machine Learning Research, 21, 1–36, 2020.

Optimal mean estimation without a variance.
Y. Cherapanamjeri, N. Tripuraneni, P. Bartlett, and M. I. Jordan.
arxiv.org/abs/2011.12433, 2020.

Online learning demands in max-min fairness.
K. Kandasamy, G.E. Sela, J. Gonzalez, M. I. Jordan, and I. Stoica.
arxiv.org/abs/2012.08648, 2020.

Manifold learning via manifold deflation.
D. Ting and M. I. Jordan.
arxiv.org/abs/2007.03315, 2020.

Do offline metrics predict online performance in recommender systems?
K. Krauth, S. Dean, A. Zhao, W. Guo, M. Curmei, B. Recht, and M. I. Jordan.
arxiv.org/abs/2011.07931, 2020.

Optimal robust linear regression in nearly linear time.
Y. Cherapanamjeri, E. Aras, N. Tripuraneni, M. I. Jordan, N. Flammarion, and P. Bartlett.
arxiv.org/abs/2007.08137, 2020.

Bridging exploration and general function approximation in reinforcement learning:
Provably efficient kernel and neural value iterations.
Z. Wang, C. Jin, Z. Yang, M. Wang, and M. I. Jordan.
arxiv.org/abs/2011.04622, 2020.

Robust optimization for fairness with noisy protected groups.
S. Wang, W. Guo, H. Narasimhan, A. Cotter, M. Gupta, and M. I. Jordan.
In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.T. Lin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 33, 2020.

On the theory of transfer learning: The importance of task diversity.
N. Tripuraneni, M. I. Jordan, and C. Jin.
In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.T. Lin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 33, 2020.

Decision-making with autoencoding variational Bayes.
R. Lopez, P. Boyeau, N. Yosef, M. I. Jordan, and J. Regier.
In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.T. Lin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 33, 2020.

Projection robust Wasserstein distance and Riemannian optimization.
T. Lin, C. Fan, N. Ho, M. Cuturi, and M. I. Jordan.
In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.T. Lin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 33, 2020.

Fixed-support Wasserstein barycenter: Computational hardness
and efficient algorithms.
T. Lin, N. Ho, X. Chen, M. Cuturi, and M. I. Jordan.
In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.T. Lin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 33, 2020.

Provably efficient reinforcement learning with kernel and neural function approximation.
Z. Wang, C. Jin, Z. Yang, M. Wang, and M. I. Jordan.
In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.T. Lin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 33, 2020.

Transferable calibration with lower bias and variance in domain adaptation.
X. Wang, M. Long, J. Wang, and M. I. Jordan.
In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.T. Lin (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 33, 2020.

Posterior distribution for the number of clusters in Dirichlet process mixture models.
C.Y. Yang, E. Xia, N. Ho, and M. I. Jordan.
arxiv.org/abs/1905.09959, 2020.

A higher-order Swiss army infinitesimal jackknife.
R. Giordano, M. I. Jordan, and T. Broderick.
arxiv.org/abs/1907.12116, 2020.

Covariance estimation with nonnegative partial correlations.
J. A. Soloff, A. Guntuboyina, and M. I. Jordan.
arxiv.org/abs/2007.15252, 2020.

Finding equilibrium in multi-agent games with payoff uncertainty.
W. Guo, M. Curmei, S. Wang, B. Recht, and M. I. Jordan.
arxiv.org/abs/2007.05647, 2020.

High-confidence sets for trajectories of stochastic time-varying nonlinear systems.
E. Mazumdar, T. Westenbroek, M. I. Jordan, and S. Sastry.
Proceedings of the 59th IEEE Conference on Decision and Control (CDC), Jeju Island, Korea,
2020.

Singularity, misspecification, and the convergence rate of EM.
R. Dwivedi, N. Ho, K. Khamaru, M. Wainwright, M. I. Jordan, and B. Yu.
Annals of Statistics, 48, 3161–3182, 2020.

On Thompson sampling with Langevin algorithms.
E. Mazumdar, A. Pacchiano, Y.A. Ma, P. Bartlett, and M. I. Jordan.
In H. Daumé III and A. Singh (Eds.),
International Conference on Machine Learning (ICML), 2020.

Continuous-time lower bounds for gradient-based algorithms.
M. Muehlebach and M. I. Jordan.
In H. Daumé III and A. Singh (Eds.),
International Conference on Machine Learning (ICML), 2020.

On gradient descent ascent for nonconvex-concave minimax problems.
T. Lin, C. Jin, and M. I. Jordan.
In H. Daumé III and A. Singh (Eds.),
International Conference on Machine Learning (ICML), 2020.

Learning to score behaviors for guided policy optimization.
A. Pacchiano, J. Parker-Holder, Y. Tang, K. Choromanski, A. Choromanska, and M. I. Jordan.
In H. Daumé III and A. Singh (Eds.),
International Conference on Machine Learning (ICML), 2020.

Finite-time last-iterate convergence for multi-agent learning in games.
T. Lin, Z. Zhou, P. Mertikopoulos, and M. I. Jordan.
In H. Daumé III and A. Singh (Eds.),
International Conference on Machine Learning (ICML), 2020.

What is local optimality in nonconvex-nonconcave minimax optimization?
C. Jin, P. Netrapalli, and M. I. Jordan.
In H. Daumé III and A. Singh (Eds.),
International Conference on Machine Learning (ICML), 2020.

Accelerated message passing for entropy-regularized MAP inference.
J. Lee, A. Pacchiano, P. Bartlett, and M. I. Jordan.
In H. Daumé III and A. Singh (Eds.),
International Conference on Machine Learning (ICML), 2020.

Stochastic gradient and Langevin processes.
X. Cheng, D. Yin, P. Bartlett, and M. I. Jordan.
In H. Daumé III and A. Singh (Eds.),
International Conference on Machine Learning (ICML), 2020.

Provably efficient reinforcement learning with linear function approximation.
C. Jin, Z. Yang, Z. Wang, and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT), Graz, Austria, 2020.

On linear stochastic approximation: Fine-grained Polyak-Ruppert and non-asymptotic concentration.
W. Mou, J. Li, M. Wainwright, P. Bartlett, and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT), Graz, Austria, 2020.

Near-optimal algorithms for minimax optimization.
T. Lin, C. Jin, and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT), Graz, Austria, 2020.

Lower bounds in multiple testing: A framework based on derandomized proxies.
M. Rabinovich, M. I. Jordan, and M. Wainwright.
arxiv.org/abs/2005.03725, 2020.

Detecting zero-inflated genes in single-cell transcriptomics data.
O. Clivio, R. Lopez, J. Regier, A. Gayoso, M. I. Jordan, and N. Yosef.
biorxiv.org/content/10.1101/794875v3, 2020.

Policy-gradient algorithms have no guarantees of convergence in continuous
action and state multi-agent settings.
E. Mazumdar, L. Ratliff, M. I. Jordan, and S. S. Sastry.
International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS),
Auckland, New Zealand, 2020.

Improved sample complexity for stochastic compositional variance reduced gradient.
T. Lin, C. Fan, M. Wang, and M. I. Jordan.
American Control Conference (ACC), Denver, CO, 2020.

HopSkipJumpAttack: Query-efficient decision-based adversarial attack.
J. Chen, M. I. Jordan, and M. Wainwright.
41st IEEE Symposium on Security and Privacy (SP), San Francisco, CA, 2020.

Unsupervised online learning for long-term high-sensitivity seizure detection.
A. Chua, M. I. Jordan, and R. Muller.
42nd Annual International Conference of the IEEE Engineering in Medicine and
Biology Society (EMBC), Montreal, Canada, 2020.

Competing bandits in matching markets.
L. Liu, H. Mania, and M. I. Jordan.
In R. Calandra and S. Chiappa (Eds.),
Proceedings of the Twenty-Third Conference on Artificial Intelligence and
Statistics (AISTATS), Palermo, Italy, 2020.

Langevin Monte Carlo without smoothness.
N. Chatterji, J. Diakonikolas, M. I. Jordan, and P. Bartlett.
In R. Calandra and S. Chiappa (Eds.),
Proceedings of the Twenty-Third Conference on Artificial Intelligence and
Statistics (AISTATS), Palermo, Italy, 2020.

The power of batching in multiple hypothesis testing.
T. Zrnic, D. Jiang, A. Ramdas, and M. I. Jordan.
In R. Calandra and S. Chiappa (Eds.),
Proceedings of the Twenty-Third Conference on Artificial Intelligence and
Statistics (AISTATS), Palermo, Italy, 2020.

Sharp analysis of expectation-maximization for weakly identifiable models.
R. Dwivedi, N. Ho, K. Khamaru, M. Wainwright, M. I. Jordan, and B. Yu.
In R. Calandra and S. Chiappa (Eds.),
Proceedings of the Twenty-Third Conference on Artificial Intelligence and
Statistics (AISTATS), Palermo, Italy, 2020.

Fast algorithms for computational optimal transport and Wasserstein barycenter.
W. Guo, N. Ho, and M. I. Jordan.
In R. Calandra and S. Chiappa (Eds.),
Proceedings of the Twenty-Third Conference on Artificial Intelligence and
Statistics (AISTATS), Palermo, Italy, 2020.

Convergence rates of smooth message passing with rounding in entropy-regularized
MAP inference.
J. Lee, A. Pacchiano, and M. I. Jordan.
In R. Calandra and S. Chiappa (Eds.),
Proceedings of the Twenty-Third Conference on Artificial Intelligence and
Statistics (AISTATS), Palermo, Italy, 2020.

Post-estimation smoothing: A simple baseline for learning with side information.
E. Rolf, M. I. Jordan, and B. Recht.
In R. Calandra and S. Chiappa (Eds.),
Proceedings of the Twenty-Third Conference on Artificial Intelligence and
Statistics (AISTATS), Palermo, Italy, 2020.

ML-LOO: Detecting adversarial examples with feature attribution.
P. Yang, J. Chen, C.J. Hsieh, J.L. Wang, and M. I. Jordan.
Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), 2020.

Cost-effective incentive allocation via structured counterfactual inference.
R. Lopez, C. Li, X. Yan, J. Xiong, M. I. Jordan, Y. Qi, and L. Song.
Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), 2020.

LS-Tree: Model interpretation when the data are linguistic.
J. Chen and M. I. Jordan.
Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), 2020.

Variance reduction with sparse gradients.
M. Elibol, L. Lei, and M. I. Jordan.
International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 2020.
2019

A unified treatment of multiple testing with prior knowledge.
A. Ramdas, R. Foygel Barber, M. Wainwright, and M. I. Jordan.
Annals of Statistics, 47, 2790–2821, 2019.

Decoding from pooled data: Sharp information-theoretic bounds.
A. El Alaoui, A. Ramdas, F. Krzakala, L. Zdeborova, and M. I. Jordan.
SIAM Journal on Mathematics of Data Science, 1, 161–188, 2019.

Artificial intelligence: The revolution hasn't happened yet.
M. I. Jordan.
Harvard Data Science Review, 1, 2019. [With commentary. Originally published in Medium].

Dr. AI or: How I learned to stop worrying and love economics.
M. I. Jordan.
Harvard Data Science Review, 1, 2019. [Response to commentary].

Sampling can be faster than optimization.
Y.A. Ma, Y. Chen, C. Jin, N. Flammarion, and M. I. Jordan.
Proceedings of the National Academy of Sciences, https://doi.org/10.1073/pnas.1820003116, 2019.

First-order methods almost always avoid strict saddle points.
J. Lee, I. Panageas, G. Piliouras, M. Simchowitz, M. I. Jordan, and B. Recht.
Mathematical Programming, doi.org/10.1007/s10107-019-01374-3, 2019.

A sequential algorithm for false discovery rate control on directed acyclic graphs.
A. Ramdas, J. Chen, M. Wainwright, and M. I. Jordan.
Biometrika, 106, 69–86, 2019.

Transferable representation learning with deep adaptation networks.
M. Long, Z. Cao, Y. Cao, J. Wang, and M. I. Jordan.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 41, 3071–3085, 2019.

Decoding from pooled data: Phase transitions of message passing.
A. El Alaoui, A. Ramdas, F. Krzakala, L. Zdeborova, and M. I. Jordan.
IEEE Transactions on Information Theory, 65, 572–585, 2019.

Sampling for Bayesian mixture models: MCMC with polynomial-time mixing.
W. Mou, Y.A. Ma, M. Wainwright, P. Bartlett, and M. I. Jordan.
arxiv.org/abs/1912.05153, 2019.

Towards understanding the transferability of deep representations.
H. Liu, M. Long, J. Wang, and M. I. Jordan.
arxiv.org/abs/1909.12031, 2019.

How does learning rate decay help modern neural networks?
K. You, M. Long, J. Wang, and M. I. Jordan.
arxiv.org/abs/1908.01878, 2019.

Convergence rates for Gaussian mixtures of experts.
N. Ho, C.Y. Yang, and M. I. Jordan.
arxiv.org/abs/1907.04377, 2019.

Quantitative W1 convergence of Langevin-like stochastic processes with
nonconvex potential state-dependent noise.
X. Cheng, D. Yin, P. Bartlett, and M. I. Jordan.
arxiv.org/abs/1907.03215, 2019.

Wasserstein reinforcement learning.
A. Pacchiano, J. Parker-Holder, Y. Tang, A. Choromanska, K. Choromanski, and M. I. Jordan.
arxiv.org/abs/1906.04349, 2019.

On the acceleration of the Sinkhorn and Greenkhorn algorithms for optimal transport.
T. Lin, N. Ho, and M. I. Jordan.
arxiv.org/abs/1906.01437, 2019.

Posterior distribution for the number of clusters in Dirichlet
process mixture models.
C.Y. Yang, N. Ho, and M. I. Jordan.
arxiv.org/abs/1905.09959, 2019.

A joint model of unpaired data from scRNA-seq and spatial transcriptomics for imputing
missing gene expression measurements.
R. Lopez, A. Nazaret, M. Langevin, J. Samaran, J. Regier, M. I. Jordan, and N. Yosef.
arxiv.org/abs/1905.02269, 2019.

Stochastic gradient descent escapes saddle points efficiently.
C. Jin, R. Ge, P. Netrapalli, S. Kakade, and M. I. Jordan.
arxiv.org/abs/1902.04811, 2019.

A short note on concentration inequalities for random vectors with sub-Gaussian norm.
arxiv.org/abs/1902.03736, 2019.

SysML: The new frontier of machine learning systems.
A. Ratner, et al.
arxiv.org/abs/1904.03257, 2019.

Global error bounds and linear convergence for gradient-based algorithms for
trend filtering and l1-convex clustering.
N. Ho, T. Lin, and M. I. Jordan.
arxiv.org/abs/1904.07462, 2019.

Acceleration via symplectic discretization of high-resolution differential equations.
B. Shi, S. Du, W. Su, and M. I. Jordan.
In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 32, 2019.

Transferable normalization: Towards improving transferability of deep neural networks.
X. Wang, Y. Jin, M. Long, J. Wang, and M. I. Jordan.
In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 32, 2019.

Quantitative central limit theorems for discrete stochastic processes.
X. Cheng, P. Bartlett, and M. I. Jordan.
arxiv.org/abs/1902.00832, 2019.

Challenges with EM in application to weakly identifiable mixture models.
R. Dwivedi, N. Ho, K. Khamaru, M. Wainwright, M. I. Jordan, and B. Yu.
arxiv.org/abs/1902.00194, 2019.

Min-max optimization: Stable limit points of gradient descent ascent are locally optimal.
C. Jin, P. Netrapalli, and M. I. Jordan.
arxiv.org/abs/1902.00618, 2019.

On finding local Nash equilibria (and only local Nash equilibria) in zero-sum games.
E. Mazumdar, M. I. Jordan, and S. S. Sastry.
arxiv.org/abs/1901.00838, 2019.

A dynamical systems perspective on Nesterov acceleration.
M. Muehlebach and M. I. Jordan.
In K. Chaudhuri and R. Salakhutdinov (Eds.),
International Conference on Machine Learning (ICML), 2019.

On efficient optimal transport: An analysis of greedy and accelerated
mirror descent algorithms.
T. Lin, N. Ho, and M. I. Jordan.
In K. Chaudhuri and R. Salakhutdinov (Eds.),
International Conference on Machine Learning (ICML), 2019.

Bridging theory and algorithm for domain adaptation.
Y. Zhang, T. Liu, M. Long, and M. I. Jordan.
In K. Chaudhuri and R. Salakhutdinov (Eds.),
International Conference on Machine Learning (ICML), 2019.

Theoretically principled trade-off between robustness and accuracy.
H. Zhang, Y. Yu, J. Jiao, E. Xing, L. El Ghaoui, and M. I. Jordan.
In K. Chaudhuri and R. Salakhutdinov (Eds.),
International Conference on Machine Learning (ICML), 2019.

Towards accurate model selection in deep unsupervised domain adaptation.
K. You, X. Wang, M. Long, and M. I. Jordan.
In K. Chaudhuri and R. Salakhutdinov (Eds.),
International Conference on Machine Learning (ICML), 2019.

Rao-Blackwellized stochastic gradients for discrete distributions.
R. Liu, J. Regier, N. Tripuraneni, M. I. Jordan, and J. McAuliffe.
In K. Chaudhuri and R. Salakhutdinov (Eds.),
International Conference on Machine Learning (ICML), 2019.

Transferable adversarial training: A general approach to adapting deep classifiers.
H. Liu, M. Long, J. Wang, and M. I. Jordan.
In K. Chaudhuri and R. Salakhutdinov (Eds.),
International Conference on Machine Learning (ICML), 2019.

A Swiss army infinitesimal jackknife.
R. Giordano, W. Stephenson, R. Liu, M. I. Jordan, and T. Broderick.
In K. Chaudhuri and M. Sugiyama (Eds.),
Proceedings of the Twenty-Second Conference on Artificial Intelligence and
Statistics (AISTATS), Okinawa, Japan, 2019. [Notable Paper Award].

Probabilistic multilevel clustering via composite transportation distance.
N. Ho, V. Huynh, D. Phung, and M. I. Jordan.
In K. Chaudhuri and M. Sugiyama (Eds.),
Proceedings of the Twenty-Second Conference on Artificial Intelligence and
Statistics (AISTATS), Okinawa, Japan, 2019.

L-Shapley and C-Shapley: Efficient model interpretation for structured data.
J. Chen, L. Song, M. Wainwright, and M. I. Jordan.
International Conference on Learning Representations (ICLR), New Orleans, LA, 2019.
2018

Dynamical, symplectic and stochastic perspectives on gradient-based optimization.
M. I. Jordan.
Proceedings of the International Congress of Mathematicians, 1, 523–550, 2018.

Bayesian inference for a generative model of transcriptome profiles from
single-cell RNA sequencing.
R. Lopez, J. Regier, M. Cole, M. I. Jordan, and N. Yosef.
Nature Methods, 15, 1053–1058, 2018.

Communication-efficient distributed statistical inference.
M. I. Jordan, J. Lee, and Y. Yang.
Journal of the American Statistical Association, 114, 668–681, 2018.

On kernel methods for covariates that are rankings.
H. Mania, A. Ramdas, M. Wainwright, M. I. Jordan, and B. Recht.
Electronic Journal of Statistics, 12, 2537–2577, 2018.

Saturating splines and feature selection.
N. Boyd, T. Hastie, S. Boyd, B. Recht, and M. I. Jordan.
Journal of Machine Learning Research, 18, 1–32, 2018.

Covariances, robustness, and variational Bayes.
R. Giordano, T. Broderick, and M. I. Jordan.
Journal of Machine Learning Research, 19, 1–49, 2018.

CoCoA: A general framework for communication-efficient distributed optimization.
V. Smith, S. Forte, C. Ma, M. Takac, M. I. Jordan, and M. Jaggi.
Journal of Machine Learning Research, 18, 1–49, 2018.

Posteriors, conjugacy, and exponential families for completely random measures.
T. Broderick, A. Wilson, and M. I. Jordan.
Bernoulli, 24, 3181–3221, 2018.

Latent marked Poisson process with applications to object segmentation.
S. Ghanta, J. Dy, D. Niu, and M. I. Jordan.
Bayesian Analysis, 13, 85–113, 2018.

Ray: A distributed framework for emerging AI applications.
P. Moritz, R. Nishihara, S. Wang, A. Tumanov, R. Liaw, E. Liang,
W. Paul, M. I. Jordan, and I. Stoica.
13th USENIX Symposium on Operating Systems Design and Implementation (OSDI),
Carlsbad, CA, 2018.

A deep generative model for semi-supervised classification with noisy labels.
M. Langevin, E. Mehlman, J. Regier, R. Lopez, M. I. Jordan, and N. Yosef.
arxiv.org/abs/1809.05957, 2018.

Sharp convergence rates for Langevin dynamics in the nonconvex setting.
X. Cheng, N. Chatterji, Y. AbbasiYadkori, P. Bartlett, and M. I. Jordan.
arxiv.org/abs/1805.01648, 2018.

Learning without mixing: Towards a sharp analysis of linear system identification.
M. Simchowitz, H. Mania, S. Tu, M. I. Jordan, and B. Recht.
Proceedings of the Conference on Learning Theory (COLT),
Stockholm, Sweden, 2018.

Underdamped Langevin MCMC: A non-asymptotic analysis.
X. Cheng, N. Chatterji, P. Bartlett, and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT),
Stockholm, Sweden, 2018.

Averaging stochastic gradient descent on Riemannian manifolds.
N. Tripuraneni, N. Flammarion, F. Bach, and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT),
Stockholm, Sweden, 2018.

Accelerated gradient descent escapes saddle points faster than gradient descent.
C. Jin, P. Netrapalli, and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT),
Stockholm, Sweden, 2018.

Detection limits in the high-dimensional spiked rectangular model.
A. El Alaoui and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT),
Stockholm, Sweden, 2018.

Partial transfer learning with selective adversarial networks.
Z. Cao, M. Long, J. Wang, and M. I. Jordan.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
Salt Lake City, UT, 2018.

SAFFRON: An adaptive algorithm for online control of the false discovery rate.
A. Ramdas, T. Zrnic, M. Wainwright, and M. I. Jordan.
Proceedings of the 35th International Conference on Machine
Learning (ICML), Stockholm, Sweden, 2018.

Learning to explain: An information-theoretic perspective on model interpretation.
J. Chen, L. Song, M. Wainwright, and M. I. Jordan.
Proceedings of the 35th International Conference on Machine
Learning (ICML), Stockholm, Sweden, 2018.

Ray RLlib: A framework for distributed reinforcement learning.
E. Liang, R. Liaw, P. Moritz, R. Nishihara, R. Fox, K. Goldberg,
J. Gonzalez, M. I. Jordan, and I. Stoica.
Proceedings of the 35th International Conference on Machine
Learning (ICML), Stockholm, Sweden, 2018.

On the theory of variance reduction for stochastic gradient Monte Carlo.
N. S. Chatterji, N. Flammarion, Y.A. Ma, P. L. Bartlett, and M. I. Jordan.
Proceedings of the 35th International Conference on Machine
Learning (ICML), Stockholm, Sweden, 2018.

Flexible primitives for distributed deep learning in Ray.
Y. Bulatov, R. Nishihara, P. Moritz, M. Elibol, M. I. Jordan, and I. Stoica.
Systems and Machine Learning Conference (SysML), Palo Alto, CA, 2018.

On symplectic optimization.
M. Betancourt, M. I. Jordan, and A. Wilson.
arxiv.org/abs/1802.03653, 2018.

Minimizing nonconvex population risk from rough empirical risk.
C. Jin, L. Liu, R. Ge, and M. I. Jordan.
arxiv.org/abs/1803.09357, 2018.

On nonlinear dimensionality reduction, linear smoothing and autoencoding.
D. Ting and M. I. Jordan.
arxiv.org/abs/1803.02432, 2018.

Model-based value estimation for efficient model-free reinforcement learning.
V. Feinberg, A. Wan, I. Stoica, M. I. Jordan, J. Gonzalez, and S. Levine.
arxiv.org/abs/1803.00101, 2018.

Is Q-learning provably efficient?
C. Jin, Z. AllenZhu, S. Bubeck, and M. I. Jordan.
In S. Vishwanathan, H. Wallach, H. Larochelle, K. Grauman, and N. Cesa-Bianchi (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 31, 2018.

Stochastic cubic regularization for fast nonconvex optimization.
N. Tripuraneni, M. Stern, C. Jin, J. Regier, and M. I. Jordan.
Advances in Neural Information Processing Systems (NeurIPS) 31, 2018.

On the local minima of the empirical risk.
C. Jin, L. Liu, R. Ge, and M. I. Jordan.
In S. Vishwanathan, H. Wallach, H. Larochelle, K. Grauman, and N. Cesa-Bianchi (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 31, 2018.

GenOja: Simple and efficient algorithm for streaming generalized
eigenvector computation.
K. Bhatia, A. Pacchiano, N. Flammarion, P. Bartlett, and M. I. Jordan.
In S. Vishwanathan, H. Wallach, H. Larochelle, K. Grauman, and N. Cesa-Bianchi (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 31, 2018.

Theoretical guarantees for EM under misspecified Gaussian mixture models.
R. Dwivedi, N. Ho, K. Khamaru, M. Wainwright, and M. I. Jordan.
In S. Vishwanathan, H. Wallach, H. Larochelle, K. Grauman, and N. Cesa-Bianchi (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 31, 2018.

Generalized zeroshot learning with deep calibration network.
S. Liu, M. Long, J. Wang, and M. I. Jordan.
In S. Vishwanathan, H. Wallach, H. Larochelle, K. Grauman, and N. Cesa-Bianchi (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 31, 2018.

Conditional adversarial domain adaptation.
M. Long, Z. Cao, J. Wang, and M. I. Jordan.
In S. Vishwanathan, H. Wallach, H. Larochelle, K. Grauman, and N. Cesa-Bianchi (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 31, 2018.

Information constraints on autoencoding variational Bayes.
R. Lopez, J. Regier, M. I. Jordan, and N. Yosef.
In S. Vishwanathan, H. Wallach, H. Larochelle, K. Grauman, and N. Cesa-Bianchi (Eds.),
Advances in Neural Information Processing Systems (NeurIPS) 31, 2018.
2017

Minimax optimal procedures for locally private estimation.
J. Duchi, M. I. Jordan, and M. Wainwright.
Journal of the American Statistical Association, 113, 182–201, 2017.

Perturbed iterate analysis for asynchronous stochastic optimization.
H. Mania, X. Pan, D. Papailiopoulos, B. Recht, K. Ramchandran, and M. I. Jordan.
SIAM Journal on Optimization, 27, 2202–2229, 2017.

Finite-size corrections and likelihood ratio fluctuations in the spiked Wigner model.
A. El Alaoui, F. Krzakala, and M. I. Jordan.
arxiv.org/abs/1710.02903, 2017.

Measuring cluster stability for Bayesian nonparametrics using the linear
bootstrap.
R. Giordano, R. Liu, N. Varoquaux, M. I. Jordan, and T. Broderick.
arxiv.org/abs/1712.01435, 2017.

A deep generative model for single-cell RNA sequencing with application
to detecting differentially expressed genes.
R. Lopez, J. Regier, M. Cole, M. I. Jordan, and N. Yosef.
arxiv.org/abs/1710.05086, 2017.

Partial transfer learning with selective adversarial networks.
Z. Cao, M. Long, J. Wang, and M. I. Jordan.
arxiv.org/abs/1707.07901, 2017.

A Berkeley view of systems challenges for AI.
I. Stoica, D. Song, R. A. Popa, D. Patterson, M. Mahoney, R. Katz,
A. Joseph, M. I. Jordan, J. M. Hellerstein, J. Gonzalez, K. Goldberg,
A. Ghodsi, D. Culler, and P. Abbeel.
arxiv.org/abs/1712.05855, 2017.

Conditional adversarial domain adaptation.
M. Long, Z. Cao, J. Wang, and M. I. Jordan.
arxiv.org/abs/1705.10667, 2017.

Mining massive amounts of genomic data: A semiparametric topic modeling approach.
E. Fang, M.-D. Li, M. I. Jordan, and H. Liu.
Journal of the American Statistical Association, 112, 921–932, 2017.

Real-time machine learning: The missing pieces.
R. Nishihara, P. Moritz, S. Wang, A. Tumanov, W. Paul, J. Schleier-Smith,
R. Liaw, M. I. Jordan, and I. Stoica.
16th Workshop on Hot Topics in Operating Systems (HotOS XVI), Whistler, Canada, 2017.

How to escape saddle points efficiently.
C. Jin, R. Ge, P. Netrapalli, S. Kakade, and M. I. Jordan.
In D. Precup and Y. W. Teh (Eds),
Proceedings of the 34th International Conference on Machine
Learning (ICML), Sydney, Australia, 2017.

Breaking locality accelerates block GaussSeidel.
S. Tu, S. Venkataraman, A. Wilson, A. Gittens, M. I. Jordan, and B. Recht.
In D. Precup and Y. W. Teh (Eds),
Proceedings of the 34th International Conference on Machine
Learning (ICML), Sydney, Australia, 2017.

Deep transfer learning with joint adaptation networks.
M. Long, H. Zhu, J. Wang, and M. I. Jordan.
Proceedings of the 34th International Conference on Machine
Learning (ICML), Sydney, Australia, 2017.

Optimal prediction for sparse linear models? Lower bounds for
coordinate-separable M-estimators.
Y. Zhang, M. Wainwright, and M. I. Jordan.
Electronic Journal of Statistics, 11, 752–799, 2017.

QuTE algorithms for decentralized decision making on networks with false
discovery rate control.
A. Ramdas, J. Chen, M. Wainwright, and M. I. Jordan.
56th IEEE Conference on Decision and Control, 2017.

Less than a single pass: Stochastically controlled stochastic gradient.
L. Lei and M. I. Jordan.
In A. Singh and J. Zhu (Eds.),
Proceedings of the Twentieth Conference on Artificial
Intelligence and Statistics (AISTATS), 2017.
[Supplementary info]

On the learnability of fully-connected neural networks.
Y. Zhang, J. Lee, M. Wainwright, and M. I. Jordan.
In A. Singh and J. Zhu (Eds.),
Proceedings of the Twentieth Conference on Artificial
Intelligence and Statistics (AISTATS), 2017.

Gradient descent can take exponential time to escape saddle points.
S. Du, C. Jin, J. Lee, M. I. Jordan, B. Poczos, and A. Singh.
In S. Bengio, R. Fergus, S. Vishwanathan, and H. Wallach (Eds),
Advances in Neural Information Processing Systems (NIPS) 30, 2017.

Nonconvex finite-sum optimization via SCSG methods.
L. Lei, C. Ju, J. Chen, and M. I. Jordan.
In S. Bengio, R. Fergus, S. Vishwanathan, and H. Wallach (Eds),
Advances in Neural Information Processing Systems (NIPS) 30, 2017.

Online control of the false discovery rate with decaying memory.
A. Ramdas, F. Yang, M. Wainwright, and M. I. Jordan.
In S. Bengio, R. Fergus, S. Vishwanathan, and H. Wallach (Eds),
Advances in Neural Information Processing Systems (NIPS) 30, 2017.

Fast black-box variational inference through stochastic trust-region optimization.
J. Regier, M. I. Jordan, and J. McAuliffe.
In S. Bengio, R. Fergus, S. Vishwanathan, and H. Wallach (Eds),
Advances in Neural Information Processing Systems (NIPS) 30, 2017.

Kernel feature selection via conditional covariance minimization.
J. Chen, M. Stern, M. Wainwright, and M. I. Jordan.
In S. Bengio, R. Fergus, S. Vishwanathan, and H. Wallach (Eds),
Advances in Neural Information Processing Systems (NIPS) 30, 2017.

A marked Poisson process driven latent shape model for 3D segmentation of
reflectance confocal microscopy image stacks of human skin.
S. Ghanta, M. I. Jordan, K. Kose, D. Brooks, J. Rajadhyaksha, and J. Dy.
IEEE Transactions on Image Processing, 26, 172–184, 2017.

Distributed optimization with arbitrary local solvers.
C. Ma, J. Konecny, M. Jaggi, V. Smith, M. I. Jordan, P. Richtarik, and M. Takac.
Optimization Methods and Software, 4, 813–848, 2017.
[Most Read Paper Award].
2016

A variational perspective on accelerated methods in optimization.
A. Wibisono, A. Wilson, and M. I. Jordan.
Proceedings of the National Academy of Sciences, 113, E7351–E7358, 2016.
[ArXiv version]

On the computational complexity of high-dimensional Bayesian variable selection.
Y. Yang, M. Wainwright, and M. I. Jordan.
Annals of Statistics, 44, 2497–2532, 2016.

Fast measurements of robustness to changing priors in variational Bayes.
R. Giordano, T. Broderick, and M. I. Jordan.
arXiv:1611.07649, 2016.

Fast robustness quantification with variational Bayes.
R. Giordano, T. Broderick, R. Meager, J. Huggins, and M. I. Jordan.
arXiv:1606.07153, 2016.

A constructive definition of the beta process.
J. Paisley and M. I. Jordan.
arXiv:1604.00685, 2016.

Universality of Mallows' and degeneracy of Kendall's kernels for rankings.
H. Mania, A. Ramdas, M. Wainwright, M. I. Jordan and B. Recht.
arXiv:1603.04245, 2016.

Spectral methods meet EM: A provably optimal algorithm for crowdsourcing.
Y. Zhang, X. Chen, D. Zhou, and M. I. Jordan.
Journal of Machine Learning Research, 101, 1–44, 2016.

Gradient descent converges to minimizers.
J. Lee, M. Simchowitz, M. I. Jordan, and B. Recht.
Proceedings of the Conference on Learning Theory (COLT),
New York, NY, 2016.

Asymptotic behavior of l_p-based Laplacian regularization in
semi-supervised learning.
A. El Alaoui, X. Cheng, A. Ramdas, M. Wainwright and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT),
New York, NY, 2016.

A kernelized Stein discrepancy for goodness-of-fit tests and model evaluation.
Q. Liu, J. Lee, and M. I. Jordan.
Proceedings of the 33rd International Conference on Machine
Learning (ICML), New York, NY, 2016.

l_1-regularized neural networks are improperly learnable in polynomial time.
Y. Zhang, J. Lee, and M. I. Jordan.
Proceedings of the 33rd International Conference on Machine
Learning (ICML), New York, NY, 2016.

A linearly-convergent stochastic L-BFGS algorithm.
P. Moritz, R. Nishihara, and M. I. Jordan.
Proceedings of the Eighteenth Conference on Artificial
Intelligence and Statistics (AISTATS), Cadiz, Spain, 2016.

High-dimensional continuous control using generalized advantage estimation.
J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel.
International Conference on Learning Representations (ICLR),
Puerto Rico, 2016.

SparkNet: Training deep networks in Spark.
P. Moritz, R. Nishihara, I. Stoica and M. I. Jordan.
International Conference on Learning Representations (ICLR),
Puerto Rico, 2016.

The constrained Laplacian rank algorithm for graphbased clustering.
F. Nie, X. Wang, M. I. Jordan, and H. Huang.
In Proceedings of the Thirtieth Conference on Artificial Intelligence (AAAI),
Phoenix, AZ, 2016.

CYCLADES: Conflict-free asynchronous machine learning.
X. Pan, M. Lam, S. Tu, D. Papailiopoulos, C. Zhang, M. I. Jordan,
K. Ramchandran, C. Re, and B. Recht.
In U. von Luxburg, I. Guyon, D. Lee, and M. Sugiyama (Eds.),
Advances in Neural Information Processing Systems (NIPS) 29, 2016.

Local maxima in the likelihood of Gaussian mixture models:
Structural results and algorithmic consequences.
C. Jin, Y. Zhang, S. Balakrishnan, M. Wainwright, and M. I. Jordan.
In U. von Luxburg, I. Guyon, D. Lee, and M. Sugiyama (Eds.),
Advances in Neural Information Processing Systems (NIPS) 29, 2016.

Unsupervised domain adaptation with residual transfer networks.
M. Long, H. Zhu, J. Wang, and M. I. Jordan.
In U. von Luxburg, I. Guyon, D. Lee, and M. Sugiyama (Eds.),
Advances in Neural Information Processing Systems (NIPS) 29, 2016.
2015

Machine learning: Trends, perspectives, and prospects.
M. I. Jordan and T. Mitchell.
Science, 349, 255–260, 2015.

Nested hierarchical Dirichlet processes.
J. Paisley, C. Wang, D. Blei, and M. I. Jordan.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
37, 256–270, 2015.

Combinatorial clustering and the beta negative binomial process.
T. Broderick, L. Mackey, J. Paisley and M. I. Jordan.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
37, 290–306, 2015.

Distributed matrix completion and robust factorization.
L. Mackey, A. Talwalkar and M. I. Jordan.
Journal of Machine Learning Research, 16, 913–960, 2015.

Optimal rates for zero-order optimization: the power of two function evaluations.
J. Duchi, M. I. Jordan, M. Wainwright, and A. Wibisono.
IEEE Transactions on Information Theory, 61, 2788–2806, 2015.

Robust inference with variational Bayes.
R. Giordano, T. Broderick, and M. I. Jordan.
arXiv:1512.02578, 2015.

Learning halfspaces and neural networks with random initialization.
Y. Zhang, J. Lee, M. Wainwright and M. I. Jordan.
arXiv:1511.07948, 2015.

Asynchronous complex analytics in a distributed dataflow architecture.
J. Gonzalez, P. Bailis, M. I. Jordan, M. Franklin, J. Hellerstein, A. Ghodsi, and I. Stoica.
arXiv:1510.07092, 2015.

Splash: Userfriendly programming interface for parallelizing stochastic algorithms.
Y. Zhang and M. I. Jordan.
arXiv:1506.07552, 2015.

Trust region policy optimization.
J. Schulman, P. Moritz, S. Levine, M. I. Jordan, and P. Abbeel.
In F. Bach and D. Blei (Eds.),
Proceedings of the 32nd International Conference on Machine
Learning (ICML), Lille, France, 2015.
[Long version]

Adding vs. averaging in distributed primal-dual optimization.
C. Ma, V. Smith, M. Jaggi, M. I. Jordan, P. Richtarik, and M. Takac.
In F. Bach and D. Blei (Eds.),
Proceedings of the 32nd International Conference on Machine
Learning (ICML), Lille, France, 2015.
[Long version]

A general analysis of the convergence of ADMM.
R. Nishihara, L. Lessard, B. Recht, A. Packard, and M. I. Jordan.
In F. Bach and D. Blei (Eds.),
Proceedings of the 32nd International Conference on Machine
Learning (ICML), Lille, France, 2015.
[Long version]

Learning transferable features with deep adaptation networks.
M. Long, Y. Cao, J. Wang, and M. I. Jordan.
In F. Bach and D. Blei (Eds.),
Proceedings of the 32nd International Conference on Machine
Learning (ICML), Lille, France, 2015.

Distributed estimation of generalized matrix rank: Efficient algorithms and lower bounds.
Y. Zhang, M. Wainwright, and M. I. Jordan.
In F. Bach and D. Blei (Eds.),
Proceedings of the 32nd International Conference on Machine
Learning (ICML), Lille, France, 2015.

Automating model search for large scale machine learning.
E. Sparks, A. Talwalkar, D. Haas, M. Franklin, M. I. Jordan, and T. Kraska.
ACM Symposium on Cloud Computing (SOCC), Kohala Coast, Hawaii, 2015.

TuPAQ: An efficient planner for large-scale predictive analytic queries.
E. Sparks, A. Talwalkar, M. J. Franklin, M. I. Jordan, and T. Kraska.
arXiv:1502.00068, 2015.

Parallel correlation clustering on big graphs.
X. Pan, D. Papailiopoulos, S. Oymak, B. Recht, K. Ramchandran, and M. I. Jordan.
In D. Lee, M. Sugiyama, C. Cortes and N. Lawrence (Eds.),
Advances in Neural Information Processing Systems (NIPS) 28, 2015.

On the accuracy of self-normalized log-linear models.
J. Andreas, M. Rabinovich, D. Klein, and M. I. Jordan.
In D. Lee, M. Sugiyama, C. Cortes and N. Lawrence (Eds.),
Advances in Neural Information Processing Systems (NIPS) 28, 2015.

Variational consensus Monte Carlo.
M. Rabinovich, E. Angelino, and M. I. Jordan.
In D. Lee, M. Sugiyama, C. Cortes and N. Lawrence (Eds.),
Advances in Neural Information Processing Systems (NIPS) 28, 2015.

Linear response methods for accurate covariance estimates from mean field
variational Bayes.
R. Giordano, T. Broderick, and M. I. Jordan.
In D. Lee, M. Sugiyama, C. Cortes and N. Lawrence (Eds.),
Advances in Neural Information Processing Systems (NIPS) 28, 2015.

Optimismdriven exploration for nonlinear systems.
T. Moldovan, S. Levine, M. I. Jordan, and P. Abbeel.
In IEEE International Conference on Robotics and Automation (ICRA),
Seattle, WA, 2015.
2014

Matrix concentration inequalities via the method of exchangeable pairs.
L. Mackey, M. I. Jordan, R. Y. Chen, B. Farrell and J. A. Tropp.
Annals of Probability, 42, 906–945, 2014.

A scalable bootstrap for massive data.
A. Kleiner, A. Talwalkar, P. Sarkar and M. I. Jordan.
Journal of the Royal Statistical Society, Series B,
76, 795–816, 2014.

Privacy aware learning.
J. Duchi, M. I. Jordan, and M. Wainwright.
Journal of the ACM, 61, http://dx.doi.org/10.1145/2666468, 2014.

Joint modeling of multiple time series via the beta process with
application to motion capture segmentation.
E. Fox, M. Hughes, E. Sudderth, and M. I. Jordan.
Annals of Applied Statistics, 8, 1281–1313, 2014.

Nonparametric link prediction in large scale dynamic networks.
P. Sarkar, D. Chakrabarti, and M. I. Jordan.
Electronic Journal of Statistics, 8, 2022–2065, 2014.

Particle Gibbs with ancestral sampling.
F. Lindsten, M. I. Jordan, and T. Schön.
Journal of Machine Learning Research,
15, 2145–2184, 2014.

Iterative discovery of multiple alternative clustering views.
D. Niu, J. Dy, and M. I. Jordan.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
36, 1340–1353, 2014.

Matrixvariate Dirichlet process priors with applications.
Z. Zhang, D. Wang, G. Dai, and M. I. Jordan.
Bayesian Analysis, 9, 259–286, 2014.

SMASH: A benchmarking toolkit for variant calling.
A. Talwalkar, J. Liptrap, J. Newcomb, C. Hartl, J. Terhorst, K. Curtis, M. Bresler,
Y. Song, M. I. Jordan, and D. Patterson.
Bioinformatics,
DOI:10.1093/bioinformatics/btu345, 2014.

Optimality guarantees for distributed statistical estimation.
J. Duchi, M. I. Jordan, M. Wainwright, and Y. Zhang.
arXiv:1405.0782, 2014.

The missing piece in complex analytics: Low latency, scalable model
management and serving with Velox.
D. Crankshaw, P. Bailis, J. E. Gonzalez, H. Li, Z. Zhang, M. J. Franklin,
A. Ghodsi, and M. I. Jordan.
Conference on Innovative Data Systems Research (CIDR),
Asilomar, CA, 2014.

Lower bounds on the performance of polynomial-time algorithms
for sparse linear regression.
Y. Zhang, M. Wainwright, and M. I. Jordan.
Proceedings of the Conference on Learning Theory (COLT),
Barcelona, Spain, 2014.

Knowing when you're wrong: Building fast and reliable approximate
query processing systems.
S. Agarwal, H. Milner, A. Kleiner, B. Mozafari, M. I. Jordan,
S. Madden, and I. Stoica.
Proceedings of the 2014 ACM International Conference on Management
of Data (SIGMOD), Snowbird, Utah, 2014.

Scaling a crowdsourced database.
B. Mozafari, P. Sarkar, M. Franklin, M. I. Jordan, and S. Madden.
Proceedings of the 41st International Conference on Very Large Data Bases (VLDB),
Hawaii, USA, 2014.

Changepoint analysis for efficient variant calling.
A. Bloniarz, A. Talwalkar, J. Terhorst, M. I. Jordan, D. Patterson,
B. Yu, and Y. Song.
International Conference on Research in Computational
Molecular Biology (RECOMB), Pittsburgh, PA, 2014.

Mixed membership models for time series.
E. Fox and M. I. Jordan.
In E. Airoldi, D. Blei, E. A. Erosheva, and S. E. Fienberg (Eds.),
Handbook of Mixed Membership Models and Their Applications,
Chapman & Hall/CRC, 2014.

Mixed membership matrix factorization.
L. Mackey, D. Weiss, and M. I. Jordan.
In E. Airoldi, D. Blei, E. A. Erosheva, and S. E. Fienberg (Eds.),
Handbook of Mixed Membership Models and Their Applications,
Chapman & Hall/CRC, 2014.

Bayesian nonnegative matrix factorization with stochastic variational inference.
J. Paisley, D. Blei, and M. I. Jordan.
In E. Airoldi, D. Blei, E. A. Erosheva, and S. E. Fienberg (Eds.),
Handbook of Mixed Membership Models and Their Applications,
Chapman & Hall/CRC, 2014.

Spectral methods meet EM: A provably optimal algorithm for crowdsourcing.
Y. Zhang, X. Chen, D. Zhou, and M. I. Jordan.
In Z. Ghahramani, M. Welling, C. Cortes and N. Lawrence (Eds.),
Advances in Neural Information Processing Systems (NIPS) 27, 2014.

On the convergence rate of decomposable submodular function minimization.
R. Nishihara, S. Jegelka, and M. I. Jordan.
In Z. Ghahramani, M. Welling, C. Cortes and N. Lawrence (Eds.),
Advances in Neural Information Processing Systems (NIPS) 27, 2014.

Communication-efficient distributed dual coordinate ascent.
M. Jaggi, V. Smith, M. Takac, J. Terhorst, T. Hofmann, and M. I. Jordan.
In Z. Ghahramani, M. Welling, C. Cortes and N. Lawrence (Eds.),
Advances in Neural Information Processing Systems (NIPS) 27, 2014.

Parallel double greedy submodular maximization.
X. Pan, S. Jegelka, J. Gonzalez, J. Bradley, and M. I. Jordan.
In Z. Ghahramani, M. Welling, C. Cortes and N. Lawrence (Eds.),
Advances in Neural Information Processing Systems (NIPS) 27, 2014.
2013

Learning dependency-based compositional semantics.
P. Liang, M. I. Jordan, and D. Klein.
Computational Linguistics, 39, 389–446, 2013.

Computational and statistical tradeoffs via convex relaxation.
V. Chandrasekaran and M. I. Jordan.
Proceedings of the National Academy of Sciences, 110, E1181–E1190, 2013.

Feature allocations, probability functions, and paintboxes.
T. Broderick, J. Pitman, and M. I. Jordan.
Bayesian Analysis, 8, 801–836, 2013.

On statistics, computation and scalability.
M. I. Jordan.
Bernoulli, 19, 1378–1390, 2013.

The asymptotics of ranking algorithms.
J. Duchi, L. Mackey, and M. I. Jordan.
Annals of Statistics, 41, 2292–2323, 2013.

Evolutionary inference via the Poisson indel process.
A. Bouchard-Côté and M. I. Jordan.
Proceedings of the National Academy of Sciences, 110, 1160–1166, 2013.

Clusters and features from combinatorial stochastic processes.
T. Broderick, M. I. Jordan, and J. Pitman.
Statistical Science, 28, 289–312, 2013.

Bayesian semiparametric Wiener system identification.
F. Lindsten, T. Schön, and M. I. Jordan.
Automatica, 49, 2053–2063, 2013.

Cluster forests.
D. Yan, A. Chen, and M. I. Jordan.
Computational Statistics and Data Analysis, 66, 178–192, 2013.

Molecular function prediction for a family exhibiting evolutionary tendencies
towards substrate specificity swapping: Recurrence of tyrosine aminotransferase
activity in the Iα subfamily.
K. Muratore, B. Engelhardt, J. Srouji, M. I. Jordan, S. Brenner, and J. Kirsch.
Proteins: Structure, Function, and Bioinformatics, DOI:10.1002/prot.24318, 2013.

Local privacy, data processing inequalities, and statistical minimax rates.
J. Duchi, M. I. Jordan, and M. Wainwright.
arXiv:1302.3203, 2013.

MLI: An API for distributed machine learning.
E. Sparks, A. Talwalkar, V. Smith, J. Kottalam, X. Pan, J. Gonzalez, M. I. Jordan,
M. Franklin, and T. Kraska.
IEEE International Conference on Data Mining (ICDM), Dallas, TX, 2013.

MAD-Bayes: MAP-based asymptotic derivations from Bayes.
T. Broderick, B. Kulis, and M. I. Jordan.
In S. Dasgupta and D. McAllester (Eds.),
Proceedings of the 30th International Conference on Machine
Learning (ICML), Atlanta, GA, 2013.
[Supplementary information].

Efficient ranking from pairwise comparisons.
F. Wauthier, M. I. Jordan, and N. Jojic.
In S. Dasgupta and D. McAllester (Eds.),
Proceedings of the 30th International Conference on Machine
Learning (ICML), Atlanta, GA, 2013.
[Supplementary information].

Distributed low-rank subspace segmentation.
L. Mackey, A. Talwalkar, Y. Mu, S.-F. Chang, and M. I. Jordan.
IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 2013.

A general bootstrap performance diagnostic.
A. Kleiner, A. Talwalkar, S. Agarwal, M. I. Jordan, and I. Stoica.
ACM Conference on Knowledge Discovery and Data Mining (SIGKDD), Chicago, IL, 2013.

Local privacy and minimax bounds: Sharp rates for probability estimation.
J. Duchi, M. I. Jordan, and M. Wainwright.
arXiv:1305.6000, 2013.

Optimistic concurrency control for distributed unsupervised learning.
X. Pan, J. Gonzalez, S. Jegelka, T. Broderick, and M. I. Jordan.
In L. Bottou, C. Burges, Z. Ghahramani, and M. Welling (Eds.),
Advances in Neural Information Processing Systems (NIPS) 26, 2013.

Informationtheoretic lower bounds for distributed statistical estimation
with communication constraints.
Y. Zhang, J. Duchi, M. I. Jordan, and M. Wainwright.
In L. Bottou, C. Burges, Z. Ghahramani, and M. Welling (Eds.),
Advances in Neural Information Processing Systems (NIPS) 26, 2013.

Estimation, optimization, and parallelism when data is sparse.
J. Duchi, M. I. Jordan, and B. McMahan.
In L. Bottou, C. Burges, Z. Ghahramani, and M. Welling (Eds.),
Advances in Neural Information Processing Systems (NIPS) 26, 2013.

Streaming variational Bayes.
T. Broderick, N. Boyd, A. Wibisono, A. Wilson and M. I. Jordan.
In L. Bottou, C. Burges, Z. Ghahramani, and M. Welling (Eds.),
Advances in Neural Information Processing Systems (NIPS) 26, 2013.

Local privacy and minimax bounds: Sharp rates for probability estimation.
J. Duchi, M. I. Jordan, and M. Wainwright.
In L. Bottou, C. Burges, Z. Ghahramani, and M. Welling (Eds.),
Advances in Neural Information Processing Systems (NIPS) 26, 2013.

A comparative framework for preconditioned Lasso algorithms.
F. Wauthier, N. Jojic and M. I. Jordan.
In L. Bottou, C. Burges, Z. Ghahramani, and M. Welling (Eds.),
Advances in Neural Information Processing Systems (NIPS) 26, 2013.
2012

Phylogenetic inference via sequential Monte Carlo.
A. Bouchard-Côté, S. Sankararaman, and M. I. Jordan.
Systematic Biology, 61, 579–593, 2012.

Ergodic mirror descent.
J. C. Duchi, A. Agarwal, M. Johansson, and M. I. Jordan.
SIAM Journal on Optimization, 22, 1549–1578, 2012.

EP-GIG priors and applications in Bayesian sparse learning.
Z. Zhang, S. Wang, D. Liu, and M. I. Jordan.
Journal of Machine Learning Research, 13, 2031–2061, 2012.

Beta processes, stick-breaking, and power laws.
T. Broderick, M. I. Jordan and J. Pitman.
Bayesian Analysis, 7, 439–476, 2012.

Coherence functions with applications in large-margin classification methods.
Z. Zhang, D. Liu, G. Dai, and M. I. Jordan.
Journal of Machine Learning Research, 13, 2705–2734, 2012.

A million cancer genome warehouse.
D. Haussler, D. A. Patterson, M. Diekhans, A. Fox, M. I. Jordan, A. D. Joseph,
S. Ma, B. Paten, S. Shenker, T. Sittler and I. Stoica.
Technical Report UCB/EECS-2012-211, Department of EECS,
University of California, Berkeley, 2012.

Active learning for crowdsourced databases.
B. Mozafari, P. Sarkar, M. J. Franklin, M. I. Jordan, and S. Madden.
arXiv:1209.3686, 2012.

The Big Data bootstrap.
A. Kleiner, A. Talwalkar, P. Sarkar, and M. I. Jordan.
In J. Langford and J. Pineau (Eds.),
Proceedings of the 29th International Conference on Machine
Learning (ICML), Edinburgh, UK, 2012.

Revisiting k-means: New algorithms via Bayesian nonparametrics.
B. Kulis and M. I. Jordan.
In J. Langford and J. Pineau (Eds.),
Proceedings of the 29th International Conference on Machine
Learning (ICML), Edinburgh, UK, 2012.

Variational Bayesian inference with stochastic search.
J. Paisley, D. Blei, and M. I. Jordan.
In J. Langford and J. Pineau (Eds.),
Proceedings of the 29th International Conference on Machine
Learning (ICML), Edinburgh, UK, 2012.

Nonparametric link prediction in dynamic networks.
P. Sarkar, D. Chakrabarti, and M. I. Jordan.
In J. Langford and J. Pineau (Eds.),
Proceedings of the 29th International Conference on Machine
Learning (ICML), Edinburgh, UK, 2012.
[Appendix].

Stick-breaking beta processes and the Poisson process.
J. Paisley, D. Blei, and M. I. Jordan.
In N. Lawrence and M. Girolami (Eds.),
Proceedings of the Fifteenth Conference on Artificial Intelligence
and Statistics (AISTATS), Canary Islands, Spain, 2012.

A semiparametric Bayesian approach to Wiener system identification.
F. Lindsten, T. Schön, and M. I. Jordan.
16th IFAC Symposium on System Identification (SYSID), Brussels, Belgium, 2012.

Active spectral clustering via iterative uncertainty reduction.
F. Wauthier, N. Jojic, and M. I. Jordan.
18th ACM Conference on Knowledge Discovery and Data Mining
(SIGKDD), Beijing, China, 2012.

Small-variance asymptotics for exponential family Dirichlet process mixture models.
K. Jiang, B. Kulis, and M. I. Jordan.
In P. Bartlett, F. Pereira, L. Bottou and C. Burges (Eds.),
Advances in Neural Information Processing Systems (NIPS) 25, 2012.

Ancestral sampling for particle Gibbs.
F. Lindsten, M. I. Jordan, and T. Schön.
In P. Bartlett, F. Pereira, L. Bottou and C. Burges (Eds.),
Advances in Neural Information Processing Systems (NIPS) 25, 2012.

Finite sample convergence rates of zero-order stochastic optimization methods.
J. Duchi, M. I. Jordan, M. Wainwright, and A. Wibisono.
In P. Bartlett, F. Pereira, L. Bottou and C. Burges (Eds.),
Advances in Neural Information Processing Systems (NIPS) 25, 2012.

Privacy aware learning.
J. Duchi, M. I. Jordan, and M. Wainwright.
In P. Bartlett, F. Pereira, L. Bottou and C. Burges (Eds.),
Advances in Neural Information Processing Systems (NIPS) 25, 2012.
[Long version].
2011

Union support recovery in high-dimensional multivariate regression.
G. Obozinski, M. J. Wainwright, and M. I. Jordan.
Annals of Statistics, 39, 1–47, 2011.

Bayesian inference for queueing networks and modeling of Internet services.
C. Sutton and M. I. Jordan.
Annals of Applied Statistics, 5, 254–282, 2011.

Genomescale phylogenetic function annotation of large and
diverse protein families.
B. Engelhardt, M. I. Jordan, J. Srouji, and S. Brenner.
Genome Research, 21, 1969–1980, 2011.

A sticky HDP-HMM with application to speaker diarization.
E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky.
Annals of Applied Statistics, 5, 1020–1056, 2011.

Learning low-dimensional signal models.
L. Carin, R. G. Baraniuk, V. Cevher, D. Dunson, M. I. Jordan, G. Sapiro,
and M. B. Wakin.
IEEE Signal Processing Magazine, 28, 39–51, 2011.

Bayesian generalized kernel mixed models.
Z. Zhang, G. Dai, and M. I. Jordan.
Journal of Machine Learning Research, 12, 111–139, 2011.

Bayesian nonparametric inference of switching linear dynamical models.
E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky.
IEEE Transactions on Signal Processing, 59, 1569–1585, 2011.

Nonparametric combinatorial sequence models.
F. Wauthier, M. I. Jordan, and N. Jojic.
Journal of Computational Biology,
18, 1649–1660, 2011.

The SCADS Director: Scaling a distributed storage system under stringent
performance requirements.
B. Trushkowsky, P. Bodik, A. Fox, M. Franklin, M. I. Jordan, and D. Patterson.
In 9th USENIX Conference on File and Storage Technologies (FAST '11),
San Jose, CA, 2011.

Learning dependency-based compositional semantics.
P. Liang, M. I. Jordan, and D. Klein.
The 49th Annual Meeting of the Association for Computational Linguistics (ACL),
Portland, OR, 2011.
[Long version].

Nonparametric Bayesian coclustering ensembles.
P. Wang, K. B. Laskey, C. Domeniconi, and M. I. Jordan.
SIAM International Conference on Data Mining (SDM), Phoenix, AZ, 2011.

Dimensionality reduction for spectral clustering.
D. Niu, J. Dy, and M. I. Jordan.
In G. Gordon and D. Dunson (Eds.),
Proceedings of the Fourteenth Conference on Artificial Intelligence
and Statistics (AISTATS), Ft. Lauderdale, FL, 2011.

Nonparametric combinatorial sequence models.
F. Wauthier, M. I. Jordan, and N. Jojic.
15th Annual International Conference on Research in Computational Molecular Biology (RECOMB),
Vancouver, BC, 2011.

Message from the President: Visualizing Bayesians.
M. I. Jordan.
ISBA Bulletin, 18(3), 1–2, 2011.

Supervised hierarchical Pitman-Yor process for natural scene segmentation.
A. Shyr, T. Darrell, M. I. Jordan, and R. Urtasun.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
Colorado Springs, CO, 2011.

A unified probabilistic model for global and local unsupervised feature selection.
Y. Guan, J. Dy, and M. I. Jordan.
In L. Getoor and T. Scheffer (Eds.),
Proceedings of the 28th International Conference on Machine
Learning (ICML), Bellevue, WA, 2011.

Message from the President: The era of Big Data.
M. I. Jordan.
ISBA Bulletin, 18(2), 1–3, 2011.

Managing data transfers in computer clusters with Orchestra.
M. Chowdhury, M. Zaharia, J. Ma, M. I. Jordan, and I. Stoica.
ACM SIGCOMM, Toronto, Canada, 2011.

Visually relating gene expression and in vivo DNA binding data.
M.-Y. Huang, L. Mackey, S. Keranen, G. Weber, M. I. Jordan, D. Knowles,
M. Biggin, and B. Hamann.
IEEE International Conference on Bioinformatics and Biomedicine (IEEE BIBM),
Atlanta, GA, 2011.

Message from the President: What are the open problems in Bayesian statistics?
M. I. Jordan.
ISBA Bulletin, 18(1), 1–4, 2011.

Ergodic subgradient descent.
J. C. Duchi, A. Agarwal, M. Johansson, and M. I. Jordan.
Forty-Ninth Annual Allerton Conference on Communication,
Control, and Computing, Urbana-Champaign, IL, 2011.

Bayesian bias mitigation for crowdsourcing.
F. L. Wauthier and M. I. Jordan.
In J. ShaweTaylor, R. Zemel, P. Bartlett and F. Pereira (Eds.),
Advances in Neural Information Processing Systems (NIPS) 24, 2011.

Divide-and-conquer matrix factorization.
L. Mackey, A. Talwalkar and M. I. Jordan.
In J. ShaweTaylor, R. Zemel, P. Bartlett and F. Pereira (Eds.),
Advances in Neural Information Processing Systems (NIPS) 24, 2011.
[Long version].
2010

Bayesian nonparametric learning: Expressive priors for intelligent systems.
M. I. Jordan.
In R. Dechter, H. Geffner, and J. Halpern (Eds.),
Heuristics, Probability and Causality: A Tribute to Judea Pearl,
College Publications, 2010.

Hierarchical models, nested models and completely random measures.
M. I. Jordan.
In M.-H. Chen, D. Dey, P. Mueller, D. Sun, and K. Ye (Eds.),
Frontiers of Statistical Decision Making and Bayesian
Analysis: In Honor of James O. Berger,
New York: Springer, 2010.

Feature space resampling for protein conformational search.
B. Blum, M. I. Jordan, and D. Baker.
Proteins: Structure, Function, and Bioinformatics,
78, 1583–1593, 2010.
[Supplementary information].

Neighbor-dependent Ramachandran probability distributions of amino acids
developed from a hierarchical Dirichlet process model.
D. Ting, G. Wang, M. Shapovalov, R. Mitra, M. I. Jordan, and R. Dunbrack.
PLoS Computational Biology, 6, e1000763, 2010.

The nested Chinese restaurant process and Bayesian inference of topic hierarchies.
D. M. Blei, T. Griffiths, and M. I. Jordan.
Journal of the ACM, 57, 1–30, 2010.
[Software].

Estimating divergence functionals and the likelihood ratio by convex
risk minimization.
X. Nguyen, M. J. Wainwright and M. I. Jordan.
IEEE Transactions on Information Theory, 56, 5847–5861, 2010.

Joint covariate selection and joint subspace selection
for multiple classification problems.
G. Obozinski, B. Taskar, and M. I. Jordan.
Statistics and Computing, 20, 231–252, 2010.

Convex and semi-nonnegative matrix factorizations.
C. Ding, T. Li, and M. I. Jordan.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
32, 45–55, 2010.

Active site prediction using evolutionary and structural information.
S. Sankararaman, F. Sha, J. Kirsch, M. I. Jordan, and K. Sjolander.
Bioinformatics, 26, 617–624, 2010.

Regularized discriminant analysis, ridge regression and beyond.
Z. Zhang, G. Dai, C. Xu, and M. I. Jordan.
Journal of Machine Learning Research, 11, 2141–2170, 2010.

Bayesian nonparametric methods for learning Markov switching processes.
E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky.
IEEE Signal Processing Magazine, 27, 43–54, 2010.

Leo Breiman.
M. I. Jordan.
Annals of Applied Statistics, 4, 1642–1643, 2010.

Hierarchical Bayesian nonparametric models with applications.
Y. W. Teh and M. I. Jordan.
In N. Hjort, C. Holmes, P. Mueller, and S. Walker (Eds.),
Bayesian Nonparametrics: Principles and Practice,
Cambridge, UK: Cambridge University Press, 2010.

Probabilistic grammars and hierarchical Dirichlet processes.
P. Liang, M. I. Jordan, and D. Klein.
In T. O'Hagan and M. West (Eds.),
The Handbook of Applied Bayesian Analysis,
Oxford University Press, 2010.

Nonparametrics and graphical models: Discussion of Ickstadt et al.
M. I. Jordan.
In: J. M. Bernardo, M. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman,
A. F. M. Smith, and M. West (Eds.), Bayesian Statistics 9, 2010.

An analysis of the convergence of graph Laplacians.
D. Ting, L. Huang, and M. I. Jordan.
Proceedings of the 27th International Conference on Machine
Learning (ICML), Haifa, Israel, 2010.

Multiple non-redundant spectral clustering views.
D. Niu, J. Dy, and M. I. Jordan.
Proceedings of the 27th International Conference on Machine
Learning (ICML), Haifa, Israel, 2010.

On the consistency of ranking algorithms.
J. Duchi, L. Mackey, and M. I. Jordan.
Proceedings of the 27th International Conference on Machine
Learning (ICML), Haifa, Israel, 2010.
[Best Student Paper Award].

Mixed membership matrix factorization.
L. Mackey, D. Weiss, and M. I. Jordan.
Proceedings of the 27th International Conference on Machine
Learning (ICML), Haifa, Israel, 2010.
[Software].

Learning programs: A hierarchical Bayesian approach.
P. Liang, M. I. Jordan, and D. Klein.
Proceedings of the 27th International Conference on Machine
Learning (ICML), Haifa, Israel, 2010.

Detecting large-scale system problems by mining console logs.
W. Xu, L. Huang, A. Fox, D. Patterson, and M. I. Jordan.
Proceedings of the 27th International Conference on Machine
Learning (ICML), Haifa, Israel, 2010.

Modeling events with cascades of Poisson processes.
A. Simma and M. I. Jordan.
In Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Twenty-Sixth Conference, Catalina Island, CA, 2010.

Matrix-variate Dirichlet process mixture models.
Z. Zhang, G. Dai, and M. I. Jordan.
Proceedings of the Thirteenth Conference on Artificial Intelligence
and Statistics (AISTATS), Sardinia, Italy, 2010.

Inference and learning in networks of queues.
C. Sutton and M. I. Jordan.
Proceedings of the Thirteenth Conference on Artificial Intelligence
and Statistics (AISTATS), Sardinia, Italy, 2010.

Bayesian generalized kernel models.
Z. Zhang, G. Dai, D. Wang, and M. I. Jordan.
Proceedings of the Thirteenth Conference on Artificial Intelligence
and Statistics (AISTATS), Sardinia, Italy, 2010.

Characterizing, modeling, and generating workload spikes for stateful services.
P. Bodik, A. Fox, M. Franklin, M. I. Jordan, and D. Patterson.
First ACM Symposium on Cloud Computing (SOCC),
Indianapolis, IN, 2010.

Sufficient dimension reduction for visual sequence classification.
A. Shyr, R. Urtasun, and M. I. Jordan.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
San Francisco, CA, 2010.

Type-based MCMC.
P. Liang, M. I. Jordan, and D. Klein.
The 11th Annual Conference of the North American Chapter of the
Association for Computational Linguistics (NAACL-HLT),
Los Angeles, CA, 2010.

Variational inference over combinatorial spaces.
A. Bouchard-Côté and M. I. Jordan.
In J. Shawe-Taylor, R. Zemel, J. Lafferty, and C. Williams (Eds.),
Advances in Neural Information Processing Systems (NIPS) 23, 2010.
[Supplementary information].

Random conic pursuit for semidefinite programming.
A. Kleiner, A. Rahimi, and M. I. Jordan.
In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel and A. Culotta (Eds.),
Advances in Neural Information Processing Systems (NIPS) 23, 2010.
[Supplementary information].

Heavy-tailed process priors for selective shrinkage.
F. L. Wauthier and M. I. Jordan.
In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel and A. Culotta (Eds.),
Advances in Neural Information Processing Systems (NIPS) 23, 2010.

Tree-structured stick breaking for hierarchical data.
R. Adams, Z. Ghahramani, and M. I. Jordan.
In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel and A. Culotta (Eds.),
Advances in Neural Information Processing Systems (NIPS) 23, 2010.

Unsupervised kernel dimension reduction.
M. Wang, F. Sha, and M. I. Jordan.
In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel and A. Culotta (Eds.),
Advances in Neural Information Processing Systems (NIPS) 23, 2010.
[Supplementary information].
2009

On surrogate loss functions and f-divergences.
X. Nguyen, M. J. Wainwright and M. I. Jordan.
Annals of Statistics, 37, 876–904, 2009.

Genomic privacy and the limits of individual detection in a pool.
S. Sankararaman, G. Obozinski, M. I. Jordan, and E. Halperin.
Nature Genetics, 41, 965–967, 2009.

Kernel dimension reduction in regression.
K. Fukumizu, F. R. Bach, and M. I. Jordan.
Annals of Statistics, 37, 1871–1905, 2009.

Joint estimation of gene conversion rates and mean conversion
tract lengths from population SNP data.
J. Yin, M. I. Jordan, and Y. Song.
Bioinformatics, 25, i231–i239, 2009.

Nonparametric Bayesian identification of jump systems with sparse dependencies.
E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky.
15th IFAC Symposium on System Identification (SYSID), St. Malo, France, 2009.

Large-scale system problems detection by mining console logs.
W. Xu, L. Huang, A. Fox, D. Patterson, and M. I. Jordan.
22nd ACM Symposium on Operating Systems Principles (SOSP),
Big Sky, MT, 2009.

Learning semantic correspondences with less supervision.
P. Liang, M. I. Jordan, and D. Klein.
Proceedings of the 47th Annual Meeting of the Association for
Computational Linguistics (ACL), Singapore, 2009.

Optimization of structured mean field objectives.
A. Bouchard-Côté and M. I. Jordan.
In Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Twenty-Fifth Conference, Montreal, Canada, 2009.

Learning from measurements in exponential families.
P. Liang, M. I. Jordan, and D. Klein.
Proceedings of the 26th International Conference on Machine
Learning (ICML), Montreal, Canada, 2009.

Fast approximate spectral clustering.
D. Yan, L. Huang, and M. I. Jordan.
15th ACM Conference on Knowledge Discovery and Data Mining
(SIGKDD), Paris, France, 2009.
[Software].
[Long version].

Coherence functions for multicategory margin-based classification methods.
Z. Zhang, M. I. Jordan, W.-J. Li, and D.-Y. Yeung.
Proceedings of the Twelfth Conference on Artificial Intelligence
and Statistics (AISTATS), Clearwater Beach, FL, 2009.

A flexible and efficient algorithm for regularized Fisher discriminant analysis.
Z. Zhang, G. Dai, and M. I. Jordan.
In W. Buntine, M. Grobelnik, D. Mladenic, J. Shawe-Taylor (Eds.),
Machine Learning and Knowledge Discovery in Databases:
European Conference (ECML PKDD), Bled, Slovenia, 2009.

Automatic exploration of datacenter performance regimes.
P. Bodik, R. Griffith, C. Sutton, A. Fox, M. I. Jordan, and D. Patterson.
First Workshop on Automated Control for Datacenters and Clouds (ACDC),
Barcelona, Spain, 2009.

Latent variable models for dimensionality reduction.
Z. Zhang and M. I. Jordan.
Proceedings of the Twelfth Conference on Artificial Intelligence
and Statistics (AISTATS), Clearwater Beach, FL, 2009.

Online system problem detection by mining patterns of console logs.
W. Xu, L. Huang, A. Fox, D. Patterson, and M. I. Jordan.
IEEE International Conference on Data Mining (ICDM), Miami, FL, 2009.

Predicting multiple performance metrics for queries: Better decisions enabled
by machine learning.
A. Ganapathi, H. Kuno, U. Dayal, J. Wiener, A. Fox, M. I. Jordan, and D. Patterson.
IEEE International Conference on Data Engineering (ICDE), Shanghai, China, 2009.
[Ten-Year Influential Paper].

Statistical machine learning makes automatic control practical for
Internet datacenters.
P. Bodik, R. Griffith, C. Sutton, A. Fox, M. I. Jordan, and D. Patterson.
Workshop on Hot Topics in Cloud Computing (HotCloud),
San Diego, CA, 2009.

Sharing features among dynamical systems with beta processes.
E. Fox, E. Sudderth, M. I. Jordan, and A. S. Willsky.
In Y. Bengio, D. Schuurmans, J. Lafferty and C. Williams (Eds.),
Advances in Neural Information Processing Systems (NIPS) 22, 2009.

Nonparametric latent feature models for link prediction.
K. Miller, T. Griffiths, and M. I. Jordan.
In Y. Bengio, D. Schuurmans, J. Lafferty and C. Williams (Eds.),
Advances in Neural Information Processing Systems (NIPS) 22, 2009.

An asymptotic analysis of smooth regularizers.
P. Liang, F. Bach, G. Bouchard, and M. I. Jordan.
In Y. Bengio, D. Schuurmans, J. Lafferty and C. Williams (Eds.),
Advances in Neural Information Processing Systems (NIPS) 22, 2009.
2008

Graphical models, exponential families, and variational inference.
M. J. Wainwright and M. I. Jordan.
Foundations and Trends in Machine Learning, 1, 1–305, 2008.
[Substantially revised and expanded version of a 2003 technical report.]

On the inference of ancestries in admixed populations.
S. Sankararaman, G. Kimmel, E. Halperin, and M. I. Jordan.
Genome Research, 18, 668–675, 2008.

Multiway spectral clustering: A maximum margin perspective.
Z. Zhang and M. I. Jordan.
Statistical Science, 23, 383–403, 2008.

A dual receptor crosstalk model of G protein-coupled signal transduction.
P. Flaherty, M. A. Radhakrishnan, T. Dinh, M. I. Jordan, and A. P. Arkin.
PLoS Computational Biology, 4, e1000185, 2008.

Association mapping and significance estimation via the coalescent.
G. Kimmel, R. Karp, M. I. Jordan, and E. Halperin.
American Journal of Human Genetics, 83, 675–683, 2008.

On optimal quantization rules for some sequential decision problems.
X. Nguyen, M. J. Wainwright, and M. I. Jordan.
IEEE Transactions on Information Theory, 54, 3285–3295, 2008.

Consistent probabilistic outputs for protein function prediction.
G. Obozinski, C. E. Grant, G. R. G. Lanckriet, M. I. Jordan, and W. S. Noble.
Genome Biology, 9, S7, 2008.

Quantitative gene function assignment from genomic datasets in M. musculus.
L. Pena-Castillo, et al.
Genome Biology, 9, S2, 2008.

Probabilistic inference in queueing networks.
C. A. Sutton and M. I. Jordan.
Workshop on Tackling Computer Systems Problems with Machine
Learning Techniques (SYSML), 2008.

The phylogenetic Indian buffet process: A non-exchangeable nonparametric
prior for latent features.
K. Miller, T. Griffiths and M. I. Jordan.
In Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Twenty-Fourth Conference, 2008.

An HDP-HMM for systems with state persistence.
E. Fox, E. Sudderth, M. I. Jordan, and A. Willsky.
Proceedings of the 25th International Conference on Machine
Learning (ICML), Helsinki, Finland, 2008.

An analysis of generative, discriminative, and pseudolikelihood
estimators.
P. Liang and M. I. Jordan.
Proceedings of the 25th International Conference on Machine
Learning (ICML), Helsinki, Finland, 2008.
[Best Student Paper Award].

Nonnegative matrix factorization for combinatorial optimization:
Spectral clustering, graph matching, and clique finding.
C. Ding, T. Li, and M. I. Jordan.
IEEE International Conference on Data Mining (ICDM), 2008.

Mining console logs for large-scale system problem detection.
W. Xu, L. Huang, A. Fox, D. Patterson, and M. I. Jordan.
Workshop on Tackling Computer Systems Problems with Machine
Learning Techniques (SYSML), 2008.

Spectral clustering for speech separation.
F. R. Bach and M. I. Jordan.
In J. Keshet and S. Bengio (Eds.),
Automatic Speech and Speaker Recognition: Large Margin and
Kernel Methods. New York: John Wiley, 2008.

Shared segmentation of natural scenes using dependent Pitman-Yor processes.
E. Sudderth and M. I. Jordan.
In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 21, 2008.

Efficient inference in phylogenetic InDel trees.
A. Bouchard-Côté, M. I. Jordan, and D. Klein.
In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 21, 2008.

High-dimensional union support recovery in multivariate regression.
G. Obozinski, M. J. Wainwright and M. I. Jordan.
In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 21, 2008.
[Appendix].

Nonparametric Bayesian learning of switching linear dynamical systems.
E. B. Fox, E. Sudderth, M. I. Jordan, and A. S. Willsky.
In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 21, 2008.

Spectral clustering with perturbed data.
L. Huang, D. Yan, M. I. Jordan, and N. Taft.
In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 21, 2008.
[Long version].

DiscLDA: Discriminative learning for dimensionality reduction and classification.
S. Lacoste-Julien, F. Sha, and M. I. Jordan.
In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 21, 2008.

Posterior consistency of the Silverman g-prior in Bayesian model choice.
Z. Zhang and M. I. Jordan.
In D. Koller, Y. Bengio, D. Schuurmans and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 21, 2008.
2007

A direct formulation for sparse PCA using semidefinite programming.
A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet.
SIAM Review, 49, 434–448, 2007.
[Winner of the 2008 SIAM Activity Group on Optimization Prize].
[Software].

A randomization test for controlling population stratification in
whole-genome association studies.
G. Kimmel, M. I. Jordan, E. Halperin, R. Shamir, and R. Karp.
American Journal of Human Genetics, 81, 895–905, 2007.

Bayesian haplotype inference via the Dirichlet process.
E. P. Xing, M. I. Jordan and R. Sharan.
Journal of Computational Biology, 14, 267–284, 2007.

Hierarchical beta processes and the Indian buffet process.
R. Thibaux and M. I. Jordan.
Proceedings of the Eleventh Conference on Artificial Intelligence
and Statistics (AISTATS), 2007.

Regression on manifolds using kernel dimension reduction.
J. Nilsson, F. Sha, and M. I. Jordan.
Proceedings of the 24th International Conference on Machine
Learning (ICML), 2007.

The infinite PCFG using hierarchical Dirichlet processes.
P. Liang, S. Petrov, M. I. Jordan, and D. Klein.
Empirical Methods in Natural Language Processing (EMNLP), 2007.

A permutation-augmented sampler for DP mixture models.
P. Liang, M. I. Jordan, and B. Taskar.
Proceedings of the 24th International Conference on Machine
Learning (ICML), 2007.

Nonparametric estimation of the likelihood ratio and divergence functionals.
X. Nguyen, M. J. Wainwright and M. I. Jordan.
International Symposium on Information Theory (ISIT),
Nice, France, 2007.

Learning multiscale representations of natural scenes using
Dirichlet processes.
J. J. Kivinen, E. B. Sudderth, and M. I. Jordan.
IEEE International Conference on Computer Vision (ICCV), 2007.

Communication-efficient online detection of network-wide anomalies.
L. Huang, X. Nguyen, M. Garofalakis, J. M. Hellerstein, M. I. Jordan,
A. Joseph, and N. Taft.
26th Annual IEEE Conference on Computer Communications (INFOCOM'07), 2007.

Image denoising with nonparametric hidden Markov trees.
J. J. Kivinen, E. B. Sudderth, and M. I. Jordan.
IEEE International Conference on Image Processing (ICIP), 2007.

Response-time modeling for resource allocation and energy-informed SLAs.
P. Bodik, C. Sutton, A. Fox, D. Patterson, and M. I. Jordan.
Workshop on Statistical Learning Techniques for Solving Systems Problems,
Whistler, BC, 2007.

Solving consensus and semi-supervised clustering problems using
nonnegative matrix factorization.
T. Li, C. Ding, and M. I. Jordan.
IEEE International Conference on Data Mining (ICDM), 2007.

Feature selection methods for improving protein structure prediction
with Rosetta.
B. Blum, M. I. Jordan, D. Kim, R. Das, P. Bradley, and D. Baker.
In J. Platt, D. Koller, Y. Singer and A. McCallum (Eds.),
Advances in Neural Information Processing Systems (NIPS) 20, 2007.

Agreementbased learning.
P. Liang, D. Klein and M. I. Jordan.
In J. Platt, D. Koller, Y. Singer and A. McCallum (Eds.),
Advances in Neural Information Processing Systems (NIPS) 20, 2007.

Estimating divergence functionals and the likelihood ratio by
penalized convex risk minimization.
X. Nguyen, M. J. Wainwright and M. I. Jordan.
In J. Platt, D. Koller, Y. Singer and A. McCallum (Eds.),
Advances in Neural Information Processing Systems (NIPS) 20, 2007.
2006

Hierarchical Dirichlet processes.
Y. W. Teh, M. I. Jordan, M. J. Beal and D. M. Blei.
Journal of the American Statistical Association, 101, 1566–1581, 2006.
[Software].

Learning spectral clustering, with application to speech separation.
F. R. Bach and M. I. Jordan.
Journal of Machine Learning Research, 7, 1963–2001, 2006.

Convexity, classification, and risk bounds.
P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe.
Journal of the American Statistical Association, 101, 138–156,
2006.

Log-determinant relaxation for approximate inference in discrete
Markov random fields.
M. J. Wainwright and M. I. Jordan.
IEEE Transactions on Signal Processing, 54, 2099–2109, 2006.

Nonparametric empirical Bayes for the Dirichlet process mixture model.
J. D. McAuliffe, D. M. Blei and M. I. Jordan.
Statistics and Computing, 16, 5–14, 2006.

Structured prediction, dual extragradient and Bregman projections.
B. Taskar, S. Lacoste-Julien and M. I. Jordan.
Journal of Machine Learning Research, 7, 1627–1653, 2006.

Mining the Caenorhabditis Genetic Center bibliography for genes
related to life span.
D. M. Blei, M. I. Jordan, and S. Mian.
BMC Bioinformatics, 7, 250–269, 2006.

Bayesian multi-population haplotype inference via a hierarchical
Dirichlet process mixture.
E. P. Xing, K.-A. Sohn, M. I. Jordan, and Y. W. Teh.
Proceedings of the 23rd International Conference on Machine
Learning (ICML), 2006.

Statistical debugging: Simultaneous identification of multiple bugs.
A. Zheng, M. I. Jordan, B. Liblit, M. Naik, and A. Aiken.
Proceedings of the 23rd International Conference on Machine
Learning (ICML), 2006.

A statistical graphical model for predicting protein molecular function.
B. Engelhardt, M. I. Jordan, and S. Brenner.
Proceedings of the 23rd International Conference on Machine
Learning (ICML), 2006.

Word alignment via quadratic assignment.
S. Lacoste-Julien, B. Taskar, D. Klein, and M. I. Jordan.
Proceedings of the North American Chapter of the Association
for Computational Linguistics Annual Meeting (HLT-NAACL), 2006.

Bayesian multicategory support vector machines.
Z. Zhang and M. I. Jordan.
In Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Twenty-Second Conference, 2006.

On optimal quantization rules for sequential decision problems.
X. Nguyen, M. J. Wainwright and M. I. Jordan.
International Symposium on Information Theory (ISIT),
Seattle, WA, 2006.
[Long version].

Advanced tools for operators at Amazon.com.
P. Bodik, A. Fox, M. I. Jordan, D. Patterson, A. Banerjee,
R. Jagannathan, T. Su, S. Tenginakai, B. Turner, and J. Ingalls.
First Workshop on Hot Topics in Autonomic Computing (HotAC),
Dublin, Ireland, 2006.

Comment on 'Support vector machines with applications'.
P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe.
Statistical Science, 21, 341–346, 2006.

In-network PCA and anomaly detection.
L. Huang, X. Nguyen, M. Garofalakis, M. I. Jordan, A. Joseph, and N. Taft.
In B. Schoelkopf, J. Platt and T. Hofmann (Eds.),
Advances in Neural Information Processing Systems (NIPS) 19, 2006.
[Long version].
2005

Dirichlet processes, Chinese restaurant processes and all that.
M. I. Jordan.
Tutorial presentation at the NIPS Conference, 2005.

Subtree power analysis and species selection for comparative genomics.
J. D. McAuliffe, M. I. Jordan, and L. Pachter.
Proceedings of the National Academy of Sciences, 102, 7900–7905, 2005.

Variational inference for Dirichlet process mixtures.
D. M. Blei and M. I. Jordan.
Bayesian Analysis, 1, 121–144, 2005.

Protein function prediction by Bayesian phylogenomics.
B. E. Engelhardt, M. I. Jordan, K. E. Muratore, and S. E. Brenner.
PLoS Computational Biology, e45, 2005.

Nonparametric decentralized detection using kernel methods.
X. Nguyen, M. J. Wainwright, and M. I. Jordan.
IEEE Transactions on Signal Processing, 53, 4053–4066, 2005.

Genome-wide requirements for resistance to functionally distinct
DNA-damaging agents.
W. Lee, R. P. St. Onge, M. Proctor, P. Flaherty, M. I. Jordan,
A. P. Arkin, R. W. Davis, C. Nislow, and G. Giaever.
PLoS Genetics, 1, 235–246, 2005.

A kernel-based learning approach to ad hoc sensor network localization.
X. Nguyen, M. I. Jordan, and B. Sinopoli.
ACM Transactions on Sensor Networks, 1, 134–152, 2005.

Sulfur and nitrogen limitation in Escherichia coli K-12:
specific homeostatic responses.
P. Gyaneshwar, O. Paliy, J. McAuliffe, A. Jones, M. I. Jordan, and S. Kustu.
Journal of Bacteriology, 187, 1074–1090, 2005.

A latent variable model for chemogenomic profiling.
P. Flaherty, G. Giaever, J. Kumm, M. I. Jordan, and A. P. Arkin.
Bioinformatics, 21, 3286–3293, 2005.

Predictive low-rank decomposition for kernel methods.
F. R. Bach and M. I. Jordan.
Proceedings of the 22nd International Conference on Machine
Learning (ICML), 2005.
[Matlab code]

The DLR hierarchy of approximate inference.
M. Rosen-Zvi, M. I. Jordan, and A. Yuille.
In Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Twenty-First Conference, 2005.

A variational principle for graphical models.
M. J. Wainwright and M. I. Jordan.
New Directions in Statistical Signal Processing: From Systems to Brain.
Cambridge, MA: MIT Press, 2005.

Scalable statistical bug isolation.
B. Liblit, M. Naik, A. X. Zheng, A. Aiken, and M. I. Jordan.
ACM SIGPLAN Conference on Programming Language Design and
Implementation (PLDI), 2005.
[Software]

A probabilistic interpretation of canonical correlation analysis.
F. R. Bach and M. I. Jordan.
Technical Report 688, Department of Statistics,
University of California, Berkeley, 2005.

Extensions of the informative vector machine.
N. D. Lawrence, J. C. Platt, and M. I. Jordan.
In J. Winkler, N. D. Lawrence and M. Niranjan (Eds.),
Proceedings of the Sheffield Machine Learning Workshop,
Lecture Notes in Computer Science, New York: Springer, 2005.

Discriminative training of Hidden Markov models for multiple
pitch tracking.
F. R. Bach and M. I. Jordan.
Proceedings of the International Conference on Acoustics,
Speech and Signal Processing (ICASSP), 2005.

Multi-instrument musical transcription using a dynamic graphical model.
B. Vogel, M. I. Jordan and D. Wessel.
Proceedings of the International Conference on Acoustics,
Speech and Signal Processing (ICASSP), 2005.

Combining visualization and statistical analysis to improve
operator confidence and efficiency for failure detection
and localization.
P. Bodik, G. Friedman, L. Biewald, H. Levine, G. Candea,
K. Patel, G. Tolle, J. Hui, A. Fox, M. I. Jordan, and D. Patterson.
International Conference on Autonomic Computing (ICAC), 2005.

On information divergence measures, surrogate loss functions and
decentralized hypothesis testing.
X. Nguyen, M. J. Wainwright and M. I. Jordan.
Forty-third Annual Allerton Conference on Communication,
Control, and Computing, Urbana-Champaign, IL, 2005.

Gaussian processes and the null-category noise model.
N. D. Lawrence and M. I. Jordan.
In O. Chapelle, B. Schoelkopf and A. Zien (Eds.),
Semi-Supervised Learning, Cambridge, MA: MIT Press, 2005.

Semiparametric latent factor models.
Y. W. Teh, M. Seeger, and M. I. Jordan.
Proceedings of the Tenth Conference on Artificial Intelligence
and Statistics (AISTATS), 2005.

Robust design of biological experiments.
P. Flaherty, M. I. Jordan and A. P. Arkin.
In Y. Weiss, B. Schoelkopf and J. Platt (Eds.),
Advances in Neural Information Processing Systems
(NIPS) 18, 2005.

Structured prediction via the extragradient method.
B. Taskar, S. Lacoste-Julien and M. I. Jordan.
In Y. Weiss, B. Schoelkopf and J. Platt (Eds.),
Advances in Neural Information Processing Systems
(NIPS) 18, 2005.
[Long version].

Divergences, surrogate loss functions and experimental design.
X. Nguyen, M. J. Wainwright and M. I. Jordan.
In Y. Weiss, B. Schoelkopf and J. Platt (Eds.),
Advances in Neural Information Processing Systems
(NIPS) 18, 2005.
[Long version].
2004

Graphical models.
M. I. Jordan.
Statistical Science (Special Issue on Bayesian Statistics),
19, 140–155, 2004.

Multiple-sequence functional annotation and the generalized hidden
Markov phylogeny.
J. D. McAuliffe, L. Pachter, and M. I. Jordan.
Bioinformatics, 20, 1850–1860, 2004.

Learning graphical models for stationary time series.
F. R. Bach and M. I. Jordan.
IEEE Transactions on Signal Processing, 52, 2189–2199, 2004.

Kalman filtering with intermittent observations.
B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla,
M. I. Jordan, and S. Sastry.
IEEE Transactions on Automatic Control, 49, 1453–1464, 2004.

Chemogenomic profiling: Identifying the functional interactions of
small molecules in yeast.
G. Giaever, P. Flaherty, J. Kumm,
M. Proctor, D. F. Jaramillo, A. M. Chu, M. I. Jordan, A. P. Arkin,
and R. W. Davis.
Proceedings of the National Academy of Sciences, 101, 793–798, 2004.

A statistical framework for genomic data fusion.
G. R. G. Lanckriet, T. De Bie, N. Cristianini, M. I. Jordan,
and W. S. Noble.
Bioinformatics, 20, 2626–2635, 2004.

Learning the kernel matrix with semidefinite programming.
G. R. G. Lanckriet, N. Cristianini, L. El Ghaoui, P. L. Bartlett, and M. I. Jordan.
Journal of Machine Learning Research, 5, 27–72, 2004.

Dimensionality reduction for supervised learning with reproducing kernel
Hilbert spaces.
K. Fukumizu, F. R. Bach, and M. I. Jordan.
Journal of Machine Learning Research, 5, 73–99, 2004.

Robust sparse hyperplane classifiers: application to uncertain
molecular profiling data.
C. Bhattacharyya, L. R. Grate, M. I. Jordan, L. El Ghaoui, and
I. S. Mian.
Journal of Computational Biology, 11, 1073–1089, 2004.
[Software]

Discussion of boosting.
P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe.
Annals of Statistics, 32, 85–91, 2004.

LOGOS: A modular Bayesian model for de novo motif detection.
E. P. Xing, W. Wu, M. I. Jordan, and R. M. Karp.
Journal of Bioinformatics and Computational Biology, 2,
127–154, 2004.

Treewidth-based conditions for exactness of the Sherali-Adams
and Lasserre relaxations.
M. J. Wainwright and M. I. Jordan.
Technical Report 671, Department of Statistics,
University of California, Berkeley, 2004.

Multiple kernel learning, conic duality, and the SMO algorithm.
F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan.
Proceedings of the 21st International Conference on Machine
Learning (ICML), 2004.
[Long version].
[Software].
[ICML Test of Time Award].

Bayesian haplotype inference via the Dirichlet process.
E. P. Xing, R. Sharan, and M. I. Jordan.
Proceedings of the 21st International Conference on Machine
Learning (ICML), 2004.

Decentralized detection and classification using kernel methods.
X. Nguyen, M. J. Wainwright, and M. I. Jordan.
Proceedings of the 21st International Conference on Machine
Learning (ICML), 2004.
[Best Paper Award].

Variational methods for the Dirichlet process.
D. M. Blei and M. I. Jordan.
Proceedings of the 21st International Conference on Machine
Learning (ICML), 2004.
[Long version].

Sparse Gaussian process classification with multiple classes.
M. Seeger and M. I. Jordan.
Technical Report 661, Department of Statistics,
University of California, Berkeley, 2004.

Graph partition strategies for generalized mean field inference.
E. P. Xing, M. I. Jordan, and S. Russell.
In Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Twentieth Conference, 2004.

Kernel-based data fusion and its application to protein function prediction in yeast.
G. R. G. Lanckriet, M. Deng, N. Cristianini, M. I. Jordan, and W. S. Noble.
Pacific Symposium on Biocomputing (PSB), 2004.
[Supplementary information].

Combining statistical monitoring and predictable recovery for
self-management.
A. Fox, E. Kiciman, D. A. Patterson, R. H. Katz and M. I. Jordan.
ACM SIGSOFT Proceedings of the Workshop on Self-Managed Systems
(WOSS), 2004.

Public deployment of cooperative bug isolation.
B. Liblit, A. Aiken, A. X. Zheng, and M. I. Jordan.
Workshop on Remote Analysis and
Measurement of Software Systems (RAMSS), 2004.

Failure diagnosis using decision trees.
M. Chen, A. X. Zheng, J. Lloyd, M. I. Jordan, and E. Brewer.
International Conference on Autonomic Computing (ICAC), 2004.

Sharing clusters among related groups: Hierarchical Dirichlet processes.
Y. W. Teh, M. I. Jordan, M. J. Beal and D. M. Blei.
In L. Saul, Y. Weiss, and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 17, 2004.
[Long version].
[Software]

Blind one-microphone speech separation: A spectral learning approach.
F. R. Bach and M. I. Jordan.
In L. Saul, Y. Weiss, and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 17, 2004.

A direct formulation for sparse PCA using semidefinite programming.
A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet.
In L. Saul, Y. Weiss, and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 17, 2004.

Semi-supervised learning via Gaussian processes.
N. D. Lawrence and M. I. Jordan.
In L. Saul, Y. Weiss, and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 17, 2004.

Computing regularization paths for learning multiple kernels.
F. R. Bach, R. Thibaux, and M. I. Jordan.
In L. Saul, Y. Weiss, and L. Bottou (Eds.),
Advances in Neural Information Processing Systems (NIPS) 17, 2004.
[Matlab code]
2003

Latent Dirichlet allocation.
D. M. Blei, A. Y. Ng, and M. I. Jordan.
Journal of Machine Learning Research, 3, 993–1022, 2003.
[Software].

Toward a protein profile of Escherichia coli: Comparison to its transcription
profile.
R. W. Corbin, O. Paliy, F. Yang, J. Shabanowitz, M. Platt, C. E. Lyons,
Jr., K. Root, J. D. McAuliffe, M. I. Jordan, S. Kustu, E. Soupene, and D. F. Hunt.
Proceedings of the National Academy of Sciences, 100, 9232–9237, 2003.

Beyond independent components: Trees and clusters.
F. R. Bach and M. I. Jordan.
Journal of Machine Learning Research, 4, 1205–1233, 2003.
[Matlab code]

Matching words and pictures.
K. Barnard, P. Duygulu, N. de Freitas, D. A. Forsyth, D. M. Blei, and M. I. Jordan.
Journal of Machine Learning Research, 3, 1107–1135, 2003.

Hierarchical Bayesian models for applications in information retrieval.
D. M. Blei, M. I. Jordan and A. Y. Ng.
In: J. M. Bernardo, M. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman,
A. F. M. Smith, and M. West (Eds.), Bayesian Statistics 7, 2003.

Simultaneous relevant feature identification and classification
in high-dimensional spaces: Application to molecular profiling data.
C. Bhattacharyya, L. R. Grate, A. Rizki, D. Radisky, F. J. Molina,
M. I. Jordan, M. J. Bissell, and I. S. Mian. Signal Processing,
83, 729–743, 2003.

An introduction to MCMC for machine learning.
C. Andrieu, N. de Freitas, A. Doucet and M. I. Jordan.
Machine Learning, 50, 5–43, 2003.

Modeling annotated data.
D. M. Blei and M. I. Jordan.
26th International Conference on Research and Development
in Information Retrieval (SIGIR), New York: ACM Press, 2003.
[SIGIR Test of Time Honorable Mention].

Bug isolation via remote program sampling.
B. Liblit, A. Aiken, A. X. Zheng, and M. I. Jordan.
ACM SIGPLAN 2003 Conference on Programming
Language Design and Implementation (PLDI), San Diego, 2003.

Variational inference in graphical models: The view from the marginal
polytope. M. J. Wainwright and M. I. Jordan. Forty-first Annual
Allerton Conference on Communication, Control, and Computing,
Urbana-Champaign, IL, 2003.

Kernel-based integration of genomic data using semidefinite programming.
G. R. G. Lanckriet, N. Cristianini, M. I. Jordan, and W. S. Noble.
In B. Schoelkopf, K. Tsuda and J.-P. Vert (Eds.), Kernel Methods
in Computational Biology, Cambridge, MA: MIT Press, 2003.

On semidefinite relaxation for normalized k-cut and connections to spectral clustering.
E. P. Xing and M. I. Jordan.
Technical Report CSD-03-1265, Computer Science Division,
University of California, Berkeley, 2003.

Support vector machines for analog circuit performance representation.
F. De Bernardinis, M. I. Jordan, and A. L. Sangiovanni-Vincentelli.
Proceedings of the Design Automation Conference (DAC), 2003.

Semidefinite relaxations for approximate inference on graphs with cycles.
M. J. Wainwright and M. I. Jordan.
In S. Thrun, L. Saul, and B. Schoelkopf (Eds.),
Advances in Neural Information Processing Systems (NIPS) 16,
(long version), 2003.

Learning spectral clustering.
F. R. Bach and M. I. Jordan.
In S. Thrun, L. Saul, and B. Schoelkopf (Eds.),
Advances in Neural Information Processing Systems (NIPS) 16,
(long version), 2003.

Hierarchical topic models and the nested Chinese restaurant process.
D. M. Blei, T. Griffiths, M. I. Jordan, and J. Tenenbaum.
In S. Thrun, L. Saul, and B. Schoelkopf (Eds.),
Advances in Neural Information Processing Systems (NIPS) 16, 2003.

Kernel dimensionality reduction for supervised learning.
K. Fukumizu, F. R. Bach, and M. I. Jordan.
In S. Thrun, L. Saul, and B. Schoelkopf (Eds.),
Advances in Neural Information Processing Systems (NIPS) 16, 2003.

Large margin classifiers: convex loss, low noise, and convergence rates.
P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe.
In S. Thrun, L. Saul, and B. Schoelkopf (Eds.),
Advances in Neural Information Processing Systems (NIPS) 16, 2003.

On the concentration of expectation and approximate inference in layered
Bayesian networks. X. Nguyen and M. I. Jordan.
In S. Thrun, L. Saul, and B. Schoelkopf (Eds.),
Advances in Neural Information Processing Systems (NIPS) 16,
(long version), 2003.

Statistical debugging of sampled programs.
A. X. Zheng, M. I. Jordan, B. Liblit, and A. Aiken.
In S. Thrun, L. Saul, and B. Schoelkopf (Eds.),
Advances in Neural Information Processing Systems (NIPS) 16, 2003.

Autonomous helicopter flight via reinforcement learning.
A. Y. Ng, H. J. Kim, M. I. Jordan, and S. Sastry.
In S. Thrun, L. Saul, and B. Schoelkopf (Eds.),
Advances in Neural Information Processing Systems (NIPS) 16, 2003.

A generalized mean field algorithm for variational inference in
exponential families.
E. P. Xing, M. I. Jordan, and S. Russell.
In C. Meek and U. Kjaerulff (Eds.),
Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Nineteenth Conference, 2003.

Kernel independent component analysis.
F. R. Bach and M. I. Jordan. International Conference on Acoustics,
Speech, and Signal Processing (ICASSP), 2003.
[Long version].
[Matlab code]

Kalman filtering with intermittent observations.
B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla,
M. I. Jordan, and S. Sastry.
42nd IEEE Conference on Decision and Control (CDC), 2003.

Integrated analysis of transcript profiling and protein sequence data.
L. R. Grate, C. Bhattacharyya, M. I. Jordan, and I. S. Mian.
Mechanisms of Ageing and Development, 124, 109–114, 2003.

Finding clusters in independent component analysis.
F. R. Bach and M. I. Jordan.
Fourth International Symposium on Independent Component Analysis
and Blind Signal Separation (ICA), 2003.

Sampling user executions for bug isolation.
B. Liblit, A. Aiken, A. X. Zheng, and M. I. Jordan.
Workshop on Remote Analysis and
Measurement of Software Systems (RAMSS), 2003.

LOGOS: A modular Bayesian model for de novo motif detection.
E. P. Xing, W. Wu, M. I. Jordan, and R. M. Karp.
IEEE Computer Society Bioinformatics Conference (CSB), 2003.
2002

Kernel independent component analysis.
F. R. Bach and M. I. Jordan. Journal of Machine Learning Research, 3, 1–48, 2002.
[Matlab code]

Optimal feedback control as a theory of motor coordination.
E. Todorov and M. I. Jordan. Nature Neuroscience, 5, 1226–1235, 2002.
[Supplementary information].
[News and views].

A robust minimax approach to classification.
G. R. G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. I. Jordan.
Journal of Machine Learning Research, 3, 555–582, 2002.
[Matlab code]

Sensorimotor adaptation of speech I: Compensation and adaptation.
J. F. Houde and M. I. Jordan. Journal of Speech, Language,
and Hearing Research, 45, 239–262, 2002.

Graphical models: Probabilistic inference.
M. I. Jordan and Y. Weiss. In M. Arbib (Ed.),
The Handbook of Brain Theory and Neural Networks, 2nd edition.
Cambridge, MA: MIT Press, 2002.

Loopy belief propagation and Gibbs measures.
S. Tatikonda and M. I. Jordan.
In D. Koller and A. Darwiche (Eds.), Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Eighteenth Conference, 2002.

Tree-dependent component analysis.
F. R. Bach and M. I. Jordan.
In D. Koller and A. Darwiche (Eds.), Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Eighteenth Conference, 2002.
[Matlab code]

Random sampling of a continuous-time stochastic dynamical system.
M. Micheli and M. I. Jordan.
Proceedings of the Fifteenth International Symposium on Mathematical Theory
of Networks and Systems, 2002.

Learning the kernel matrix with semidefinite programming.
G. R. G. Lanckriet, P. L. Bartlett, N. Cristianini, L. El Ghaoui, and M. I. Jordan.
Machine Learning: Proceedings of the Nineteenth International Conference
(ICML),
San Mateo, CA: Morgan Kaufmann, 2002.

Learning graphical models with Mercer kernels.
F. R. Bach and M. I. Jordan.
In S. Becker, S. Thrun, and K. Obermayer (Eds.),
Advances in Neural Information Processing Systems (NIPS) 15, 2002.

Robust novelty detection with single-class MPM.
G. R. G. Lanckriet, L. El Ghaoui, and M. I. Jordan.
In S. Becker, S. Thrun, and K. Obermayer (Eds.),
Advances in Neural Information Processing Systems (NIPS) 15, 2002.

A minimal intervention principle for coordinated movement.
E. Todorov and M. I. Jordan.
In S. Becker, S. Thrun, and K. Obermayer (Eds.),
Advances in Neural Information Processing Systems (NIPS) 15, 2002.

Distance metric learning, with application to clustering with side-information.
E. P. Xing, A. Y. Ng, M. I. Jordan and S. Russell.
In S. Becker, S. Thrun, and K. Obermayer (Eds.),
Advances in Neural Information Processing Systems (NIPS) 15, 2002.

A hierarchical Bayesian Markovian model for motifs in biopolymer sequences.
E. P. Xing, M. I. Jordan, R. M. Karp and S. Russell.
In S. Becker, S. Thrun, and K. Obermayer (Eds.),
Advances in Neural Information Processing Systems (NIPS) 15, 2002.

Simultaneous relevant feature identification and classification
in high-dimensional spaces.
L. R. Grate, C. Bhattacharyya, M. I. Jordan and I. S. Mian.
Workshop on Algorithms in Bioinformatics, 2002.
[matlab code],
[perl/lp_solve code].

Learning in modular and hierarchical systems.
M. I. Jordan and R. A. Jacobs. In M. Arbib (Ed.),
The Handbook of Brain Theory and Neural Networks, 2nd edition.
Cambridge, MA: MIT Press, 2002.
2001

Stable algorithms for link analysis.
A. Y. Ng, A. X. Zheng, and M. I. Jordan. Proceedings of the
24th International Conference on Research and Development
in Information Retrieval (SIGIR), New York, NY: ACM Press, 2001.

Efficient stepwise selection in decomposable models.
A. Deshpande, M. N. Garofalakis, and M. I. Jordan.
In J. Breese and D. Koller (Eds.), Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Seventeenth Conference, 2001.

Convergence rates of the Voting Gibbs classifier, with application
to Bayesian feature selection.
A. Y. Ng and M. I. Jordan. Machine Learning: Proceedings of the
Eighteenth International Conference, San Mateo, CA: Morgan Kaufmann, 2001.

Link analysis, eigenvectors, and stability.
A. Y. Ng, A. X. Zheng, and M. I. Jordan.
International Joint Conference on Artificial Intelligence (IJCAI), 2001.

Variational MCMC.
N. de Freitas, P. HøjenSørensen, M. I. Jordan, and S. Russell.
In J. Breese and D. Koller (Eds.), Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Seventeenth Conference, 2001.

Feature selection for highdimensional genomic microarray data.
E. P. Xing, M. I. Jordan, and R. M. Karp. Machine Learning: Proceedings
of the Eighteenth International Conference, San Mateo, CA: Morgan Kaufmann,
2001.

Thin junction trees.
F. R. Bach and M. I. Jordan.
In T. Dietterich, S. Becker and Z. Ghahramani (Eds.),
Advances in Neural Information Processing Systems (NIPS) 14, 2001.

On spectral clustering: Analysis and an algorithm.
A. Y. Ng, M. I. Jordan, and Y. Weiss.
In T. Dietterich, S. Becker and Z. Ghahramani
(Eds.), Advances in Neural Information Processing Systems (NIPS) 14, 2001.

Minimax probability machine.
G. R. G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. I. Jordan.
In T. Dietterich, S. Becker and Z. Ghahramani
(Eds.), Advances in Neural Information Processing Systems (NIPS) 14, 2001.

On discriminative vs. generative classifiers: A comparison of logistic
regression and naive Bayes.
A. Y. Ng and M. I. Jordan.
In T. Dietterich, S. Becker and Z. Ghahramani
(Eds.), Advances in Neural Information Processing Systems (NIPS) 14, 2001.

Latent Dirichlet allocation.
D. M. Blei, A. Y. Ng and M. I. Jordan.
In T. Dietterich, S. Becker and Z. Ghahramani
(Eds.), Advances in Neural Information Processing Systems (NIPS) 14, 2001,
[Long version],
[software].

Discorsi sulle reti neurali e l'apprendimento.
C. Domeniconi and M. I. Jordan. Milan: Edizioni Franco Angeli, 2001.
2000

Learning with mixtures of trees.
M. Meila and M. I. Jordan.
Journal of Machine Learning Research, 1, 1–48, 2000.

Attractor dynamics for feedforward neural networks.
L. K. Saul and M. I. Jordan. Neural Computation, 12, 1313–1335, 2000.

Bayesian logistic regression: a variational approach.
T. S. Jaakkola and M. I. Jordan. Statistics and Computing, 10, 25–37, 2000.

Asymptotic convergence rate of the EM algorithm for Gaussian mixtures.
J. Ma, L. Xu, and M. I. Jordan.
Neural Computation, 12, 2881–2907, 2000.

PEGASUS: A policy search method for large MDPs and POMDPs.
A. Y. Ng and M. I. Jordan.
Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Sixteenth Conference, 2000.
1999

Mixed memory Markov models: Decomposing complex stochastic processes
as mixture of simpler ones.
L. K. Saul and M. I. Jordan.
Machine Learning, 37, 75–87, 1999.

Variational probabilistic inference and the QMRDT network.
T. S. Jaakkola and M. I. Jordan. Journal of Artificial Intelligence
Research, 10, 291–322, 1999.

Are reaching movements planned to be straight and invariant in
the extrinsic space?
M. Desmurget, C. Prablanc, M. I. Jordan, and M. Jeannerod.
Quarterly Journal of Experimental Psychology, 52, 981–1020, 1999.

Loopy belief propagation for approximate inference: An empirical study.
K. Murphy, Y. Weiss, and M. I. Jordan.
In K. B. Laskey and H. Prade (Eds.), Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Fifteenth Conference, San Mateo, CA: Morgan Kaufmann, 1999.

Approximate inference algorithms for twolayer Bayesian networks.
A. Y. Ng and M. I. Jordan. Advances in Neural Information Processing
Systems (NIPS) 12, Cambridge MA: MIT Press, 1999.

An introduction to variational methods for graphical models.
M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul.
In M. I. Jordan (Ed.), Learning in Graphical Models,
Cambridge: MIT Press, 1999.

Computational motor control.
M. I. Jordan and D. M. Wolpert.
In M. Gazzaniga (Ed.), The Cognitive Neurosciences, 2nd edition,
Cambridge: MIT Press, 1999.

Improving the mean field approximation via the use of mixture
distributions.
T. S. Jaakkola and M. I. Jordan.
In M. I. Jordan (Ed.), Learning in Graphical Models,
Cambridge: MIT Press, 1999.

Learning in graphical models.
M. I. Jordan (Ed.),
Cambridge MA: MIT Press, 1999.

Recurrent networks.
M. I. Jordan.
In R. A. Wilson and F. C. Keil (Eds.),
The MIT Encyclopedia of the Cognitive Sciences,
Cambridge, MA: MIT Press, 1999.

Neural networks.
M. I. Jordan.
In R. A. Wilson and F. C. Keil (Eds.),
The MIT Encyclopedia of the Cognitive Sciences,
Cambridge, MA: MIT Press, 1999.

Computational intelligence.
M. I. Jordan and S. Russell.
In R. A. Wilson and F. C. Keil (Eds.),
The MIT Encyclopedia of the Cognitive Sciences,
Cambridge, MA: MIT Press, 1999.
1998

Adaptation in speech production.
J. Houde and M. I. Jordan.
Science, 279, 1213–1216, 1998.

Smoothness maximization along a predefined path accurately
predicts the speed profiles of complex arm movements.
E. Todorov and M. I. Jordan.
Journal of Neurophysiology, 80, 696–714, 1998.

The role of inertial sensitivity in motor planning.
P. N. Sabes, M. I. Jordan and D. M. Wolpert.
Journal of Neuroscience, 18, 5948–5959, 1998.

Learning from dyadic data.
T. Hofmann, J. Puzicha, and M. I. Jordan.
In Kearns, M. S., Solla, S. A., and Cohn, D. (Eds.),
Advances in Neural Information Processing Systems (NIPS) 11,
Cambridge MA: MIT Press, 1998.

Mixture representations for inference and learning in Boltzmann machines.
N. D. Lawrence, C. M. Bishop and M. I. Jordan.
In G. F. Cooper and S. Moral (Eds.), Uncertainty in Artificial
Intelligence (UAI), Proceedings of the Fourteenth Conference,
San Mateo, CA: Morgan Kaufmann, 1998.
1997

Factorial hidden Markov models.
Z. Ghahramani and M. I. Jordan.
Machine Learning, 29, 245–273, 1997.

Obstacle avoidance and a perturbation sensitivity model for
motor planning.
P. N. Sabes and M. I. Jordan.
Journal of Neuroscience, 17, 7119–7128, 1997.

Probabilistic independence networks for hidden Markov probability
models.
P. Smyth, D. Heckerman, and M. I. Jordan.
Neural Computation, 9, 227–270, 1997.

Viewing the hand prior to movement improves accuracy of pointing performed
toward the unseen contralateral hand.
M. Desmurget, Y. Rossetti, M. I. Jordan, C. Meckler, and C. Prablanc.
Experimental Brain Research, 115, 180–186, 1997.

Constrained and unconstrained movements involve different control strategies.
M. Desmurget, M. I. Jordan, C. Prablanc, and M. Jeannerod.
Journal of Neurophysiology, 77, 1644–1650, 1997.

Approximating posterior distributions in belief networks using mixtures.
C. M. Bishop, N. D. Lawrence, T. S. Jaakkola, and M. I. Jordan.
In Jordan, M. I., Kearns, M. J. and Solla, S. A. (Eds.),
Advances in Neural Information Processing Systems (NIPS) 10,
Cambridge, MA: MIT Press, 1997.

Estimating dependency structure as a hidden variable.
M. Meila and M. I. Jordan.
In Jordan, M. I., Kearns, M. J. and Solla, S. A. (Eds.),
Advances in Neural Information Processing Systems (NIPS) 10,
Cambridge, MA: MIT Press, 1997.

Advances in neural information processing systems 10,
M. I. Jordan, M. J. Kearns, and S. A. Solla, (Eds.),
Cambridge MA: MIT Press, 1997.

Adaptation in speech motor control.
J. F. Houde and M. I. Jordan.
In Jordan, M. I., Kearns, M. J. and Solla, S. A. (Eds.),
Advances in Neural Information Processing Systems (NIPS) 10,
Cambridge, MA: MIT Press, 1997.

Neural networks.
M. I. Jordan and C. Bishop.
In Tucker, A. B. (Ed.), CRC Handbook of Computer Science,
Boca Raton, FL: CRC Press, 1997.

Computational models of sensorimotor organization.
Z. Ghahramani, D. M. Wolpert, and M. I. Jordan.
In P. Morasso and V. Sanguineti (Eds.),
Self-Organization, Computational Maps, and Motor Control,
Amsterdam: NorthHolland, 1997.

Advances in neural information processing systems 9,
M. Mozer, M. I. Jordan, and T. Petsche, (Eds.),
Cambridge MA: MIT Press, 1997.

Mixture models for learning from incomplete data.
Z. Ghahramani and M. I. Jordan.
In Greiner, R., Petsche, T., and Hanson, S. J. (Eds.),
Computational Learning Theory and Natural Learning Systems,
Cambridge, MA: MIT Press, 1997.

Active learning with statistical models.
D. Cohn, Z. Ghahramani, and M. I. Jordan.
In MurraySmith, R., and Johansen, T. A. (Eds.),
Multiple Model Approaches to Modelling and Control,
London: Taylor and Francis, 1997.

An objective function for belief net triangulation.
M. Meila and M. I. Jordan.
In D. Madigan and P. Smyth (Eds.),
Proceedings of the 1997 Conference on Artificial Intelligence and Statistics,
Ft. Lauderdale, FL, 1997.

Markov mixtures of experts.
M. Meila and M. I. Jordan.
In MurraySmith, R., and Johansen, T. A. (Eds.),
Multiple Model Approaches to Modelling and Control,
London: Taylor and Francis, 1997.

Serial order: A parallel, distributed processing approach.
M. I. Jordan.
In J. W. Donahoe and V. P. Dorsel (Eds.),
Neural-network Models of Cognition: Biobehavioral Foundations,
Amsterdam: Elsevier Science Press, 1997.
1996

Mean field theory for sigmoid belief networks.
L. K. Saul, T. Jaakkola, and M. I. Jordan.
Journal of Artificial Intelligence Research, 4, 61–76, 1996.

Generalization to local remappings of the visuomotor coordinate
representation.
Z. Ghahramani, D. M. Wolpert, and M. I. Jordan.
Journal of Neuroscience, 16, 7085–7096, 1996.

Active learning with statistical models.
D. Cohn, Z. Ghahramani, and M. I. Jordan.
Journal of Artificial Intelligence Research, 4, 129–145, 1996.

On convergence properties of the EM Algorithm for Gaussian mixtures.
L. Xu and M. I. Jordan. Neural Computation, 8, 129–151, 1996.

Local linear perceptrons for classification.
E. Alpaydin and M. I. Jordan.
IEEE Transactions on Neural Networks, 7, 788–792, 1996.

Computational aspects of motor control and motor learning.
M. I. Jordan.
In H. Heuer and S. Keele (Eds.), Handbook of Perception and Action:
Motor Skills, New York: Academic Press, 1996.

Optimal triangulation with continuous cost functions.
M. Meila and M. I. Jordan. In M. C. Mozer, M. I. Jordan,
and T. Petsche (Eds.), Advances in Neural Information
Processing Systems (NIPS) 9, Cambridge MA: MIT Press, 1996.

A variational principle for modelbased interpolation.
L. K. Saul and M. I. Jordan.
In M. C. Mozer, M. I. Jordan, and T. Petsche
(Eds.), Advances in Neural Information Processing Systems (NIPS) 9, Cambridge MA:
MIT Press, 1996.

Recursive algorithms for approximating probabilities in graphical
models.
T. S. Jaakkola and M. I. Jordan.
In M. C. Mozer, M. I. Jordan, and T. Petsche
(Eds.), Advances in Neural Information Processing Systems (NIPS) 9, Cambridge MA:
MIT Press, 1996.

Hidden Markov decision trees.
M. I. Jordan, Z. Ghahramani,
and L. K. Saul. In M. C. Mozer, M. I. Jordan, and T. Petsche (Eds.),
Advances in Neural Information Processing Systems (NIPS) 9, Cambridge MA: MIT Press,
1996.

Computing upper and lower bounds on likelihoods in intractable
networks.
T. S. Jaakkola and M. I. Jordan.
In E. Horvitz (Ed.), Uncertainty in Artificial Intelligence (UAI),
Proceedings of the Twelfth Conference,
Portland, Oregon, 1996.
1995

An internal forward model for sensorimotor integration.
D. M. Wolpert, Z. Ghahramani, and M. I. Jordan.
Science, 269, 1880–1882, 1995.

Are arm trajectories planned in kinematic or dynamic coordinates?
An adaptation study.
D. M. Wolpert, Z. Ghahramani, and M. I. Jordan.
Experimental Brain Research, 103, 460–470, 1995.

Convergence results for the EM approach to mixtures of experts
architectures.
M. I. Jordan and L. Xu.
Neural Networks, 8, 1409–1431, 1995.

The organization of action sequences: Evidence from a relearning task.
M. I. Jordan.
Journal of Motor Behavior, 27, 179–192, 1995.

Adaptation in speech production to transformed auditory feedback.
J. Houde and M. I. Jordan.
Journal of the Acoustical Society of America, 97, 3243, 1995.

Fast learning by bounding likelihoods in sigmoid belief networks.
T. S. Jaakkola, L. K. Saul, and M. I. Jordan.
In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo (Eds.),
Advances in Neural Information Processing Systems (NIPS) 8,
Cambridge MA: MIT Press, 1995.

Reinforcement learning by probability matching.
P. N. Sabes and M. I. Jordan.
In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo (Eds.),
Advances in Neural Information Processing Systems (NIPS) 8,
Cambridge MA: MIT Press, 1995.

Exploiting tractable substructures in intractable networks.
L. K. Saul and M. I. Jordan.
In D. Touretzky, M. Mozer, and M. Hasselmo (Eds.), Advances in Neural
Information Processing Systems (NIPS) 8, MIT Press, 1995.

Markov mixtures of experts.
M. Meila and M. I. Jordan.
In D. Touretzky, M. Mozer, and M. Hasselmo (Eds.), Advances in Neural
Information Processing Systems (NIPS) 8, MIT Press, 1995.

Factorial Hidden Markov models.
Z. Ghahramani and M. I. Jordan.
In D. Touretzky, M. Mozer, and M. Hasselmo (Eds.), Advances in Neural
Information Processing Systems (NIPS) 8, MIT Press, 1995.

Learning in modular and hierarchical systems.
M. I. Jordan and R. A. Jacobs. In M. Arbib (Ed.),
The Handbook of Brain Theory and Neural Networks,
Cambridge, MA: MIT Press, 1995.

Why the logistic function? A tutorial discussion on probabilities
and neural networks.
M. I. Jordan.
MIT Computational Cognitive Science Report 9503, August 1995.

The moving basin: Effective action-search in adaptive control.
W. Fun and M. I. Jordan.
Proceedings of the World Conference on Neural Networks,
Washington, DC, 1995.

Goal-based speech motor control: A theoretical framework
and some preliminary data.
J. S. Perkell, M. L. Matthies, M. A. Svirsky, and M. I. Jordan.
In D. A. Robin, K. M. Yorkston, and D. R. Beukelman (Eds.),
Disorders of Motor Speech: Assessment, Treatment, and Clinical Characterization,
Baltimore, MD: Brookes Publishing Co, 1993.
1994

Hierarchical mixtures of experts and the EM algorithm.
M. I. Jordan and R. A. Jacobs. Neural Computation, 6, 181–214, 1994.

Learning in Boltzmann trees.
L. K. Saul and M. I. Jordan.
Neural Computation, 6, 1173–1183, 1994.

Perceptual distortion contributes to the curvature of human
reaching movements.
D. M. Wolpert, Z. Ghahramani, and M. I. Jordan.
Experimental Brain Research, 98, 153–156, 1994.

On the convergence of stochastic iterative dynamic programming algorithms.
T. Jaakkola, M. I. Jordan and S. Singh.
Neural Computation, 6, 1183–1190, 1994.

A model of the learning of arm trajectories from spatial targets.
M. I. Jordan, T. Flash, and Y. Arnon.
Journal of Cognitive Neuroscience, 6, 359–376, 1994.

Learning without state estimation in partially observable Markovian decision
processes.
S. P. Singh, T. S. Jaakkola, and M. I. Jordan.
Machine Learning: Proceedings of the Eleventh International Conference,
San Mateo, CA: Morgan Kaufmann, 284–292, 1994.

A statistical approach to decision tree modeling.
M. I. Jordan. In M. Warmuth (Ed.), Proceedings of the Seventh
Annual ACM Conference on Computational Learning Theory,
New York: ACM Press, 1994.

Learning from incomplete data.
Z. Ghahramani and M. I. Jordan.
MIT Center for Biological and Computational Learning Technical Report 108, 1994.

Theoretical and experimental studies of convergence properties of
the EM algorithm based on finite Gaussian mixtures.
L. Xu and M. I. Jordan.
Proceedings of the 1994 International Symposium on Artificial Neural Networks,
Tainan, Taiwan, pp. 380–385, 1994.

Boltzmann chains and hidden Markov models.
L. K. Saul and M. I. Jordan. In G. Tesauro, D. S. Touretzky and
T. K. Leen, (Eds.), Advances in Neural Information Processing Systems (NIPS) 7,
MIT Press, 1994.

Reinforcement learning algorithm for partially observable Markov
decision problems.
T. S. Jaakkola, S. P. Singh, and M. I. Jordan.
In G. Tesauro, D. S. Touretzky and T. K. Leen, (Eds.),
Advances in Neural Information Processing Systems (NIPS) 7,
Cambridge, MA: MIT Press, 1994.

Reinforcement learning with soft state aggregation.
S. P. Singh, T. S. Jaakkola, and M. I. Jordan.
In G. Tesauro, D. S. Touretzky and T. K. Leen, (Eds.),
Advances in Neural Information Processing Systems (NIPS) 7,
Cambridge, MA: MIT Press, 1994.

Computational structure of coordinate transformations: A generalization study.
Z. Ghahramani, D. M. Wolpert, and M. I. Jordan.
In G. Tesauro, D. S. Touretzky and T. K. Leen, (Eds.),
Advances in Neural Information Processing Systems (NIPS) 7,
Cambridge, MA: MIT Press, 1994.

Neural forward dynamic models in human motor control: Psychophysical evidence.
D. M. Wolpert, Z. Ghahramani, and M. I. Jordan.
In G. Tesauro, D. S. Touretzky and T. K. Leen, (Eds.),
Advances in Neural Information Processing Systems (NIPS) 7,
Cambridge, MA: MIT Press, 1994.

An alternative model for mixtures of experts.
L. Xu, M. I. Jordan, and G. E. Hinton.
In G. Tesauro, D. S. Touretzky and T. K. Leen, (Eds.),
Advances in Neural Information Processing Systems (NIPS) 7,
Cambridge, MA: MIT Press, 1994.

Active learning with statistical models.
D. Cohn, Z. Ghahramani, and M. I. Jordan.
In G. Tesauro, D. S. Touretzky and T. K. Leen, (Eds.),
Advances in Neural Information Processing Systems (NIPS) 7,
Cambridge, MA: MIT Press, 1994.
pre1994

Forward models: Supervised learning with a distal teacher.
M. I. Jordan and D. E. Rumelhart. Cognitive Science, 16, 307–354, 1992.

Adaptive mixtures of local experts.
R. A. Jacobs, M. I. Jordan, S. Nowlan, and G. E. Hinton.
Neural Computation, 3, 79–87, 1991.

Learning piecewise control strategies in a modular neural network architecture.
R. A. Jacobs and M. I. Jordan.
IEEE Transactions on Systems, Man, and Cybernetics, 23,
337–345, 1993.

Trading relations between tongue-body raising and lip rounding in
production of the vowel /u/: A pilot motor equivalence study.
J. S. Perkell, M. L. Matthies, M. A. Svirsky, and M. I. Jordan.
Journal of the Acoustical Society of America, 93, 2948–2961, 1993.

Supervised learning and divide-and-conquer: A statistical approach.
M. I. Jordan and R. A. Jacobs.
In P. E. Utgoff, (Ed.), Machine Learning: Proceedings of
the Tenth International Workshop, San Mateo, CA: Morgan Kaufmann, 1993.

Supervised learning from incomplete data via the EM approach.
Z. Ghahramani and M. I. Jordan.
In Cowan, J., Tesauro, G., and Alspector, J., (Eds.),
Advances in Neural Information Processing Systems 6,
San Mateo, CA: Morgan Kaufmann, 1993.

Convergence of stochastic iterative dynamic programming algorithms.
T. Jaakkola, M. I. Jordan, and S. Singh.
In Cowan, J., Tesauro, G., and Alspector, J., (Eds.),
Advances in Neural Information Processing Systems 6,
San Mateo, CA: Morgan Kaufmann, 1993.

A dynamical model of priming and repetition blindness.
D. Bavelier and M. I. Jordan.
In Hanson, S. J., Cowan, J. D., and Giles, C. L., (Eds.),
Advances in Neural Information Processing Systems (NIPS) 5,
San Mateo, CA: Morgan Kaufmann, 1992.

EM learning of a generalized finite mixture model for combining
multiple classifiers.
L. Xu and M. I. Jordan.
Proceedings of the World Conference on Neural Networks,
Portland, OR, pp. 431434, 1993.

The cascade neural network model and a speed-accuracy trade-off of arm movement.
M. Hirayama, M. Kawato, and M. I. Jordan.
Journal of Motor Behavior, 25, 162–175, 1993.

Constrained supervised learning.
M. I. Jordan.
Journal of Mathematical Psychology, 36, 396–425, 1992.

Computational consequences of a bias towards short connections.
R. A. Jacobs and M. I. Jordan.
Journal of Cognitive Neuroscience, 4, 331–344, 1992.

Hierarchies of adaptive experts.
M. I. Jordan and R. A. Jacobs.
In J. Moody, S. Hanson, and R. Lippmann (Eds.),
Advances in Neural Information Processing Systems (NIPS) 4,
San Mateo, CA: Morgan Kaufmann, 1991.

Forward dynamics modeling of speech motor control using
physiological data.
M. Hirayama, E. VatikiotisBateson, M. Kawato, and M. I. Jordan.
In J. Moody, S. Hanson, and R. Lippmann (Eds.),
Advances in Neural Information Processing Systems (NIPS) 4,
San Mateo, CA: Morgan Kaufmann, 1991.

Supervised learning and excess degrees of freedom.
Jordan, M. I.
In P. Mehra, and B. Wah, (Eds.),
Artificial Neural Networks: Concepts and Theory,
Los Alamitos, CA: IEEE Computer Society Press, 1992.

Optimal control: A foundation for intelligent control.
D. A. White and M. I. Jordan.
In D. A. White, and D. A. Sofge (Eds.), Handbook of Intelligent Control,
Amsterdam: Van Nostrand, 1992.

Constraints on underspecified target trajectories.
M. I. Jordan.
In P. Dario, G. Sandini, and P. Aebischer, (Eds.),
Robots and Biological Systems: Toward a New Bionics,
Heidelberg: SpringerVerlag, 1992.

A more biologically plausible learning rule for neural networks.
P. Mazzoni, R. Andersen, and M. I. Jordan.
Proceedings of the National Academy of Sciences, 88,
4433–4437, 1991.

Task decomposition through competition in a modular connectionist
architecture: The what and where vision tasks.
R. A. Jacobs, M. I. Jordan, and A. G. Barto.
Cognitive Science, 15, 219–250, 1991.

Internal world models and supervised learning.
M. I. Jordan and D. E. Rumelhart.
In L. Birnbaum and G. Collins, (Eds.),
Machine Learning: Proceedings of the Eighth International
Workshop, San Mateo, CA: Morgan Kaufmann, pp. 70–75, 1991.

A competitive modular connectionist architecture.
R. A. Jacobs and M. I. Jordan.
In D. Touretzky (Ed.), Advances in Neural Information Processing Systems (NIPS) 4,
San Mateo, CA: Morgan Kaufmann, 1991.

Speech motor control model using electromyography.
M. Hirayama, E. VatikiotisBateson, M. Kawato, and M. I. Jordan.
INCN Conference on Speech Communications, 39–46, 1991.

A modular connectionist architecture for learning piecewise control strategies.
R. A. Jacobs and M. I. Jordan.
Proceedings of the 1991 American Control Conference,
Boston, MA, pp. 343–351, 1991.
[Best Paper Award].

A more biologically plausible learning rule than backpropagation applied
to a network model of cortical area 7a.
P. Mazzoni, R. Andersen, and M. I. Jordan.
Cerebral Cortex, 1, 293–307, 1991.

Modularity, supervised learning, and unsupervised learning.
M. I. Jordan and R. A. Jacobs.
In S. Davis (Ed.), Connectionism: Theory and practice,
Oxford: Oxford University Press, 1991.

A nonempiricist perspective on learning in layered networks.
M. I. Jordan.
Behavioral and Brain Sciences, 13, 497–498, 1990.

Simulation of vocalic gestures using an
articulatory model driven by a sequential neural network.
G. Bailly, M. I. Jordan, M. Mantakas, J.-L. Schwartz, M. Bach,
and O. Olesen.
Journal of the Acoustical Society of America, 87:S105, 1990.

A competitive modular connectionist architecture.
M. I. Jordan and R. A. Jacobs.
In R. Lippmann and J. Moody and D. Touretzky (Eds.),
Advances in Neural Information Processing Systems (NIPS) 3,
San Mateo, CA: Morgan Kaufmann, pp. 324–331, 1990.

AR-P learning applied to a network model of cortical area 7a.
P. Mazzoni, R. Andersen, and M. I. Jordan.
Proceedings of the International Joint Conference On Neural Networks,
San Diego, CA, pp. 373–379, 1990.

Motor learning and the degrees of freedom problem.
M. I. Jordan.
Attention and Performance, XIII, 796–836, 1990.

Learning inverse mappings with forward models.
M. I. Jordan.
In K. S. Narendra (Ed.), Proceedings of the Sixth Yale Workshop
on Adaptive and Learning Systems, New York: Plenum Press, 1990.

Action.
M. I. Jordan and D. A. Rosenbaum.
In M. I. Posner (Ed.), Foundations of Cognitive Science,
Cambridge, MA: MIT Press, 1989.

Learning to control an unstable system with forward modeling.
M. I. Jordan and R. A. Jacobs.
In D. Touretzky (Ed.),
Advances in Neural Information Processing Systems (NIPS) 2,
San Mateo, CA: Morgan Kaufmann, pp. 324–331, 1989.

Gradient following without backpropagation in layered networks.
A. G. Barto and M. I. Jordan.
Proceedings of the IEEE First Annual International Conference on
Neural Networks,
New York: IEEE Publishing Services, 1987.

An introduction to linear algebra in parallel, distributed processing.
M. I. Jordan.
In D. E. Rumelhart and J. L. McClelland, (Eds.),
Parallel Distributed Processing: Explorations in the Microstructure of Cognition,
Cambridge, MA: MIT Press, 1986.

Attractor dynamics and parallelism in a connectionist sequential machine.
M. I. Jordan.
Proceedings of the Eighth Annual Conference of the Cognitive Science Society,
Englewood Cliffs, NJ: Erlbaum, pp. 531–546, 1986. [Reprinted in IEEE Tutorials
Series, New York: IEEE Publishing Services, 1990].