Mirror Mirror: Crowdsourcing Better Portraits




We describe a method for providing feedback on portrait expressions, and for selecting the most attractive expressions from large video/photo collections. We capture a video of a subject’s face while they are engaged in a task designed to elicit a range of positive emotions. We then use crowdsourcing to score the captured expressions for their attractiveness. We use these scores to train a model that can automatically predict the attractiveness of different expressions of a given person. We also train a cross-subject model that evaluates the portrait attractiveness of novel subjects, and show how it can be used to automatically mine attractive photos from personal photo collections. Furthermore, we show how, with a little bit ($5-worth) of extra crowdsourcing, we can substantially improve the cross-subject model by “fine-tuning” it to a new individual using active learning. Finally, we demonstrate a training app that helps people learn how to mimic their best expressions.
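As a concrete illustration of the photo-mining step, here is a minimal MATLAB sketch (not the authors’ released code) of how a trained cross-subject model could rank a folder of portraits by predicted attractiveness. The model file, the crude grayscale resize, and the use of extractHOGFeatures and predict are assumptions for illustration; the actual pipeline performs its own face detection, alignment, and feature extraction.

% Minimal sketch: rank portraits by predicted attractiveness using a
% pre-trained cross-subject model (hypothetical file crossSubjectModel.mat).
% Requires the Image Processing and Computer Vision Toolboxes.
load('crossSubjectModel.mat', 'mdl');            % hypothetical pre-trained model
files = dir(fullfile('photos', '*.jpg'));
scores = zeros(numel(files), 1);
for i = 1:numel(files)
    img = imread(fullfile('photos', files(i).name));
    if size(img, 3) == 3, img = rgb2gray(img); end
    img = imresize(img, [128 128]);              % crude normalization; the paper aligns faces
    feat = extractHOGFeatures(img);              % HOG descriptor, as in the released features
    scores(i) = predict(mdl, feat);              % predicted attractiveness score
end
[~, order] = sort(scores, 'descend');
bestPhotos = {files(order(1:min(5, end))).name}; % keep the top-5 portraits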



SIGGRAPH Asia paper. (pdf, 48MB)

Reduced-size SIGGRAPH Asia paper. (pdf, 2.6MB)


Presentation slides (pptx + videos), 136MB


Jun-Yan Zhu, Aseem Agarwala, Alexei A. Efros, Eli Shechtman, and Jue Wang. Mirror Mirror: Crowdsourcing Better Portraits. ACM Transactions on Graphics (SIGGRAPH Asia 2014), Vol. 33, No. 6, December 2014.

Additional Materials



The data (898MB) includes the original videos, the selected representative frames, the attractiveness/seriousness scores estimated from the crowdsourced annotations, and the extracted HOG features. We also provide MATLAB code to visualize the scores and train cross-subject SVM models.
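As a rough guide to how the released features and scores could be used, the sketch below trains a cross-subject attractiveness model with MATLAB’s built-in support vector regression (fitrsvm, from the Statistics and Machine Learning Toolbox). The file and variable names (features_and_scores.mat, X, y, subjectId) are placeholders rather than the names used in the release, and the released code may use a different SVM formulation and evaluation protocol.

% Minimal training sketch with placeholder names, not the released code.
% X: one row of HOG features per frame; y: crowdsourced attractiveness scores;
% subjectId: which subject each frame comes from (for cross-subject splits).
load('features_and_scores.mat', 'X', 'y', 'subjectId');   % hypothetical file
subjects = unique(subjectId);
heldOut = ismember(subjectId, subjects(1:2));              % hold out two subjects for testing
mdl = fitrsvm(X(~heldOut, :), y(~heldOut), ...
              'KernelFunction', 'linear', 'Standardize', true);
pred = predict(mdl, X(heldOut, :));                        % scores for unseen subjects
rho = corr(pred, y(heldOut), 'Type', 'Spearman');          % rank agreement with the crowd
fprintf('Spearman correlation on held-out subjects: %.2f\n', rho);

Holding out entire subjects, rather than random frames, is what makes such an evaluation genuinely cross-subject: the model is scored on faces it has never seen during training.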


We thank Peter O’Donovan for code, Andrew Gallagher for public data, and our subjects for volunteering to be recorded. Figure 1 uses icons by Parmelyn, Dan Hetteix, and Murali Krishna from The Noun Project. The YouTube frames (Figure 16) are courtesy of Joshua Michael Shelton.


This research is supported in part by: