Shiry Ginosar - Homepage

shiry at eecs dot berkeley dot edu
Google Scholar / Twitter

I am a Computing Innovation Postdoctoral Fellow at UC Berkeley, advised by Jitendra Malik, and a Visiting Faculty Researcher at Google Research.

I completed my Ph.D. in Computer Science at UC Berkeley, under the supervision of Alyosha Efros. Prior to joining the Computer Vision group, I was part of Bjoern Hartmann's Human-Computer Interaction lab at Berkeley. Earlier in my career, I was a Visiting Scholar in the CS Department at Carnegie Mellon University, working with Luis von Ahn and Manuel Blum on Human Computation. Between my academic roles, I spent four years at Endeca as a Senior Software Engineer. In my distant past, I trained fighter pilots in F-4 Phantom flight simulators as a Staff Sergeant in the Israeli Air Force.

My research has been covered by The New Yorker, The Wall Street Journal, and The Washington Post, among others. My work has been featured on PBS NOVA, exhibited at the Israeli Design Museum, and is part of the permanent collection of the Deutsches Museum. My patent-pending research inspired the founding of a startup. I have been named a Rising Star in EECS, and I am a recipient of the U.S. National Science Foundation Graduate Research Fellowship, the California Legislature Grant for graduate studies, and the Samuel Silver Memorial Scholarship Award for combining intellectual achievement in science and engineering with serious humanistic and cultural interests.


Selected Publications

  • predicting listener response from speaker text

    Can Language Models Learn to Listen?

    Squeezing lexical semantic "juice" out of large language models.
    Given only the spoken text of a speaker, we synthesize a realistic, synchronous listener. Our text-based model responds in an emotionally appropriate manner when lexical semantics is crucial, for example when it is not appropriate to smile despite a speaker's uneasy laughter. Technically, we squeeze out as much semantic "juice" as possible from a pretrained large language model by finetuning it to autoregressively generate realistic 3D listener motion in response to the input transcript.
    Main innovation: We treat atomic gesture elements as novel language tokens easily ingestible by language models. We can then finetune LLMs to synthesize motion by predicting sequences of these elements.

    Evonne Ng*, Sanjay Subramanian*, Dan Klein, Angjoo Kanazawa, Trevor Darrell, and Shiry Ginosar, Can Language Models Learn to Listen?, ICCV 2023. PDF, Video, Project Page
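
    A minimal sketch of the token idea (illustrative only; the base model, vocabulary size, and names are assumptions, not the paper's released code):

    # Extend a causal LM's vocabulary with discrete motion tokens, then
    # finetune it to continue a transcript with listener-motion tokens.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Assumption: listener motion is pre-quantized into K codebook indices,
    # each mapped to a new token such as "<motion_17>".
    K = 256
    tokenizer.add_tokens([f"<motion_{i}>" for i in range(K)])
    model.resize_token_embeddings(len(tokenizer))

    # One finetuning step: the speaker transcript is the prefix, and the
    # listener's motion-token sequence is the continuation to predict.
    text = "so I was really nervous about the results "
    motion = "<motion_17><motion_4><motion_201>"
    batch = tokenizer(text + motion, return_tensors="pt")
    loss = model(input_ids=batch["input_ids"], labels=batch["input_ids"]).loss
    loss.backward()  # in practice: mask the loss to the motion span, then step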


  • predicting listener response from speaker speech

    Learning2Listen

    Learning to respond like a good listener.
    Given a speaker, we synthesize a realistic, synchronous listener. To do this, we learn human interaction 101: the delicate dance of non-verbal communication. We expect good listeners to look us in the eye, synchronize their motion with ours, and mirror our emotions. You can't annotate this! So we must learn from raw data. Technically, we are the first to extend vector-quantization methods to motion synthesis. We show that our novel sequence-encoding VQ-VAE, coupled with a transformer-based prediction mechanism, significantly outperforms competing motion-generation methods.

    Evonne Ng, Hanbyul Joo, Liwen Hu, Hao Li, Trevor Darrell, Angjoo Kanazawa, and Shiry Ginosar, Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion, CVPR 2022. PDF, Video, Project Page
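
    An illustrative sketch of the vector-quantization step at the heart of a motion VQ-VAE (shapes, sizes, and names here are assumptions, not the paper's code):

    import torch
    import torch.nn as nn

    class VectorQuantizer(nn.Module):
        """Snap each encoder output to its nearest codebook entry."""
        def __init__(self, num_codes=256, dim=128):
            super().__init__()
            self.codebook = nn.Embedding(num_codes, dim)

        def forward(self, z):  # z: (batch, time, dim) motion-encoder output
            flat = z.reshape(-1, z.size(-1))             # (batch*time, dim)
            d = torch.cdist(flat, self.codebook.weight)  # distance to each code
            idx = d.argmin(dim=-1).view(z.shape[:-1])    # nearest code ids
            z_q = self.codebook(idx)                     # quantized motion
            return z + (z_q - z).detach(), idx           # straight-through grad

    vq = VectorQuantizer()
    motion_features = torch.randn(2, 64, 128)  # e.g. 64 frames of facial motion
    quantized, codes = vq(motion_features)     # codes feed a transformer predictor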


  • predicting hand shape from body motion

    Body2Hands

    Learning to Infer 3D Hands from Conversational Gesture Body Dynamics.
    We present a novel learned deep prior of body motion for 3D hand shape synthesis and estimation in the domain of conversational gestures. Our model builds upon the insight that body motion and hand gestures are strongly correlated in non-verbal communication settings. We formulate the learning of this prior as a prediction task of 3D hand shape given body motion input alone.

    Evonne Ng, Shiry Ginosar, Trevor Darrell and Hanbyul Joo, Body2Hands: Learning to Infer 3D Hands from Conversational Gesture Body Dynamics, CVPR 2021. PDF, Video, Project Page
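
    A minimal sketch of the formulation (architecture and dimensions are assumptions; the paper's model differs): regress 3D hand pose parameters from a window of body motion alone.

    import torch
    import torch.nn as nn

    body_dim, hand_dim, T = 36, 90, 64        # e.g. arm joints in, hand pose out

    predictor = nn.Sequential(                # stand-in for the full architecture
        nn.Conv1d(body_dim, 256, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.Conv1d(256, hand_dim, kernel_size=5, padding=2),
    )

    body_motion = torch.randn(8, body_dim, T)  # batch of body-motion windows
    pred_hands = predictor(body_motion)        # (8, hand_dim, T)
    loss = nn.functional.l1_loss(pred_hands, torch.randn_like(pred_hands))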


  • dissertation committee on zoom

    Modeling Visual Minutiae: Gestures, Styles, and Temporal Patterns

    Ph.D. Dissertation.

    Dissertation, Appendix: Dissertation in the Time of Corona


  • modifying sun azimuth

    Learning to Factorize and Relight a City

    Disentangle changing factors from permanent ones.
    We disentangle outdoor scenes into temporally-varying illumination and permanent scene factors. To facilitate training, we assemble a city-scale dataset of outdoor timelapse imagery from Google Street View Time Machine, where the same locations are captured repeatedly through time. Our learned disentangled factors can be used to manipulate novel images in realistic ways, such as changing lighting effects and scene geometry.

    Andrew Liu, Shiry Ginosar, Tinghui Zhou, Alexei A. Efros and Noah Snavely, Learning to Factorize and Relight a City, ECCV 2020. PDF, Video, Project Page

    @inproceedings{Liu2020city,
      author = {Liu, Andrew and Ginosar, Shiry and Zhou, Tinghui and Snavely, Noah and Efros, Alexei A.},
      title = {Learning to Factorize and Relight a City},
      booktitle = {European Conference on Computer Vision (ECCV)},
      year = 2020,
      }
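
    An illustrative sketch of the factorization idea (encoders, sizes, and names are assumptions): split an image into a permanent-scene code and a time-varying illumination code, then decode; swapping illumination codes between shots of the same place relights it.

    import torch
    import torch.nn as nn

    enc_perm = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
                             nn.Conv2d(64, 64, 4, 2, 1))
    enc_illum = nn.Sequential(nn.Conv2d(3, 8, 4, 2, 1), nn.ReLU(),
                              nn.Conv2d(8, 8, 4, 2, 1))
    decoder = nn.Sequential(nn.ConvTranspose2d(72, 64, 4, 2, 1), nn.ReLU(),
                            nn.ConvTranspose2d(64, 3, 4, 2, 1))

    img_day = torch.randn(1, 3, 128, 128)
    img_dusk = torch.randn(1, 3, 128, 128)  # same place, different time
    permanent = enc_perm(img_day)           # geometry, facades: shared over time
    lighting = enc_illum(img_dusk)          # sun position, weather: varies
    relit = decoder(torch.cat([permanent, lighting], dim=1))
    # Training: reconstruct each timelapse frame from its own lighting code plus
    # a permanent code shared across all frames of that location.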


  • speech to gesture translation

    Learning Individual Styles of Conversational Gesture

    Audio to motion translation.
    Human speech is often accompanied by hand and arm gestures. Given audio speech input, we generate plausible gestures to go along with the sound. Specifically, we perform cross-modal translation from "in-the-wild" monologue speech of a single speaker to their hand and arm motion. We train on unlabeled videos for which we only have noisy pseudo ground truth from an automatic pose detection system. We release a large video dataset of person-specific gestures.

    Shiry Ginosar*, Amir Bar*, Gefen Kohavi, Caroline Chan, Andrew Owens and Jitendra Malik, Learning Individual Styles of Conversational Gesture, CVPR 2019. PDF, Project Page
    @inproceedings{ginosar2019,
      title={Learning Individual Styles of Conversational Gesture},
      author={Ginosar, Shiry and Bar, Amir and Kohavi, Gefen and Chan, Caroline and Owens, Andrew and Malik, Jitendra},
      booktitle={Computer Vision and Pattern Recognition (CVPR)},
      year={2019}
      }
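
    A minimal sketch of the cross-modal translation setup (shapes and the plain convolutional stand-in are assumptions): map a speech spectrogram to a sequence of 2D keypoints, supervised by pseudo ground truth from an off-the-shelf pose detector.

    import torch
    import torch.nn as nn

    n_mels, n_keypoints, T = 64, 49, 128   # e.g. arm + hand joints

    audio_to_pose = nn.Sequential(         # stand-in for the paper's model
        nn.Conv1d(n_mels, 256, 5, padding=2), nn.ReLU(),
        nn.Conv1d(256, 256, 5, padding=2), nn.ReLU(),
        nn.Conv1d(256, n_keypoints * 2, 5, padding=2),
    )

    spectrogram = torch.randn(4, n_mels, T)  # batch of audio windows
    pred = audio_to_pose(spectrogram)        # (4, 2*n_keypoints, T): x, y per joint
    pseudo_gt = torch.randn_like(pred)       # pose-detector output on raw video
    loss = nn.functional.l1_loss(pred, pseudo_gt)  # regression to noisy pseudo GT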

  • motion retargeting for dance

    Everybody Dance Now!

    "Do as I do" motion transfer.
    Given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject's appearance. We adapt this setup for temporally coherent video generation, including realistic face synthesis.

    Caroline Chan, Shiry Ginosar, Tinghui Zhou and Alexei A. Efros, Everybody Dance Now, ICCV 2019. PDF, Video, Project Page, Check out the Sway: Magic Dance App!

    @inproceedings{Chan2019dance,
      author = {Chan, Caroline and Ginosar, Shiry and Zhou, Tinghui and Efros, Alexei A.},
      title = {Everybody Dance Now},
      booktitle = {IEEE International Conference on Computer Vision (ICCV)},
      year = 2019,
      }
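
    An illustrative sketch (toy generator; the actual system uses a full GAN with dedicated face refinement): condition a pix2pix-style generator on a rendered pose figure plus the previously generated frame for temporal coherence.

    import torch
    import torch.nn as nn

    generator = nn.Sequential(                 # toy stand-in for a GAN generator
        nn.Conv2d(6, 64, 4, 2, 1), nn.ReLU(),  # 3ch pose image + 3ch previous frame
        nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
    )

    pose_img = torch.randn(1, 3, 256, 256)    # source dancer's pose, retargeted
    prev_frame = torch.zeros(1, 3, 256, 256)  # last synthesized target frame
    frame = generator(torch.cat([pose_img, prev_frame], dim=1))
    # Training pairs (pose, frame) come from the target subject's own video;
    # at test time, poses are extracted from the source dance video instead.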


  • object detections in breughel paintings

    The Burgeoning Computer-Art Symbiosis

    "Computers help us understand art. Art helps us teach computers."

    Shiry Ginosar, Xi Shen, Karan Dwivedi, Elizabeth Honig, and Mathieu Aubry, The Burgeoning Computer-Art Symbiosis, XRDS: Crossroads, The ACM Magazine for Students, Volume 24, Issue 3 (Computers and Art), Spring 2018, pp. 30-33. PDF

    @article{ginosar2018art,
      author = {Ginosar, Shiry and Shen, Xi and Dwivedi, Karan and Honig, Elizabeth and Aubry, Mathieu},
      title = {The Burgeoning Computer-art Symbiosis},
      journal = {XRDS},
      issue_date = {Spring 2018},
      volume = {24},
      number = {3},
      month = apr,
      year = {2018},
      issn = {1528-4972},
      pages = {30--33},
      numpages = {4},
      url = {http://doi.acm.org/10.1145/3186655},
      doi = {10.1145/3186655},
      acmid = {3186655},
      publisher = {ACM},
      address = {New York, NY, USA},
    }


  • hair fashions per decade

    A Century of Portraits

    "What makes the 60's look like the 60's?"
    Many details about our world are not captured in written records because they are too mundane or too abstract to describe in words. Fortunately, since the invention of the camera, an ever-increasing number of photographs capture much of this otherwise lost information. This plethora of artifacts documenting our “visual culture” is a treasure trove of knowledge as yet untapped by historians. We present a dataset of 37,921 frontal-facing American high school yearbook photos that allow us to use computation to glimpse into a historical visual record too voluminous to be examined manually. The collected portraits provide a constant visual frame of reference with varying content. We can therefore use them to consider issues such as a decade’s defining style elements, or trends in fashion and social norms over time.

    Shiry Ginosar, Kate Rakelly, Sarah Sachs, Brian Yin, Crystal Lee, Philipp Krähenbühl and Alexei A. Efros, A Century of Portraits: A Visual Historical Record of American High School Yearbooks, Extreme Imaging Workshop, ICCV 2015, and IEEE Transactions on Computational Imaging, September 2017. PDF, Project Page

    @ARTICLE{ginosar2017yearbooks,
      author={Ginosar, Shiry and Rakelly, Kate and Sachs, Sarah M. and Yin, Brian and Lee, Crystal and Krähenbühl, Philipp and Efros, Alexei A.},
      journal={IEEE Transactions on Computational Imaging},
      title={A Century of Portraits: A Visual Historical Record of American High School Yearbooks},
      year={2017},
      volume={3},
      number={3},
      pages={421-431},
      keywords={Data mining;Face;Imaging;Market research;Sociology;Statistics;Visualization;Data mining;deep learning;historical data;image dating},
      doi={10.1109/TCI.2017.2699865},
      month={Sept}
    }


  • object detection in a Picasso image

    Object Detection in Abstract Art

    The human visual system is just as good at recognizing objects in paintings and other abstract depictions as it is at recognizing objects in their natural form. Computer vision methods can also recognize objects outside of natural images; however, their model of the visual world may not always align with the human one. If the goal of computer vision is to mimic the human visual system, then we must strive to align detection models with human perception. We propose to use Picasso's Cubist paintings to test whether detection methods mimic the human invariance to object fragmentation and part re-organization. We find that while humans significantly outperform current methods, human perception and part-based object models exhibit a similarly graceful degradation as abstraction increases, further corroborating the theory of part-based object representation in the brain.

    Shiry Ginosar, Daniel Haas, Timothy Brown, and Jitendra Malik, Detecting People in Cubist Art, VisArt Workshop on Computer Vision for Art Analysis, ECCV 2014. PDF

    @incollection{ginosar2014detecting,
      title={Detecting people in Cubist art},
      author={Ginosar, Shiry and Haas, Daniel and Brown, Timothy and Malik, Jitendra},
      booktitle={Computer Vision-ECCV 2014 Workshops},
      pages={101--116},
      year={2014},
      publisher={Springer International Publishing}
    }


  • speech interface for document coding

    Using Speech Recognition in Information Intensive Tasks

    Speech input is growing in importance, especially in mobile applications, but less research has been done on speech input for information-intensive tasks like document editing and coding. This paper presents the results of a study on the use of a modern, publicly available speech recognition system for document coding.

    Shiry Ginosar and Marti A. Hearst, A Study of the Use of Current Speech Recognition in an Information Intensive Task, Workshop on Designing Speech and Language Interactions, CHI 2014. PDF


  • multi-stage code examples editor

    Editable Code Histories

    An IDE extension that supports authoring multi-stage code examples by allowing the author to propagate changes (insertions, deletions, and modifications) throughout multiple saved stages of their code.

    Shiry Ginosar, Luis Fernando De Pombo, Maneesh Agrawala, Bjoern Hartmann, Authoring Multi-Stage Code Examples with Editable Code Histories, UIST 2013. PDF, Video

    @inproceedings{ginosar2013authoring,
      title={Authoring multi-stage code examples with editable code histories},
      author={Ginosar, Shiry and De Pombo, Luis Fernando and Agrawala, Maneesh and Hartmann, Bjoern},
      booktitle={Proceedings of the 26th annual ACM symposium on User interface software and technology},
      pages={485--494},
      year={2013},
      organization={ACM}
    }
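
    A toy sketch of the propagation idea (the data model and function are hypothetical, for illustration only): an edit made in one stage is re-applied to every later saved stage containing the same line.

    # Each stage of the example is a saved list of source lines.
    stages = [
        ["def area(r):", "    return 3.14 * r * r"],
        ["def area(r):", "    return 3.14 * r * r", "print(area(2))"],
    ]

    def propagate(stages, from_stage, line_no, new_line):
        """Replace a line in one stage and in all later stages that contain it."""
        old_line = stages[from_stage][line_no]
        for stage in stages[from_stage:]:
            for i, line in enumerate(stage):
                if line == old_line:
                    stage[i] = new_line

    propagate(stages, 0, 1, "    return math.pi * r * r")
    print(stages[1])  # the fix now appears in the later stage too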


  • crowdsourced data analysis workflow

    Crowdsourced Data Analysis

    A system that lets analysts use paid crowd workers to explore data sets and helps analysts interactively examine and build upon workers' insights.

    Wesley Willett, Shiry Ginosar, Avital Steinitz, Bjoern Hartmann, and Maneesh Agrawala, Identifying Redundancy and Exposing Provenance in Crowdsourced Data Analysis, IEEE Transactions on Visualization and Computer Graphics, 2013. PDF

    @article{willett2013identifying,
      title={Identifying Redundancy and Exposing Provenance in Crowdsourced Data Analysis},
      author={Willett, Wesley and Ginosar, Shiry and Steinitz, Avital and Hartmann, Bjorn and Agrawala, Maneesh},
      journal={IEEE Transactions on Visualization and Computer Graphics},
      volume={19},
      number={12},
      pages={2198--2206},
      year={2013},
      publisher={IEEE}
    }


  • phetch game logo

    Phetch - A Human Computation Game

    Phetch is an online game that collects natural-language descriptions of images on the web as a side effect of game play. These descriptions can be used to improve the accessibility of the web as well as current image search engines.

    Shiry Ginosar, Human Computation for HCIR Evaluation, Proceedings, HCIR 2007, pp. 40-42. PDF

    Luis von Ahn, Shiry Ginosar, Mihir Kedia, and Manuel Blum, Improving Image Search with Phetch, ICASSP 2007. PDF

    Luis von Ahn, Shiry Ginosar, Mihir Kedia, Ruoran Liu, and Manuel Blum, Improving Accessibility of the Web with a Computer Game, CHI 2006. Honorable mention paper and nominee for the Best of CHI award. PDF, Press Coverage

    @inproceedings{von2006improving,
      title={Improving accessibility of the web with a computer game},
      author={Von Ahn, Luis and Ginosar, Shiry and Kedia, Mihir and Liu, Ruoran and Blum, Manuel},
      booktitle={Proceedings of the SIGCHI conference on Human Factors in computing systems},
      pages={79--82},
      year={2006},
      organization={ACM}
    }

Other Projects

  • H20-IQ device

    H20-IQ

    A tablet-controlled, solar-powered drip irrigation system. A humidity sensor at the tip of each "spike" records soil moisture; an internal servo in the 3D-printed enclosure opens and closes a drip irrigation line valve. Individual devices in a garden communicate with a central garden server, which also acts as a webserver hosting the HTML-based user interface. Gardeners can review graphs of humidity readings over time and adjust watering plans through this web application.

    Joint class project with Valkyrie Savage and Mark Fuge.

    Featured in Bjoern Hartmann and Paul K. Wright Designing Bespoke Interactive Devices, IEEE Computer August 2013, Volume 46, Number 8. Article
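
    A hypothetical sketch of the per-device control loop (names, thresholds, and the reporting call are illustrative, not the actual firmware):

    import time

    DRY_THRESHOLD = 0.30        # assumed normalized soil-moisture level

    def read_moisture():        # placeholder for the humidity-sensor driver
        return 0.25

    def set_valve(open_valve):  # placeholder for the servo driver
        print("valve", "open" if open_valve else "closed")

    def report(level):          # placeholder: send reading to the garden server
        print("reported moisture:", level)

    while True:                 # the device loops for as long as it has power
        level = read_moisture()
        report(level)                      # server stores it for the web UI graphs
        set_valve(level < DRY_THRESHOLD)   # drip only while the soil is dry
        time.sleep(60)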

Teaching

Co-Teacher and GSI, Image Manipulation and Computational Photography, Fall 2018
GSI, Image Manipulation and Computational Photography, Fall 2014

Undergraduate and MA Researchers

Former Students
Vivien Nguyen (Now @ Princeton)
Varsha Ramakrishnan
Gefen Kohavi
Caroline Mai Chan (Now @ MIT)
Hemang Jeetendra Jangle
Daniel Tsai
Crystal Lee
Kate Rakelly (Now @ UC Berkeley)
Brian Yin
Sarah Sachs
Timothy Brown
Luis Fernando de Pombo