Daniel Seita

I am a computer science PhD student at the University of California, Berkeley, working on robotic manipulation and machine learning as part of Berkeley Artificial Intelligence Research (BAIR). I am interested in developing robotic systems for deployment in complex, unstructured environments, such as surgical robotics and assistive home robotics.

I am fortunate to be advised by John Canny and Ken Goldberg. I am generously supported by the Graduate Fellowships for STEM Diversity (from 2015 to 2021), funded through the National Security Agency. I am originally from Albany, New York, and came to robotics at UC Berkeley through a long and winding road.

Email / Github / Google Scholar / Twitter / Biography
CV (March 2021) / Research Statement (April 2021)

I am planning to graduate this year, and am searching for research scientist and postdoc positions.

News and Updates

In reverse chronological order:

  • 04/30/2021: The New York Times featured our work on surgical robotics!
  • 04/xx/2021: Invited research talks at Williams, Stanford, CMU, and Berkeley (BAIR).
  • 03/19/2021: Invited research talk at the University of Toronto. (Video)
  • 02/28/2021: Four research papers were accepted to ICRA 2021. We will be presenting virtually.
  • 02/17/2021: Invited research talk at Siemens' robotics team.
  • 12/31/2020: Multiple preprints are available on visual servoing, imitation learning, and deformable manipulation.
  • 07/18/2020: Papers on fabric smoothing and surgical calibration were accepted to IROS 2020 and RA-Letters 2020.
  • 05/05/2020: Our work on VisuoSpatial Foresight was accepted to RSS 2020! We will be presenting virtually.
  • 05/05/2020: We released a new BAIR Blog post about some of our work in robot fabric manipulation.
  • 03/23/2020: I was interviewed by Sayak Paul of PyImageSearch. You can read it here on Medium.
  • 03/20/2020: We have three new preprints available, and an update to the 2019 preprint on fabric smoothing.
  • 03/20/2020: Our paper on surgical peg transfer was accepted to ISMR 2020 (postponed to November).
  • 03/05/2020: I'll be interning at Google Brain this summer (remotely), working with the robotics team!
  • 10/08/2019: I attended ISRR 2019 in Vietnam and presented our paper on robot bed-making.
  • 10/01/2019: We have a new preprint on fabric smoothing, and a paper at the Deep RL workshop at NeurIPS 2019.
  • 07/31/2019: Our paper on robotic cloth manipulation and bed-making has been accepted to ISRR 2019.
  • 10/23/2018: We have a new BAIR Blog post about work in the AUTOLAB related to depth sensing.
  • 04/24/2018: I passed my PhD qualifying exam. Please see the bottom of this website for a transcript.
  • 01/11/2018: Our paper on surgical debridement and calibration has been accepted to ICRA 2018.
  • 08/02/2017: We wrote a BAIR Blog post about our work on minibatch Metropolis-Hastings.

Recent Talk

Here is a talk I gave "at" the University of Toronto in March 2021, which provides a representative overview of my research.

Research Publications

Below, you can find my publications, as well as links to code, relevant blog posts, and paper reviews. I strongly believe that researchers should make code publicly available. Our code is usually on GitHub, where you can file issues with questions.

I list papers under review (i.e., "preprints") first, followed by accepted papers at conferences, journals, and other venues in reverse chronological order. If a paper is on arXiv, that's where you can find the latest version. As is standard in our field, authors are ordered by contribution level, and asterisks (*) denote equal contribution.

Life is short. :-) If you only have time to read one or two of the papers below, then I recommend our recent ICRA 2021 paper "Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks" or our RSS 2020 paper "VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation" (or its journal paper extension).

Self-Supervised Learning of Dynamic Planar Manipulation of Free-End Cables
Jonathan Wang*, Huang Huang*, Vincent Lim, Harry Zhang, Jeffrey Ichnowski, Daniel Seita, Yunliang Chen, Ken Goldberg
Preprint, in submission, March 2021.
[Coming Soon]

Coming soon!

LazyDAgger: Reducing Context Switching in Interactive Imitation Learning
Ryan Hoque, Ashwin Balakrishna, Carl Putterman, Michael Luo, Daniel Brown, Daniel Seita, Brijen Thananjeyan, Ellen Novoseller, Ken Goldberg
Preprint, in submission, March 2021.
[arXiv] [Project Website and Code] [BibTeX]

We propose an interactive imitation learning method that minimizes the amount of context switching that occurs when control passes back and forth between an agent and a supervisor.

VisuoSpatial Foresight for Physical Sequential Fabric Manipulation
Ryan Hoque*, Daniel Seita*, Ashwin Balakrishna, Aditya Ganapathi, Ajay Kumar Tanwani, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg
Preprint, in submission, February 2021.
[arXiv] [Project Website and Code] [BibTeX] [Blog Post]

This is an extension of our RSS 2020 conference paper, which presented VisuoSpatial Foresight (VSF). Here, we systematically explore ways to improve different stages of the VSF pipeline, and find that adjusting the data generation enables better physical fabric folding.

Superhuman Surgical Peg Transfer Using Depth-Sensing and Deep Recurrent Neural Networks
Minho Hwang, Brijen Thananjeyan, Daniel Seita, Jeffrey Ichnowski, Samuel Paradis, Danyal Fer, Thomas Low, Ken Goldberg
Preprint, in submission, December 2020.
[arXiv] [Project Website and Code] [BibTeX] [Media Coverage (NYTimes)]

We use depth sensing, recurrent neural networks, and a new trajectory optimizer to get an automated surgical robot to outperform a human surgical resident on the peg transfer task.

This is an extension of our ISMR 2020 and IEEE RA-Letters 2020 papers on surgical peg transfer.

Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks
Daniel Seita, Pete Florence, Jonathan Tompson, Erwin Coumans, Vikas Sindhwani, Ken Goldberg, Andy Zeng
IEEE International Conference on Robotics and Automation (ICRA), May 2021. (Virtual)
[arXiv] [Project Website and Code] [BibTeX] [Reviews]

We design a suite of tasks for benchmarking deformable object manipulation, including 1D cables, 2D fabrics, and 3D bags. We use Transporter Networks to learn policies for some of these tasks, and design goal-conditioned variants for the others.

Robots of the Lost Arc: Self-Supervised Learning to Dynamically Manipulate Fixed-Endpoint Cables
Harry Zhang, Jeffrey Ichnowski, Daniel Seita, Jonathan Wang, Huang Huang, Ken Goldberg
IEEE International Conference on Robotics and Automation (ICRA), May 2021. (Virtual)
[arXiv] [Project Website and Code] [BibTeX] [Reviews]

We propose a method that enables a UR5 arm to perform high-speed dynamic cable manipulation tasks. The action is a parabolic arcing motion, and we predict the single apex point that defines it.

Learning Dense Visual Correspondences in Simulation to Smooth and Fold Real Fabrics
Aditya Ganapathi, Priya Sundaresan, Brijen Thananjeyan, Ashwin Balakrishna, Daniel Seita, Jennifer Grannen, Minho Hwang, Ryan Hoque, Joseph Gonzalez, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg
IEEE International Conference on Robotics and Automation (ICRA), May 2021. (Virtual)
[arXiv] [Project Website and Code] [BibTeX] [Reviews]

We use dense object nets trained on simulated data and apply them to fabric manipulation tasks. Since we train correspondences, we can take an action applied to one fabric and "map" the corresponding action to a new fabric configuration. We have an IROS 2020 workshop paper that extends this idea to multi-modal distributions. [arXiv]

Intermittent Visual Servoing: Efficiently Learning Policies Robust to Tool Changes for High-precision Surgical Manipulation
Samuel Paradis, Minho Hwang, Brijen Thananjeyan, Jeffrey Ichnowski, Daniel Seita, Danyal Fer, Thomas Low, Joseph E. Gonzalez, Ken Goldberg
IEEE International Conference on Robotics and Automation (ICRA), May 2021. (Virtual)
[arXiv] [Project Website and Code] [BibTeX] [Reviews]

We propose a framework that uses a coarse controller in free space and imitation learning for precise actions in the regions that demand the most accuracy. We test on the peg transfer task and show high success rates, as well as transferability of the learned model across multiple surgical arms.

Deep Imitation Learning of Sequential Fabric Smoothing From an Algorithmic Supervisor
Daniel Seita, Aditya Ganapathi, Ryan Hoque, Minho Hwang, Edward Cen, Ajay Kumar Tanwani, Ashwin Balakrishna, Brijen Thananjeyan, Jeffrey Ichnowski, Nawid Jamali, Katsu Yamane, Soshi Iba, John Canny, Ken Goldberg
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2020. (Virtual)
[arXiv] [Project Website and Code] [BibTeX] [Reviews] [Blog Post]

We design a custom fabric simulator, and script a corner-pulling demonstrator to train a fabric smoothing policy entirely in simulation using imitation learning. We transfer the policy to a physical da Vinci surgical robot.

Efficiently Calibrating Cable-Driven Surgical Robots with RGBD Fiducial Sensing and Recurrent Neural Networks
Minho Hwang, Brijen Thananjeyan, Samuel Paradis, Daniel Seita, Jeffrey Ichnowski, Danyal Fer, Thomas Low, Ken Goldberg
IEEE Robotics and Automation Letters (RA-L), October 2020.
[arXiv] [Project Website and Code] [BibTeX]

We propose a method for calibrating the da Vinci surgical robot using balls attached to the end-effector. We apply it to the peg transfer task.

VisuoSpatial Foresight for Multi-Step, Multi-Task Fabric Manipulation
Ryan Hoque*, Daniel Seita*, Ashwin Balakrishna, Aditya Ganapathi, Ajay Tanwani, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg
Robotics: Science and Systems (RSS), July 2020. (Virtual)
[arXiv] [Project Website and Code] [BibTeX] [Reviews] [Blog Post]

We propose VisuoSpatial Foresight, an extension of visual foresight that additionally uses depth information, and use it for predicting what fabric observations (i.e., images) will look like given a series of actions.

We have since extended this paper into a journal submission (noted above).
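
To give a flavor of how visual-foresight-style planning works, here is a minimal sketch in my own notation (not the paper's code) of the planning loop: sample candidate action sequences, roll each one through a learned RGBD video-prediction model, and execute the first action of the sequence whose predicted final frame is closest to a goal image. The `predict_video` callable and the pixel-wise cost are placeholder assumptions.

```python
# Minimal random-shooting planner sketch for visual-foresight-style planning.
# `predict_video` is a hypothetical learned model: (current RGBD obs, action
# sequence) -> sequence of predicted future RGBD observations.
import numpy as np

def plan_action(current_obs, goal_obs, predict_video,
                num_samples=100, horizon=5, action_dim=4):
    """Return the first action of the lowest-cost sampled action sequence."""
    # Sample candidate action sequences, e.g. uniformly in [-1, 1].
    candidates = np.random.uniform(-1.0, 1.0,
                                   size=(num_samples, horizon, action_dim))

    best_cost, best_seq = np.inf, None
    for seq in candidates:
        # Predict the future observations that would result from this sequence.
        predicted_frames = predict_video(current_obs, seq)
        # Score by pixel-wise distance of the final predicted frame to the goal
        # image (here a plain L2 cost over all channels, as an assumption).
        cost = np.linalg.norm(predicted_frames[-1] - goal_obs)
        if cost < best_cost:
            best_cost, best_seq = cost, seq

    # Execute only the first action, then replan (MPC-style).
    return best_seq[0]
```

In practice, the plain random sampling above is typically replaced by the cross-entropy method, which iteratively refits the sampling distribution toward the lowest-cost action sequences.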

Applying Depth-Sensing to Automated Surgical Manipulation with a da Vinci Robot
Minho Hwang*, Daniel Seita*, Brijen Thananjeyan, Jeffrey Ichnowski, Samuel Paradis, Danyal Fer, Thomas Low, Ken Goldberg
International Symposium on Medical Robotics (ISMR), April 2020. (Virtual)
[arXiv] [Project Website and Code] [BibTeX] [Reviews]

We propose a method for automating the surgical peg transfer task. The method uses depth sensing and block detection algorithms to determine where to pick and place blocks.

ZPD Teaching Strategies for Deep Reinforcement Learning from Demonstrations
Daniel Seita, Chen Tang, Roshan Rao, David Chan, Mandi Zhao, John Canny
Deep Reinforcement Learning Workshop at Neural Information Processing Systems (NeurIPS), December 2019. Vancouver, Canada.
[arXiv] [Code] [BibTeX]

We investigate whether it makes sense to provide samples that are at a reasonable level of "difficulty" for a learner agent, and empirically test on the standard Atari 2600 benchmark.

Deep Transfer Learning of Pick Points on Fabric for Robot Bed-Making
Daniel Seita*, Nawid Jamali*, Michael Laskey*, Ron Berenstein, Ajay Tanwani, Prakash Baskaran, Soshi Iba, John Canny, Ken Goldberg
International Symposium on Robotics Research (ISRR), October 2019. Hanoi, Vietnam.
[arXiv] [Project Website and Code] [BibTeX] [Reviews] [Blog Post]

We propose a system for robotic bed-making using a quarter-scale bed, which involves collecting real data and using color and depth information to detect blanket corners for pulling. This is applied on two mobile robots: the HSR and the Fetch.

Risk Averse Robust Adversarial Reinforcement Learning
Xinlei Pan, Daniel Seita, Yang Gao, John Canny
IEEE International Conference on Robotics and Automation (ICRA), May 2019. Montreal, Canada.
[arXiv] [Project Website and Code] [BibTeX] [Reviews]

We show how an ensemble of Q-networks can improve the robustness of reinforcement learning by using the ensemble to estimate the variance of value estimates. In simulated autonomous driving with TORCS, the resulting risk-averse policies can better handle an adversary.
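
As a rough illustration (my own simplification, not the paper's implementation), the risk-averse idea looks like this: evaluate a state with every member of a Q-network ensemble, treat the ensemble's disagreement as a variance estimate, and penalize actions with high variance. The `q_networks` interface and `risk_coeff` name are assumptions for this sketch.

```python
# Sketch of risk-averse action selection with an ensemble of Q-networks.
import numpy as np

def risk_averse_action(q_networks, state, risk_coeff=1.0):
    """Pick the action maximizing mean Q-value minus a variance penalty.

    q_networks: list of callables, each mapping a state to a vector of
                Q-values (one per discrete action); hypothetical interface.
    """
    # Stack ensemble predictions: shape (ensemble_size, num_actions).
    q_values = np.stack([q(state) for q in q_networks], axis=0)

    mean_q = q_values.mean(axis=0)   # expected return per action
    var_q = q_values.var(axis=0)     # ensemble disagreement per action

    # Risk-averse objective: prefer high value and low variance.
    return int(np.argmax(mean_q - risk_coeff * var_q))
```

Here `risk_coeff` trades off expected return against the variance penalty; setting it to zero recovers standard greedy action selection.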

Fast and Reliable Autonomous Surgical Debridement with Cable-Driven Robots Using a Two-Phase Calibration Procedure
Daniel Seita, Sanjay Krishnan, Roy Fox, Stephen McKinley, John Canny, Ken Goldberg
IEEE International Conference on Robotics and Automation (ICRA), May 2018. Brisbane, Australia.
[arXiv] [Project Website and Code] [BibTeX] [Reviews]

We show how to use the da Vinci Research Kit at rapid speeds while maintaining reliability, and apply this to a model of the surgical debridement task.

An Efficient Minibatch Acceptance Test for Metropolis-Hastings
Daniel Seita, Xinlei Pan, Haoyu Chen, John Canny
Conference on Uncertainty in Artificial Intelligence (UAI), August 2017. Sydney, Australia.
(Oral Presentation, Honorable Mention for Best Student Paper)
[arXiv] [Code] [BibTeX] [Reviews] [Blog Post] [Slides]

We show how to approximate the Metropolis-Hastings test using a minibatch of data by cleverly utilizing a "correction" distribution.
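
For readers who want the gist, here is a brief sketch of the test in my own notation (a paraphrase, not the paper's exact formulation). The Barker form of the MH test accepts a proposal by comparing the log-likelihood ratio against a logistic random variable; a minibatch yields a noisy, roughly Gaussian estimate of that ratio, and the "correction" distribution supplies the extra noise needed to make the total noise logistic.

```latex
% Hedged sketch of the minibatch acceptance test (my paraphrase).
\[
  \Delta \;=\; \log\frac{p(\theta')\,\prod_{i=1}^{N} p(x_i \mid \theta')}
                        {p(\theta)\,\prod_{i=1}^{N} p(x_i \mid \theta)},
  \qquad
  \text{Barker test: accept } \theta' \iff \Delta + V > 0,
  \;\; V \sim \mathrm{Logistic}(0,1).
\]
\[
  \hat{\Delta} \;\approx\; \Delta + \mathcal{N}(0,\sigma^2)
  \;\;\text{(minibatch estimate)},
  \qquad
  \text{accept} \iff \hat{\Delta} + X_{\mathrm{corr}} > 0,
\]
\[
  \text{where } X_{\mathrm{corr}} \text{ is drawn so that }
  \mathcal{N}(0,\sigma^2) + X_{\mathrm{corr}} \;\approx\; \mathrm{Logistic}(0,1).
\]
```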

Large-Scale Supervised Learning of the Grasp Robustness of Surface Patch Pairs
Daniel Seita, Florian T. Pokorny, Jeffrey Mahler, Danica Kragic, Michael Franklin, John Canny, Ken Goldberg
IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR), December 2016. San Francisco, USA.
[PDF] [Code] [BibTeX] [Reviews]

We show that we can estimate (simulated) grasp robustness using fully connected neural networks with grasp patches as input.

Coursework, Teaching, and Oral Exams

I have taken many graduate courses as part of the PhD program at UC Berkeley, typically in computer science (CS) but also in electrical engineering (EE) and statistics (STAT). Some courses were new when I took them and had a "294-XYZ" number, before they took on a "regular" three-digit number.

I was also the GSI (i.e., Teaching Assistant) for the Deep Learning class in Fall 2016 and Spring 2019. The course is now numbered CS 182/282A, where the 182 is for undergrads and the 282A is for graduate students.

  • CS 267, Applications of Parallel Computing
  • CS 280, Computer Vision
  • CS 281A, Statistical Learning Theory
  • CS 182/282A, Deep Neural Networks (GSI/TA twice)
  • CS 287, Advanced Robotics
  • CS 288, Natural Language Processing
  • CS 294-112, Deep Reinforcement Learning (now CS 285)
  • CS 294-115, Algorithmic Human-Robot Interaction (now CS 287H)
  • CS 294-131, Special Topics in Deep Learning
  • EE 227BT, Convex Optimization
  • EE 227C, Convex Optimization and Approximation
  • STAT 210A, Theoretical Statistics (Classical)
  • STAT 210B, Theoretical Statistics (Modern)

When I started my PhD, UC Berkeley had an oral preliminary exam requirement for PhD students. Here's the transcript of my prelims. Things may have changed since then, as the number of AI PhD students has skyrocketed.

There is also a second oral exam, called the qualifying exam. Here's the transcript of my qualifying exam.

Miscellaneous

Here are some links that might be of interest:

I frequently blog about (mostly) technical topics. This blog is not affiliated with my university or employer, and I deliberately use a different domain name from this "eecs.berkeley.edu" page to reinforce this separation.

I also recommend checking the Berkeley AI Research blog. I was one of its maintainers for its first four years, and it's been great to see how much the blog has grown since then.

Quixotic though it may sound, I hope to use computer science and robotics to change the world for the better. If you have thoughts on how to do this, feel free to contact me.


Template from Jon Barron. Last updated: May 02, 2021.