U.C. Berkeley CS267 Home Page

Applications of Parallel Computers

Spring 2013

T Th 9:30-11:00, 250 Sutardja Dai Hall
(Overflow room: 240 Sutardja Dai Hall)


  • Jim Demmel
  • Offices:
    564 Soda Hall ("Virginia", in ParLab), (510)643-5386
    831 Evans Hall
  • Office Hours: T 11-12 in 564 Soda, Th 2-3 in 564 Soda, F 11-12 in 831 Evans
  • (send email)
  • Teaching Assistants:

  • David Sheffield
  • Office: 5th Floor Soda Hall (Parlab)
  • Office Hours: T Th 2-3pm, in 562B Soda Hall
  • (send email)
  • Michael Driscoll
  • Office: 5th Floor Soda Hall (Parlab)
  • Office Hours: M 3-4pm, W 10-11am, in 651 Soda Hall
  • (send email)
  • Administrative Assistants:

  • Tammy Johnson
  • Office: 565 Soda Hall
  • Phone: (510)643-4816
  • (send email)
  • Roxana Infante
  • Office: 563 Soda Hall
  • Phone: (510)643-1455
  • (send email)
  • Webcasting of lectures, in either Flash format or Windows Media Video (WMV) format

  • Flash format has higher quality than WMV format.
  • Viewers using OS X may need to install Adobe Flash Player from the following Adobe website. Viewers with devices running iOS, such as the iPad and iPhone, may install and use this web browser on their mobile devices.
  • To ask questions during live lectures, you have two options:
  • You can email them to this address, which the teaching assistants will be monitoring during lecture.
  • You can use the chat box at the bottom of the webpage of Class Resources and Homework Assignments.
  • These Flash and WMV links are active during lectures only; archived video will be posted after lecture here.
  • Syllabus and Motivation

    CS267 was originally designed to teach students how to program parallel computers to efficiently solve challenging problems in science and engineering, where very fast computers are required either to perform complex simulations or to analyze enormous datasets. CS267 is intended to be useful for students from many departments and with different backgrounds, although we will assume reasonable programming skills in a conventional (non-parallel) language, as well as enough mathematical skills to understand the problems and algorithmic solutions presented. CS267 satisfies part of the course requirements for a new Designated Emphasis ("graduate minor") in Computational Science and Engineering.

    While this general outline remains, a large change in the computing world has started in the last few years: not only are the fastest computers parallel, but nearly all computers will soon be parallel, because the physics of semiconductor manufacturing will no longer let conventional sequential processors get faster year after year, as they have for so long (roughly doubling in speed every 18 months for many years). So all programs that need to run faster will have to become parallel programs. (It is considered very unlikely that compilers will be able to automatically find enough parallelism in most sequential programs to solve this problem.) For background on this trend toward parallelism, click here.

    This will be a huge change not just for science and engineering but for the entire computing industry, which has depended on selling new computers by running their users' programs faster without the users having to reprogram them. Large research activities to address this issue are underway at many computer companies and universities, including Berkeley's ParLab, whose research agenda is outlined here.

    While the ultimate solutions to the parallel programming problem are far from determined, students in CS267 will get the skills to use some of the best existing parallel programming tools, and be exposed to a number of open research questions.

  • Tentative Detailed Syllabus
  • Grading

    There will be several programming assignments to acquaint students with basic issues in memory locality and parallelism needed for high performance. Most of the grade will be based on a final project (in which students are encouraged to work in small interdisciplinary teams), which could involve parallelizing an interesting application, or developing or evaluating a novel parallel computing tool. Students are expected to have identified a likely project by mid-semester, so that they can begin working on it. We will provide many suggestions of possible projects as the class proceeds.

    Asking Questions

    Outside of lecture, you are welcome to bring your questions to office hours (posted at the top of this page). If you cannot attend office hours in person, you may contact the instructor team via the instructor email. We encourage you to post your questions to the CS267 Piazza page (sign up first). If you send a question to the instructor email, we may answer it on Piazza if we think the answer would help others in the class. During lecture, you can ask questions over the Internet (see the Google Chat link at the bottom of the resources page). You will submit homeworks via the instructor email; please check the assignment-specific submission instructions first.

    Class Projects

    You are welcome to suggest your own class project, but you may also look at the following sites for ideas:

  • the ParLab webpage,
  • the Computational Research Division and NERSC webpages at LBL,
  • class posters from CS267 in Spring 2010
  • class posters and their brief oral presentations from CS267 in Spring 2009.
  • Announcements

  • (Feb 21) Homework #2 has been posted here; part 1 is due March 5, and part 2 is due March 14.
  • (Jan 24) The overflow lecture room has been moved to 240 SDH, instead of 254 SDH.
  • (Jan 24) On March 7 the main lecture will be held in 630 SDH, instead of 250 SDH.
  • (Jan 21) Flu Policy: The University has asked us to announce to students that they should not come to class if they become ill. The University has adopted the CDC recommendation that members of the campus community who develop flu-like illness should self-isolate until at least 24 hours after they are free of fever or signs of fever without the use of medication. Students should follow this recommendation in deciding whether or not to come to class.
  • (Jan 21) For students who want to try some on-line self-paced courses to improve basic programming skills, click here. You can use this material without having to register. In particular, courses like CS 9C (for programming in C) might be useful.
  • (Jan 21) Please complete the following class survey.
  • (Jan 21) Homework Assignment 0 has been posted here, due Feb 7 by midnight.
  • (Jan 21) Please read the NERSC Computer Use Policy Form so that you can sign a form saying that you agree to abide by the rules stated there.
  • (Jan 21) This course satisfies part of the course requirements for a new Designated Emphasis ("graduate minor") in Computational Science and Engineering.
  • Class Resources and Homework Assignments.

  • This will include, among other things, class handouts, homework assignments, the class roster, information about class accounts, pointers to documentation for machines and software tools we will use, reports and books on supercomputing, pointers to old CS267 class webpages (including old class projects), and pointers to other useful websites.
  • Please read the NERSC Computer Use Policy Form so that you can sign a form saying that you agree to abide by the rules stated there.
  • Lecture Notes and Video

  • Live video of the lectures may be seen either in Flash format or Windows Media Video (WMV) format. Flash format has higher quality than WMV format.
  • Viewers using OS X may need to install Adobe Flash Player from the following Adobe website. Viewers with devices running iOS, such as the iPad and iPhone, may install and use this web browser on their mobile devices.
  • Archived video, posted after the lectures, may be found here.
  • Notes from previous offerings of CS267 are posted on old class webpages available under Class Resources
  • In particular, the web page from the 1996 offering has detailed, textbook-style notes available on-line that are still largely up-to-date in their presentations of parallel algorithms (the slides to be posted during this semester will contain some more recently invented algorithms as well).

  • Lectures from Spr 2013 will be posted here.
  • Lecture 1, Jan 22, Introduction, in ppt
  • Lecture 2, Jan 24, Single Processor Machines: Memory Hierarchies and Processor Features, in ppt
  • Lecture 3, Jan 29, complete Lecture 2 (updated Jan 29, 9:20am), then
    Introduction to Parallel Machines and Programming Models, in ppt
  • Lecture 4, Jan 31, complete Lecture 3 (updated Jan 31, 7:05am), then
    Sources of Parallelism and Locality in Simulation - Part 1 in ppt
  • Lecture 5, Feb 5, complete Lecture 4 (updated Feb 5, 7:00am), then
    Sources of Parallelism and Locality in Simulation - Part 2 in ppt
  • Lecture 6, Feb 7, Shared Memory Programming: Threads and OpenMP, in ppt, then Tricks with Trees, in ppt
  • Lecture 7, Feb 12, Complete Lecture 6: Tricks with Trees, then Distributed Memory Machines and Programming, in ppt
  • Lecture 8, Feb 14, Partitioned Global Address Space Programming with Unified Parallel C (UPC), in ppt, by Kathy Yelick; Hints for Homework #1, in ppt
  • Lecture 9, Feb 19. Complete Lecture 7 on Distributed Memory Machines and Programming (updated Feb 19, 7:00pm). Then continue with a two-part lecture:
  • Performance Debugging Techniques for HPC Applications (pdf) by David Skinner
  • Debugging and Optimization Tools (pdf) by Richard Gerber
  • NERSC web site with more documentation and videos about using tools at NERSC
  • Lecture 10, Feb 21, An Introduction to CUDA/OpenCL and Graphics Processors (pdf), by Bryan Catanzaro
  • Lecture 11, Feb 26, Dense Linear Algebra - Part 1, in ppt
  • Lecture 12, Feb 28, Complete Lecture 11 on Dense Linear Algebra, Part 1, then
    Dense Linear Algebra, Part 2
  • Lecture 13, Mar 5, Graph Partitioning, in ppt
  • Lecture 14, Mar 7. Complete Lecture 13 on Graph Partitioning, then Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication ppt
  • Lecture 15, Mar 12. Complete Lecture 14 on Automatic Performance Tuning and Sparse-Matrix-Vector-Multiplication ppt
  • Lecture 16, Mar 14. Hierarchical Methods for the N-body Problem, in pptx
  • Lecture 17, Mar 19. Complete Hierarchical Methods for the N-body Problem (updated Mar 19, 9:03am)
  • Lecture 18, Mar 21. Structured Grids, in ppt, and Class Project Suggestions, in pptx
  • Lecture 19, Apr 2. Fast Fourier Transform (FFT), in ppt
  • Lecture 20, Apr 4. Dynamic Load Balancing, in ppt
  • Lecture 21, Apr 9. Parallel Graph Algorithms, in pptx, presented by Aydin Buluc
  • Lecture 22, Apr 11. Architecting Parallel Software with Patterns, in pptx, presented by Kurt Keutzer
  • Lecture 23, Apr 16. Frameworks in Complex Multiphysics HPC Applications, in pptx, (updated 4/17/13, 9:15am) presented by John Shalf
  • Lecture 24, Apr 18. Accelerated Materials Design through High-throughput First-Principles Calculations and Data Mining, in pptx, presented by Kristin Persson
  • Lecture 25, Apr 23. Big Data Processing with MapReduce and Spark, in pptx, presented by Matei Zaharia
  • Lecture 26, Apr 25. Modeling and Predicting Climate Change, in ppt, with a movie of high-resolution climate models, presented by Michael Wehner
  • Lecture 27, Apr 30, Big Bang, Big Data, Big Iron: High Performance Computing and the Cosmic Microwave Background, in pptx, presented by Julian Borrill
  • Lecture 28, Exascale Computing, in pptx, presented by Katherine Yelick
  • Sharks and Fish

  • "Sharks and Fish" are a collection of simplified simulation programs that illustrate a number of common parallel programming techniques in various programming languages (some current ones, and some old ones no longer in use).
  • Basic problem description, and (partial) code from 1999 class, written in Matlab, CMMD, CMF, Split-C, Sun Threads, and pSather, available here.
  • Code (partial) from 2004 class, written in MPI, pthreads, OpenMP, available here.
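  • The core pattern these codes share is "owner computes": partition the fish among workers, and have each worker advance only the particles it owns, with a barrier at the end of each time step. Below is a minimal, hypothetical Python sketch of that pattern (not one of the class codes, which are written in C with MPI, pthreads, and OpenMP); the function names and the simple drift-only physics are illustrative assumptions.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def update_chunk(positions, velocities, lo, hi, dt):
        """Owner computes: this worker advances only the fish in [lo, hi)."""
        for i in range(lo, hi):
            positions[i] = (positions[i][0] + dt * velocities[i][0],
                            positions[i][1] + dt * velocities[i][1])

    def step(positions, velocities, dt, nworkers=4):
        """One time step: split the fish into contiguous blocks, one per worker."""
        n = len(positions)
        bounds = [(w * n // nworkers, (w + 1) * n // nworkers)
                  for w in range(nworkers)]
        with ThreadPoolExecutor(max_workers=nworkers) as pool:
            for lo, hi in bounds:
                pool.submit(update_chunk, positions, velocities, lo, hi, dt)
        # leaving the 'with' block waits for all workers: an implicit barrier
    ```

    Because each array index is written by exactly one owner, no locking is needed; the same decomposition carries over directly to the pthreads and OpenMP versions, and to MPI once each rank holds only its own block and exchanges boundary data explicitly.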