U.C. Berkeley CS267
Applications of Parallel Computers
Spring 2014
Tentative Syllabus
Abstract
This course teaches both graduate and advanced undergraduate students from diverse departments how to use parallel computers both efficiently
and productively, i.e. how to write programs that run fast while minimizing
programming effort. The latter is increasingly important since essentially all computers are (or are becoming) parallel, from supercomputers to laptops.
So beyond teaching the basics of parallel computer architectures and programming languages, we emphasize commonly used patterns that appear in essentially all programs that need to run fast. These patterns include both common computations (e.g. linear algebra, graph algorithms, structured grids, ...)
and ways to easily compose these into larger programs.
We show how to recognize these patterns in a variety of practical
problems, present efficient (sometimes optimal) algorithms for implementing them,
explain how to find existing efficient implementations of these patterns when available,
and show how to compose these patterns into larger applications.
We do this in the context of the most important parallel programming models today:
shared memory (e.g. PThreads and OpenMP on your multicore laptop),
distributed memory (e.g. MPI and UPC on a supercomputer), GPUs (e.g. CUDA and OpenCL, which could be in both your laptop and a supercomputer), and cloud computing (e.g. MapReduce and Hadoop). We also present a variety of useful
tools for debugging the correctness and performance of parallel programs.
Finally, we have guest lectures by experts on a variety of topics,
including parallel climate modeling, astrophysics, and others.
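
To give a flavor of the shared-memory model mentioned above, here is a minimal sketch (our illustration, not course-provided code) of an OpenMP parallel loop in C that sums a vector; the array size and compile command are illustrative assumptions only.

    /* Illustrative sketch: sum a vector in parallel with OpenMP.
       Compile with, e.g., gcc -fopenmp sum.c -o sum (assumed command). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;          /* illustrative problem size */
        double *x = malloc(n * sizeof(double));
        double sum = 0.0;

        for (int i = 0; i < n; i++)
            x[i] = 1.0;                 /* initialize the data */

        /* Each thread sums a chunk of the loop; the reduction
           clause combines the per-thread partial sums safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += x[i];

        printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
        free(x);
        return 0;
    }

The same computation would look quite different in the distributed-memory (MPI/UPC) or GPU (CUDA/OpenCL) models; comparing such versions is one of the recurring themes of the course.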
High-Level Description
This syllabus may be modified during the semester,
depending on feedback from students and the availability
of guest lecturers. Topics that we have covered
before and intend to cover again this time are shown in standard font below,
and possible extra topics (some presented in previous classes, some new)
are in italics.
After this high-level description, we give
the currently planned schedule of lectures
(updated Jan 20; subject to change).
Detailed Schedule of Lectures (updated Jan 20; subject to change)
(lecturers shown in parentheses)
Jan 21 (Tuesday): Introduction: Why Parallel Computing?
(Jim Demmel)
Jan 23 (Thursday): Single processor machines: Memory hierarchies and processor features
(Jim Demmel)
Jan 28 (Tuesday): Introduction to parallel machines and programming models
(Jim Demmel)
Jan 30 (Thursday): Sources of parallelism and locality in simulation: Part 1
(Jim Demmel)
Feb 4 (Tuesday): Sources of parallelism and locality in simulation: Part 2
(Jim Demmel)
Feb 6 (Thursday): Shared memory machines and programming: OpenMP and Threads; Tricks with Trees
(Jim Demmel)
Feb 11 (Tuesday): Distributed memory machines and programming in MPI (Jim Demmel)
Feb 13 (Thursday): Performance and Debugging Tools (David Skinner and Richard Gerber, NERSC)
Feb 18 (Tuesday): Programming in Unified Parallel C (UPC) (Kathy Yelick)
Feb 20 (Thursday): Cloud Computing (Shivaram Venkataraman)
Feb 25 (Tuesday): GPUs, and programming with CUDA and OpenCL (Bryan Catanzaro, NVIDIA)
Feb 27 (Thursday): Dense Linear Algebra: Part 1 (Jim Demmel)
Mar 4 (Tuesday): Dense Linear Algebra: Part 2 (Jim Demmel)
Mar 6 (Thursday): Graph Partitioning: Part 1 (Jim Demmel)
Mar 11 (Tuesday): Graph Partitioning: Part 2, and Sparse-Matrix-Vector-Multiply (Jim Demmel)
Mar 13 (Thursday): Sparse-Matrix-Vector-Multiply and Autotuning (Jim Demmel)
Mar 18 (Tuesday): TBD
Mar 20 (Thursday): TBD
Mar 24-28: Spring Break
Apr 1 (Tuesday): TBD
Apr 3 (Thursday): TBD
Apr 8 (Tuesday): TBD
Apr 10 (Thursday): TBD
Apr 15 (Tuesday): TBD
Apr 17 (Thursday): TBD
Apr 22 (Tuesday): TBD
Apr 24 (Thursday): TBD
Apr 29 (Tuesday): TBD
May 1 (Thursday): TBD
May 8 (Thursday): Student Poster Session