U.C. Berkeley CS267
Applications of Parallel Computers
Spring 2016
Tentative Syllabus
Abstract
This course teaches both graduate and advanced undergraduate students from diverse departments how to use parallel computers both efficiently
and productively, i.e., how to write programs that run fast while minimizing
programming effort. The latter is increasingly important since essentially all computers are (or are becoming) parallel, from supercomputers to laptops.
So beyond teaching the basics about parallel computer architectures and programming languages, we emphasize commonly used patterns that appear in essentially all programs that need to run fast. These patterns include both common computations (e.g., linear algebra, graph algorithms, structured grids, etc.)
and ways to easily compose these into larger programs.
We show how to recognize these patterns in a variety of practical
problems, present efficient (sometimes optimal) algorithms for implementing them,
show how to find existing efficient implementations of these patterns when available,
and show how to compose these patterns into larger applications.
We do this in the context of the most important parallel programming models today:
shared memory (e.g., PThreads and OpenMP on your multicore laptop),
distributed memory (e.g., MPI and UPC on a supercomputer), GPUs (e.g., CUDA and OpenCL, which could be in both your laptop and a supercomputer), and cloud computing (e.g., MapReduce, Hadoop, and more recently Spark).
We also present a variety of useful
tools for debugging the correctness and performance of parallel programs.
Finally, we have a variety of guest lectures by experts
on topics including parallel climate modeling, astrophysics, and others.
High-Level Description
This syllabus may be modified during the semester,
depending on feedback from students and the availability
of guest lecturers. Topics that we have covered
before and intend to cover this time too are shown in standard font below,
and possible extra topics (some presented in previous classes, some new)
are in italics.
After this high-level description, we give
the currently planned schedule of lectures
(subject to change).
Detailed Schedule of Lectures (subject to change)
(guest lecturers shown in parentheses)
Jan 19 (Tuesday): Introduction: Why Parallel Computing?
Jan 21 (Thursday): Single processor machines: Memory hierarchies and processor features
Jan 26 (Tuesday): Introduction to parallel machines and programming models
Jan 28 (Thursday): Sources of parallelism and locality in simulation: Part 1
Feb 2 (Tuesday): Sources of parallelism and locality in simulation: Part 2
Feb 4 (Thursday): Shared memory machines and programming: OpenMP and Threads; Tricks with Trees
Feb 9 (Tuesday): Distributed memory machines and programming in MPI
Feb 11 (Thursday): Programming in Unified Parallel C (UPC and UPC++) (Kathy Yelick, UCB and LBNL)
Feb 16 (Tuesday): Cloud Computing (Shivaram Venkataraman, UCB)
Feb 18 (Thursday): Performance and Debugging Tools (Richard Gerber, NERSC/LBNL)
Feb 23 (Tuesday): GPUs, and programming with CUDA and OpenCL (Forrest Iandola, UCB)
Feb 25 (Thursday): Dense linear algebra, Part 1
Mar 1 (Tuesday): Dense linear algebra, Part 2
Mar 3 (Thursday): Graph Partitioning
Mar 8 (Tuesday): Automatic performance tuning and Sparse Matrix Vector Multiplication
Mar 10 (Thursday): (continued)
Mar 15 (Tuesday): Structured Grids, Class Project Suggestions
Mar 17 (Thursday): Parallel Graph Algorithms (Aydin Buluc, LBNL)
Mar 21-25: Spring Break
Mar 29 (Tuesday): Architecting Parallel Software with Patterns, (Kurt Keutzer, UCB)
Mar 31 (Thursday): Frameworks in Complex Multiphysics HPC Applications (John Shalf, LBNL)
Apr 5 (Tuesday): Modeling and Predicting Climate Change (Michael Wehner, LBNL)
Apr 7 (Thursday): Hierarchical Methods for the N-body problem
Apr 12 (Tuesday): (continued)
Apr 14 (Thursday): Accelerated Materials Design through High-Throughput First-Principles Calculations and Data Mining (Kristin Persson, LBNL)
Apr 19 (Tuesday): Dynamic Load Balancing
Apr 21 (Thursday): Fast Fourier Transform
Apr 26 (Tuesday): Big Bang, Big Data, Big Iron: High Performance Computing and the Cosmic Microwave Background (Julian Borrill, LBNL)
Apr 28 (Thursday): Big Data, Big Iron and the Future of HPC (Kathy Yelick, UCB and LBNL)