Recently offered by Luca Trevisan in

**About this course**: Computational complexity theory looks at the computational resources (time, memory, communication, ...) needed to solve computational problems that we care about, and it is especially concerned with the distinction between "tractable" problems, which we can solve with a reasonable amount of resources, and "intractable" problems, which are beyond the power of existing, or conceivable, computers. It also looks at the trade-offs and relationships between different "modes" of computation (what if we use randomness, what if we are happy with approximate, rather than exact, solutions, what if we are happy with a program that works only for most possible inputs, rather than being universally correct, and so on).

This course will roughly be divided into two parts: we start with "basic" and "classical" material about time, space, P versus NP, the polynomial hierarchy, and so on, including moderately modern and advanced material, such as the power of randomized algorithms, the complexity of counting problems, the average-case complexity of problems, and interactive protocols. In the second part, we focus on more research-oriented material, chosen from among PCP and hardness of approximation; circuit, proof-complexity, and communication lower bounds; and derandomization, average-case complexity, and extractors.

There are at least two goals to this course. One is to demonstrate the surprising connections between computational problems that can be discovered by thinking abstractly about computation: these include relations between learning theory and average-case complexity, the Nisan-Wigderson approach to turning intractability results into algorithms, the connection, exploited in PCP theory, between the efficiency of proof-checking and the complexity of approximation, and so on. The other goal is to use complexity theory as an "excuse" to learn about several tools of broad applicability in computer science. Depending on how far we go, we will see enough Fourier analysis and learning theory to know how to learn decision trees with membership queries, enough graph theory to build constant-degree expander graphs from scratch and to understand why spectral partitioning algorithms work, and enough algorithmic coding theory to know how to decode Reed-Solomon codes.

Lecture notes are the main references:

- 2008 Notes will appear weekly on the class page
- 2004 Notes are currently being revised (the original version is on the class page)
- 2002 notes (171 pages) [ps] [pdf]
- 2001 notes (148 pages) [ps] [pdf]

Two very good new textbooks are coming out soon, and preliminary versions are freely available on the web:

- Sanjeev Arora and Boaz Barak.
*Computational Complexity: A Modern Approach*
- Oded Goldreich.
*Computational Complexity: A Conceptual Perspective*

It is also good to have a copy of:

- C.H. Papadimitriou.
*Computational Complexity*. Addison-Wesley, 1994.