CS Division Graduate Course Announcements
Spring Semester 1996
In addition to regular offerings (CS 252, CS 263, CS 265, CS 271, CS 280,
CS 286, CS 287), the following courses will be offered next semester.
[Please note: 294-3 is cancelled. 268 WILL be offered]
[Please note: 294-8 is cancelled, please see CS 298-1]
CS 250: VLSI Systems Design (3)
Howard Sachs
CCN 24913 TuTh 11-1230 310 Soda
Howard Sachs, a visiting lecturer at Berkeley for the Spring semester,
is a former VP of Engineering at Cray and VP of Advanced Development
at Sun. The course will follow the catalogue description but for the
first time will use state-of-the-art industrial-strength CAD tools
from Cadence Design Systems.
CS 268: Computer Networks (3)
Kevin Fall and Mike Luby
CCN 24922 Time and location TBA
This course will be taught jointly by Dr. Kevin Fall, a computer scientist
at the LBL Network Research Group, and Prof. Mike Luby, Adjunct Associate
Professor at Berkeley, leader of the ICSI theory group, and developer of
the PET (Priority Encoded Transmission) scheme for robust network
transmission.
This course is likely to be oversubscribed.
Possible alternatives are EECS 122 and Prof. Tse's EE 290Q course
"Advanced Topics in Communication Networks".
CS 269: Advanced Topics in Distributed Computing Systems (2)
Doug Terry
CCN 24923 Mon 1-3 373 Soda
This is the old CS 292J.
Prerequisites: 162; 262 recommended.
Building distributed computing systems, issues and techniques; communication
and computation, distributed data, identification of resources and their
distributed management, decentralized synchronization mechanisms, security and
protection, performance and modeling of distributed systems, programming
language and system support for distributed applications.
CS 294-1 Fuzzy Logic, Neural Networks and Soft Computing (2)
Lotfi Zadeh
CCN 24936 Mon 2-4 373 Soda
Prof. Zadeh's regular offering
CS 294-2: Topics in Graphics (3)
David Forsyth
CCN 24938 TuTh 1230-2 505 Soda
This class will cover a grab bag of topics in graphics,
linked by a loose theme; they're the techniques that I
think will become increasingly important for convincing
virtual environments:
- Modelling and displaying illumination in complex environments:
I'll cover radiosity methods, which are extremely effective in creating
realistic looking scenes, and then move on to deal with the effects of
smoke or water vapour, which absorb or scatter illumination.
In the process, we'll encounter a variety of numerical methods
and assorted hacks which can speed up solutions; finally, I'll discuss
why it's difficult to display realistic dusk or night-time scenes, and
talk about possible solutions.
- Visibility: Much of the work in radiosity (and almost every other aspect
of graphics) boils down to determining what can see what; for example,
cleverness about visibility is what makes games such as Doom run quickly.
I'll cover some recent, efficient object space visibility algorithms.
- Deformable models and spring-mass models: The finite-element machinery
that comes with radiosity is good for other things, too, and I'll cover
how to use it to build skin, cloth, and muscles.
- Autonomy: In games, entertainment systems and simulations it's
becoming increasingly important to provide other creatures that behave
in a plausible fashion. I'll cover some ways of building autonomous things
that can live and get around in virtual worlds. We'll explore the tension
between physically accurate, AI-respectable approaches and dubious but
good-looking hacks.
Of the 30 lectures available, I intend to spend about 5 lectures per
topic, leaving about 2.5 lectures per topic for paper presentations -
which will be organised around audience participation. The course
will be graded on two essays and a piece of project work. It is my
intention that projects be organised so that the class will produce a
visually appealing virtual environment, with autonomous, deforming
things in it. To this end, I've obtained a high-end rendering engine
which will be devoted to class projects.
CS 294-3: Computer Arithmetic (3)
Israel Koren
CCN 24940 CANCELLED due to Prof Koren's unexpected return to the East Coast
CS 294-4: Intelligent DRAM (IRAM) (4)
Dave Patterson
CCN 24942 WF 200-330 373 Soda [Note the correct time]
Prerequisite: Any of CS 250, CS 252, CS 254, CS 262, CS 264, EE 225A, EE 241
Background:
Microprocessors and memories are made on distinct manufacturing lines,
yielding 10M transistor microprocessors and 256M transistor DRAMs. One
of the biggest performance challenges today is the speed mismatch
between the microprocessors and memory. To address this challenge, I
predict that over the next decade processors and memory will be merged
onto a single chip. Not only will this narrow or altogether remove the
processor-memory performance gap, it will have the following
additional benefits: provide an ideal building-block for parallel
processing, amortize the costs of fabrication lines, and better
utilize the phenomenal number of transistors that can be placed on a
single chip. Let's dub it an "IRAM", standing for Intelligent RAM,
since most of the transistors on this merged chip will be devoted to memory.
Whereas current microprocessors rely on hundreds of wires to
connect to external memory chips, IRAMs will need no more than computer
network connections and a power plug. All input/output devices will be
linked to them via networks, as will other IRAMs. If they need more
memory, they get more processing power as well, and vice versa--an
arrangement that will keep the memory capacity and processor speed in balance.
A single gigabit IRAM should have an internal memory bandwidth
of nearly 1000 gigabits per second (32K bits in 50 ns), a hundredfold
increase over the fastest computers today. Off-chip accesses will go over 1
gigabit per second serial links. Hence the fastest programs will keep most
memory accesses within a single IRAM, rewarding compact representations
of code and data.
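The bandwidth figure above can be checked with a quick back-of-the-envelope
calculation. As a sketch (the interpretation of "32K bits" as 32 * 1024 bits
and of 50 ns as one full internal access is my assumption, not stated in the
announcement):

```python
# Back-of-the-envelope check of the internal bandwidth quoted above:
# "32K bits in 50 ns", versus 1 Gbit/s off-chip serial links.

row_bits = 32 * 1024            # bits per internal access (assumed 32 * 1024)
access_time_s = 50e-9           # 50 ns per access

bits_per_second = row_bits / access_time_s
gigabits_per_second = bits_per_second / 1e9

# Internal bandwidth works out to roughly 655 Gbit/s -- the same order
# of magnitude as the "nearly 1000 gigabits per second" quoted.
print(f"internal bandwidth ~ {gigabits_per_second:.0f} Gbit/s")

# Ratio of internal bandwidth to a 1 Gbit/s off-chip serial link,
# illustrating why the fastest programs should stay within one IRAM.
print(f"on-chip vs. off-chip ratio ~ {gigabits_per_second:.0f}x")
```

The ratio in the last line is the quantitative reason the text gives for
rewarding compact representations of code and data: off-chip accesses cost
hundreds of times more bandwidth than on-chip ones.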
Course:
This advanced graduate course re-examines the design of hardware and
software that is based on the traditional separation of the memory and
the processor. Without prior constraints of legacy architecture or
legacy software, the goal of the course is to lay the foundation for
IRAM; it could play the role that prior Berkeley courses did for RISC
and RAID. As in the past, this is a true EECS course which needs a
mixture of students with different backgrounds: IC design, computer
architecture, compilers, and operating systems. The ideal student has
taken one of the prerequisites, enjoys learning from students in
other disciplines, shows initiative to help identify important
questions and sources of answers, and is excited by the opportunity to
shape the directions of a new technology where many issues are
cross-disciplinary and unresolved.
The first part of the course will consist of weekly readings with
round table discussions followed by a short lecture to bring people of all
backgrounds up to speed for the next topic. There will also be several guest
lectures followed by extensive questions and answers. Students will take
turns putting up the summary of the paper and conclusions from the
discussions and lectures on the course home page. In the last part of the
course we will break up into teams to work on related term projects, ideally
with an interim milestone to make sure that the project makes sense and to
make midcourse corrections in the projects. The end of the course will be a
series of presentations of the results and then a final lecture where we
assess our progress on IRAMs and identify the remaining steps and
most promising directions. The home page at the end of the course should
document our contributions to IRAM. There are no exams: grades are based
on class participation and on the term projects.
I expect the course and projects will answer questions such as:
- Are vector instructions needed to use IRAM bandwidth efficiently?
- Does current compiler technology allow replacement of traditional
multilevel data caches with scratch pad memories or vector registers?
(For example, Dick Sites has an Alpha address trace of a database that
breaks all known data caches: how well would the trace play on an IRAM?)
- How much bigger and slower is a microprocessor designed in a DRAM
process versus an IC process tuned to microprocessors? (For example,
what is the size and clock rate of a MIPS CPU designed in a straight DRAM
process?)
- What are the appropriate compiler optimizations when data bandwidth is
relatively cheap (due to IRAM) and instructions are relatively slow (due to
lower clock rates)?
- Does the power budget of a DRAM imply that the IRAM processor must
use low-power techniques? How does that impact IRAM performance?
- An alternative model is a new packaging technology ("flip chip"), that
promises thousands of wires between a processor chip and DRAM chip:
if we can get access to the full page mode buffer on a DRAM in a
single 8K bit transfer, do the architecture/software research issues remain
the same even if the hardware implementation is quite different?
- Current data structures allocate maximum sizes per data element: what is
the real size of data elements in a running program, and how often does size
change? (For example, what is the current data size vs. actual size of
SPEC95 programs?)
- How can compression, which is inherently variable, be combined with the
fixed-block architecture of IRAMs?
- Given the importance of compact code and data, what is the tradeoff
between segmented and fixed addressing?
- Can linked data structures be linearized on the fly to improve IRAM
performance?
- Are programs written in Java, which emphasizes code size and uses
garbage collection, a better match to IRAM than C, which ignores code size
and relies on malloc?
- Are programs written in Fortran 90, which offers array operations, better
for IRAMs than Fortran 77, which does not?
- Are gigabit serial lines sufficient to satisfy the IRAM
demands on disk, networks, and displays? Do we need to stripe data across
these lines? How many lines do we need?
- What are the characteristics of an ideal operating system for an IRAM:
virtual memory, scheduling, protection, and so on?
- What applications are a good match to IRAM: digital signal processing,
systolic array apps, graphics? Which are a poor match to IRAM?
CS 294-5: Computer Aided Tools for Architectural Design and Construction (3)
Carlo Sequin
CCN 24944 MW 2-4 405 Soda
This second offering is a repetition of a successful experiment first
carried out in Spring 1995: Two different courses are taught concurrently
and jointly to study the architectural design process and
the methods and tools to support this process.
The core is the "Computer-Aided Design Methods" studio course (101)
taught by Prof. Yehuda Kalay in the Department of Architecture, CED,
at U.C. Berkeley. Students in this design studio will design one or
two buildings in a semester, using existing as well as newly developed
computer tools to support the interdisciplinary design process and the
communication among the participants. The architecture students will
focus on the design methods for buildings: they will play the role of
the architects and designers in the context of a major design such as
a new university building.
The focus of the graduate course CS 294-5 in Computer Science is to
help develop new modules for a flexible and extensible CAD system that
supports the whole design process. This includes the enhancement and
extension of data bases to capture not only the geometry of the
emerging design but a wide variety of design information from the
original wishes of the customer, through formal specifications for the
building program, conceptual solutions, problems arising and their
resolution, elevation and floor plans, and a 3D visualization of the
whole building in interactive walkthroughs. First, the CS students
will play the role of the customers and interact with the architects
to get their design specifications across. After a few weeks, the CS
students will develop prototype tools that will support the initial
crucial phases of the design process and also make it easier to check
whether specifications and constraints have indeed been met.
Through their interaction, the participants of the different courses
will gain a better understanding of the complexity of designing a
building and will learn about the benefits and difficulties of
collaborative design and problem solving and how to use modern
computer and communications equipment to support these activities.
The course will involve reading some papers, studying and evaluating
one or two existing architectural CAD tools, and carrying out a
programming project to build a prototype tool or to enhance an
existing CAD system.
CS 294-6: [title TBA] (3)
Umesh Vazirani
CCN 24946 W 2-4 373 Soda
CS 294-7: Wireless Communications and Mobile Computing (3)
Randy Katz
CCN 24948 MWF 11-12 405 Soda
CONSENT OF INSTRUCTOR REQUIRED FOR REGISTRATION
Please see the course home page
CS 294-8: [title TBA] (3)
John Canny
CCN 24950 Th 3-4 320 Soda
CS 294-9: Connectionist and Neural Computation (3)
Jerry Feldman/Lokendra Shastri
CCN 24951 WF 2-330 320 Soda
Please see the course home page