CS 184: COMPUTER GRAPHICS
QUESTIONS OF THE DAY:
Suppose some application program wants to display the line segment
between the points (0, 0.125) and (sqrt(0.5), pi)
-- how do you accomplish this ?
Lecture #2 -- Mon 1/26/2009.
Discussion of the issues raised by the question above ...
(x, y) coordinate pairs refer to a standard rectangular viewport frame with
(0, 0) in the lower left corner and (1, 1) in the upper right corner.
If the line extends beyond the boundaries of the viewport it needs to
be clipped to stay within that rectangular frame. Then we just draw
that visible segment.
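The clipping step described above can be sketched as follows. This is a minimal illustration using the Liang-Barsky parametric method against the unit viewport; the text does not prescribe a particular clipping algorithm, so this choice is an assumption.

```python
# Clip a 2D segment to the unit viewport [0,1] x [0,1].
# Liang-Barsky parametric clipping -- one standard choice (assumption:
# the lecture does not name a specific algorithm for this step).

def clip_to_unit_viewport(p0, p1):
    """Return the visible sub-segment ((x0,y0),(x1,y1)), or None if invisible."""
    x0, y0 = p0
    x1, y1 = p1
    dx, dy = x1 - x0, y1 - y0
    t_min, t_max = 0.0, 1.0
    # Each (p, q) pair tests one viewport edge: left, right, bottom, top.
    for p, q in ((-dx, x0), (dx, 1.0 - x0), (-dy, y0), (dy, 1.0 - y0)):
        if p == 0:
            if q < 0:              # parallel to and outside this edge
                return None
        else:
            t = q / p
            if p < 0:              # segment entering this half-plane
                t_min = max(t_min, t)
            else:                  # segment leaving this half-plane
                t_max = min(t_max, t)
    if t_min > t_max:
        return None                # segment misses the viewport entirely
    return ((x0 + t_min * dx, y0 + t_min * dy),
            (x0 + t_max * dx, y0 + t_max * dy))
```

For the question-of-the-day segment from (0, 0.125) to (sqrt(0.5), pi), only the portion up to y = 1 survives; the rest lies above the viewport.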
The endpoints of a specified line segment need NOT coincide with
integer pixel coordinates (i.e., the centers of the pixels).
Drawing (Visible) Line Segments
Standard Line-Drawing Abstractions (e.g., OpenGL API)
==>
Pre-coded routines to do the low-level work for you:
Moveto(Pt),
Lineto(Pt),
Begin PolyLine -- Pt -- Pt -- Pt -- ... -- Pt -- End PolyLine.
Abstractions are important ! (you may not know what display device your program will face).
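The abstraction above might be exercised like this. The stubs are purely illustrative (they just record the path); in a real library, Moveto/Lineto would drive whatever display device is present.

```python
# Illustrative use of the Moveto / Lineto / PolyLine abstraction named
# above. These stubs record the path instead of driving a device
# (assumption: the real routines are supplied by the graphics library).

path = []

def moveto(pt):
    path.append(('move', pt))

def lineto(pt):
    path.append(('line', pt))

def polyline(points):
    """Begin PolyLine -- Pt -- ... -- Pt -- End PolyLine, via the above."""
    moveto(points[0])
    for pt in points[1:]:
        lineto(pt)
```

Because the application only calls these routines, the same program can run on a calligraphic or a raster display; only the low-level implementations differ.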
You will use such routines in your first programming assignment,
A1.
A#1: How to draw lines interactively on a raster device.
You will be using a mouse to define vertices on the
screen,
which then get automatically connected by straight line segments.
A whole lot of detailed stuff will happen "under the hood" -- but
you don't have to worry about that!
This is what you will see: Demo of A1-applet.
With just a little effort you might obtain an outline drawing like this "evol1.gif" (Since this is Darwin year, I chose an "evolution" example).
In assignment A2, you will then interactively modify that contour, and perhaps obtain a shape like this "evol2.gif" (an "evolved" creature).
Then, as you change the interpolation coefficient between 0.0 and 1.0, you can see evolution in action: "evol3.gif".
To make a particularly neat demo, you should try to maintain
feature correspondences from shape_1 to shape_2, e.g., have the same vertices in
the two polygon B-rep chains represent the eye, the mouth, the tail ...,
and perhaps have two fins on the fish that turn into the legs of the
alligator. During the morph process, the creature may also move on the
screen, say from left to right.
Be creative ! (But not a creationist! :-)
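The morph described above amounts to a per-vertex linear interpolation. A minimal sketch, assuming both contours have the same number of vertices with matching features at matching indices:

```python
# Morph between two polygon contours: linearly interpolate corresponding
# vertices as the coefficient t runs from 0.0 (shape 1) to 1.0 (shape 2).
# Assumption: both vertex lists are the same length, with corresponding
# features (eye, mouth, tail, ...) stored at the same index.

def morph(shape1, shape2, t):
    """Return the in-between contour for interpolation coefficient t."""
    assert len(shape1) == len(shape2)
    return [((1 - t) * x1 + t * x2, (1 - t) * y1 + t * y2)
            for (x1, y1), (x2, y2) in zip(shape1, shape2)]
```

Sweeping t from 0.0 to 1.0 and redrawing each in-between polygon gives the "evolution in action" effect.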
Line-Drawing on Calligraphic Devices, e.g., a Line-Drawing CRT:
==> Sweep the beam (pen) from A to B.
To do this sweep in a time interval dt:
-- move the beam (or pen) to location (Ax, Ay) and turn the beam on (put the pen down on the paper);
-- simultaneously increase the x-position at a rate proportional to (Bx-Ax)/dt and the y-position at a rate proportional to (By-Ay)/dt, thus creating a slanted linear motion;
-- turn the beam off at location (Bx, By) (lift the pen off the paper).
Line-Drawing on a Raster Device, driven by a Color Frame Buffer:
==> Figure out which set of matrix elements (pixels) needs to be turned on:
First, let's just assume a B/W raster display with say 800 by 600 pixels (= small dots that can be individually turned on/off).
Foil: How to draw a banana or a thin line ?
Foil: Which PIXELS should be turned on ?
(E.g., Bresenham or DDA algorithm to turn on "the right" pixels efficiently. [Ch.3.5] )
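A sketch of the Bresenham (midpoint) algorithm mentioned above: integer-only arithmetic decides, step by step, which pixel to light next. This is the standard error-accumulation formulation that handles all octants.

```python
# Bresenham line rasterization: choose "the right" pixels between two
# integer endpoints using only integer arithmetic (cf. [Ch.3.5]).

def bresenham(x0, y0, x1, y1):
    """Return the list of integer pixel coordinates approximating the line."""
    pixels = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1      # step direction in x
    sy = 1 if y0 < y1 else -1      # step direction in y
    err = dx + dy                  # running error term
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:               # error says: step in x
            err += dy
            x0 += sx
        if e2 <= dx:               # error says: step in y
            err += dx
            y0 += sy
    return pixels
```

Note that the pixel walk never branches on floating-point values; that is what made the algorithm attractive for early raster hardware.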
Now, what else do we need to control a color raster device?
A more detailed look at a Frame Buffer -- the heart of a Raster Device:
-- bit planes: how many you need depends on how many different colors you want to see simultaneously;
-- color map: its input width = # of bit planes; its output width = # of input bits to all the Digital-to-Analog-Converters (DACs);
-- DACs to set the 3 intensities for RGB;
-- the 3 RGB intensity signals determine the color of one pixel displayed on the CRT.
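The pipeline above can be modeled in a few lines. This is a toy model, not any real hardware: a pixel's bit-plane value indexes the color map (a lookup table), and the selected entry supplies the three DAC inputs that drive RGB.

```python
# Toy model of the frame-buffer pipeline: frame-buffer index -> color map
# (lookup table) -> RGB DAC inputs. Sizes are illustrative assumptions.

BIT_PLANES = 8                        # 2**8 = 256 simultaneous colors
DAC_BITS = 8                          # each DAC accepts an 8-bit intensity

# Color map: 2**BIT_PLANES entries, each an (R, G, B) triple of DAC inputs.
color_map = [(0, 0, 0)] * (2 ** BIT_PLANES)
color_map[1] = (255, 0, 0)            # index 1 -> pure red
color_map[2] = (0, 255, 255)          # index 2 -> cyan

def pixel_to_rgb(pixel_value):
    """Map an index stored in the frame buffer to the three DAC inputs."""
    return color_map[pixel_value]
```

The indirection through the color map is what lets a display with only 8 bit planes choose its 256 on-screen colors from a much larger palette.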
Abstractions are important everywhere in Graphics -- also for input / interaction devices!
These are some of the physical input devices that you may encounter:
Keyboard (input character codes),
Lightpen (absolute positioning on screen),
Tablet (absolute positioning on desktop),
Mouse (relative positioning),
Trackball (relative positioning),
Joystick (velocity control),
SpaceBall (6 DoF velocity control),
Polhemus Magnetic Wand (3D position input),
Data Glove (3D position and gesture input),
Haptic stylus,
Wii (Nintendo)
...
All these devices need to be supported with proper software.
To reduce that burden and to exploit the conceptual similarities of
many of these devices,
a few "logical input devices" are defined with corresponding
generally usable APIs.
One typically defines for each device a "measure" describing
the value(s) returned by the device,
and a "trigger" which is a signal that generates an event in
the graphics program.
PHIGS and GKS (two standard graphics frameworks) define six conceptual logical input devices:
- String: Get a string of text.
- Locator: Return the coordinates of a position on the screen.
- Pick: Find the primitive(s) under the current cursor location.
- Choice: Select one of several different options (e.g., function keys).
- Valuator: Return the value from a slider or dial.
- Stroke: Capture a polyline of many subsequent cursor positions.
OpenGL works at a slightly lower level; many of the above functions
are synthesized from basic position information and appropriate
context-sensitive software. These pieces of the user interface,
often associated with separate small windows (e.g., pull-down menus
or sliders), are also called "widgets".
Different relationships between the measure process and the trigger
event define different input modes:
- Request Mode: The measure is sent to the computer upon
request by a trigger activation.
- Sampling Mode: The measure is read immediately upon the
corresponding function call.
- Event Mode: Trigger actions place measure values on an
event queue. Callback functions respond to these events. (This is the preferred interaction mode for client-server systems.)
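Event mode can be sketched in a few lines. All names here are illustrative (not any real graphics API): trigger actions enqueue (device, measure) events, and the program later dispatches them to registered callback functions.

```python
# Minimal sketch of event-mode input (assumption: names and structure are
# illustrative, not a real API): triggers enqueue measures, and callbacks
# consume them from the event queue.

from collections import deque

event_queue = deque()
callbacks = {}

def register_callback(device, fn):
    """Associate a callback function with a logical input device."""
    callbacks[device] = fn

def trigger(device, measure):
    """Called when a device's trigger fires: enqueue its current measure."""
    event_queue.append((device, measure))

def dispatch_events():
    """Drain the queue, invoking the callback registered for each device."""
    while event_queue:
        device, measure = event_queue.popleft()
        if device in callbacks:
            callbacks[device](measure)
```

Decoupling the trigger (producer) from the callback (consumer) through a queue is what makes this mode suit client-server systems: events can arrive while the program is busy and be handled later.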
Object Representation for Computer Graphics and CAD
We need to distinguish between modeling and rendering.
In the modeling phase we create a virtual object or scene, described in some suitable data structure or file format.
In the rendering phase we create an image (projection, display) of this model from a particular
viewpoint; for that we need:
- a representation of the scene (geometry, surface color, materials),
- a specification of a camera (observer, viewer),
- a definition of the illumination present (various lights).
In general, one wants to preserve the objects that one has created
(e.g., your polygon of A#1).
How can you do that ?
-- A core dump from memory ?
-- Printing out all data structures ?
-- A terse, minimal set of information that permits recreating the polygon ?
It would be great if somebody else also could read your data and draw
your polygon ...
==> We need a standard polygon interchange format !
This is also true for the more complicated objects that you will create
later in the course.
Over the last 24 years that I have taught this course we have used many different interchange formats:
- OSF (Object Specification Format) -- Specifically for CS
184, 1985.
- OFFOS (Our Format For Object Specification) -- 1987.
- SDL (Scene Description Language) -- 1990.
- GLIDE (Graphics Language for Interactive Dynamic
Environments) -- 1994.
- SLIDE (Scene Language for Interactive Dynamic
Environments) -- 1997.
This year we are going to use a rather simple format based on the
commercially used OBJ format, extending it as we need and thus
calling it OBJ*.
The main features of any of these file formats are:
- compact, yet readable and editable format;
- easy to parse, and extensible syntax;
- geometry is contained in the vertex coordinates and in the instance placements;
- topology information is explicitly captured in connectivity between
vertices via edges and faces;
This is called a "boundary representation", or "B-rep" for short
(we omit discussion of hierarchy and time-dependence for today).
For Assignment #1 you only need to know the constructs: point and face.
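The two constructs map onto the standard OBJ elements "v" (vertex/point) and "f" (face). A minimal parsing sketch follows; the OBJ* extensions mentioned above are course-specific and not shown here.

```python
# Parse the two standard OBJ constructs a B-rep polygon needs:
#   v x y [z]    -- a vertex (point)
#   f i j k ...  -- a face, listing 1-based vertex indices
# (Assumption: OBJ* adds course-specific extensions not handled here.)

def parse_obj(text):
    """Return (vertices, faces) from OBJ-style text."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith('#'):
            continue                     # skip blank lines and comments
        if parts[0] == 'v':
            vertices.append(tuple(float(c) for c in parts[1:]))
        elif parts[0] == 'f':
            # convert 1-based OBJ indices to 0-based list indices
            faces.append([int(i) - 1 for i in parts[1:]])
    return vertices, faces
```

Note how the geometry lives entirely in the "v" lines, while the "f" lines carry only topology (connectivity) -- exactly the B-rep split listed above.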
This B-rep can easily be extended to 3-dimensional objects.
Reading Assignments:
Study: (i.e., try to understand fully, so that you can answer questions on an exam):
Shirley, 2nd Ed: Ch 3.1-3.5.
Skim: (to get a first idea of what will be discussed in the future; try to remember the grand ideas, but no need to worry about the details):
Shirley, 2nd Ed: Ch 3.6-3.8.
Programming Assignments 1 and 2:
Both must be done individually.
Assignment #1 is due (electronically submitted) before Thursday 2/5, 11:00pm.
Assignment #2 is due (electronically submitted) before Thursday 2/12, 11:00pm.
Page Editor: Carlo H. Séquin