
Jonathan's papers

Most recent:

Concise Machine Learning (lecture notes).
Spectral and Isoperimetric Graph Partitioning (survey).
Book! Delaunay Mesh Generation. (Short description below, long description on its own page).
Incremental Construction of Constrained Delaunay Triangulations.
Vertex Deletion for 3D Delaunay Triangulations.

Greatest personal satisfaction:

Delaunay Mesh Generation (book). (Short description below, long description on its own page).
Simulating Liquids with Lagrangian Meshes.
Constrained Delaunay Triangulations, I: Combinatorial Properties.
Isosurface Stuffing.
Guaranteed-Quality Anisotropic Mesh Generation.

My 2002 paper on 2D Delaunay Refinement was awarded the 2012 Test-of-Time Award by the journal Computational Geometry: Theory and Applications.


Surveys and Expository Papers

FAR AND AWAY MY MOST POPULAR PAPER is my introduction to the conjugate gradient method.
I have also written a (draft) survey article on spectral and isoperimetric graph partitioning, a survey article on mesh generation, and lecture notes for Berkeley's introductory class on machine learning.

Delaunay Mesh Generation

DELAUNAY REFINEMENT MESH GENERATION ALGORITHMS construct meshes of triangles or tetrahedra (“elements”) that are suitable for applications like interpolation, rendering, terrain databases, geographic information systems, and, most demandingly, the solution of partial differential equations by the finite element method. Delaunay refinement algorithms operate by maintaining a Delaunay or constrained Delaunay triangulation, which is refined by inserting additional vertices until the mesh meets constraints on element quality and size. These algorithms simultaneously offer theoretical bounds on element quality, edge lengths, and spatial grading of element sizes; topological and geometric fidelity to complicated domains, including curved domains with internal boundaries; and truly satisfying performance in practice.
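
To make the refinement loop concrete, here is a minimal sketch in Python, assuming NumPy and SciPy are available. Unlike a real mesher such as Triangle, it ignores domain boundaries and encroached segments, and simply splits skinny triangles at their circumcenters; the names are mine, not Triangle's.

    import numpy as np
    from scipy.spatial import Delaunay

    def circumcenter(a, b, c):
        # Closed-form circumcenter of the 2D triangle (a, b, c).
        d = 2.0 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
        ux = ((a@a)*(b[1]-c[1]) + (b@b)*(c[1]-a[1]) + (c@c)*(a[1]-b[1])) / d
        uy = ((a@a)*(c[0]-b[0]) + (b@b)*(a[0]-c[0]) + (c@c)*(b[0]-a[0])) / d
        return np.array([ux, uy])

    def refine(points, bound=2.0**0.5, max_inserts=1000):
        # bound is the circumradius-to-shortest-edge ratio; sqrt(2)
        # corresponds to a minimum angle of about 20.7 degrees, because
        # bound = 1 / (2 sin theta).
        pts = np.asarray(points, dtype=float)
        for _ in range(max_inserts):
            tri = Delaunay(pts)  # a sketch can afford to rebuild each time
            new_vertex = None
            for simplex in tri.simplices:
                a, b, c = pts[simplex]
                cc = circumcenter(a, b, c)
                shortest = min(np.linalg.norm(a-b), np.linalg.norm(b-c),
                               np.linalg.norm(c-a))
                # Skip circumcenters outside the convex hull; a real mesher
                # treats boundaries carefully instead of skipping.
                if (np.linalg.norm(cc - a) / shortest > bound
                        and tri.find_simplex(cc) >= 0):
                    new_vertex = cc
                    break
            if new_vertex is None:
                return pts  # no skinny triangle has an insertable circumcenter
            pts = np.vstack([pts, new_vertex])
        return pts

    mesh = refine(np.random.default_rng(1).random((20, 2)))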

It gives me great pleasure to announce my first book, co-authored with Siu-Wing Cheng of the Hong Kong University of Science and Technology and Tamal Dey of The Ohio State University.

The following papers include theoretical treatments of Delaunay refinement and discussions of the implementation details of my two-dimensional mesh generator and Delaunay triangulator, Triangle, and my three-dimensional mesh generator and Delaunay tetrahedralizer, Pyramid. See the Triangle page for information about what Triangle can do, or to obtain the C source code.

Non-Delaunay Mesh Generation, Dynamic Meshing, and Physically-Based Computer Animation

“I hate meshes. I cannot believe how hard this is. Geometry is hard.”
— David Baraff, Senior Research Scientist, Pixar Animation Studios
THE GREAT CHALLENGE OF TETRAHEDRAL MESH GENERATION is to create tetrahedra whose dihedral angles are never too small nor too large. Although Delaunay refinement usually does this well in practice, there's lots of room for improvement—in obtaining mathematically guaranteed bounds on angles, in further improving the angles beyond what Delaunay algorithms or guaranteed-quality algorithms can provide, and in maintaining the quality as a domain changes shape during a simulation. Our isosurface stuffing algorithm comes with theoretical guarantees on dihedral angles. Through mesh improvement procedures, we can make the tetrahedra even better in practice. We use the same mesh improvement methods as a basis for dynamic meshing algorithms for simulating objects undergoing radical changes in shape. We also use a simpler, faster form of dynamic meshing to simulate a surgical procedure in which a steerable needle is inserted into flesh.
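
Dihedral angles are easy to measure, and it helps to see how. The routine below is purely illustrative (it is not code from these papers): for each of a tetrahedron's six edges, it projects the two opposite vertices onto the plane perpendicular to that edge and measures the angle between the projections.

    import itertools
    import numpy as np

    def dihedral_angles(tet):
        # tet: 4x3 array of vertex coordinates. Returns the six dihedral
        # angles in degrees, one per edge.
        tet = np.asarray(tet, dtype=float)
        angles = []
        for i, j in itertools.combinations(range(4), 2):
            k, l = (w for w in range(4) if w not in (i, j))
            e = tet[j] - tet[i]
            e /= np.linalg.norm(e)
            u = tet[k] - tet[i]; u -= (u @ e) * e  # project off the edge
            v = tet[l] - tet[i]; v -= (v @ e) * e
            cosine = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            angles.append(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))
        return angles

    # A regular tetrahedron has all six dihedral angles near 70.53 degrees;
    # a sliver (four nearly coplanar vertices) has angles near 0 and 180.
    print(sorted(dihedral_angles([(1,1,1), (1,-1,-1), (-1,1,-1), (-1,-1,1)])))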

Streaming Geometric Computation

A STREAMING COMPUTATION MAKES A SMALL NUMBER of sequential passes over a data file (ideally, one pass), and processes the data using a memory buffer whose size is a fraction of the stream length. Streaming allows us to compute Delaunay triangulations of billions of points on an ordinary laptop computer—and amazingly, to attain faster speeds than ordinary in-core triangulators. We also have streaming implementations of several standard GIS computations, such as converting Triangulated Irregular Networks (TINs, which are unstructured triangulations used to interpolate elevation fields) into Digital Elevation Maps (DEMs), and computing isolines. A major benefit of streaming is quick feedback. For example, a user can pipe the triangulator's output to our streaming isocontour extraction module, whose output is piped to a visualization module. Isocontours begin to appear within minutes or seconds, because streaming modules produce output while still consuming input. If they look wrong, the user can abort the pipeline and restart all the streaming components with different parameters. With other methods, users must wait hours for the computations to finish before glimpsing the results.
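
The pattern is easy to illustrate in miniature. The sketch below is not our streaming code (which is far more involved); it simply makes one sequential pass over a point stream and grids it into a DEM-like raster, using memory proportional to the raster rather than to the stream.

    import numpy as np

    def points_to_dem(stream, xmin, ymin, cell, nx, ny):
        # One sequential pass; only the nx-by-ny accumulators live in
        # memory, no matter how many points flow past.
        zsum = np.zeros((ny, nx))
        count = np.zeros((ny, nx))
        for x, y, z in stream:
            i = int((y - ymin) / cell)
            j = int((x - xmin) / cell)
            if 0 <= i < ny and 0 <= j < nx:
                zsum[i, j] += z
                count[i, j] += 1
        with np.errstate(invalid="ignore"):
            return zsum / count  # mean elevation per cell; NaN where empty

    def synthetic_points(n, seed=0):
        # A generator stands in for a multi-gigabyte point file: points
        # are produced one at a time and never held in memory together.
        rng = np.random.default_rng(seed)
        for _ in range(n):
            x, y = rng.uniform(0.0, 10.0, size=2)
            yield x, y, np.sin(x) * np.cos(y)

    dem = points_to_dem(synthetic_points(100_000), 0.0, 0.0, 1.0, 10, 10)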

Finite Element Quality

IT IS NOT EASY TO FORMULATE the problem that a mesh generator is to solve. The natural first question is how to characterize good and bad triangles and tetrahedra based on their sizes and shapes. The answer to that question depends on the application. The universal concern is that the errors introduced by interpolation be as small as possible. In the finite element method, another concern (with different implications) is that the condition numbers of the stiffness matrices be small. Forty-odd years after the invention of the finite element method, our understanding of the relationship between mesh geometry, numerical accuracy, and stiffness matrix conditioning remains incomplete, especially in anisotropic cases. The following papers examine these issues for linear elements, and present error bounds and quality measures to help guide numerical analysts, researchers in triangulation and mesh generation, and application writers in graphics and geographic information systems.
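
As a worked example of the geometry-conditioning link for linear elements: the element stiffness matrix for Laplace's equation on a triangle is K_ij = (e_i · e_j) / (4A), where e_i is the edge vector opposite vertex i and A is the triangle's area. The snippet below (standard textbook material, not code from these papers) shows the conditioning degrade as a triangle degenerates.

    import numpy as np

    def p1_stiffness(tri):
        # Element stiffness matrix for Laplace's equation with linear
        # (P1) elements: K[i][j] = dot(e_i, e_j) / (4*A).
        p = np.asarray(tri, dtype=float)
        e = np.array([p[2]-p[1], p[0]-p[2], p[1]-p[0]])  # edge opposite vertex i
        area = 0.5 * abs(e[0][0]*e[1][1] - e[0][1]*e[1][0])
        return (e @ e.T) / (4.0 * area)

    def conditioning(tri):
        # K annihilates constant functions, so compare the largest
        # eigenvalue to the smallest nonzero one.
        w = np.linalg.eigvalsh(p1_stiffness(tri))
        return w[-1] / w[1]

    print(conditioning([(0,0), (1,0), (0.5, 3**0.5/2)]))  # equilateral: ~1
    print(conditioning([(0,0), (1,0), (0.5, 0.01)]))      # sliver: ~7.5e3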

Constrained Delaunay Triangulations

THE CONSTRAINED DELAUNAY TRIANGULATION (CDT) is a fundamental two-dimensional geometric structure with applications in interpolation, rendering, geography, and mesh generation. The most commonly implemented algorithm for constructing a CDT begins with an ordinary Delaunay triangulation and inserts constraining edges, often called segments, one by one. One of our papers shows that segments can be inserted in linear time, and investigates the expected running time of inserting segments in random order.
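
For experimentation, here is a usage sketch assuming the third-party Python package `triangle` (one set of bindings to my program Triangle, discussed below) is installed. The 'p' switch treats the input as a planar straight line graph, so the listed segments are respected and the result is a CDT.

    import triangle  # pip install triangle (third-party bindings)

    # A square with two constraining segments through an interior vertex.
    pslg = {
        "vertices": [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)],
        "segments": [(0, 4), (4, 2)],
    }
    cdt = triangle.triangulate(pslg, "p")
    print(cdt["triangles"])  # triangles as triples of vertex indices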

Prior to my work below, the CDT had not been generalized to higher dimensions, and it can never be fully generalized, because not every polyhedron has a constrained tetrahedralization (allowing no additional vertices). Here, however, I prove that there is an easily tested condition that guarantees that a polyhedron (or piecewise linear domain) in three or more dimensions does have a constrained Delaunay triangulation. (A domain that satisfies the condition is said to be edge-protected in three dimensions, or ridge-protected in general dimensions.)

Suppose you want to tetrahedralize a three-dimensional domain. The result implies that if you insert enough extra vertices on the boundary of a polygon to recover its edges in a Delaunay tetrahedralization (in other words, if you make it edge-protected), then you can recover the polygon's interior for free—that is, you can force the triangular faces of the tetrahedralization to conform to the polygon without inserting yet more vertices. This method of polygon recovery is immediately useful for mesh generation and for the interpolation of discontinuous functions. (The result also fills a theoretical hole in my dissertation by showing that it is safe to delete a vertex from a constrained Delaunay tetrahedralization in the circumstances where my “diametral lens” algorithm does so.)

I provide two algorithms for constructing general-dimensional constrained Delaunay triangulations that are fast enough to be useful in practice. One is based on bistellar flips (which swap a few tetrahedra for a few others), and one is a sweep algorithm. The flip algorithm is easier to implement and is probably faster in practice. However, the sweep algorithm works on almost every input that has a CDT, whereas the flip algorithm works only on ridge-protected inputs. The question of which algorithm is asymptotically faster is tricky—the answer depends on the size of the output, and is different for a worst-case input than for a random input; see the flip algorithm paper for details. See the “Strange Complexity” paper to find out why the sweep algorithm doesn't work on every input that has a CDT.
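
In two dimensions, a bistellar flip is an ordinary edge flip, and flip-based algorithms repeatedly flip any unconstrained edge whose opposite vertex violates the incircle test (constrained edges are never flipped). Below is a deliberately non-robust sketch of that legality test; see the Geometric Robustness section for why the determinant's sign really demands exact arithmetic.

    import numpy as np

    def edge_needs_flip(a, b, c, d):
        # For the counterclockwise triangle (a, b, c), the classic incircle
        # determinant is positive exactly when d lies strictly inside the
        # triangle's circumcircle -- the condition under which the edge
        # shared by triangles (a, b, c) and (b, a, d) should be flipped.
        m = np.array([
            [a[0]-d[0], a[1]-d[1], (a[0]-d[0])**2 + (a[1]-d[1])**2],
            [b[0]-d[0], b[1]-d[1], (b[0]-d[0])**2 + (b[1]-d[1])**2],
            [c[0]-d[0], c[1]-d[1], (c[0]-d[0])**2 + (c[1]-d[1])**2],
        ])
        return np.linalg.det(m) > 0.0

    print(edge_needs_flip((0,0), (1,0), (0,1), (0.6, -0.1)))  # True: flip it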

Surface Reconstruction

WE WISH TO RECONSTRUCT CLOSED SURFACES from simple geometric inputs like sets of points, sampled from the surface of a three-dimensional object using a laser range finder, or sets of polygons, which are often used as crude geometric models for rendering. However, the input data are rarely well-behaved. Laser range finders are imperfect physical devices that introduce random errors (noise) into the point coordinates, and often introduce points that don't lie anywhere near the surface (outliers) as well. Moreover, range finders can't usually scan an entire object's surface—portions of the surface are left undersampled or unsampled. Polygon inputs used for rendering are rarely topologically consistent—the polygons often have spurious intersections or leave small gaps in what should be a closed surface. For either type of input, we wish to generate a watertight surface that approximates the surface suggested by the data.

Geometric Robustness

GEOMETRIC PROGRAMS ARE SURPRISINGLY SUSCEPTIBLE to failure because of floating-point roundoff error. Robustness problems can be solved by using exact arithmetic, at the cost of reducing program speed by a factor of ten or more. Here, I describe a strategy for computing correct answers quickly when the inputs are floating-point values. (Much other research has dealt with the problem for integer inputs, which are less convenient for users but more tractable for robustness researchers.)
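
To see the susceptibility concretely, compare a naive floating-point orientation test with an exact one; my illustration below uses Python rationals (doing this fast is the subject of the next paragraph).

    from fractions import Fraction

    def orient2d_float(a, b, c):
        # Positive if a, b, c wind counterclockwise, negative if clockwise,
        # zero if collinear -- when evaluated without roundoff.
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def orient2d_exact(a, b, c):
        # The same determinant over exact rationals: always correct,
        # but much slower.
        (ax, ay), (bx, by), (cx, cy) = [tuple(map(Fraction, p))
                                        for p in (a, b, c)]
        return (bx-ax)*(cy-ay) - (by-ay)*(cx-ax)

    sign = lambda x: (x > 0) - (x < 0)
    a, b = (12.0, 12.0), (24.0, 24.0)
    for k in range(4):
        c = (0.5 + k * 2.0**-53, 0.5)  # k > 0 pushes c strictly off the line
        print(sign(orient2d_float(a, b, c)), sign(orient2d_exact(a, b, c)))
    # The float test calls all four queries collinear; the exact test
    # correctly reports that the last three are clockwise. Inconsistent
    # answers like these can drive incremental algorithms into infinite
    # loops or topologically impossible states.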

To make robust geometric tests fast, I propose two new techniques (which can also be applied to other problems of numerical accuracy). First, I develop and prove the correctness of software-level algorithms for arbitrary precision floating-point arithmetic. These algorithms are refinements (especially with regard to speed) of algorithms suggested by Douglas Priest, and are roughly five times faster than the best available competing method when values of small or intermediate precision (hundreds or thousands of bits) are used. Second, I show how simple expressions (whose only operations are addition, subtraction, and multiplication) can be computed adaptively, trading off accuracy and speed as necessary to satisfy an error bound as quickly as possible. (This technique is probably applicable to any exact arithmetic scheme.) I apply these ideas to build fast, correct orientation and incircle tests in two and three dimensions, and to make robust the implementations of two- and three-dimensional Delaunay triangulation in Triangle and Pyramid. Detailed measurements show that in most circumstances, these programs run nearly as quickly when using my adaptive predicates as they do using nonrobust predicates.
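
The primitive underlying the arbitrary precision technique is the error-free transformation: a handful of ordinary floating-point operations that recover a rounded sum together with its exact roundoff error. Here is a sketch of TWO-SUM (due to Knuth); chaining such primitives over multi-term “expansions” yields arbitrary precision addition without any big-integer machinery. This is only the building block, not the full library.

    def two_sum(a, b):
        # TWO-SUM: returns (s, e) with s = fl(a + b) and a + b = s + e
        # exactly, assuming IEEE 754 double arithmetic rounding to nearest.
        s = a + b
        b_virtual = s - a
        a_virtual = s - b_virtual
        return s, (a - a_virtual) + (b - b_virtual)

    # The roundoff discarded by ordinary addition is captured exactly:
    s, e = two_sum(1.0, 2.0**-60)
    print(s == 1.0, e == 2.0**-60)  # True True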

See my Robust Predicates page for more information about this research, or to obtain C source code for exact floating-point addition and multiplication and the robust geometric predicates.

The Quake Project

PAPERS ABOUT THE QUAKE PROJECT, a multidisciplinary Grand Challenge Application Group studying ground motion in large basins during strong earthquakes, with the goal of characterizing the seismic response of the Los Angeles basin. The Quake Project is a joint effort between the departments of Computer Science and Civil and Environmental Engineering at Carnegie Mellon, the Southern California Earthquake Center, and the National University of Mexico. We've created some of the largest unstructured finite element simulations ever carried out; in particular, the papers below describe a simulation of ground motion during an aftershock of the 1994 Northridge Earthquake.

OUR SECRET TO PRODUCING such huge unstructured simulations? With the collaboration of David O'Hallaron, I've written Archimedes, a chain of tools for automating the construction of general-purpose finite element simulations on parallel computers. In addition to the mesh generators Triangle and Pyramid discussed above, Archimedes includes Slice, a mesh partitioner based on geometric recursive bisection; Parcel, which performs the surprisingly complicated task of computing communication schedules and reordering partitioned mesh data into a format a parallel simulation can use; and Author, which generates parallel C code from high-level machine-independent programs (which are currently written by the civil engineers in our group). Archimedes has made it possible for the Quake Project to weather four consecutive changes in parallel architecture without missing a beat. The most recent information about Archimedes is contained in the Quake papers listed above. See also the Archimedes page.
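
To give the flavor of geometric recursive bisection, here is a generic sketch (emphatically not Slice's actual code): element centroids are split at the median along the bounding box's longest axis until the requested number of parts is reached.

    import numpy as np

    def recursive_bisection(centroids, n_parts):
        # centroids: (n, d) array of element centroids. Returns an (n,)
        # array assigning each element a part in [0, n_parts).
        centroids = np.asarray(centroids, dtype=float)
        labels = np.zeros(len(centroids), dtype=int)

        def split(idx, lo, hi):
            if hi - lo <= 1:
                labels[idx] = lo
                return
            pts = centroids[idx]
            axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))  # longest axis
            order = np.argsort(pts[:, axis])
            mid = lo + (hi - lo) // 2
            cut = len(order) * (mid - lo) // (hi - lo)  # proportional split
            split(idx[order[:cut]], lo, mid)
            split(idx[order[cut:]], mid, hi)

        split(np.arange(len(centroids)), 0, n_parts)
        return labels

    parts = recursive_bisection(np.random.default_rng(0).random((1000, 3)), 6)
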
SEVERAL PAPERS ON THE COMPUTATION AND COMMUNICATION DEMANDS of the Quake Project's parallel finite element simulations, and what requirements such unstructured simulations will place on future parallel machines and networks. For designers of computer architecture who want to better understand unstructured applications.

Route Planning

HERE, WE APPLY CONFORMING DELAUNAY TRIANGULATIONS to the AI problem of route planning on real-world maps. The problem is inherently “dirty,” with difficulties ranging from one-way streets to incorrect maps, so it is not straightforward even to formally specify a problem to solve. Our approach is to use an analogical planner, which draws on past experiences to plan new routes. However, this case-based reasoning approach to planning requires a similarity metric to decide which previous cases are most appropriate for the current problem. We derive our similarity metric by formulating a geometric problem that roughly approximates the task of finding good previous cases, then solving that geometric problem with a combination of geometric theory and heuristics. The geometric solution is then cast into an approximate solution to the planning problem, and the rough edges are smoothed by brute-force symbolic planning. This procedure proves to be faster than brute-force symbolic planning from scratch.