CS262A: Advanced Topics in Computer Systems 
Eric Brewer, based on notes by Joe Hellerstein

Volcano & the Exchange Operator

There is a host of techniques for parallelizing particular query operators (e.g. hash join, sorting), but what you really need is to parallelize your whole query engine in a clean, uniform way.

Volcano's Solution: encapsulate the parallelism in a query operator of its own, not in the QP infrastructure.

Overview: kinds of intra-query parallelism available:

  1. pipeline (inter-operator) parallelism: producer and consumer operators run concurrently on different tuples
  2. partition (intra-operator) parallelism: run many clones of an operator, each on a partition of the input

We want to enable all of these -- including setup, teardown, and runtime logic -- in a cleanly encapsulated way.

The exchange operator: an operator you pop into any single-site dataflow graph as desired -- anonymous to the other operators.

Implementation:

  Exchange supports the same iterator interface (open, next, close) as every other operator, so neither its parent nor its child knows anything unusual is happening. Internally it is split in two: open forks a producer (a thread or process) that runs the subplan beneath it and pushes tuples into a queue; the consumer half's next simply pulls from that queue.
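
Here's a minimal sketch of the structure in Python -- hypothetical names, assuming a simple open/next/close protocol and an SMP-style threaded implementation, not Volcano's actual code:

    import threading
    import queue

    DONE = object()  # sentinel marking end-of-stream

    class Exchange:
        """Looks like any ordinary iterator to its parent, but hides a
        thread boundary: a producer runs the child subplan and *pushes*
        tuples into a bounded queue; next() *pulls* from that queue."""

        def __init__(self, child, capacity=1024):
            self.child = child                      # the local subplan below us
            self.q = queue.Queue(maxsize=capacity)  # bounded queue gives flow control

        def open(self):
            self.child.open()
            self.producer = threading.Thread(target=self._produce, daemon=True)
            self.producer.start()

        def _produce(self):
            while (t := self.child.next()) is not None:
                self.q.put(t)
            self.q.put(DONE)

        def next(self):
            t = self.q.get()
            return None if t is DONE else t

        def close(self):
            self.producer.join()
            self.child.close()

Because exchange speaks the same interface as everything else, the plan above it needs no changes: scan -> exchange -> join behaves like scan -> join, except the scan now runs in its own thread.
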
Benefits of exchange:

  1. opaquely handles setup and teardown of clones (in an SMP; for shared-nothing, you would need daemons at each site and a protocol for requesting clone spawning) -- see the partitioning sketch after this list
  2. at the top of a local subplan, allows pipeline parallelism: turns iterator-based, single-threaded "pull" into network-based, cross-thread "push".
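
The same trick generalizes to partition parallelism: the producer side applies a support function to route each tuple to one of N consumer clones. A hypothetical sketch of just the routing step, reusing the DONE sentinel from the sketch above (the partitioning function is a parameter, in Volcano style):

    def produce_partitioned(child, queues, partition_key):
        """Producer half of a partitioned exchange: route each tuple to
        one of len(queues) consumer clones by hashing a key. Each clone
        pulls from its own queue exactly as in the single-queue case."""
        n = len(queues)
        while (t := child.next()) is not None:
            queues[hash(partition_key(t)) % n].put(t)
        for q in queues:
            q.put(DONE)  # every clone must see end-of-stream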


"Extensibility" features of Volcano and exchange:

  1. operators are anonymous to one another: an iterator doesn't know or care what produces its input, which is exactly what lets exchange drop in anywhere
  2. policy is passed in via support functions (e.g. predicates, hash/partitioning functions), so exchange's degree of parallelism and routing are parameters rather than hard-wired code

There were a couple of subsequent extensions to Exchange. Food for thought: how far can this trick of encapsulating policy inside a dataflow operator be pushed? Eddies, below, are one answer.

Eddies

Starting point: the observation that a query optimizer is an adaptive system with a very slow feedback loop:
  1. Observe environment: daily/weekly (runstats)
  2. Use observations to choose behavior: query optimization
  3. Take action: query execution
There are reasons to believe this is way too slow: statistics go stale between runs, and data and system conditions can shift even within a single long-running query. People have looked at more intelligent approaches (see the survey article for more detail).

Eddies were an effort to subsume a bunch of this stuff using the design spirit of Exchange: encapsulate the decisions in a dataflow operator. An eddy allows adaptive reordering of a subtree of dataflow operators on a tuple-by-tuple basis (or slower, of course). Here's the idea: the eddy sits between the data sources and a set of commutable operators. It routes each tuple to one operator at a time; an operator returns the tuple to the eddy if it survives, and the eddy emits the tuple once every operator has handled it. Each tuple carries "done" bits to track its progress, and the routing policy (e.g. lottery scheduling driven by observed operator cost and selectivity) is where the adaptivity lives. (A sketch follows below.) Note a vague similarity to INGRES' optimization scheme, which in some sense could also change join orders "per tuple".

At the architectural level, that's all fine and dandy. But many questions remain. Some basic ones: how should the eddy observe operator costs and selectivities, and how should it turn those observations into a routing policy? More complicated questions revolve around joins: join operators accumulate state, so reordering them mid-stream is only safe at certain moments, and earlier routing decisions can constrain later ones.

Many of these problems were subsequently addressed by choosing a different granularity of dataflow operator. Instead of using eddies and monolithic joins, you expose the joins' "state modules" ("STeMs") -- hash tables, B-trees -- directly to the eddy; in essence you expose the join algorithm's internals to the eddy. This idea led to a line of follow-on work.

The end result was that we tore apart traditional relational query processing and optimization and reexamined it. However, we certainly did not put it back together (yet)! The set of newly exposed variables introduces a bunch of complexity, and naturally reopens buried chestnuts like dealing with dependencies in data and predicates. Much remains to be done here! The question is relevance: one can come up with many scenarios where adaptivity helps a lot, but are any of them compelling enough to rearchitect a DBMS?
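
To make the routing loop concrete, here's a minimal sketch in Python. It is hypothetical and heavily simplified: selection predicates only, one tuple in flight at a time, and a crude ticket count standing in for the paper's lottery-scheduling policy.

    import random

    class Eddy:
        """Route each tuple through a set of commutable predicates in an
        adaptively chosen order. Predicates that drop many tuples earn
        tickets and tend to get drawn earlier -- a toy stand-in for
        lottery scheduling."""

        def __init__(self, source, ops):
            self.source = source           # iterable of input tuples
            self.ops = ops                 # predicates: tuple -> bool
            self.tickets = [1] * len(ops)  # routing weights, learned online

        def run(self):
            for t in self.source:
                done = [False] * len(self.ops)
                alive = True
                while alive and not all(done):
                    # Lottery draw among operators this tuple hasn't visited yet.
                    pending = [i for i, d in enumerate(done) if not d]
                    i = random.choices(pending, [self.tickets[j] for j in pending])[0]
                    if self.ops[i](t):
                        done[i] = True         # survived; back to the eddy
                    else:
                        alive = False
                        self.tickets[i] += 1   # reward the selective operator
                if alive:
                    yield t                    # passed every operator

    # Hypothetical usage: two commutable predicates over a stream of ints.
    eddy = Eddy(range(100), [lambda t: t % 2 == 0, lambda t: t > 10])
    survivors = list(eddy.run())  # even numbers greater than 10

A real eddy keeps many tuples in flight and adjusts tickets on both routing and return; the point here is just the architecture: ordering decisions live in one operator's routing loop rather than in a compile-time plan.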

My take: maybe not in the traditional DBMS market. Maybe in the brave new world of software dataflow for other tasks, e.g. network routing!