In May 2010, the University of Southern California's Biomedical Simulations Resource sponsored a one-day Focused Study Group to discuss the future roles of multi-electrode ensemble studies in brain research. Not being involved in such studies myself, I presumed that my job would be to provide some philosophical underpinnings for ensemble studies. I believed this might be especially important in the current era of deep reductionism in biological science. The three headings below were proposed by the discussion leader, Vasilis Marmarelis, for part of a white paper to be generated by the Group's participants. Beneath each heading are my inputs -- designed not as final statements of a philosophical point of view, but, in the spirit of the white paper, as starting points for further discussion.

Appendix 1: Reductionism versus Systems Integration in the physical and life sciences

Perhaps it would be most illuminating to begin a discussion of reductionism with classical geometry, the model of deductive power that inspired Greek philosophers -- especially Plato and Aristotle -- well before Euclid formalized it in his Elements. But it is Euclid’s formalization, lacking just a few elements of modern real analysis, that provides a nearly complete model for the discussion here. Euclid constructed his geometry in layers. The first layer comprises the definitions, postulates, and common notions of Book 1. From these, in Book 1, he deduces (proves by deduction) an ordered sequence of 48 propositions, each of which is deduced from those that came before. In other words, each proved proposition becomes an axiom in the proofs of subsequent propositions in the sequence. Attempting to understand higher-layer propositions directly in terms of the basic elements (definitions, postulates, and common notions) of Book 1 is not instructive. This may have more to do with the nature of humankind than with any absolute principle. It is clear that the deductive power of human working memory is greatly leveraged by the process commonly labeled chunking, and it is equally clear that the chunking process is layered, or hierarchical. Humans attach names (labels) to complicated phenomena or processes, then manipulate those labels in working memory as they contemplate higher-level systems comprising those phenomena or processes. As they develop the perception of having understood those higher-level systems, they give each of them a label, and so forth. An illustrative example of such higher-order thinking would be the inner-ear physiologist contemplating the action of cochlear filters in terms of convolutions of filter kernels and input waveforms. The concepts convolution, filter kernel, and input waveform clearly are high level, each involving multi-layered systems of lower-level concepts and definitions. Such contemplations, in terms of a few labels, fit well within Miller’s well-known “seven plus or minus two” constraint on working memory -- where our perception of understanding presumably arises. Although we are able to relax the “seven plus or minus two” constraint by using external graphical devices such as chalkboards, it seems that the perception of understanding itself is developed internally -- within the “seven plus or minus two” constraint -- and then transferred to episodic memory.
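
As a minimal illustration of the kind of high-level “chunk” being manipulated here (the standard causal linear-filter relation, not any particular cochlear model): if h is the filter kernel (impulse response) and x the input waveform, the filter’s output is the convolution

\[ y(t) = \int_{0}^{\infty} h(\tau)\, x(t-\tau)\, d\tau . \]

A single label -- convolution -- stands in for this entire layered construction of integrals, kernels, and waveforms, which is precisely what lets it occupy only one of the “seven plus or minus two” slots.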

It seems clear that the deductive scientific methods of Aristotle and Descartes were inspired by the Elements, Descartes by Euclid’s version, Aristotle by an earlier version (perhaps that of Hippocrates of Chios). And in each of those methods, ultimate understanding would be achieved by illumination of the very first principles, analogous to the definitions, postulates, and common notions of Euclid’s Book 1. The deductive scientific method would begin, ultimately, with those. But at the dawn of the Western Enlightenment, a new approach emerged, inspired by the likes of Gilbert, Harvey, and Galileo -- the great empiricists -- and described by Bacon in his replacement for Aristotle’s Organon. For the science of nature (as opposed, for example, to the science of mathematics), the New Organon prescribes observation of nature itself, followed by inductive reasoning, as the route to understanding.

With considerable stimulation and inspiration from colleagues, especially Hooke, Newton wedded these two scientific methods into a two-layered approach -- applying observation and induction to establish putative axioms (the laws of motion and the law of gravity) at the first layer, from which propositions (Kepler’s laws) could be deduced at the second layer. For modern natural science, the understanding achieved by Newton’s approach was monumental. It was arguably the inspiration for the Enlightenment that followed. It was the answer to Yali’s question for Jared Diamond and to the puzzlement of Joseph Needham over China’s failure to stay ahead of the West technologically during the 18th, 19th and 20th centuries. And it is the model followed by the physical sciences since Newton’s time. Most importantly, it eliminates the need for reduction to ultimate first principles. The Baconian side -- observation and induction -- allows us to enter the natural world at any level of complexity at which we can make observations.
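
To make the two-layer structure concrete, here is a minimal sketch -- for the idealized special case of a circular orbit, not Newton’s full treatment -- of how a second-layer proposition follows from the first-layer axioms. Equating the gravitational force to the centripetal force required by the laws of motion, with v = 2\pi r / T,

\[ \frac{G M m}{r^{2}} = \frac{m v^{2}}{r} \quad \Longrightarrow \quad T^{2} = \frac{4\pi^{2}}{G M}\, r^{3} , \]

which is the circular-orbit form of Kepler’s third law: the square of the orbital period is proportional to the cube of the orbital radius.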

One might think of Newton’s two-layer approach as the New Reductionism. Instructive examples in physics are given by the work of Rayleigh on acoustics and the work of Cohen on the properties of solid materials. By deduction, Rayleigh showed that local elastic properties combined with local inertia in solids would lead to spatially extensive propagation of waves (acoustic waves) in those materials. Cohen showed that the elastic moduli and the densities of solids could be deduced through quantum mechanics from the structures of their constituent atoms. Thus we have three layers, taken two at a time. Each of these two-layer results gives us a perception of understanding. And each gives us powerful examples of Bacon’s ideal: “To learn about nature, look to nature itself, not to books. Then put what you have learned to use for the betterment of humankind.” Rayleigh’s work allows us to design better acoustic structures -- for improved listening and for many other purposes. Cohen’s work allows us to create designer solids -- with desired elastic moduli and the like. On the other hand, Cohen’s method does not allow us to predict elastic moduli or densities as precisely as we can measure them. The prudent acoustician, then, would stay with Rayleigh’s two layers and use measurements (or tables derived from measurements) rather than Cohen’s computations to reveal elastic moduli and densities. From the measured values, reliable estimates of the acoustic properties (e.g., speed of propagation, acoustic impedance) can be computed. The two-layer approach gives not only the most intuitive understanding for the basic scientist, but also the most reliable results for the applied scientist or engineer working to fulfill Bacon’s ideal.
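
Here the three layers are atomic structure, bulk material properties (elastic moduli and densities), and macroscopic acoustic behavior; Cohen connects the first pair, Rayleigh the second. A minimal worked example of that final computational step, under the textbook idealization of longitudinal waves in a thin, homogeneous elastic rod: given a measured Young’s modulus E and density \rho, the speed of propagation and the characteristic acoustic impedance follow directly as

\[ c = \sqrt{E/\rho}, \qquad Z = \rho c = \sqrt{E\,\rho} . \]

Nothing in this step requires Cohen’s quantum-mechanical layer; the measured E and \rho suffice, which is the prudent acoustician’s point.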

It is our contention that the ability to observe ensemble activity, appropriately applied, will gain us access to a fairly high level of complexity in the nervous system. In order to expand those observations into the two-layer approach of the New Reductionism, we must develop deductive tools and methods to apply to them.


Appendix 2: Relation between LFPs and spike recordings

There are numerous non-spiking interneurons in the brain, and one might argue that slow or graded potentials are important elements of its computational processes. This was the argument made by Bullock in his classic paper “Neuron Doctrine and Electrophysiology” and in the sequel “The Neuron Doctrine, Redux” nearly fifty years later. Bullock’s argument in that first paper had been based on extensive observations, by several observers, of the intracellular activities in cells of the nine-neuron cardiac ganglion of Homarus americanus. In its day, neuron for neuron, it was arguably the most thoroughly understood ganglion on earth. The fact that its cells were linked by electrical junctions as well as chemical synapses made it irreducible, in a formal sense, and thus not completely addressable through reductionist analysis. The quantitative details of its properties would be determined by all of the elements of all of its cells; none would be determined by a proper subset of those elements. Engineers are familiar with this problem -- frequently labeled the problem of loading -- and they solve it by introducing buffered boundaries (e.g., large impedance mismatches or nearly unidirectional transduction processes) between elements. To simplify two-layer reductionist analysis in neural systems, one needs to locate natural buffers (with processes that isolate the source of activity from the load). Two candidates are chemical synapses and spiking axons. To the extent that the state of the post-synaptic cell does not feed back directly through the synapse, the synapse buffers the sending cell from its load (the post-synaptic cell). To the extent that the states of the axon’s target do not feed back through the axon to affect the states of its proximal regions and of its cell body of origin, the axon buffers the sending cell from its target. The boundaries of elements at the lower layer of a two-layer analysis would be determined by the presence of putative buffered boundaries at their places of input and output. If, like those of the lobster cardiac ganglion, the interneurons of the brain are coupled by unbuffered junctions, then the most propitious place to begin studies of ensemble activity would seem to be axon bundles (e.g., white matter).
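
The loading problem has a minimal electrical sketch (a generic source-load voltage divider, offered here as an analogy rather than as a model of any particular synapse or axon): a source of open-circuit voltage V_s and output impedance Z_s driving a load of impedance Z_L delivers

\[ V_{L} = V_{s}\, \frac{Z_{L}}{Z_{s} + Z_{L}} , \]

so the load’s own impedance shapes what the source appears to produce. A buffer, in this sense, is anything that makes Z_L effectively much greater than Z_s, or that otherwise blocks the reverse path, so that V_L \approx V_s and the source can be characterized without reference to its load -- the role proposed above for chemical synapses and spiking axons.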


Appendix 3: The demarcation problem and the formulation/validation of scientific hypotheses

In geometry and in mathematics in general, first principles (e.g., the definitions, postulates and common notions of Euclid’s Book 1) are taken to be given or true by definition. In the two-layer approach to reductionism in natural science, on the other hand, modern philosophers of science generally agree that the descriptions of material properties and behavior (i.e., the descriptive models, hypotheses or laws) postulated for each of the two layers must be derived from observation (empirical evidence) and, therefore, be contingent (neither true nor false by pure logic or definition alone). Furthermore, they generally agree that these descriptions (frequently labeled descriptive hypotheses) must be testable with doable experiments or observations. These are the three quintessential features of descriptive hypotheses in natural science: (1) basis in empirical evidence, (2) contingency, and (3) testability.

Modern philosophers of science generally identify a second kind of hypothesis, the explanatory hypothesis, a synthesis of descriptive hypotheses at the lower layer that will putatively explain one or more descriptive hypotheses (representing properties or behaviors) at the higher layer. The explanatory hypothesis is tested by deductive analysis. In the manner of Newton, this analysis begins with the relevant descriptive hypotheses at the lower layer being raised to the level of axioms. Although they may play the same role in this analysis as propositions or theorems do in the mathematical sciences, the descriptions of properties or behaviors at the higher layer stand on their own empirical bases and do not depend on proof of derivability from properties and behaviors at the lower layer. In other words, analysis tests the explanatory hypothesis, not the higher-layer descriptive hypotheses (for astronomers, Kepler’s laws would stand on their own, without Newton’s synthesis). Furthermore, passing the test does not guarantee validity of the explanatory hypothesis; one still faces the specter of affirmation of the consequent. And this is where Ockham’s razor comes in.
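
In schematic form (a standard logic-textbook rendering, added only to make the pitfall explicit): if H is the explanatory hypothesis and E the higher-layer behavior it predicts, a successful test establishes

\[ H \rightarrow E \quad \text{and} \quad E , \]

from which H does not follow; to infer it would be to affirm the consequent, since many alternative hypotheses may also imply E. Ockham’s razor enters as a pragmatic criterion for preferring the simplest of those alternatives.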

Thus we have a quintessential difference between the natural sciences and the mathematical sciences. There is at least one more: An axiom in the mathematical sciences need not have any of the three features required of a descriptive hypothesis in natural science. By widespread conventional practice, there seem to be other differences: descriptive models in natural science need not be parametric, and deductive analysis in natural science need not have the formal structure required of it in the mathematical sciences. Instead, it may take the form of simulation, employing either parametric or nonparametric models. In whatever form it takes, affirmation of the consequent remains its major pitfall.