
The Bullfrog Sacculus



The amphibian sacculus provides a thread that is woven through the entire history of the Lewis Lab’s involvement with the vertebrate ear. Following that thread here gives me the chance to reminisce about faculty colleagues and student colleagues over the years.


Tom Everhart and Josh Zeevi

Among the people who greeted me when I arrived at Berkeley in the summer of 1967 were two prospective doctoral students— Yehoshua Y. Zeevi and Michael J. Murray, and a prospective faculty mentor and collaborator—Thomas E. Everhart. While at Cambridge University, Tom had invented the secondary-emission mode of image formation for scanning electron microscopy (SEM) and now, at Berkeley, had one of the very first scanning electron microscopes produced by Cambridge Instruments Ltd. Josh Zeevi and I joined Tom to develop preparation techniques that would allow us to use the microscope for studies of neural tissues. Mike Murray, in the meanwhile, developed his own, separate project—which soon evolved into a magnificent neuroethological study of a predatory sea slug, Navanax inermis. The study’s magnificence derived from its anatomical, physiological and behavioral thoroughness, and from its pioneering application of reverse engineering to a very complex, complete neural subsystem (from sensory input elements to motor output elements).

Tom, Josh and I had considerable success in our SEM adventures, and soon were viewing neural microstructures in ways they had never been seen before. Josh incorporated some of the results in his doctoral dissertation—which was a beautiful theoretical study of the potential for signal processing at axonal branches of neurons. Tom Everhart and I continued our close collaboration until he left Berkeley in 1979 to become Dean of Engineering at Cornell. For approximately ten years, major parts of the Lewis Lab and the Everhart Lab were essentially merged, under support of a program project grant from NIH. I was most fortunate to have access to Tom’s microscope and to have Tom himself as a friend and mentor as I began my academic career.

Neuronal terminals

Frank Werblin, Paula Klein and Eric Lombard

In 1969, Frank Werblin, then a postdoc in Physiology, visited the Lewis Lab and suggested using SEM to view vertebrate photoreceptors and, perhaps, other ciliated sensory receptors in vertebrates. Frank was working on a large salamander—the mudpuppy (Necturus maculosus)—and joined us in adapting our preparation techniques to the mudpuppy retina. We soon were viewing rods and cones as they had not been seen before. At nearly the same time, at the suggestion of Ted Bullock, Paula Klein had joined the Lewis Lab, temporarily, to learn about SEM of neural tissues and then take what she learned back to UC San Diego. Paula already was skilled at preparing tissue for the transmission electron microscope (TEM), so she was a valuable addition to our team. Following Frank Werblin’s suggestion, Paula and I decided to look at the ciliated receptors in the vertebrate ear. Frank provided the vertebrate for us, and Paula and I began exploring the mudpuppy inner ear.

Rods and cones of the mudpuppy retina

Our first attempt left just one sensory surface intact. We weren’t even sure which surface it was— saccular, utricular, lagenar? Undoubtedly the best person in the world to identify it for us was Eric Lombard, and he happened to be at Berkeley, working in David Wake’s laboratory. Eric told us that we were looking at the sensory surface of the sacculus— the saccular macula. Paula and I spent much of the next several weeks at the campus library, giving ourselves a crash course on saccular morphology and physiology. As our preparation techniques improved, we began to view amphibian inner-ear hair cells and their associated structures in ways that they had not been seen before. SEM was good at providing an integrated overview of the entire macula— one not available at all with other methods of the day. By the time we published our results in the archival literature (1972), Paula had married Michael Nemanic, a graduate student at Berkeley. Finishing her doctoral work in the Eakin Lab at Berkeley, she never did return to UC San Diego.

Mudpuppy hair bundle

Dean Hillman

At a small administrative meeting in Bethesda, I told Rodolfo Llinas about the work Paula and I were doing, and he suggested a collaboration with his colleague, Dean Hillman. Dean had done extensive morphological studies of the American bullfrog inner ear with transmission electron microscopy. Dean introduced the Lewis Lab to the bullfrog. He also brought valuable suggestions for improving our tissue preparation techniques, which Paula and I were happy to apply. Working together on the bullfrog sacculus, Dean and I were able to corroborate his previous TEM results with our SEM results. Among other things, Dean’s TEM studies had shown a conspicuous bulb at the tip of each hair bundle’s kinocilium, and had provided compelling evidence that the bulb was the site of mechanical linkage between the hair bundle and the acellular mechanical circuit of the inner ear. SEM proved to be an excellent tool for viewing the bulbs and their relationships to other structures.

Diagram of bullfrog inner ear
Saccular otoconial mass
SEM of the bullfrog saccular macula
SEM of bullfrog saccular hair bundles (with bulbs)

Dean had proposed that transduction from mechanical signal in that circuit to neuroelectric signal in the hair cell was achieved by deformation of a soft spot known as the cuticular notch. He repeated this hypothesis in the Science paper resulting from our collaboration. Studies on bullfrog saccular hair cells by Hudspeth and Corey at Caltech soon would show that the site of transduction in those hairs was somewhere else—near the tips of the hairs (the stereocilia or stereovilli of the hair bundle). Now, thirty-five years later, Dean’s hypothesis has been revived—in a clever and interesting paper by Andrew Bell (ANU, Canberra). The title of Bell’s paper is “Detection without deflection? A hypothesis for direct sensing of sound pressure by hair cells” (J Biosci 32(2): 385-404, 2007).


Cheuk Li

Paula and I had noticed that the hair bundles along the perimeter of the mudpuppy saccular macula were strikingly different from those over the rest of the macula. The former resembled those of the utricular macula. The latter resembled those of another sensor—the amphibian papilla. In his TEM studies of the bullfrog inner ear, Dean Hillman also had noticed that the shapes of hair bundles from the central saccular region were very different from those of the utriculus, and that they both were very different from those of the semicircular canals. Thinking that hair-bundle mechanics must be important to the sensing function of the ear, and that the mechanical properties of a hair bundle must depend on the bundle’s shape, I took this to be a promising situation for reverse engineering. In pursuing it I chose to focus on the bullfrog. Dean and many others already had established that species as a standard subject for inner-ear studies. Among those others was Robert Capranica, whom we had tried to recruit for the Berkeley EECS Department. He had taken a position at Cornell instead, and at this point was serving as post-doctoral mentor for Mike Murray, who had finished his studies at Berkeley on Navanax. Bob Capranica also had been Mike’s undergraduate thesis advisor at MIT.

Distribution of predominant saccular hair bundle types in the American bullfrog

I decided to use SEM to classify the hair-bundle geometries of the bullfrog ear, and to map the distributions of those bundles over the various sensory surfaces. This would be a useful first step in the reverse-engineering task. To help me, I enlisted a new doctoral student, Cheuk-wa Li. We found that, like that of the mudpuppy, the entire perimeter of the bullfrog saccular macula was populated by what appeared to be miniature utricular hair bundles. The difference, in this case, was underscored by the fact that, like typical bullfrog utricular hair bundles, the bundles of the saccular perimeter lacked Dean Hillman’s bulbs. One of the first questions we asked was “How did this pattern arise during the development and growth of the macula?” To answer this, we examined bullfrogs ranging from very small tadpoles to very large adults. It soon became clear to us that the perimeter hair bundles were a juvenile type. As the macula grew, these juvenile bundles were transformed into the mature bundle type of the central macula, complete with bulb; and the perimeter hair cells themselves seemed to be replaced largely by transformation of young epithelial cells. The developmental sequence that we worked out, with clear microscopic evidence of each stage, still stands. We found that the process continues conspicuously in the adult frog. As long as the frog grows, the saccular macula grows—and adds hair cells. Study of hair-cell development in adult vertebrates may have implications for hair-cell regeneration in human patients, and has grown into a very large effort across the otolaryngology research community. Our 1973 paper on hair-cell morphogenesis in the saccule was the first of its kind—at the very leading edge of this wave.

Hair bundles undergoing transformation

Our next step was to use SEM to classify the hair bundles of the bullfrog ear, beyond the sacculus, in even finer detail, and to map the various hair bundle types over the organs of the inner ear. The functions of the vertebrate inner ear are considered to be divided between acoustic sensing and vestibular sensing (sensing of head orientation and motion). In the frog ear, acoustic sensitivity was known to arise in four organs—the basilar papilla, the amphibian papilla, the sacculus and the lagena. The lagena was known also to serve vestibular function, and many believed that was true of the sacculus as well. Except for a very few bundles scattered along the midline of the utricular macula, Cheuk and I found that the bundles with Dean Hillman’s bulbs were limited to the four organs with acoustic function. Except for a row of juvenile, perimeter hair bundles, the bundles of the amphibian papilla all had bulbs. All of the bundles connecting to the basilar-papilla tectorium had bulbs, and all of the bundles in the two centermost rows in the lagenar macula had bulbs. The obvious conclusion-- Hillman’s bulbs are adapted to acoustic function. If that were true, then (except for a few, scattered unbulbed hair cells, like those at the perimeter) the entire saccular macula would be adapted to acoustic function, as would be the two centermost rows of the lagena. How could we corroborate that conclusion?

The 1975 paper that Cheuk and I published on hair-bundle types and distributions was the first of its kind, and it was followed soon by similar studies, from other laboratories, on other vertebrates. It demonstrated the power of SEM to map widespread geometric relationships without the need for tedious and error-prone reconstruction from serial sections. Cheuk went on to a dissertation project on hair-cell development in tadpole lateral line. To guide him in groundbreaking experiments, which involved culturing of denervated tadpole tail tissue, Cheuk enlisted Howard Bern to be co-mentor on his project.


Andy Szeto and Chris Platt

With detailed maps of hair-bundle distributions now available for the bullfrog ear, the next step for the Lewis Lab would be physiological— attempting to produce precise functional overlays for those maps. Among other things, that would be the way we could reject or corroborate our putative conclusion about the distribution of the acoustic sense in the bullfrog ear. Ionic dyes, such as Procion Yellow, had been used to trace axons and dendrites of functionally-identified neurons in invertebrate animals. We proposed to do the same thing with the afferent neurons that connected to the hair cells of the frog ear. One NIH site-visit committee declared the proposal absolutely infeasible. The procedure required penetrating those neurons with the submicroscopic tips of glass micro-pipette electrodes filled with the dye, using standard sensory-neurophysiological procedures to identify the neuron’s function, then iontophoretically injecting the dye into the neuron. This is what had been done in the invertebrate neurons. Why wouldn’t it work in the vertebrate ear?

The Lewis Lab's first attempts at getting dye molecules into inner-ear neurons involved passive uptake rather than iontophoretic injection. They were made by Andrew Szeto, a graduate student in EECS, with guidance from several sources, including Christopher Platt-- a Lewis-Lab post-doc who was using SEM to map hair-bundle types in fish. Andy found that the dye molecules were not sufficiently mobile, once inside the cell. Nonetheless, he took the first step in the Lewis Lab's new direction, exploring the putative relationships between hair-bundle morphology and function. After completing an MS thesis on that topic, Andy earned a doctoral degree in Biomedical Engineering at UCLA, and became a leader in that field in Southern California.

It would be about two years before a molecule with the necessary intracellular mobility was available. Its specifications were drawn by consensus at a meeting of neurobiologists in Iowa, and it was engineered by Walter W. Stewart at NIH. Although several of us from the Lewis Lab attended that meeting, and subsequently knew of Dr. Stewart's ionic dye molecule (Lucifer Yellow), it was a visit to the laboratory of Chris Platt (by then at USC) that convinced us to commit to making it work in the bullfrog ear. Chris already was seeing promising results with it in his fish preparation. Another promising molecule was horseradish peroxidase (HRP). The use of HRP required very special precautions, so it had not been available for Andy.


Richard Baird, Ellen Leverenz and Hironori Koyama

I sat down with two graduate students, Richard Baird and Ellen Leverenz, to plan a strategy. We would commit to both Lucifer Yellow and HRP, adapting the chemistry and microscopy wings of the lab to both. I set aside a bottle of wine to open for all of us when we achieved success. Richard would attack the vestibular side of the ear—utriculus and lagena, as well as the sacculus. He would use HRP. Ellen and I would attack the auditory part of the ear—basilar papilla and amphibian papilla, as well as the sacculus. We would use Lucifer Yellow. Richard had inherited an excellent vestibular physiological setup from two earlier students, Robert Plantz and Michael Hassul. He adapted this to the bullfrog. Ellen and I put together apparatus for open-field auditory stimuli. Vibratory stimulation for saccular or lagenar axons would be generated by finger-tapping. We worked intensively for many months, until gradually we realized that we were having successes, and that those successes were becoming routine. The wine bottle remained corked-- it had happened gradually. There had been no Eureka moment. Comparing results, we decided that among the pigmented cells of the frog inner ear, it was easier to follow Lucifer Yellow than HRP. For the vestibular neurons, Richard switched to Lucifer Yellow and was on his way. For auditory stimulation, Ellen and I wanted to employ the more conventional, closed-field auditory method. For that we needed help. It was provided by Bob Capranica and Andrea Megela, who not only taught us what we needed to know, but joined us in tracing our first few auditory neurons identified with closed-field stimulation. This was accomplished during a three-week visit by Ellen and me to the Capranica Lab.

Richard Baird
A basilar-papillar axon filled with Lucifer Yellow
Another BP axon filled with Lucifer Yellow

This left the sacculus. Finger tapping was not the stimulus of choice. Richard, Ellen and I studied a B&K catalog to identify the equipment we would need to quantify and control the vibratory stimulus. It included calibrated piezoelectric accelerometers, charge amplifiers, electromagnetic drivers, a controllable signal source and a low-frequency power amplifier—all very expensive. With our successes from the Lucifer Yellow work in hand, the three of us put together a proposal to NSF for support for the saccule project. It was funded quickly and we began to construct an appropriate setup. Richard had noticed that, in our regular setups, saccular neurons exhibited a strong tendency to produce spikes at 16.666... ms intervals, corresponding to 60 Hz. The amplitudes of 60-Hz vibrations in our setups were very low, but more than enough to excite saccular neurons. Isolation from those and other low-frequency vibrations (including building resonances in the 9-16 Hz range) required extraordinary measures.

By this time, we were joined by Hironori Koyama, a postdoctoral student. The four of us constructed a compound vibration-isolation system. The stimulus apparatus and frog were mounted on a vibration filter comprising two mass-spring-damper stages in series. These were inside a gasket-sealed airborne-sound barrier, which floated on three more mass-spring-damper stages in series. This entire structure was contained in a second gasket-sealed airborne-sound barrier. The mass-spring-damper stages were tuned by hand and had corner frequencies in the neighborhood of 1 to 2 Hz. The general arrangement is portrayed in Figure 2 of Lewis et al (2001). That paper also describes (on p. 1188) the extraordinary extremes we were required to pursue in order to keep extraneous fields (electric and magnetic, mostly 60 Hz and its harmonics) from contaminating our stimulus. Eventually, we had a system in which all ambient translational vibration components (one vertical, two horizontal) over the frequency range 10-1,000 Hz (including 60 Hz and its harmonics) were below the noise floor (0.00002 cm/s2) of our measuring system. We needed all of this before we could begin to evaluate the acoustic sensitivity of the bullfrog sacculus.
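
As a rough illustration of why the compound isolation worked, here is a minimal sketch (my own, with assumed corner frequencies and damping ratios, not the lab's actual numbers) of the transmissibility of several idealized, non-interacting mass-spring-damper stages in series:

```python
# Sketch: attenuation of cascaded base-excited mass-spring-damper stages.
# Assumed values: corner frequency 1.5 Hz and damping ratio 0.2 per stage;
# stages are treated as non-interacting, which is an idealization.

import numpy as np

def stage_transmissibility(f, fc=1.5, zeta=0.2):
    """|X_out/X_in| for one base-excited mass-spring-damper stage."""
    r = f / fc                                  # frequency ratio
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return np.sqrt(num / den)

def cascade_db(f, n_stages=5, fc=1.5, zeta=0.2):
    """Attenuation (dB, negative = isolation) for n identical stages in series."""
    t = stage_transmissibility(f, fc, zeta) ** n_stages
    return 20 * np.log10(t)

for f in (10.0, 60.0):
    print(f"{f:6.1f} Hz: {cascade_db(f):8.1f} dB")
```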

Hironori Koyama

We also needed a computer to analyze the correspondences between acoustic stimuli and neural responses. What we had was a new S-100 bus system, which had been put together by Kenneth Krieg, another graduate student from MIT (both Ellen and Richard had been undergraduate students there). Hironori wrote the initial programs we used to analyze the vibratory stimulus-response data.

With the system ready, we began to study saccular neurons and quickly discovered that we were on to something very special. Bullfrog saccular neurons frequently exhibited conspicuous responses to sinusoidal vibratory stimuli with peak amplitudes as low as 0.005 cm/s2. The only animals that had been demonstrated to have vibratory sensitivity comparable to this were snakes (Hartline, 1971) and the American cockroach (Periplaneta americana, Autrum and Schneider, 1948), both of which were reported to have thresholds of 0.02 cm/s2. Our bullfrogs were responding clearly to levels 12 dB below that. Subsequently, we were able to follow the responses of bullfrog saccular axons down another order of magnitude—to a level 32 dB below the snake/cockroach threshold (see Figure 7 of Lewis et al 2001). Autrum had always been a hero of mine, and I had lectured frequently on his work— which brought physics and biology together. It was satisfying to be following in his footsteps. Of course the Lewis Lab still had other inner-ear senses to pursue; and we would do that. But the bullfrog sacculus and the seismic sense remained a source of puzzles and amazement for all of us. We remained the best-equipped lab for these sorts of (eighth-nerve) physiological studies on the frog seismic sense. Richard Lewis and Jim Hudspeth were about to provide new impetus for those studies, and Peter Narins was about to open a new door for us-- into the world of behavioral research in the field. Although there was subsequent, derivative work on seismic sensitivity in other frog species, our 1982 paper (Koyama et al) and our subsequent work on the exquisite seismic sensitivity of the bullfrog stand as the pioneering landmarks on this aspect of the physiology of the frog seismic sense.
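
The decibel comparisons above follow from the standard amplitude-ratio formula; a small check (ordinary dB arithmetic, not new data):

```python
# 20*log10 of the acceleration-amplitude ratio reproduces the 12-dB and 32-dB figures.
import math

def db_re(a, ref):
    """Level of acceleration amplitude a, in dB relative to ref."""
    return 20 * math.log10(a / ref)

snake_roach = 0.02                      # cm/s^2, reported snake/cockroach thresholds
bullfrog    = 0.005                     # cm/s^2, clear bullfrog saccular responses
print(f"{db_re(bullfrog, snake_roach):.0f} dB")          # about -12 dB
print(f"{snake_roach * 10 ** (-32 / 20):.5f} cm/s^2")    # the level 32 dB below: ~0.0005
```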

After we had presented our dye-tracing studies at several national and international meetings, it became clear that we had been in competition with a group at Harvard (attempting to use HRP to map cochlear axons) and a group at UCLA (attempting to use HRP to map vestibular axons). We were the first to succeed, but fortunate to be so. Our studies showed that acoustic sensing is distributed over the entire bullfrog sacculus, as we expected; but they also showed many other things, and they gave us several firsts. We were the first, for example, to show definitively that the bullfrog amphibian papilla has a tonotopic organization-- analogous to that of the mammalian cochlea. We must acknowledge the crucial participation of Bob Capranica and Andrea Megela Simmons in that phase of the work. Vestibular responses can be divided into those with low-pass filter properties (tonic responses), those with band-pass properties (phasic responses), and those with some of both (phasic-tonic responses). In mammals, Fernandez and Goldberg had inferred that phasic response and tonic response each arose from different types of hair cells. With dye tracing, Richard obtained the first definitive corroboration of such inferences in any vertebrate (a frog in this case, not a mammal). After completing his dissertation, he would join Fernandez and Goldberg as a post-doc and accomplish the same thing in a mammal. He found that in the bullfrog, phasic responses arose from cells with one Lewis & Li hair-bundle type, and that it was the same type in both the utriculus and the lagena; phasic-tonic responses arose from cells with a different Lewis & Li hair-bundle type, and that type was the same in utriculus and lagena; and purely-tonic responses arose from cells with a third type of Lewis & Li hair bundle, and that type also was the same in utriculus and lagena. Richard also found that, as Cheuk Li and I had surmised, seismic (acoustic) response from the lagena arose from cells with a fourth hair-bundle type, in that case bundles with Dean Hillman's bulbs. The presence of bulbs did indeed imply acoustic sensitivity-- more grist for the reverse-engineering mill. With some considerable effort, we added another first.... we were the first to trace functionally-identified primary afferent neurons from their peripheral origins in a vertebrate inner ear to their central terminations in a vertebrate brainstem.

In an abstract prepared in April, 1980, for the Society for Neuroscience meeting the following November, we reported that the projections of vibratory afferents occupy the lateral side of the bullfrog's dorsal medullary lobe, just ventral to the projections from the amphibian papilla and just dorsal to the vestibular projections from the lagena. This first-level vibratory area in the bullfrog brainstem extends caudally approximately 2 mm from the level of the VIIIth nerve.


Bob Capranica and Lewis-Lab philosophy regarding the physiology of the ear

By now we were thoroughly involved in eighth-cranial-nerve physiology-- the study of the ear by observation of spike trains in single afferent axons of the auditory/vestibular nerve. Each of those axons carries part of a dynamic sensory image from the ear to the brainstem. And each of those axons is connected synaptically to one or more hair cells. And each hair cell is connected, through its hair bundle, to part of the mechanical milieu of the ear. Collectively, the axon, the hair cells it innervates, and the mechanical structures and microstructures linked to those hair cells are lumped by eighth-nerve physiologists and called a unit. To such a physiologist, the response of the unit to an acoustic or vestibular stimulus is represented by the axon's spikes-- the neural message that the physiologist is intercepting on its way to the brainstem.

The Lewis-Lab had an overriding physiological philosophy-- perhaps reflecting a bit of the skepticism of David Hume. It is described well, I believe, in our final report to the NIH after 26 years of core support by a single, continuing grant. Final report to NIH

Without the pioneering work of Robert R. Capranica, and without Bob's direct help, the Lewis Lab very likely would not have undertaken physiological studies of hearing, studies that occupied most of our time for more than two decades.
Very brief summary of Bob's contributions to neuroethology and hearing science


Peter Narins

In the winter and spring of 1981 I had taken sabbatical leave in residence in order to accelerate the experiments that Ellen and I were doing (on the saccule and the auditory organs). Peter Narins, newly arrived as an assistant professor at UCLA, called me in early spring to see if I would help him with SEM of some leptodactylid frogs from Puerto Rico. Having carried out an extensive comparative study of frog auditory papillae, and being happy to add to my collection, I agreed. He arrived with the animals, from which I extracted and prepared the tissue and took the scanning electron micrographs. While he was in the lab, Peter noticed our vibration isolation setup (which was pretty conspicuous) and the work we were doing on it. He told me that in the field, in Puerto Rico, he had noticed that the white-lipped frog seemed especially sensitive to substrate vibrations. I suggested he bring a few animals to Berkeley for testing. He did.

By spring, 1981, our physiological procedures for vibratory studies had become straightforward. Our success rate in penetrating and recording from saccular neurons in an animal subject was nearly 100%; our success rate for dye tracing in the sacculus was between 30 and 50%. With the white-lipped frog, there would be no dye-tracing. Every animal yielded several vibration-sensitive neurons that we were able to penetrate and record over prolonged periods. Peter did the surgery, I carried out the experimental protocol on the Lewis-Lab facility, and we used Hironori’s programs to analyze the data. What we found was even greater sensitivity than we had seen in the bullfrog. The experiments took only a few days in the laboratory. Although Peter and I continued to collaborate on field research for many years, that short effort would be the full extent of our collaboration in laboratory research.

White-lipped frog

Peter invited me to join him for 18 days in Puerto Rico in May, 1981. He was to be in charge of a field biology course there for UCLA students. While there, I intended to monitor the seismic environment of the white-lipped frog, and to see if I could estimate its behavioral thresholds for seismic stimuli. Peter had noticed that even very light, remote footfalls would silence calling males. What was the vibration amplitude required to do that? The equipment I took along was ill-suited to the environment and the task, so all I managed to accomplish was to gain familiarity with the Puerto-Rican rain forest and with the white-lipped frog and its environs. Two years later, when Peter was in charge of the course again, I returned with him for another 18 days in Puerto Rico. This time, with sensitive geophones and weatherproof, low-noise preamplifiers that I had built for them, along with a stereophonic cassette recorder and long, weather-resistant leads from the Lewis-Lab and a tripod-mounted microphone provided by Peter, I was ready for the measurements. I never made them. I spent the first two nights in the field finding white-lipped frogs in suitable locations and testing their tolerance to having geophones and a microphone placed close to them. From my experience in 1981, I knew I should find isolated male frogs. Interacting males were relatively insensitive to footfalls or other vibrations. With isolated males, on the other hand, no matter how lightly I attempted to tread, my footfalls always silenced them. After I placed the equipment, however, they would begin calling again in about fifteen or twenty minutes. On the third night, I was ready to begin the measurements, so Peter broke away from his instructional duties and joined me at the study site. Close to the first isolated male frog I placed the microphone to record its calls and a set of geophones to record the amplitude of the vibratory stimulus required to silence him. The long leads from these devices were connected to the cassette recorder and to a set of headphones. Peter was wearing the headphones when the frog began to call again. He immediately noticed that each call picked up by the microphone was accompanied by a strong vibratory signal in the geophone-- every time it called, the frog produced a thump on the ground. Those thumps, and their implications, were to become an obsession with the Lewis-Lab for the next sixteen years.

Was the white-lipped frog communicating with seismic signals? Up to that time, that mode of communication had not been reported for any terrestrial vertebrate. For the remainder of our 18 days in Puerto Rico that year, Peter and I looked for thumping by other white-lipped frogs. We found that approximately half of the calling males produced thumps. We also noted that the spectral distribution of power in the resulting seismic waves matched the spectral sensitivity of the frog’s sacculus. Furthermore, the waves had sufficient amplitude to be sensed by the frog’s saccule— up to several meters from their source—well beyond the typical spacing when these frogs are calling in a group. This circumstantial evidence suggested that the white-lipped frog uses the seismic channel for intraspecific communication; and Peter and I presented the evidence and the hypothesis in a 1985 research report in Science. Over the next fifteen years, our attempts to confirm this hypothesis were met with results that were inconsistent not only from one frog to the next, but also from one experiment to the next with individual frogs. There would be nothing more to add until we could make sense of those results.



Kathy Cortopassi and Mike Chin

Toshio Michael Chin and Kathryn Cortopassi joined the Lewis Lab as undergraduates and continued for graduate work— Mike for a master's degree, Kathy for a doctorate. While they were still undergraduates, they took over much of the responsibility for developing signal-processing software for our S-100 systems. A major goal of the Lewis Lab now was to use the linear components of stimulus-response properties of acoustic and vestibular axons to construct inferences about underlying mechanisms. Among electrical engineers, this is a standard approach to reverse engineering. We would begin by doing it in the frequency domain. This would prove to have serious limitations, especially with our seismic axons. A visit to the University of Keele laboratory of EF Evans in 1988 convinced me to take another approach, based on time-domain analysis. At that point, the Lewis Lab took a dramatic shift in direction, but that is another story. In the early 1980s, we used sinusoidal stimuli, and to generate them we used our new B&K sine-wave generator. It provided slow, continuous frequency sweeps, which were our principal stimuli. Mike and Kathy developed software to track the frequency-dependence of peak amplitude and phase shift of acoustic-axon responses to these stimuli. The resulting Bode diagrams would be sources of inference about underlying mechanisms. In the summer of 1983, Mike completed his MS degree on this subject and left Kathy as our signal-processing software expert.
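
For readers unfamiliar with the procedure, here is a minimal sketch (an illustration with assumed inputs, not the original S-100 software) of how one point on such a Bode diagram can be computed from spike times recorded during a sinusoidal stimulus, using the fundamental Fourier component of the spike train:

```python
# Sketch: one Bode-diagram point (gain and phase) from spike times.
import numpy as np

def bode_point(spike_times, stim_freq, stim_amp, duration):
    """Return (gain, phase_deg) of the spike response at stim_freq.

    spike_times : spike occurrence times (s)
    stim_freq   : stimulus frequency (Hz)
    stim_amp    : stimulus peak amplitude (e.g., cm/s^2)
    duration    : record length (s)
    """
    spike_times = np.asarray(spike_times)
    # Fundamental Fourier component of the spike train (spikes/s, peak).
    z = np.exp(-2j * np.pi * stim_freq * spike_times).sum() * 2.0 / duration
    gain = np.abs(z) / stim_amp            # (spikes/s) per stimulus unit
    phase = np.degrees(np.angle(z))        # relative to cos(2*pi*f*t)
    return gain, phase

# Hypothetical usage: repeat over a frequency sweep to assemble amplitude
# and phase curves for one unit.
```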

It would be difficult for me to overstate the impact of Mike's and Kathy's contributions. With Kathy’s help over the next few years, I would use their software to analyze the signal-processing properties of the bullfrog saccule and amphibian papilla. It was clear from the amplitude and phase parts of the Bode diagrams that in each of those sensors we were dealing with an array of very complex peripheral filters, each with high dynamic order. These filters, designed by nature through natural selection, were comparable to the best that a master electrical engineer could design for the same task— which we took to be dynamic spectral analysis, with high temporal resolution and spectral resolution made high by very steep band edges rather than by narrow pass bands. These filters and their implications became the centerpiece of my personal research for several years, and they were the centerpiece, as well, of Kathy’s doctoral dissertation.
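
The design point about steep band edges can be illustrated with modern filter-design tools; the sketch below (my own comparison, with an arbitrary passband and sampling rate, not an analysis of the frog data) contrasts a second-order resonator with an eighth-order elliptic bandpass of the same nominal bandwidth:

```python
# Sketch: high dynamic order buys steep band edges without narrowing the passband.
import numpy as np
from scipy import signal

fs = 2000.0                     # Hz, assumed sampling rate for the sketch
band = (40.0, 120.0)            # Hz, assumed passband

# 8th-order elliptic bandpass: steep edges, full-width passband.
b_hi, a_hi = signal.ellip(4, 0.5, 60, band, btype="bandpass", fs=fs)

# Second-order resonator centered on the same band.
f0 = np.sqrt(band[0] * band[1])
b_lo, a_lo = signal.iirpeak(f0, Q=f0 / (band[1] - band[0]), fs=fs)

f = np.linspace(10, 400, 1000)
for name, (b, a) in {"order-8 elliptic": (b_hi, a_hi),
                     "order-2 resonator": (b_lo, a_lo)}.items():
    _, h = signal.freqz(b, a, worN=f, fs=fs)
    edge_db = 20 * np.log10(np.abs(h[np.searchsorted(f, 2 * band[1])]))
    print(f"{name:18s}: response one octave above the band edge = {edge_db:6.1f} dB")
```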

Cycle histograms for a sensitive, spontaneously-active bullfrog saccular unit
Mean spike rate responses and noisiness in the same bullfrog saccular unit
Linear gain for the same unit plotted for six vibrational frequencies
Swept-frequency tuning curve for a bullfrog saccular unit that lacked spontaneous activity
Log-log plot of the same swept-frequency tuning curve
Distribution of best frequencies
Distribution of seismic sensitivity

About this time, Richard S. Lewis, working in Hudspeth's laboratory, discovered electric resonance in the bullfrog saccular hair cells. In an elegant study, he decomposed the resonance into a pair of ion-channel populations, linked by their respective ion flows (potassium and calcium) and the electrical capacitance of the hair cell's plasma membrane. Because electric resonance already had been implicated in tuning in auditory hair cells of the red-eared turtle, many researchers assumed that it would be doing the same thing in the bullfrog sacculus. There were two immediate problems, however. First, the electric resonant frequency often was well above the tuning range we had found in the bullfrog sacculus. Second, the dynamic order of the resonance was two and that of the saccular hair-cell tuning, as we had observed it, was much higher. Richard Lewis had done a masterful job of reverse engineering the electric resonance. The next step would be to determine what role, if any, the resonance plays in bullfrog saccular tuning. This task was left to the Lewis Lab.
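
To convey what "dynamic order two" means here, the following is a deliberately simplified two-variable sketch (assumed parameters, with one K-like conductance standing in for the real channel pair; not Richard Lewis's actual model). A small current step produces a single, damped ringing of the membrane voltage:

```python
# Sketch: a second-order (two-state-variable) resonant membrane.
import numpy as np

C  = 10e-12                 # F, membrane capacitance (assumed)
gL, EL = 1e-9, -30e-3       # S, V: lumped depolarizing leak/transducer branch (assumed)
gK, EK = 20e-9, -80e-3      # S, V: voltage-dependent K-like branch (assumed)
tau_n  = 3e-3               # s, activation time constant (assumed)
n_inf  = lambda V: 1.0 / (1.0 + np.exp(-(V + 55e-3) / 4e-3))

def step_response(I_step=20e-12, dt=1e-5, T=0.08, t_on=0.02):
    """Membrane-voltage trace (V) in response to a small current step at t_on."""
    V, n = -63.6e-3, n_inf(-63.6e-3)          # near the resting fixed point
    trace = []
    for k in range(int(T / dt)):
        I = I_step if k * dt >= t_on else 0.0
        dV = (I - gL * (V - EL) - gK * n * (V - EK)) / C
        dn = (n_inf(V) - n) / tau_n
        V, n = V + dt * dV, n + dt * dn
        trace.append(V)
    return np.array(trace)

v = step_response()
baseline, post = v[1999], v[2000:]
print(f"steady depolarization {1e3*(post[-1]-baseline):.2f} mV, "
      f"overshoot {1e3*(post.max()-post[-1]):.2f} mV (damped ringing)")
```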

Linear gain vs frequency for another saccular unit

Ned Lewis, Steve Moore, Pam Lopez and Greg Kovacs

When I returned to Berkeley from our June, 1983 Puerto Rico trip, I was anxious to find other frogs doing what the white-lipped frog did—producing thumps in synchrony with their calls. Ned had calibrated and field-tested the lab’s geophones, and had developed a much-improved system for coupling the geophones to the substrate. Now he joined me in my quest to find other thumping frog species. Those early searches produced no new thumping frog species; and subsequent searches (with Peter and with Steve Moore) also produced none.

The most obvious approach to studying putative seismic communication in the white-lipped frog was to begin by studying, as thoroughly as we were able, the animal’s other mode of acoustic communication—vocalization. This would require us to carry out play-back experiments in the field. Our goal would be to identify parameters of the airborne call that were conspicuously affected by interaction with a calling neighbor. We would quantify these, then we would determine whether or not they were altered consistently when thumps accompanied the neighbor’s calls. To help support this research, I submitted a proposal to NSF. It was enthusiastically received and funded. It provided funds for equipment, a graduate-student research assistant, and travel to Puerto Rico. Additional support was provided by the Committees on Research at UCLA and UC Berkeley. Steven Moore, one of the first students admitted to our new, joint bioengineering program with UC San Francisco, joined the Lewis Lab and took on the white-lipped frog project. Steve arrived in early summer, 1984 and immediately joined us on a three-week Puerto Rico field trip. Accompanying Peter from UCLA was Pamela Lopez. The four of us worked as a team, carrying out preliminary experiments and laying the groundwork for what would follow in the summer of 1985.

Pam and Peter with guide

As one would expect of a laboratory in a department of electrical engineering and computer sciences, the Lewis Lab was peopled by imaginative and skillful tinkerers and gadgeteers. Steve was no exception. We were located across the hall from the departmental equipment supply room and electronics parts store, and down the hall from the departmental machine shop. In preparation for our next summer in the field, which was to be approximately ten days in Puerto Rico, followed by eight days in Costa Rica, Steve got busy designing and building the instruments we would need. These included an electronic thumper, which Steve designed around the solenoid driver from an electric typewriter (a subsequent version is depicted in Figure 16, in Lewis et al., 2001). They also included a digital storage device designed by Gregory Kovacs, who was working on a master's degree in the Lewis Lab. With Greg’s device, we could digitize and record a frog call in the field, then trigger precisely-timed, analog replays of that call. Steve constructed the trigger and timing circuits we would use for that purpose. The field equipment that we constructed that year would serve us over the next fifteen years in Puerto Rico, and it would provide the central core for field work we would start soon in Southern Africa.

Pam and Steve

Our Puerto Rico field research in the summer of 1985 was devoted primarily to learning as much as we could about the vocal interactions between male white-lipped frogs. Peter and Pam focused on the effects of those interactions on spectral and amplitude parameters of the frog’s call. Steve and I focused on the effects on call timing. All of us had assumed that the thumps of the white-lipped frog were produced by the gular pouch striking the ground during each call. Pam's sharp eyes found the first confirming evidence of this. She directed our attention to a white-lipped frog calling in a shallow pool. Each call was accompanied by a train of waves on the surface of the water, and those waves were radiating outward from the gular pouch. After ten days in the Puerto-Rican rain forest, Steve and I flew to Costa Rica to carry out similar experiments on a close relative of the white-lipped frog. In the summer of 1986, we returned for a very brief visit to Puerto Rico to complete some final details on this phase of our field research.

We had found that, in response to playback calls, the male white-lipped frog adjusted most of the parameters of its own call-- including spectrum, amplitude and timing. Most of these adjustments were subtle, but one was conspicuous and ubiquitous-- at a fixed interval after the onset of a playback call, there was a time gap during which the responding frog never would initiate its own call. On our last visit to Puerto Rico, in 1996, we would discover that the nature of this gap was more complex than we had believed, and that it seemed to divide the male white-lipped frog population into two nearly equal subpopulations (Lewis et al., 2001, Fig. 13). In the meantime, we used the gap as our measure of responsiveness to seismic components in our stimuli. Over those ten years, corroboration of seismic communication in the white-lipped frog became a Lewis-Lab project. Each spring we would plan experiments for the following summer, and we would build and assemble the required equipment. Then we would travel to Puerto Rico for one or two weeks, carry out the experiments in the field, and analyze the data with a computer that we had taken with us. Peter occasionally would be able to join us, and on one occasion Jacob Christensen-Dalsgaard accompanied him.


Eva Hecht Poinar and Xiaolong Yu

Eva Poinar came to the lab as a research microscopist. By the time she retired, in 2001, she was expert at computer editing, organizing and managing large international meetings, chemical photographic processing, digital signal processing, and any number of other things. One of her first undertakings in the Lewis Lab was production of the first round of prints of the shell photographs on this website. I digitized the best of those prints and processed them further with Photoshop to produce the website images. She joined Steve Moore, Kathy Cortopassi and me on the ground in Puerto Rico, and she contributed to the data analysis that followed each field trip. Experienced in plant tissue studies, she was uncomfortable preparing our frog inner-ear tissues. I continued to do that; she prepared and maintained the various solutions and took over much of the microscopic analysis of inner-ear tissues.

I met Xiaolong Yu in Nanjing in September, 1985. In the summer of 1986 he joined the Lewis Lab as a doctoral student and immediately took upon himself the tasks of re-engineering some of our most sensitive lab-built equipment-- including the first stages in our electrode drive systems, and a microrotational stimulus system that was being used by a postdoctoral student, Steve Myers. We quickly learned how valuable our new Chinese colleague was.

The linear responsiveness of spontaneously active saccular units, contrasted with the nonlinear responsiveness of silent units, reminded me of work by Richard Stein and Andrew French in the late 1960s and early 1970s, and of some even earlier thinking by Otto Lowenstein. In spontaneously active saccular units the spontaneously generated spikes were distributed randomly in time-- much like the gamma distributions that my friends from long ago-- George Gerstein and Don Perkel-- had described. It seemed clear that the underlying noisiness was providing a dithering effect for our stimulus signal. When its amplitude was sufficiently low, that signal would produce linear modulation of the short-term mean rate of random spike production-- linear modulation of the instantaneous spike rate. Without the underlying noise, there would be no background spike rate to modulate. In that case, the stimulus signal would produce no response at all until its amplitude was sufficiently large to carry its peaks over the spike threshold. The response would be conspicuously nonlinear. In the late 1960s and early 1970s, Andrew French and Richard Stein had demonstrated dithering with basic spike-trigger models. Their model's spike output would be random, but its short-term mean spike rate would follow the analog input-- even down to levels that would have been below threshold in the absence of the noise. Dithering now is well-known to engineers in the field of digital signal processing. It also is the key ingredient in a notion that became popular in the 1990s-- stochastic resonance. If one combines the dithering concept, which is well understood, with the concept of synchrony suppression, which was presented to the hearing research community in the late 1970s and early 1980s (by Eric Javel and Donald Greenwood), then one has all of the ingredients required for stochastic resonance. For any level of stimulus signal, there will be a non-zero level of dithering noise that yields the maximum signal-to-noise ratio in the spike-rate response.
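
The dithering idea is easy to demonstrate with a bare threshold trigger. The sketch below (my own toy model with assumed numbers, not the French-Stein models or the 1989 analysis) shows a sub-threshold sinusoid producing no spikes without noise, but a clearly modulated firing rate once noise is added:

```python
# Sketch: noise "dithers" a sub-threshold sinusoid across a fixed spike threshold.
import numpy as np

rng = np.random.default_rng(0)
dt, T, f0 = 1e-4, 20.0, 20.0                 # step (s), duration (s), signal freq (Hz)
t = np.arange(0, T, dt)
sig = 0.3 * np.sin(2 * np.pi * f0 * t)       # sub-threshold (threshold = 1.0)

def spike_times(noise_rms):
    x = sig + rng.normal(0.0, noise_rms, t.size)
    crossings = (x[1:] >= 1.0) & (x[:-1] < 1.0)   # upward threshold crossings
    return t[1:][crossings]

def modulation_depth(spikes):
    """Amplitude of the firing-rate modulation at f0 (spikes/s, peak)."""
    return np.abs(np.exp(-2j * np.pi * f0 * spikes).sum()) * 2.0 / T

for rms in (0.0, 0.2, 0.4, 0.8):
    s = spike_times(rms)
    print(f"noise rms {rms:.1f}: rate {s.size/T:7.1f} sp/s, "
          f"modulation at {f0:.0f} Hz {modulation_depth(s):7.1f} sp/s")
```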

In our spontaneously active bullfrog saccular units, the amplitude of the dithering noise was not directly under our control. As we increased the stimulus amplitude, the near linearity that we observed at low levels gave way to a stereotypic pattern of increasing nonlinearity. I wondered if that pattern was inherent in real spike-trigger dynamics-- with their accommodative and refractory aspects. I asked Xiaolong to investigate this possibility by extending the French-Stein studies to a more realistic spike-trigger model-- the Hodgkin-Huxley model. His results were published in a 1989 paper in a special issue of the IEEE Transactions on Biomedical Engineering. The pattern of nonlinearity development was reproduced almost precisely, both for frog acoustic units and for frog vestibular units. In this paper, Xiaolong presents a rigorous explanation for dithering and a detailed analysis of the relationship between dithering effectiveness and the amplitude of physiological noise at the spike trigger region.

The notion of stochastic resonance is reputed to have arisen in 1981 in the field of climatology. In the 1990s it re-emerged in neurobiology and in hearing research. For the most part, this work focussed on the dithering part of the picture. My conclusion, as I followed it, was that the neurobiologists and hearing researchers doing this work had not heard of Dick Stein or Andrew French or Eric Javel or Donald Greenwood, and that they did not read IEEE transactions. The Lewis-Lab 1989 paper contained pretty compelling evidence of dithering in the bullfrog ear.

For many years I had found the studies of adaptation in hearing research troubling. It seemed clear that, as it does in vision, peripheral adaptation in auditory sensors involves memory nonlinearity. As I explained to one senior colleague in the field, such a nonlinearity need not produce conspicuous distortion in an auditory unit's response. It could be very much like the light bulb in the oscillator that made Hewlett and Packard wealthy. What we should be looking for are changes in the gain of the system that are slow in comparison with the period of a stimulus waveform. Studies in the hearing-research literature were confounded by the fact that the test signal causing the gain change and the test signal probing the sensor's gain were one and the same.

In vision science, investigators had gotten around this problem by using steady (dc) light as the stimulus for adaptation and a weaker, fluctuating (ac) light as a probe to determine the gain. The test signals and their responses were orthogonal and thus could be analyzed separately. Instead of dc, Xiaolong used acoustic noise as the orthogonal input. For the probe he used the usual acoustic sinusoid. The probe was applied continuously as the amplitude of the noise was stepped up and down-- leading to alternating levels of adaptation, through which the linear response to the probe could be followed. Ken Henry and I used the same trick in the gerbil ear. As it did in the frog, stepwise increases in the noise amplitude produced near-instantaneous increases in the response to the probe (over the first few probe-stimulus cycles after the step). This was attributable to dithering. It was followed by a gradual decline in the amplitude of the probe response-- attributable to adaptation. Researchers attempting to study "stochastic resonance" in hearing would need to separate these opposing responses.
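
A minimal sketch of that stimulus design and of lock-in-style tracking of the probe (assumed frequencies and amplitudes, not the actual protocol) is shown below; applied to a unit's response rather than to the raw stimulus, the same windowed measurement would expose the fast (dithering) and slow (adaptation) changes described above:

```python
# Sketch: continuous sinusoidal probe plus noise whose amplitude is stepped.
import numpy as np

rng = np.random.default_rng(1)
fs, f_probe, T = 5000.0, 100.0, 8.0          # sample rate (Hz), probe (Hz), duration (s)
t = np.arange(0, T, 1 / fs)

probe = 0.2 * np.sin(2 * np.pi * f_probe * t)
noise_amp = np.where((t > 2.0) & (t < 6.0), 1.0, 0.1)   # stepped noise level
stimulus = probe + noise_amp * rng.normal(size=t.size)

def probe_amplitude(x, window=0.5):
    """Amplitude of the f_probe component in successive windows (lock-in style)."""
    n = int(window * fs)
    out = []
    for k in range(0, x.size - n + 1, n):
        seg, tt = x[k:k + n], t[k:k + n]
        out.append(2 * np.abs(np.mean(seg * np.exp(-2j * np.pi * f_probe * tt))))
    return np.array(out)

# On the raw stimulus, the probe amplitude stays constant across the noise steps.
print(np.round(probe_amplitude(stimulus), 3))
```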

Among the bullfrog acoustic sensors, adaptation of this sort seemed to be limited largely to the amphibian papilla. Very little of it was found in the sacculus. Furthermore, the mechanisms of adaptation in bullfrog and gerbil seem to be very different-- but that is a tale for a later time.


Egbert deBoer, EF Evans and Pim van Dijk

Although I have known Egbert deBoer for many years, I believe he visited the Lewis Lab only once. He was on the Berkeley Campus for five days in June of 1996-- attending the sixth international conference on the mechanics of hearing (the title of the conference was Diversity in Auditory Mechanics). As one might imagine, that week was too hectic for a lab visit. Egbert's visit came several years later. I write here not about that visit or that conference, but about one of Egbert's major contributions to hearing research-- the tool we know as reverse correlation (or REVCOR). In my opinion, the power and significance of this tool are highly underrated by most hearing researchers. This was not true of Ted Evans, however, and it was a visit to his lab that convinced me that REVCOR would be the only way I could reliably and efficiently evaluate the near-linear tuning of bullfrog saccular units. I had been attempting to compare the near-linear saccular tuning curves (Bode diagrams) for vibratory inputs to those for auditory inputs. I suspected that the sacculus might be an inertial motion sensor for vibration, but not for airborne sound. This might show up in the tuning-curve differences. Trying to do this one frequency at a time, at very low stimulus levels, was tedious and inefficient. The visit to the Evans Lab took place during the third international conference on the mechanics of hearing, at Keele University in 1988.

When I returned from the conference, I asked Xiaolong and David Feld (a new graduate student who would earn a master's degree in the Lewis Lab) to construct electronic circuits that would allow us to carry out the REVCOR procedure directly with live or recorded stimulus and spike waveforms. We would use these circuits with auditory units and vibratory units in the bullfrog ear. At that time, Xiaolong was taking over the comparison of auditory and vibratory responses in the bullfrog sacculus and amphibian papilla. The circuits were completed promptly and immediately were put to heavy use-- analyzing bullfrog and red-eared turtle data from the Lewis Lab and gerbil data from the Henry Lab. As we had speculated it would, the analysis showed conspicuous differences between the near-linear sacculus tuning curves for vibratory and auditory stimuli, strongly suggesting different pathways for the stimulus signal.
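
In digital form, REVCOR reduces to spike-triggered averaging of the noise stimulus. Here is a minimal sketch (my own, standing in for the analog circuits described above), applied to a hypothetical band-pass "unit" built from a toy filter and a threshold:

```python
# Sketch: reverse correlation as a spike-triggered average of the preceding noise.
import numpy as np

def revcor(stimulus, spike_indices, fs, window=0.02):
    """Average the stimulus segment preceding each spike.

    stimulus      : noise waveform (1-D array, sampled at fs)
    spike_indices : sample indices at which spikes occurred
    window        : length of pre-spike segment, in seconds
    """
    n = int(window * fs)
    segs = [stimulus[i - n:i] for i in spike_indices if i >= n]
    return np.mean(segs, axis=0)   # REVCOR function; time-reversed, it estimates
                                   # the unit's linear impulse response

# Hypothetical usage: white noise through an assumed band-pass "unit".
rng = np.random.default_rng(2)
fs = 5000.0
noise = rng.normal(size=int(60 * fs))
kt = np.arange(0, 0.02, 1 / fs)
kernel = np.exp(-kt / 0.005) * np.sin(2 * np.pi * 150 * kt)      # toy filter
drive = np.convolve(noise, kernel)[: noise.size]
spikes = np.flatnonzero((drive[1:] > 3.0) & (drive[:-1] <= 3.0)) + 1   # threshold crossings
est = revcor(noise, spikes, fs)
print(est.shape, np.argmax(np.abs(est[::-1])))   # peak latency (samples) of the estimate
```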

Knowing that we were using REVCOR analysis on bullfrog auditory and seismic units, Pim van Dijk, then a doctoral student at Groningen, spent several weeks at the Lewis Lab carrying out measurements of tuning curves at different temperatures. He concentrated on amphibian-papillar units, but included a few saccular units in his study. In the middle of that visit, a call to Peter Narins revealed that he had a postdoc doing a very similar project. Pim agreed to hold off on publishing his results until the Narins lab was ready to publish theirs. Both papers appeared in 1989. The result of Pim's work that was most important to me was his observation that when the tuning curves shifted with changing temperature, the phase and the amplitude curves (in the Bode diagrams) shifted together. When I had used the phase curves as evidence of high dynamic order, some colleagues had argued that they reflected pure time delays that were separate from the other aspects of dynamics. Pim's results refuted that argument.
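
The logic of that refutation can be illustrated numerically: a pure delay contributes phase without touching amplitude, whereas shifting the corner frequency of a high-order filter (as a temperature change would) moves the amplitude and phase curves together. A small sketch with assumed numbers:

```python
# Sketch: temperature-like corner-frequency shifts move amplitude and phase together;
# a pure delay cannot.
import numpy as np

f = np.linspace(10, 300, 500)                  # Hz

def filt(fc, order=6):
    """Amplitude (dB) and unwrapped phase (deg) of identical 1st-order stages in cascade."""
    h = (1.0 / (1 + 1j * f / fc)) ** order
    return 20 * np.log10(np.abs(h)), np.degrees(np.unwrap(np.angle(h)))

amp_cold, ph_cold = filt(fc=60.0)              # assumed "cold" corner frequency
amp_warm, ph_warm = filt(fc=80.0)              # assumed "warm" corner frequency
delay_phase = -360.0 * f * 5e-3                # 5-ms pure delay: phase only

i = np.searchsorted(f, 100.0)
print(f"high-order filter at 100 Hz: amplitude shift {amp_warm[i]-amp_cold[i]:+.1f} dB, "
      f"phase shift {ph_warm[i]-ph_cold[i]:+.1f} deg")
print(f"pure 5-ms delay at 100 Hz: phase {delay_phase[i]:.0f} deg, amplitude 0 dB at all frequencies")
```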


David Egert

David joined the Lewis Lab during the 1988-89 academic year and took on the bullfrog sacculus as the subject for his dissertation research. One of his major goals was to reconcile the hair-cell electrical resonances observed in the Hudspeth Lab with the whole-unit tuning curves obtained in the Lewis Lab. It was generally believed (and subsequently verified in the Narins Lab) that the frequency responses of hair-cell electrical resonances are strongly dependent on temperature. If that were so, then the temperature dependence of bullfrog saccular tuning curves should bear strong implications regarding the role of the electrical resonances in that tuning. When Pim left, David took over his setup and began some critical experiments on the sacculus. The lack of temperature dependence that he found in saccular units, along with the strong temperature dependence of electrical resonance in bullfrog AP hair cells (observed by Mike Smotherman in the Narins Lab), cast serious doubt on any role at all for those resonances or their elements in saccular tuning. David's results were published in a Hearing Research paper in 1995.

THERE WILL BE MORE TO FOLLOW.



A seismic unit from the bullfrog lagena

First-order Wiener kernel from a bullfrog lagena seismic unit
Second-order Wiener kernel for the same unit
Excitatory subkernel, from decomposition of second-order Wiener kernel.
Inhibitory subkernel, from decomposition of second-order Wiener kernel.
Spectrotemporal receptive field, derived from second-order Wiener kernel.

Seismic sensitivity in an auditory unit from the bullfrog amphibian papilla

Second-order Wiener kernel
Excitatory and inhibitory subkernels
Spectrotemporal receptive field

Last updated 12/12/07