Interspecies Communications


Beluga Calls as Language

This recording, produced by Russian whale researcher Roman Belikov, is of three beluga whales vocalizing just a few feet from a hydrophone at the Shirshov whale research center on the Russian White Sea. At first listen, we were astounded to recognize how closely certain of these call bursts resemble the "white noise" of an internet connection. Closer analysis showed some of the inter-click intervals to be as short as 1/500th of a second. Even the calls that we humans "hear" as whistles are actually bursts of very short, frequency-modulated clicks.


Whale Communication: Who Talks & Who Listens

Page 3 of a four-part essay, from the Interspecies newsletter

This wavelet graph displays the rather evocative click pattern of a white-tipped dolphin. Wavelets display data differently than spectrograms do. Is it just an accident that this graph looks so much like a net?

Interspecies is deeply involved in working out how to test cetacean calls for their language potential. Like everything else we do, our communication research is driven by aesthetics and subjectivity, and therefore departs, sometimes radically, from the control-based (objective) techniques of cognitive science.

For just one example, most non-human language studies in cognitive science rely on captive animals. We work only with whales out on the ocean. In the past ten years, most communication research conducted by whale biologists has also been done out on the ocean with wild animals. Let's face it: animals communicate socially. They must be with others of their own kind.

To test whale communication among wild, free-swimming animals, we draw rather insistently on realtime interaction. Our whale of choice is the beluga. Locating the basis for this language — for instance, is it time-based like human language or is it frequency-based and/or pulse-based like information streaming across the internet — is an extraordinary challenge. Some believe it is holosonic, which in many respects seems a logical analog to echolocation.

To test any part of this speculation, Interspecies would need to produce an Arctic expedition to gather a new technical level of field recordings, documenting the beluga whale's entire sonic range of 5 Hz to 160 kHz. To place this wide spectrum in perspective, realize that until just the past year, most behavioral research on whale communication relied on audio data limited to the human audible spectrum of 20 Hz to 20 kHz. In other words, acoustic researchers were basing their conclusions on only about 12% of this whale's known frequency range.
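That 12% figure is simple bandwidth arithmetic; a quick sketch (in Python, our choice purely for illustration) makes it explicit:

```python
# Fraction of the beluga's known frequency range (5 Hz - 160 kHz)
# covered by recording gear limited to human hearing (20 Hz - 20 kHz).
beluga_low, beluga_high = 5, 160_000      # Hz
audible_low, audible_high = 20, 20_000    # Hz

covered = audible_high - audible_low      # 19,980 Hz of recorded bandwidth
total = beluga_high - beluga_low          # 159,995 Hz of known whale bandwidth
fraction = covered / total                # about 0.125, i.e. roughly 12%
```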

A new generation of inexpensive digital audio tools now permits independent researchers to record full-spectrum audio and then analyze that sound out on the water. Our task includes the construction of this recording system, the production of a field project to make the recordings, and the development of a multi-disciplinary network of technicians, programmers, physicists, and bio-acousticians to test the best hypotheses attempting to unravel whale communication. Ten years ago, Interspecies briefly assembled a diverse group of whale research professionals, including Finnish physicist Rauno Lauhakangas, American digital-audio programmer Mark Fisher, programmer Serge Masse, and Norwegian cognitive scientist Preben Wik, to test various theories of cetacean communication. At some point we recognized that this was the job of a university or a well-heeled research institute, and we downsized our sense of what we could actually accomplish. Interspecies' current role is to provide the fast-growing international network of researchers with full-spectrum sound recordings, graphic analysis, and an Internet-based forum for discussion.

Who Has Been Listening?

The late John Lilly devoted his life to developing a "halfway" inter-specific language composed of equal parts bottlenose dolphin whistles and human words. We now contend that discrete dolphin whistles are not actually words, but something else, perhaps something closer to music. Attempting to translate dolphin whistles into English, whether halfway or all the way, is therefore like trying to translate a Bach fugue into English. Nonetheless, Lilly remains the foremost pioneer of this endeavor. His research, his speculative writing, and his promotion of cetacean intellect have galvanized a new generation to get involved.

The Russian beluga specialist Anton Chernetsky believes that belugas vocalize in discrete phonemes, of which he has isolated twenty-four. Phonemes comprise the acoustic analogue to the letters of an alphabet: the actual sounds we make to communicate through language. But focusing on phonemes as the building blocks of cetacean communication also strongly implies that whales have evolved a language similar in structure to human language: phonemes that join together into words, and words that join into sentences. The consensus within our own network is that this idea explains only a small part of the process of cetacean communication.

Searching for a cetacean language that parallels human language disregards whale anatomy and behavior. Cetaceans vocalize through their blowholes, not their mouths. Some of the beluga whale's most common signals resemble white noise, because signals produced in exceedingly small increments of time employ a huge swath of the frequency spectrum. Other beluga calls are clearly organized around rhythmical patterns rather than sets of differentiated tones or timbres. This emphatically contradicts the concise phonemic content that humans utilize.

Canadian biologist Peter Beamish argues that communication researchers err in focusing all their attention on the signals themselves. Beamish offers a controversial theory known as Rhythm-Based Communication (RBC) that attempts to explain not only communication between cetaceans, but between all non-human species. RBC postulates that the actual sounds any animal produces are incidental to what is being communicated. Beamish writes that many species don't need to produce audible sounds because, in his own words, there is nothing "para" about the so-called paranormal, and telepathy is the norm within nature. What matters most is the rhythm, or the timing, of animal calls.

Beamish offers an example to explain his theory. If whale A makes a sound, followed by a 15-second lapse, then another sound, followed by a 45-second lapse, it will usually prompt any whale B within earshot to lift its head above the water. The periods of silence convey the information that something is happening above the water's surface. RBC might be understood as the bio-acoustic analog of Morse code. Beamish himself compares it, historically, with quantum theory, which he describes as a wildly innovative world view offered to behavioral biologists in an attempt to explain problems in established theory that arise when the problem is confronted with a dated "Newtonian" focus on signals, phonemes, and halfway languages. Like Lilly's research thirty years earlier, Beamish's approach today remains largely discredited by mainstream biologists who apparently lack the imagination (and possibly the courage) to test it. We want to test it.
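Beamish's example can be caricatured as a lookup from silence patterns to meanings, which is exactly the Morse-code analogy. A minimal sketch in Python: the (15, 45) pattern and its gloss come from Beamish's example above, but the dictionary framing and the function name are our hypothetical illustration, not his notation.

```python
# Toy model of Rhythm-Based Communication: the silences between calls,
# not the calls themselves, carry the meaning.
rbc_lexicon = {
    (15, 45): "something is happening above the surface",
}

def decode_rbc(call_times):
    """Decode a call sequence by its inter-call silences (in seconds)."""
    intervals = tuple(b - a for a, b in zip(call_times, call_times[1:]))
    return rbc_lexicon.get(intervals, "unknown pattern")
```

Calls at 0, 15, and 60 seconds produce the silence pattern (15, 45) and so decode to Beamish's "look up" message; any other timing falls through as unrecognized.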

What is the mainstream doing? Cognitive scientist Louis Herman has been studying communication and intellect among captive bottlenose dolphins for twenty-five years at Kewalo Basin in Hawaii. He focuses almost entirely on interspecies communication, relying on a subset of the Ameslan hand signals used by the deaf. Yet while Herman's captive dolphins have learned to perform many complex tasks involving the sophisticated abstract concepts of creativity, improvisation, and multiple choice, the promise of language ultimately falls short because his code fails to recognize the stunningly obvious: dolphins don't have hands, and can never participate as peers in the discussion. A more subtle complaint is leveled against Herman's persistence in basing his language studies on young captive dolphins which have never had the opportunity to learn the communication skills this species obviously uses among its own kind in the wild. Others complain that Herman's research is basically a front for the US military to train dolphins for certain tasks of warfare.

Herman's approach is best understood as a sophisticated training program, based on awarding food and affection for answering correctly. Although this long-term study precludes any real chance of a dialogue between species, the basic form of focusing on interspecies relations to learn about dolphin language has much merit. Such research relies on the anthropocentric argument that research on captive animals is legitimate as long as the distress it causes the animals increases human knowledge.

Time-Based and Frequency-Based Language

Bats and toothed whales use echolocation (or sonar) as their main tool for perceiving a dark or murky environment where vision simply does not work. Pulsed clicks echo off objects, providing the producer with a three-dimensional, kinetic image of them, even granting precise information about a prey's bone structure, which reflects differently than soft tissue. Whales echolocate to perceive their world.

The communication hypothesis we are presently exploring was co-developed by Interspecies' Jim Nollman and bio-acousticians Roman Belikov of the Russian Shirshov Institute and Liz von Muggenthaler of the North Carolina-based Fauna Institute. The basic idea is that the common "creaky door" sound produced by several dolphin and whale species is an adaptation of echolocation, which has evolved into a vehicle for social communication. The resultant language is not time-based, with information transmitted by varying phonemes over a period of time. For a quick sense of time-based language: it would take about a minute for you to recite the last two paragraphs. If human language were instead frequency-based, a single 1/100th-of-a-second burst of extremely wide-spectrum clicks might carry the same information contained in this entire essay.

The three researchers postulated that certain species of cetacean retain echoed "pictures" in memory, which they vocalize to other whales. It cannot be overstated that these echoes are not precisely "pictures", but something unique, closer to holograms displaying three-dimensional, kinetic information. Because the original echoes are inconceivably dense with information, and because the same echolocation "images" are environmentally altered by factors like current, tide, and spatial distance, the actual communication probably does not consist of simply mimicking the original sonar. One plausible idea is that many toothed whale species encode their social communication not within phonemes, but within the overlapping pulses generated by harmonic interference patterns.
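The interference pulses mentioned above are ordinary acoustics: two tones of slightly different frequency sum to a carrier at their mean frequency whose amplitude "beats" at the difference frequency. A minimal sketch in Python with NumPy (the tone frequencies and sample rate are arbitrary stand-ins; real whale harmonics are vastly denser than two tones):

```python
import numpy as np

fs = 8000                        # sample rate, Hz (arbitrary for the demo)
f1, f2 = 440.0, 444.0            # two tones only 4 Hz apart
t = np.arange(0, 1.0, 1.0 / fs)  # one second of time samples

# Sum of the two tones: what a hydrophone would record.
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trigonometric identity: the sum equals a carrier at the mean frequency,
# amplitude-modulated (beating) at the difference frequency |f1 - f2|.
carrier = np.sin(np.pi * (f1 + f2) * t)
envelope = 2 * np.cos(np.pi * (f1 - f2) * t)
beat_freq = abs(f1 - f2)         # 4 beats per second
```

With thousands of simultaneous harmonics, the beats multiply into the polyrhythmic patterns the hypothesis points to.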

The human ear hears these pulses as timbre, the unique quality that clearly distinguishes one human voice from another, even when both speakers are pronouncing the same word. But the cetacean ear not only distinguishes timbre, it can also discriminate exceedingly fast pulses that sound like a single continuous tone to the human ear. Cetaceans perceive a frequency spectrum at least ten times wider than humans can hear. Join these perceptual capabilities together, and it seems likely that cetaceans are able to distinguish individual beats and rhythms within the complex polyrhythmic patterns generated by the thousands (or tens of thousands) of interference beats that resolve as vocal timbre. What does this mean? Imagine a click that takes a whale no more than 1/100th of a second to produce, containing information analogous to an hour's worth of human conversation. Or think of it as a feature film. The hypothesis seems viable because we already know that a large percentage of the social communication between sperm whales and between beluga whales is based on clicks. In fact, the beluga recordings Interspecies.com has produced in Russia sometimes sound like radio static. When this static is slowed down electronically, it resolves as precise pulses utilizing a vast harmonic spectrum. Clearly, this sound is produced in a social context.
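The electronic "slowing down" is nothing more exotic than replaying the same samples at a lower rate. A synthetic sketch in Python with NumPy: the 1/500 s inter-click interval comes from the White Sea recordings described earlier, while the 48 kHz sample rate and the 20x factor are our assumptions for illustration.

```python
import numpy as np

fs = 48_000                 # recording sample rate, Hz (our assumption)
spacing = fs // 500         # 96 samples = one 2 ms inter-click interval
n_clicks = 100

# Unit impulses every 2 ms: at normal speed this reads as static or a tone.
samples = np.zeros(n_clicks * spacing)
samples[np.arange(n_clicks) * spacing] = 1.0

# "Slowing down" just replays the identical samples at a lower rate.
slowdown = 20               # 20x slower playback (our assumption)
playback_interval = spacing / fs * slowdown   # each 2 ms gap lasts 40 ms
```

At 40 ms apart, the clicks fall comfortably within the range where human ears resolve individual events, which is why the static resolves into precise pulses.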

The Spanish whale researcher Michel Andre studies sperm whale vocalizations in the Canary Islands. In a recent paper, he offers documentation that the individual whales in his study group usually initiate a communication by vocalizing a highly syncopated variation of their usual social clicks. Andre has demonstrated that these introductory phrases are the sperm whale's equivalent of the signature whistles other cetaceans use to identify themselves as the speaker of the moment. One of Andre's assistants is an African drum master, the only person on the team who can identify individual whales by ear alone, through their unique rhythm patterns. By a bizarre process of convergent evolution, the sperm whales' signature rhythms resemble the patterns developed in West African drum communication.

How To Communicate

Demonstrating communication among wild animals involves several difficult problems that must be solved almost simultaneously. First, we must be able to identify the participants in a dialogue as Animal A and Animal B. Then we must decipher what Animal A is articulating to Animal B. And third, we need to show that Animal B made the anticipated response to the information communicated by Animal A. These problems are exacerbated by the fact that our study animals live 95% of their lives underwater.

Another problem involves identifying individuals by their voices. Biologists have addressed this by setting up expensive hydrophone arrays that permit triangulation on a sound source. If the whales being studied happen to be orcas from the visually identifiable Puget Sound population, it's occasionally possible to match a specific call with a specific surfacing animal. Using the same triangulation with all-white, dorsal-finless beluga whales, which cannot be so easily identified by obvious visual characteristics, remains impossible. To solve the problem, we propose the development of "voice printing" technology for whales. This operates on the same principle as the now-common technology that lets a person speak a command to open a program on a computer, while the same command spoken by anybody else elicits no response. It's a trivial miracle, handled by the timbre-graphing mathematics of the Fast Fourier Transform.
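The FFT core of such a voice print can be sketched in a few lines of Python with NumPy. Everything here is a toy: a working system would window and average many frames and feed a trained classifier, and the two synthetic "voices" (same pitch, different harmonic weights) merely stand in for individual whales.

```python
import numpy as np

def spectral_signature(frame):
    """Magnitude spectrum of one call, normalized to unit energy."""
    windowed = frame * np.hanning(len(frame))     # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(windowed))      # FFT magnitude = timbre graph
    return spectrum / np.linalg.norm(spectrum)

def similarity(sig_a, sig_b):
    """Cosine similarity between two signatures: 1.0 means identical timbre."""
    return float(np.dot(sig_a, sig_b))

# Two hypothetical "voices" at the same pitch but different harmonic weights.
fs = 48_000
t = np.arange(2048) / fs
voice_a = np.sin(2 * np.pi * 400 * t) + 0.8 * np.sin(2 * np.pi * 800 * t)
voice_b = np.sin(2 * np.pi * 400 * t) + 0.1 * np.sin(2 * np.pi * 800 * t)
```

Comparing a voice's signature against itself scores 1.0, while the cross-voice score drops well below it; a threshold on that score is the crude version of "only my command opens the program".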
