For centuries, shepherds from the small village of Aas in the French Pyrenees led their sheep and cattle up to mountain pastures for the summer months. To ease the solitude, they would communicate with each other or with the village below in a whistled form of the local Gascon dialect, transmitting and receiving information accurately over distances of up to 10 kilometres.
They “spoke” in simple phrases – “What’s the time?”, “Come and eat”, “Bring the sheep home” – but each word and syllable was articulated as in speech. Outsiders often mistook the whistling for simple signalling (“I’m over here!”), and the irony, says linguist and bioacoustician Julien Meyer of Grenoble Alpes University in France, is that the world of academia only realised its oversight around the middle of the 20th century, just as the whistled language of Aas was dying on the lips of its last speakers.
Some 80 whistled languages have been reported around the world to date, of which roughly half have been recorded or studied, and Meyer says there are likely to be others that are either extant but unrecorded or that went extinct before any outsider logged them. As he explained in a recent review, they exist on every inhabited continent, usually where traditional rural lifestyles persist, and in places where the terrain makes long-distance communication both difficult and necessary – high mountains, for example, or dense forest.
Meyer thinks that those interested in language evolution should pay more attention to whistled languages, since they might provide a glimpse of how our ancestors communicated before they had fully evolved into humans.
Researchers have long debated the origins of human language. One prominent theory, first proposed by Charles Darwin, holds that speech evolved from a musical protolanguage, but there are others – for example, that communication was by gesture before it was vocalised. According to a third, “multimodal” approach, gestural and vocal forms of communication evolved in tandem, having different but complementary functions. Vocalisations might have had a coordinating role in social interactions, for example, whereas gesture might have been more referential – for pointing out features of the environment.
Those who support the theory of a musical protolanguage tend to argue that, as hominin brains expanded and they gained control of their vocal cords, calls that were once involuntary expressions of emotion were harnessed into song that conveyed meaning and gradually acquired combinatorial power, or syntax (though some argue that syntax preceded meaning). Meyer suggests that there may have been a stage of intentional vocalisation before song: “It’s possible that whistling control developed before control of the vocal cords,” he says.
He is careful to stress that today’s whistled languages should not be viewed as time capsules of any ancient protolanguage, because they could not have existed before spoken language. In essence, what they do is extract a pared-down version of speech that can be transmitted in a reduced acoustic channel while still preserving the essential information. It’s precisely that parasitic aspect of whistled languages that makes many researchers sceptical about the idea of a whistled protolanguage. As language evolution expert Przemysław Żywiczyński of Nicolaus Copernicus University in Toruń, Poland, points out, sign languages have emerged spontaneously in communities without speech – as in the case of Nicaraguan Sign Language, which developed in schools for deaf children in the 1980s – but there are no reports of the spontaneous emergence of a whistled language in such a community.
But Meyer argues that whistled languages illustrate a more fundamental principle: “Whistles are complex enough to transmit the essential aspects of languages, confirming that vocal [cords] are not compulsory for an acoustic use of the language faculty.” So in his view it’s possible that some form of whistled language – even if different from those in use today – might have preceded spoken language. And, he says, several strands of evidence now support that claim.
A 2018 study by Michel Belyk of the Bloorview Research Institute in Toronto, Canada, and colleagues showed, for example, that people are better at imitating simple melodies when they whistle them than when they sing them. And among nonhuman primates, monkeys and apes have volitional control of their lips and tongues, but monkeys lack volitional control of their vocal cords, while some apes have a degree of vocal control.
If Meyer is right, the current diversity of whistled languages – although we may only be seeing a snapshot of it – could reflect how language evolved since that protolanguage.
Human languages are either tonal or non-tonal. In tonal languages, such as Mandarin, a word’s meaning depends on its pitch relative to the rest of the sentence. The vocal cords generate the pitch, or melody, which the oral articulators – including the lips and tongue – then mould into vowels and consonants, or phonemes. Since whistling engages only the oral articulators, not the vocal cords, whistlers of a tonal language must choose which to transmit – the melody or the phonemes – and it turns out that they always choose the melody. Whistlers of non-tonal languages – which include English and most other European languages – face no such choice, because pitch doesn’t affect meaning, so they whistle only the phonemes. There is also a musical form of whistling, found with both language types, in which the whistle follows a song’s lyrics by transposing their pitch. Whistled languages today thus comprise three types: non-tonal, tonal-melodic and musical.
“These different categories result from the complexification that came with increasing control of the vocal cords,” Meyer says. “But they may once have been combined into a single, rather formulaic protolanguage based on whistles.”
In a new paper, Meyer – along with two New York City-based researchers, biophysicist Marcelo Magnasco of Rockefeller University and dolphin expert Diana Reiss of Hunter College – explains why they think human whistled languages could also help us understand whistled communication in other species – notably dolphins.
Dolphins are intelligent, sociable animals that vocalise in a variety of ways, including whistling. In fact, says behavioural biologist Stephanie King of the University of Bristol, whistles make up a large part of their vocal repertoire and are used particularly in the context of social interaction.
One form of dolphin whistle that has been well studied is the signature whistle, which an individual develops in its first year of life and which seems to encode its identity – like a name. However, says King, there are many other types of dolphin whistle that are much less well understood. “Whistles are really important and there’s still a lot we don’t know about how they’re used and what information they contain,” she says.
Dolphins produce whistles by a different mechanism from humans, involving structures in their heads called phonic lips. These perform a function similar to that of the human vocal cords, so you could think of dolphin whistling as closer to human speech than to human whistling. Nevertheless, says Reiss, who has studied dolphin communication for years, parallels between the two could help researchers decode what the dolphins are “saying” when they whistle.
The first step, she says, is to identify the smallest meaningful unit of sound in a dolphin whistle, and then ask how these units are organised in sequences, if they are combined and recombined, and how any possible combinations function in a social context. “We are not suggesting that dolphin whistles encode a linguistic form of communication but rather that complex information can be encoded in what appear to be simple calls,” she says.
When Meyer analyses sound spectrograms of human whistled sentences, he can pick out individual words and syllables even though these are not separated by silences, and despite the fact that they can influence each other depending on their relative positions in a sentence. Reiss says something similar may be true of dolphin whistles, but researchers wouldn’t know it because they parse or segment those whistles according to the silent intervals between them. In other words, just as researchers once overlooked the complexity of human whistled communication, she suggests they might now be making the same error with dolphin whistles.
Reiss recognises, as does King, that testing such ideas in the wild will be challenging. It’s possible to play back recorded dolphin whistles to wild dolphins and see how they respond, but only in the last few years – with the availability of technologies such as drones and suction-cup acoustic tags – has it become possible to quantify those responses. “We may never be able to truly understand or decode their intended or perceived messages,” says Reiss. Nevertheless, she feels it’s important to hone the theory and the predictions that arise from it, ahead of a time when technology might make testing them easier.
The insights gained from dolphins could, in turn, inform the debate over the evolution of human language. Reiss and her co-authors argue that, though human language is unique in its complexity, components of it might have arisen in other species in response to similar evolutionary constraints. Songbirds have been popular animal models in the search for such convergences, in part because birds are relatively easy to study under controlled conditions. But along with nonhuman primates, cetaceans – mainly whales and dolphins – represent useful alternative models, King says, and ones that might answer different questions.
It might not be a coincidence, for example, that in two social animals – humans and dolphins – whistled communication arose in the context of subsistence activities performed collectively over large distances. And that observation might eventually help to explain why our closest primate relatives never developed speech even though their vocal tracts are speech-ready. Could the answer lie in their social organisation?
All human whistled languages are endangered, Meyer reports, and most are likely to disappear within two generations. There are attempts afoot to revive some of them, for example in the Ossau Valley where Aas is located, but these may not succeed in bucking the broader trend. The languages’ vitality depends on that of the traditional rural practices with which they are associated, and those practices are also disappearing, as roads, mobile phone masts and noise pollution penetrate once secluded valleys, and young people move out to the cities.
Then again, whistled languages have come into their own in surprising ways in the past. They have often flourished when there has been a need for secrecy – in Papua New Guinea during the second world war, for example, when whistlers of the Wam language were recruited to transmit military messages via the radio to evade Japanese surveillance – or when they have proved useful in countering some new threat. With the return of bears to the Pyrenees, and the Covid-19 pandemic pushing people back out of the cities, the whistled language of Aas might just be due for a renaissance.