Over 35 years ago, US biologist Dr Denise Herzing dreamed of being able to communicate with dolphins. Inspired by the work of researchers like Jane Goodall, who would ‘plant’ themselves amid wild animals and observe for years, if not decades, Herzing took that approach with dolphins. Her hope was to finally figure out what dolphins were thinking.
In 1985, Herzing set up dolphin research group The Wild Dolphin Project in the Bahamas, where dolphins are known to gather and the water is consistently clear enough to allow observation.
“I wanted to know what their society was like, what the individuals were like, and how they communicated,” Herzing says. She has gathered data and observations on a group of about 100 Atlantic spotted dolphins that live in the area.
“I’m always amazed at the parallels between humans and dolphins,” Herzing says. She notes that dolphins live in multi-generational groups, they play and fight and have preferred circles of friends. Some female dolphins never give birth and instead act as nurses or aunts to other calves.
“Many of us think [dolphins] have languages because their living situation drives intelligence, and they have to know who’s who,” Herzing says.
There are multiple, interconnected challenges in decoding dolphin communications. But new tools that use enormous computing power and AI are coming online that may help the process take the next step to understanding dolphins.
Until a few years ago, the only hope of decoding dolphin noise and eventually understanding their language was to painstakingly match recordings of dolphin vocalisations with observations of their movements. But even in the best possible circumstances, this is exceedingly difficult.
Herzing says there are three classes of vocalisations that dolphins use to communicate: whistles, good for long-range communications over a few kilometres; clicks, used for echolocation and hunting; and burst-pulses, which are tightly arranged sets of clicks used in social settings.
“Researchers have studied the [dolphin] whistle ad nauseam,” says Herzing. “We can look at whistles on a spectrogram, just like a sheet of music.”
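Herzing’s sheet-music analogy can be made concrete: a spectrogram is simply the signal’s energy plotted per frequency band per short window of time. As a purely illustrative sketch (not any researcher’s actual pipeline), a naive short-time DFT in plain Python looks like this:

```python
import math

def spectrogram(signal, sample_rate, window=256, hop=128):
    """Naive short-time DFT magnitude spectrogram (standard library only)."""
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        chunk = signal[start:start + window]
        mags = []
        for k in range(window // 2):  # frequency bins up to the Nyquist limit
            re = sum(chunk[n] * math.cos(2 * math.pi * k * n / window)
                     for n in range(window))
            im = -sum(chunk[n] * math.sin(2 * math.pi * k * n / window)
                      for n in range(window))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames  # frames[t][k]: energy at time step t, frequency k*sample_rate/window

# A 2kHz test tone sampled at 8kHz should peak in the bin corresponding to 2,000Hz.
rate = 8000
tone = [math.sin(2 * math.pi * 2000 * n / rate) for n in range(1024)]
spec = spectrogram(tone, rate)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin * rate / 256)  # → 2000.0
```

In practice researchers use optimised FFT libraries and far higher sample rates, since dolphin whistles reach well beyond the range of human hearing, but the principle is the same: a whistle traces a contour across the time-frequency plane, much like a melodic line on a stave.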
There are unique, ‘signature’ whistles known to be dolphins’ names, as individual dolphins use them to identify themselves or call another dolphin. “That has all been known since the 1960s. Anyone who studies dolphins has a catalogue of the individuals,” Herzing adds.
Researchers in Hawaii, led by Dr Louis Herman through the 1970s, ’80s and ’90s, discovered that dolphins could appreciate things like word order, knowing the difference between “bring the ball to the surfboard” and “bring the surfboard to the ball”.
Now, thanks to Covid restrictions on large gatherings, captive dolphins that would normally perform shows for visitors are instead being studied more intensively by scientists.
At Ocean Park in Hong Kong, Eszter Matrai, a zoologist and dolphin researcher, is leading a nascent research project to study dolphin behaviour. “When they engage in cooperative play, there is a pattern of communication and vocalisation. But we don’t know what it means,” says Matrai. “They could be talking about anything, but they definitely communicate.”
Matrai cites one study in which researchers taught two dolphins to push buttons simultaneously, then put the two buttons out of sight of each other so they could capture the dolphins communicating “push it now”. But even in such tightly controlled circumstances, the challenge is daunting. Matrai says it may take her as much as a week to analyse just 15 minutes of dolphin footage.
Artificial intelligence, enormous computing power and a lot of data offer some hope of a breakthrough, and Herzing says she’s been waiting for these new tools for about 30 years. “We have two projects in parallel: working on data sets and working on a computer-based two-way talking system. It’s a match made in heaven.”
Herzing and her team originally tried to communicate with their dolphins using a rudimentary underwater keyboard. Working with Herzing and her team since 2013, computer scientists at the Georgia Institute of Technology have been developing CHAT (Cetacean Hearing and Telemetry). This underwater computer can ‘translate’ a set of pre-programmed dolphin whistles that seem to match up with various objects. The device consists of hydrophones and underwater speakers hooked up to a computer and a keypad worn by the diver. The wearer hears a voice that translates the whistle either heard or projected by the CHAT system.
But this system is still just focused on a few names for items, and Herzing says that even after four years of using the system, it was not clear that the dolphins understood what the divers in her team were trying to do. It does not yet correspond to real communication. If anything, it created a new mystery.
Dolphins are incredibly good at mimicking sounds. Herzing and her team found that, while using the CHAT tool, dolphins were mimicking whistles they heard. But, instead of generating normal whistles, the dolphins were using a series of complex clicks to create a whistle noise with the same sound profile as what they were hearing.
Herzing speculates that the dolphins could be trying to “teach” the researchers how they wish to communicate. Or, they may just be having fun. Given the range of personalities Herzing has encountered – some dolphins love working with researchers, others shy away – it’s hard to say. Getting enough time with an individual dolphin can be difficult.
Naming objects and developing a shared vocabulary is just one piece of a very large puzzle to solve. Another big missing piece of that puzzle is figuring out who’s saying what to whom and when. Imagine trying to decipher an unknown human language and work out its structure if you didn’t know who was participating in the dialogue.
Dealing with dolphins communicating underwater is harder still. Human hearing in the water is badly impeded. Sound travels about four times faster in water than in air, and human ears cannot even discern where an underwater sound is coming from.
“With 50 dolphins around, you’re in an acoustic soup,” says Dr Matthias Hoffmann-Kuhnt, a specialist in cetacean behaviour and acoustics at the National University of Singapore. “It’s a cacophony around you, and you don’t know who’s saying what.”
Dolphins can also make and hear noise at a far greater range than humans. Human hearing (measured in hertz) runs from about 20 or 30 hertz to 20kHz. In dolphins, it is 10 times that range – from 150 hertz to 150kHz. Moreover, about 20 per cent of a dolphin’s brain is used for hearing – about 10 times that of a human. Therefore, understanding dolphins’ communication requires decoding sounds we may not know and dealing with a level of sophistication that could outstrip our own.
Hoffmann-Kuhnt started work in 2006 on a device known as an acoustic source position overlay device (ASPOD), which records noises underwater and identifies where the sounds are coming from and who made them. He initially developed his machine while working with humpback whales, which are much easier to identify acoustically because they are big, there are fewer of them, and they tend to communicate in a much narrower bandwidth of frequency.
Hoffmann-Kuhnt and his team were trying to solve the mystery of why humpback whales would only signal with their heads facing down at 25 metres’ depth. He paired a basic GoPro camera with three hydrophones (underwater microphones) and a computer to record audio in sync with video. The idea was to use the separation of the hydrophones to triangulate where a sound was coming from and pair it with the footage on screen. In theory, this would identify which creatures were making which sounds. It was a basic setup, but it showed promise.
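The core idea behind that triangulation – that a sound reaches spaced hydrophones at slightly different times, and those delays reveal its direction – can be sketched in a simplified two-dimensional, far-field form. The hydrophone layout, spacing and plane-wave assumption below are illustrative choices, not the device’s actual algorithm:

```python
import math

SPEED = 1500.0    # approximate speed of sound in seawater, m/s
SPACING = 0.5     # hydrophone spacing in metres (illustrative)

def bearing_from_delays(d1, d2, spacing=SPACING, c=SPEED):
    """Far-field bearing (radians) of a sound source, given arrival-time
    differences from an 'L'-shaped array: mic0 at (0,0), mic1 at
    (spacing,0), mic2 at (0,spacing).
    d1 = t(mic1) - t(mic0); d2 = t(mic2) - t(mic0)."""
    ux = d1 * c / spacing            # components of the propagation direction
    uy = d2 * c / spacing
    return math.atan2(-uy, -ux)      # the source lies opposite the propagation direction

def simulate_delays(angle, spacing=SPACING, c=SPEED):
    """Delays produced by a plane wave arriving from bearing `angle`."""
    ux, uy = -math.cos(angle), -math.sin(angle)   # propagation direction
    return (ux * spacing / c, uy * spacing / c)

# A source at a 30-degree bearing should be recovered from its delays alone.
d1, d2 = simulate_delays(math.radians(30))
print(round(math.degrees(bearing_from_delays(d1, d2)), 1))  # → 30.0
```

A real system must also cope with three-dimensional geometry, reverberation and many overlapping calls – the “acoustic soup” described above – which is far harder than this clean, single-source case.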
From that first experiment, Hoffmann-Kuhnt decided to move on to the more challenging work of dolphin communication. So, a few years ago, he began working with Herzing and her team to refine his device. With some additional support, including funding from The Geographic Society, he has improved it to such an extent that scientists can now review videos of dolphins with the makers of sounds identified on screen.
“Now, it’s at the stage where it’s a good system with its own housing,” says Hoffmann-Kuhnt. “Originally, it just sort of hung together. It worked, but it wasn’t something that other people could actually use.”
In May this year, he hopes to leave one of his devices with Herzing for her and her team to use over a season, gathering a much richer set of observational data. The computer on the ASPOD identifies where whistles and clicks are coming from on a screen, marking which dolphins are likely to have voiced the noise. When the data is processed, viewers of the footage can see little red or yellow marks to identify which dolphin a whistle or click is coming from.
Hoffmann-Kuhnt says that Herzing’s data sets from the past 37 years are valuable, but his device cannot work with previous years’ footage. A new data set needs to be built from scratch, this time with dolphins being identified when they ‘speak’.
Hoffmann-Kuhnt hopes that his device can help decode dolphins and other cetaceans around the world. For her part, Herzing hopes that the tools she’s developing may one day help researchers studying dolphins in other locations, even in Hong Kong.
Dr Lindsay Porter, who has studied Hong Kong’s local finless porpoises and Chinese white dolphins for over 20 years, has only just started to try pairing visual observations of dolphins with acoustic recordings, thanks to some funding from the Hong Kong government in 2021. Her work has shown that local dolphins and porpoises are far more active than previously thought, especially at night.
But studying cetacean behaviour in Hong Kong is difficult, thanks to the murkiness of the water. It may be that Chinese white dolphins, which must communicate and navigate without seeing, are communicating at higher frequencies than the oceanic dolphins, in a language or dialect all their own. Porter says it may be that cetaceans have developed the equivalent of human accents to help them deal with their local environments.
Research into dolphin communication is still in its earliest stages, but the first real two-way exchange between dolphins and humans may finally be within reach. Humans may one day have to work out how to say “You’re welcome for all the fish” in dolphin.