The concept of reading thoughts once belonged to the realm of science fiction, but breakthroughs in neural decoding are turning it into reality. This technology, which translates brain activity into meaningful signals, has already allowed scientists to reconstruct words and images from brainwaves. Now, researchers at the University of California, Berkeley have taken it a step further. In August 2023, they managed to recreate a Pink Floyd song purely from brain activity, a discovery published in PLOS Biology that offers new insight into how the brain processes music.
A team led by Robert Knight and Ludovic Bellier conducted the study on 29 epilepsy patients who were undergoing brain surgery at Albany Medical Center in New York. While in the operating room, the patients listened to "Another Brick in the Wall, Part 1" by Pink Floyd as electrodes placed directly on their brains recorded the electrical signals produced while they processed the song. Using artificial intelligence, Bellier later reconstructed the track entirely from these neural recordings. The final result was hauntingly distorted yet unmistakably recognizable. "It sounds a bit like they’re speaking underwater, but it’s our first shot at this," Knight told The Guardian.
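Decoding pipelines of this kind typically work by training a model to map neural activity onto the song’s spectrogram and then converting the predicted spectrogram back into a waveform. The snippet below is a minimal sketch of that idea; the file names, array shapes, and the choices of Ridge regression and mel-spectrogram inversion are assumptions for illustration, not the authors’ code.

```python
# Minimal sketch of spectrogram-based song decoding: predict the audio
# spectrogram from intracranial "high-gamma" features, then invert the
# prediction back into a waveform. All inputs are hypothetical placeholders.
import numpy as np
import librosa
import soundfile as sf
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

SR = 22050      # audio sample rate (assumed)
HOP = 512       # spectrogram hop length (assumed)

# Hypothetical, pre-aligned inputs:
#   neural: (n_frames, n_electrodes) high-gamma power, one row per audio frame
#   mel_db: (n_frames, n_mels) log-mel spectrogram of the song at the same rate
neural = np.load("high_gamma_features.npy")
mel_db = np.load("song_mel_spectrogram.npy")

# Train a linear decoding model on most of the song; hold out the rest.
X_train, X_test, y_train, y_test = train_test_split(
    neural, mel_db, test_size=0.2, shuffle=False
)
model = Ridge(alpha=1.0).fit(X_train, y_train)

# Predict spectrogram frames for the held-out segment and turn them into audio.
pred_db = model.predict(X_test)
pred_power = librosa.db_to_power(pred_db.T)              # (n_mels, n_frames)
waveform = librosa.feature.inverse.mel_to_audio(
    pred_power, sr=SR, hop_length=HOP
)
sf.write("reconstructed_clip.wav", waveform, SR)
```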
"It’s a wonderful result. It gives you the ability to decode not only the linguistic content but some of the prosodic content of speech, some of the effect. I think that’s what we’ve begun to crack the code on." — Robert Knight
This study provided fresh insights into how the brain interprets rhythm, melody, and speech. According to the university’s press release, the ability to record and decode brainwaves could help scientists better understand prosody: the elements of speech beyond the words themselves, such as rhythm, stress, and intonation. These features play a crucial role in conveying emotion and meaning. Since the team used intracranial electroencephalography (iEEG) recordings, which collect data directly from the brain’s surface, their findings offer an unprecedented look into the brain’s auditory processing centers.
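The decoding sketch above assumed pre-computed "high-gamma" features. A common way to derive such features from raw iEEG traces is to band-pass each electrode around 70–150 Hz and take the amplitude envelope of the analytic signal; the band edges, sampling rate, and array names below are illustrative assumptions rather than the study’s exact preprocessing.

```python
# Illustrative high-gamma feature extraction from raw iEEG voltage traces:
# band-pass filter each electrode, then take the Hilbert-envelope amplitude.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 1000.0               # iEEG sampling rate in Hz (assumed)
LOW, HIGH = 70.0, 150.0   # high-gamma band edges (a typical choice)

def high_gamma_envelope(ieeg: np.ndarray) -> np.ndarray:
    """ieeg: (n_samples, n_electrodes) raw voltage; returns the same-shape envelope."""
    sos = butter(4, [LOW, HIGH], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, ieeg, axis=0)     # zero-phase band-pass filter
    return np.abs(hilbert(filtered, axis=0))      # instantaneous amplitude

# Example: 10 seconds of simulated recordings from 64 electrodes.
fake_ieeg = np.random.randn(int(10 * FS), 64)
features = high_gamma_envelope(fake_ieeg)
print(features.shape)                             # (10000, 64)
```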
The potential applications of this research are profound, especially for individuals who struggle with communication due to stroke, paralysis, or neurological disorders. A technology that translates brain signals into speech or music could offer new ways for these individuals to express themselves. "Music allows us to add semantics, extraction, prosody, emotion, and rhythm to language," Knight explained in an interview with Fortune.
Music was chosen as the focus of the study because of its universal nature. Knight elaborated on this decision, saying, "It preceded language development, I think, and is cross-cultural. If I go to other countries, I don’t know what they’re saying to me in their language, but I can appreciate their music."
"Right now, the technology is more like a keyboard for the mind. You can’t read your thoughts from a keyboard. You need to push the buttons." — Ludovic Bellier
Bellier compared the current state of neural decoding to a keyboard that can produce words but still lacks the expressive freedom and nuance of natural speech. However, the study also identified brain regions that respond to rhythm, in this case the rhythm of the guitar in the song. In addition, the findings reinforced existing knowledge about the brain’s hemispheres: the left side is more involved in language, while the right side plays a bigger role in processing music and sound patterns.
As Bellier put it, "It wasn’t clear it would be the same with musical stimuli. So here, we confirm that that’s not just a speech-specific thing, but that it’s more fundamental to the auditory system and the way it processes both speech and music."
This research opens the door to future technologies that could restore communication abilities for those who have lost their voices due to illness or injury. While the current reconstructions are imperfect, they represent an exciting step toward translating brainwaves into words, melodies, and even full conversations.