Using Only 'Brain Recordings' From Patients, Scientists Reconstruct a Pink Floyd Song

The famous Pink Floyd lyrics emerge from sound that is muddy, yet musical:

"All in all, it was just a brick in the wall."

But this particular recording didn't come from the 1979 album "The Wall," or from a Pink Floyd concert.

Instead, researchers reconstructed it from the recorded brainwaves of people listening to the song "Another Brick in the Wall, Part 1."

This is the first time researchers have reconstructed a recognizable song solely from brain recordings, according to a new report published Aug. 15 in the journal PLOS Biology.

Ultimately, the research team hopes their findings will lead to more natural-sounding speech from brain-machine interfaces that aid communication for people who are "locked in" by paralysis and unable to talk.

"Right now, when we do just words, it's robotic,"said senior researcher Dr. Robert Knight, a professor of psychology and neuroscience with the University of California, Berkeley.

Consider the computer speech associated with one of the world's most famous locked-in patients, Stephen Hawking.

Human speech is made up of words, but it also has a musicality to it, Knight said, with people adding different meanings and emotions based on musical concepts like intonation and rhythm.

"Music is universal. It probably existed in cultures before language,"Knight said. "We'd like to fuse that musical extraction signal with the word extraction signal, to make a more human interface."

Electrodes implanted on patients' brains captured the electrical activity of brain regions known to process attributes of music -- tone, rhythm, harmony and words -- as researchers played a three-minute clip from the song.

These recordings were gathered from 29 patients in 2012 and 2013. All of the patients suffered from epilepsy, and surgeons implanted the electrodes to help determine the precise brain region causing their seizures, Knight said.

"While they're in the hospital waiting to have three seizures [to pinpoint the location of the seizures], we can do experiments like these if the patients agree,"Knight explained.

Starting in 2017, the researchers began feeding those recorded brainwaves into a computer programmed to analyze the data.

Eventually, the algorithm became smart enough to decode the brain activity into a reproduction of the Pink Floyd song that the patients heard years earlier.
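The article doesn't describe the researchers' actual models, but the general idea, learning a mapping from electrode activity to the audio's spectrogram and then predicting that spectrogram from brain recordings alone, can be sketched in a few lines of Python. Everything below is a simplified illustration with made-up data: the 64-electrode figure comes from the article, while the ridge-regression decoder and the synthetic signals are assumptions, not the study's actual pipeline.

    # Illustrative sketch only -- not the study's code. It shows the general idea:
    # learn a mapping from electrode activity to an audio spectrogram, then
    # "reconstruct" that spectrogram from held-out brain recordings.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_electrodes, n_freq_bins = 2000, 64, 32   # 64 electrodes, as in the article

    # Synthetic stand-in for high-frequency brain activity (time steps x electrodes).
    brain_activity = rng.normal(size=(n_samples, n_electrodes))

    # Synthetic "spectrogram" of the song, made to depend (noisily) on that activity.
    hidden_mapping = rng.normal(size=(n_electrodes, n_freq_bins))
    spectrogram = brain_activity @ hidden_mapping + 0.5 * rng.normal(size=(n_samples, n_freq_bins))

    # Train a regularized linear decoder: brain activity in, spectrogram out.
    X_train, X_test, y_train, y_test = train_test_split(brain_activity, spectrogram, random_state=0)
    decoder = Ridge(alpha=1.0).fit(X_train, y_train)

    # Reconstruct the spectrogram from held-out brain activity and score the match.
    reconstructed = decoder.predict(X_test)
    corr = np.mean([np.corrcoef(reconstructed[:, k], y_test[:, k])[0, 1] for k in range(n_freq_bins)])
    print(f"Mean correlation between reconstructed and actual spectrogram bins: {corr:.2f}")

In a real pipeline, the reconstructed spectrogram would still need to be converted back into an audio waveform before anyone could listen to the result.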

"This study represents a step forward in the understanding of the neuroanatomy of music perception,"said Dr. Alexander Pantelyat, a movement disorders neurologist, violinist and director of the Johns Hopkins Center for Music and Medicine. Pantelyat was not involved in the research.

"The accuracy of sound detection needs to be improved going forward and it is not clear whether these findings will be directly applicable to decoding the prosodic elements of speech -- tone, inflection, mood,"Pantelyat said.

"However, these early findings do hold promise for improving the quality of signal detection for brain-computer interfaces by targeting the superior temporal gyrus,"Pantelyat added. "This offers hope for patients who have communication challenges due to various neurological diseases such as ALS [amyotrophic lateral sclerosis] or traumatic brain injury."

In fact, the results showed that the auditory regions of the brain might prove a better target in terms of reproducing speech, said lead researcher Ludovic Bellier, a postdoctoral fellow with the Helen Wills Neuroscience Institute at UC Berkeley.

Many earlier efforts at reproducing speech from brain waves have focused on the motor cortex, the part of the brain that generates the movements of mouth and vocal cords used to create the acoustics of speech, Bellier said.

"Right now, the technology is more like a keyboard for the mind," Bellier said in a news release. "You can't read your thoughts from a keyboard. You need to push the buttons. And it makes kind of a robotic voice; for sure there's less of what I call expressive freedom."

Bellier himself has been a musician since childhood, at one point even performing in a heavy metal band.

Using the brain recordings, Bellier and his colleagues were also able to pinpoint new areas of the brain involved in detecting rhythm. In addition, different areas of the auditory region responded to different sounds, such as synthesizer notes versus sustained vocals.

The investigators confirmed that the right side of the brain is more attuned to music than the left side, Knight said.

At this point, technology is not advanced enough for people to be able to reproduce this quality of speech using EEG readings taken from the scalp, Knight said. Electrode implants are required, which means invasive surgery.

"The signal that we're recording is called high-frequency activity, and it's very robust on the cortex, about 10 microvolts,"Knight said. "But there's a 10-fold drop by the time it gets the scalp, which means it's one microvolt, which is in the noise level of just scalp muscle activity."

Better electrodes are also needed to really allow for quality speech reproduction, Knight added. He noted that the electrodes used were 5 millimeters apart, and much better signals can be obtained if they're 1.5 millimeters apart.

"What we really need are higher density grids, because for any machine learning approach it's the amount of data you put in over what time,"Knight said. "We were restricted to 64 data points over 3 minutes. If we had 6,000 over 6 minutes, the song quality would be, I think, incredible."

Knight said his team just got a grant to research patients who have Broca's aphasia, a type of brain disorder that interferes with the ability to speak.

"These patients can't speak, but they can sing,"Knight said. What was learned in this study could help the team better understand why people with these injuries can sing what they can't say.

More information

The Cleveland Clinic has more about locked-in syndrome.

SOURCES: Robert Knight, MD, professor, psychology and neuroscience, University of California, Berkeley; Alexander Pantelyat, MD, movement disorders neurologist, violinist and director, Johns Hopkins Center for Music and Medicine, Baltimore; Ludovic Bellier, PhD, postdoctoral fellow, Helen Wills Neuroscience Institute, University of California, Berkeley; PLOS Biology, Aug. 15, 2023

HealthDay
Copyright © 2024 HealthDay All Rights Reserved.