That music playing in your head: a real conundrum for scientists
Researchers at EPFL can now see what happens in our brains when we hear music in our heads. The researchers hope that in time their findings will be used to help people who have lost the ability to speak.
When we listen to music, different parts of our brain process different information – such as high and low frequencies – to build our auditory perception of the sounds. It’s relatively easy to study the brain activity of someone who is listening to a song, for instance, since we have the technology to record and analyze the neural response that each sound produces as it is heard. It’s much more complicated, however, to try to understand what happens in our brain when we hear music in our heads without any auditory stimulation. As with real music, the brain’s responses have to be linked to a given sound – but when the music is only in our heads, that sound doesn’t actually exist, or at least our ears never hear it. Using a novel approach, researchers with EPFL’s Defitech Foundation Chair in Human-Machine Interface (CNBI) were able to analyze what happens in our brains when we hum in our heads.
Recording an imaginary sound
EPFL researchers, in cooperation with a team from the University of California, Berkeley, worked with a patient with epilepsy who is also an experienced pianist. First, the patient was asked to play a piece of music on an electric piano with the sound turned on. The music and the corresponding brain activity were recorded. The patient then played the same piece again, but this time with the piano’s sound turned off, imagining the music in his head instead. Once again, the brain activity and the keyboard output were recorded. The difference the second time around was that the music existed only as the patient’s mental representation – the notes themselves were inaudible. By gathering data under these two conditions, the researchers were able to match the brain activity to each sound and then compare the two recordings.
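The logic of that comparison can be sketched in a few lines of code. The snippet below is a simplified illustration, not the study’s actual pipeline: the sampling rates, window length and note-onset times are invented, and random noise stands in for the recordings. What it shows is why the synchronized keyboard output matters: it provides the onset of each note, so the neural response to every note can be cut out and lined up across the audible and silent runs.

```python
# A minimal sketch, assuming hypothetical sampling rates, window length and
# note-onset times; simulated stand-ins replace the real recordings.
import numpy as np

fs_audio, fs_neural = 44_100, 1_000   # assumed sampling rates (Hz)
rng = np.random.default_rng(0)

# Stand-ins for 10 s of keyboard audio and one electrode's neural signal,
# recorded in synchrony.
audio = rng.standard_normal(10 * fs_audio)
neural = rng.standard_normal(10 * fs_neural)

# Hypothetical note-onset times (s) taken from the keyboard's output; the
# silent imagery run yields the same onsets even though no sound plays.
note_onsets = np.array([0.5, 1.2, 2.0, 3.1, 4.4])

def window(signal, onset_s, fs, win_s=0.3):
    """Slice a short stretch of a synchronized signal after a note onset."""
    start = int(onset_s * fs)
    return signal[start:start + int(win_s * fs)]

# One neural response per note, plus the matching sound snippet in the
# perception run; repeating this for both runs gives two matched datasets
# that can be compared note by note.
neural_resp = np.stack([window(neural, t, fs_neural) for t in note_onsets])
audio_snips = np.stack([window(audio, t, fs_audio) for t in note_onsets])
print(neural_resp.shape, audio_snips.shape)   # (5, 300) (5, 13230)
```

Because the keyboard registers key presses even with its sound off, the same alignment works in the imagery condition, where no audio exists to align against.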
A totally new experiment
The experiment may seem simple, but in fact it’s truly one of a kind. “The technique used – electrocorticography – is extremely invasive. It involves placing electrodes directly on the surface of the patient’s brain, beneath the skull,” explains Stéphanie Martin, lead author of the study and a doctoral student with the CNBI. “The technique is normally used to treat people with epilepsy whose seizures cannot be controlled with medication.” That’s why the researchers worked with this patient in particular. The electrodes, in addition to being used for treatment purposes, can measure brain activity with very high spatial and temporal resolution – a necessity given how rapid neural responses are.
Experimental task design. (A) The participant played an electronic piano with the sound of the digital keyboard turned on (perception condition). (B) In the second condition, the participant played the piano with the sound turned off and instead imagined the corresponding music in his mind (imagery condition). In both conditions, the sound output of the keyboard was recorded in synchrony with the neural signals.
Possible future language-related applications
This is the first time a study has demonstrated that when we imagine music in our heads, the auditory cortex and other parts of the brain process auditory information, such as high and low frequencies, in the same way as they do when stimulated by real sound. The findings have been published in the journal Cerebral Cortex. The researchers mapped out the parts of the brain covered by the electrodes according to their function in this process and their responses to both audible and imagined sounds.

The scientists’ aim is to one day apply these findings to language, for instance to help people who have lost their ability to speak. “We are at the very early stages of this research. Language is a much more complicated system than music: linguistic information is non-universal, and it is processed by the brain in a number of stages,” explains Martin. “This recording technique is invasive, and the technology needs to be more advanced for us to be able to measure brain activity with greater accuracy.” While more research needs to be done, a first step will be to replicate these results with aphasia patients – people who have lost the ability to speak – and to determine whether the sounds they imagine can be reconstructed. The researchers hope their findings will eventually help such individuals speak again by ‘reading’ their internal speech and reproducing it vocally.
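The analysis behind this kind of claim can be made concrete with a short sketch. The snippet below is a simplified illustration with simulated data, not the published pipeline: the feature dimensions, noise level and plain least-squares model are all assumptions. It shows the core idea of an encoding model: fit a mapping from audio frequency features to neural activity while the patient hears real sound, then test whether the same mapping predicts the neural activity recorded while the sound is only imagined.

```python
# A minimal sketch of the encoding-model idea, on simulated data; the
# dimensions, noise level and least-squares fit are assumptions, not the
# study's published analysis.
import numpy as np

rng = np.random.default_rng(1)
n_times, n_freqs = 2_000, 32          # assumed sizes (time bins x freq bins)

# Simulated spectrogram features for the perception and imagery conditions.
X_perc = rng.standard_normal((n_times, n_freqs))
X_imag = rng.standard_normal((n_times, n_freqs))

# Simulate an electrode whose activity is the same weighted mix of
# frequencies in both conditions.
true_w = rng.standard_normal(n_freqs)
y_perc = X_perc @ true_w + 0.5 * rng.standard_normal(n_times)
y_imag = X_imag @ true_w + 0.5 * rng.standard_normal(n_times)

# Fit on perception, evaluate on imagery.
w, *_ = np.linalg.lstsq(X_perc, y_perc, rcond=None)
pred = X_imag @ w
r = np.corrcoef(pred, y_imag)[0, 1]
print(f"perception-trained model on imagery data: r = {r:.2f}")
```

A high correlation on the imagery data would indicate that the electrode’s region represents auditory features in a similar way whether the music is heard or merely imagined, which is the kind of overlap the study describes.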
“Neural Encoding of Auditory Features during Music Perception and Imagery,” published in Cerebral Cortex. Stephanie Martin, Christian Mikutta, Matthew K. Leonard, Dylan Hungate, Stefan Koelsch, Shihab Shamma, Edward F. Chang, José del R. Millán, Robert T. Knight, and Brian N. Pasley.
This study was carried out in cooperation with the following institutions: the University of California, Berkeley; the University Hospital of Psychiatry, Bern; Inselspital, Bern; the University of California, San Francisco; Freie Universität Berlin; the École normale supérieure, Paris; and the University of Maryland, College Park, USA.