How brains distinguish between music and speech

The auditory system uses amplitude modulation to differentiate music from speech.

Music and speech are among the most common sounds we hear, but how do we tell them apart? An international team of researchers investigated this process through a series of experiments. Their findings could help improve therapy programs that use music to treat aphasia, a language disorder that affects more than 1 in 300 Americans each year, including Wendy Williams and Bruce Willis.

Although music and speech differ in pitch, timbre, and texture, the results show that the auditory system relies on far simpler parameters to tell them apart: slower, steadier sounds are perceived as music, while faster, more irregular sounds are perceived as speech.

Scientists measure signal rates in hertz (Hz): the higher the Hz, the more cycles per second. People walk at 1.5-2 Hz, for example; the beat of Stevie Wonder’s “Superstition” is about 1.6 Hz, and Anna Karina’s “Roller Girl” is about 2 Hz. Speech, by contrast, typically runs at 4-5 Hz, two to three times faster.
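For readers who want the arithmetic made concrete, a beat rate in Hz is simply beats per minute divided by 60. A minimal Python sketch, where the tempo figures are approximations inferred from the Hz values quoted above rather than exact measurements:

```python
# Beat rate in Hz is beats per minute divided by 60.
def bpm_to_hz(bpm: float) -> float:
    return bpm / 60.0

print(bpm_to_hz(96))   # "Superstition" at ~96 BPM -> 1.6 Hz
print(bpm_to_hz(120))  # "Roller Girl" at ~120 BPM -> 2.0 Hz
print(bpm_to_hz(270))  # a 4.5 Hz speech rate is ~270 "beats" per minute
```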

Scientists already knew that the rise and fall of a song’s volume, its “amplitude modulation,” is fairly steady at 1-2 Hz, while speech’s volume fluctuates faster, at 4-5 Hz. Yet despite how ubiquitous music and speech are, it was unclear how we identify one versus the other so quickly and effortlessly.

In a PLOS Biology study, Andrew Chang and colleagues explored this question through four experiments with more than 300 participants, who listened to audio clips of synthesized noise with music-like and speech-like amplitude modulation speeds and regularity.

The clips varied only in the speed and regularity of their amplitude modulation, and participants judged whether each noise-masked clip sounded like music or speech. The pattern of their responses revealed how speed and regularity steered those judgments. The scientists liken the effect to “seeing faces in the clouds”: if a sound’s features match our mental template for music or speech, even white noise can seem like one or the other.
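The paper’s own stimulus-generation code is not reproduced here, but the Python sketch below shows one plausible way to build such clips, under assumed parameter names (`rate_hz`, `regularity`) that are illustrative rather than the study’s: white noise is multiplied by a chain of envelope cycles whose durations are jittered more as regularity decreases.

```python
import numpy as np

def am_noise(rate_hz: float, regularity: float, dur_s: float = 4.0,
             fs: int = 16000, seed: int = 0) -> np.ndarray:
    """White noise whose loudness rises and falls at roughly `rate_hz`.

    regularity=1.0 gives perfectly periodic (music-like) modulation;
    lower values jitter each cycle's duration (speech-like).
    """
    rng = np.random.default_rng(seed)
    n_target = int(dur_s * fs)
    cycles, total = [], 0
    while total < n_target:
        # Jitter each modulation cycle's duration around 1/rate_hz.
        jitter = (1.0 - regularity) * rng.uniform(-0.5, 0.5)
        cycle_len = max(1, int(fs / rate_hz * (1.0 + jitter)))
        # One raised-cosine envelope cycle: 0 -> 1 -> 0.
        phase = np.linspace(0.0, 2.0 * np.pi, cycle_len, endpoint=False)
        cycles.append(0.5 * (1.0 - np.cos(phase)))
        total += cycle_len
    envelope = np.concatenate(cycles)[:n_target]
    return envelope * rng.standard_normal(n_target)

# Slow, regular modulation reads as music; fast, irregular as speech.
music_like = am_noise(rate_hz=1.5, regularity=1.0)
speech_like = am_noise(rate_hz=4.5, regularity=0.4)
```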

The results showed that our brains use simple acoustic cues to tell music from speech. Slower, more regular sounds (<2 Hz) were perceived as music, while faster, more irregular sounds (~4 Hz) were perceived as speech.
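To illustrate just how simple such a cue is, the sketch below estimates a sound’s dominant amplitude-modulation rate from its envelope and applies the finding as a toy decision rule. The rectify-and-smooth envelope extraction, the 0.5-8 Hz search band, and the 2 Hz boundary are all assumptions for demonstration, not the paper’s analysis pipeline.

```python
import numpy as np

def dominant_am_rate(signal: np.ndarray, fs: int) -> float:
    """Estimate a sound's dominant amplitude-modulation rate in Hz."""
    # Rectify, then smooth with a ~20 ms moving average to get the envelope.
    win = max(1, int(0.02 * fs))
    envelope = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    envelope -= envelope.mean()  # remove the DC component
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(envelope.size, d=1.0 / fs)
    band = (freqs >= 0.5) & (freqs <= 8.0)  # modulation rates of interest
    return float(freqs[band][np.argmax(spectrum[band])])

def toy_label(signal: np.ndarray, fs: int = 16000) -> str:
    # The finding as a toy rule: slow AM -> music, fast AM -> speech.
    # The 2 Hz boundary is an illustrative assumption, not the paper's model.
    return "music-like" if dominant_am_rate(signal, fs) < 2.0 else "speech-like"
```

Fed the clips from the earlier sketch, `toy_label` should return “music-like” for the slow, regular clip and “speech-like” for the fast, irregular one.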

Understanding how the brain makes this distinction could help treat auditory or language disorders such as aphasia. Melodic intonation therapy, for example, trains people with aphasia to sing what they want to say, drawing on their intact musical abilities to bypass damaged speech networks. Findings like these could help refine such rehabilitation programs.

Journal reference:

  1. Andrew Chang, Xiangbin Teng, et al. The human auditory system uses amplitude modulation to distinguish music from speech. PLOS Biology (2024). DOI: 10.1371/journal.pbio.3002631.
