Name that tune: Decoding music from the listening brain

R.S. Schaefer, J. Farquhar, M. Sadakata, P. Desain, Y. Blokland

Research output: Contribution to journal › Article › Academic › peer-review


In the current study we use electroencephalography (EEG) to detect heard music from the brain signal, hypothesizing that the temporal structure of music makes it especially suitable for decoding perception from EEG signals. Excluding music with vocals, we classified the perception of seven different musical fragments of about three seconds each, both within and across participants, using only time-domain information (the event-related potential, ERP). The best individual result is 70% correct in a seven-class problem using single trials; with multiple trials, accuracy reaches 100% after six presentations of the stimulus. When classifying across participants, a maximum rate of 53% was reached, supporting a representation of each musical fragment that generalizes over participants. While for some music stimuli the amplitude envelope correlated well with the ERP, this was not true for all stimuli. Aspects of the stimuli that may contribute to the differences between the EEG responses to the pieces of music are discussed.
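The abstract does not specify the classifier, but the reported gain from averaging multiple presentations is characteristic of template matching on averaged ERPs. A minimal sketch of that idea, with entirely synthetic data (the seven "templates", the sampling rate, and the noise level are all assumptions for illustration, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 7 musical fragments, ~3 s at an assumed 250 Hz
n_classes, n_samples = 7, 750
templates = rng.standard_normal((n_classes, n_samples))  # stand-in ERP templates

def classify(trials, templates):
    """Average the trials, then pick the template with the highest
    Pearson correlation to the averaged response."""
    avg = trials.mean(axis=0)
    scores = [np.corrcoef(avg, t)[0, 1] for t in templates]
    return int(np.argmax(scores))

# Simulate six noisy presentations of fragment 3; averaging suppresses
# the noise, which is why accuracy climbs with repeated presentations.
true_class = 3
trials = templates[true_class] + 2.0 * rng.standard_normal((6, n_samples))
predicted = classify(trials, templates)
```

Averaging n trials reduces the noise standard deviation by a factor of sqrt(n), so correlation with the correct template rises with each added presentation, consistent with the 100%-after-six-trials result reported above.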
Original language: English
Journal: New Scientist
Issue number: 2285
Publication status: Published - 2001
Externally published: Yes

