Inner Speech Classification using EEG Signals: A Deep Learning Approach

Bram van den Berg, Sander van Donkelaar, Maryam Alimardani

Research output: Chapter in Book / Report / Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Brain-computer interfaces (BCIs) provide a direct communication pathway between humans and computers. Three major BCI paradigms are commonly employed: motor imagery (MI), event-related potential (ERP), and steady-state visually evoked potential (SSVEP). In our study, we sought to expand this set by focusing on the 'Inner Speech' paradigm using EEG signals. Inner speech refers to the internalized process of imagining one's own 'voice'. Using a 2D Convolutional Neural Network (CNN) based on the EEGNet architecture, we classified the EEG signals of eight subjects while they internally thought about four different words. Our results showed an average word-recognition accuracy of 29.7%, which is slightly above chance. We discuss the limitations of our approach and provide suggestions for future research.
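
The abstract describes an EEGNet-based 2D CNN classifying four imagined words. As a rough illustration only, the sketch below shows how such an EEGNet-style classifier is typically built in TensorFlow/Keras; the paper's exact hyperparameters (channel count, trial length, filter numbers, dropout rate) are not stated in the abstract, so the values used here are assumptions, not the authors' configuration.

# Minimal sketch of an EEGNet-style 2D CNN for 4-class inner-speech
# classification. Channel count, sample length, and filter settings
# are placeholder assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models, constraints

def build_eegnet(n_classes=4, n_channels=128, n_samples=512,
                 f1=8, d=2, f2=16, kern_length=64, dropout=0.5):
    """EEGNet-style model: temporal conv -> depthwise spatial conv
    -> separable conv -> dense softmax."""
    inputs = layers.Input(shape=(n_channels, n_samples, 1))

    # Block 1: temporal filtering, then per-filter spatial filtering
    x = layers.Conv2D(f1, (1, kern_length), padding='same', use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.DepthwiseConv2D((n_channels, 1), depth_multiplier=d,
                               use_bias=False,
                               depthwise_constraint=constraints.max_norm(1.0))(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('elu')(x)
    x = layers.AveragePooling2D((1, 4))(x)
    x = layers.Dropout(dropout)(x)

    # Block 2: separable convolution mixes feature maps efficiently
    x = layers.SeparableConv2D(f2, (1, 16), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('elu')(x)
    x = layers.AveragePooling2D((1, 8))(x)
    x = layers.Dropout(dropout)(x)

    # Classification head: one softmax output per imagined word
    x = layers.Flatten()(x)
    outputs = layers.Dense(n_classes, activation='softmax',
                           kernel_constraint=constraints.max_norm(0.25))(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

if __name__ == '__main__':
    build_eegnet().summary()

Trained per subject on epoched EEG trials with one-hot word labels, a model of this form would be evaluated against the 25% chance level for four classes, which is the baseline the reported 29.7% accuracy is compared to.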
Original language: English
Title of host publication: Proceedings of the 2021 IEEE International Conference on Human-Machine Systems, ICHMS 2021
Editors: A. Nurnberger, G. Fortino, A. Guerrieri, D. Kaber, D. Mendonca, M. Schilling, Z. Yu
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665401708
DOIs
Publication status: Published - 8 Sept 2021
Externally published: Yes
Event: 2021 IEEE International Conference on Human-Machine Systems, ICHMS 2021 - Magdeburg, Germany
Duration: 8 Sept 2021 - 10 Sept 2021

Conference

Conference: 2021 IEEE International Conference on Human-Machine Systems, ICHMS 2021
Country/Territory: Germany
City: Magdeburg
Period: 8/09/21 - 10/09/21
