Author | Lindig-León, Cecilia | dc.contributor.author |
Author | Rimbert, Sébastien | dc.contributor.author |
Author | Bougrain, Laurent | dc.contributor.author |
Date of accession | 2021-06-10T10:02:04Z | dc.date.accessioned |
Available in OPARU since | 2021-06-10T10:02:04Z | dc.date.available |
Date of first publication | 2020-11-19 | dc.date.issued |
Abstract | Motor imagery (MI) allows the design of self-paced brain–computer interfaces (BCIs), which can potentially afford an intuitive and continuous interaction. However, implementing non-invasive MI-based BCIs with more than three commands is still a difficult task. First, the number of MIs available for decoding different actions is limited by the need to maintain adequate spacing among the corresponding cortical sources, since electroencephalography (EEG) activity from nearby regions may add up. Second, EEG provides a rather noisy image of brain activity, which results in poor classification performance. Here, we address the limited number of identifiable motor activities by using combined MIs (i.e., MIs involving two or more body parts at the same time), and we propose two new multilabel uses of the Common Spatial Pattern (CSP) algorithm to optimize the signal-to-noise ratio, namely the MC2CMI and MC2SMI approaches. We recorded EEG signals from seven healthy subjects during an 8-class EEG experiment including the rest condition and all possible combinations of the left hand, right hand, and feet. The proposed multilabel approaches convert the original 8-class problem into a set of three binary problems to facilitate the use of the CSP algorithm. In the MC2CMI method, each binary problem groups into one class all the MIs engaging one of the three selected body parts, while all MIs not engaging that body part form the second class. In this way, for each binary problem, the CSP algorithm produces features that indicate whether the specific body part is engaged in the task. Finally, the three feature sets are merged to predict the user's intention by applying an 8-class linear discriminant analysis. The MC2SMI method is quite similar; the only difference is that none of the combined MIs are considered during the training phase, which drastically shortens the calibration time.
For all subjects, both the MC2CMI and the MC2SMI approaches reached higher accuracy than the classic pair-wise (PW) and one-vs.-all (OVA) methods. Our results show that, when brain activity is properly modulated, multilabel approaches represent a very interesting solution for increasing the number of commands, and thus for providing a better interaction. | dc.description.abstract |
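The 8-class-to-three-binary-problems decomposition described in the abstract can be sketched as follows. This is a minimal illustration, not code from the publication: the bit ordering of the body parts and the helper names are assumptions chosen for clarity.

```python
# Three body parts used in the experiment; bit i of a class index
# records whether part i is engaged (illustrative encoding).
BODY_PARTS = ("left_hand", "right_hand", "feet")

def class_to_labels(c):
    """Decompose a class index 0..7 into three binary labels,
    one per body part (0 = rest condition, 7 = all three combined)."""
    return tuple((c >> i) & 1 for i in range(3))

def labels_to_class(labels):
    """Recombine three binary predictions into the 8-class index,
    as done after merging the three CSP feature sets."""
    return sum(bit << i for i, bit in enumerate(labels))

def mc2cmi_groups(part):
    """Hypothetical MC2CMI-style grouping: for one body part, split
    the 8 classes into 'engaged' vs 'not engaged' — the two classes
    of one binary CSP problem."""
    engaged = [c for c in range(8) if class_to_labels(c)[part]]
    not_engaged = [c for c in range(8) if not class_to_labels(c)[part]]
    return engaged, not_engaged
```

Each of the three binary problems splits the eight classes into two balanced groups of four, and the three binary decisions together recover the original class, which is the recombination step the 8-class linear discriminant analysis operates on.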
Language | en | dc.language.iso |
Publisher | Universität Ulm | dc.publisher |
License | CC BY 4.0 International | dc.rights |
Link to license text | https://creativecommons.org/licenses/by/4.0/ | dc.rights.uri |
Keyword | combined motor imageries | dc.subject |
Keyword | multilabel classification | dc.subject |
Keyword | common spatial pattern (CSP) | dc.subject |
Dewey Decimal Group | DDC 000 / Computer science, information & general works | dc.subject.ddc |
Dewey Decimal Group | DDC 150 / Psychology | dc.subject.ddc |
LCSH | Brain-computer interfaces | dc.subject.lcsh |
Title | Multiclass classification based on combined motor imageries | dc.title |
Resource type | Scientific article | dc.type |
SWORD Date | 2020-12-09T19:31:04Z | dc.date.updated |
Version | publishedVersion | dc.description.version |
DOI | http://dx.doi.org/10.18725/OPARU-37997 | dc.identifier.doi |
URN | http://nbn-resolving.de/urn:nbn:de:bsz:289-oparu-38059-2 | dc.identifier.urn |
GND | Elektroencephalogramm | dc.subject.gnd |
Faculty | Fakultät für Ingenieurwissenschaften, Informatik und Psychologie | uulm.affiliationGeneral |
Institution | Interdisziplinäres Neurowissenschaftliches Forschungszentrum der Universität Ulm (NCU) | uulm.affiliationSpecific |
Peer review | yes | uulm.peerReview |
DCMI Type | Collection | uulm.typeDCMI |
Category | Publications | uulm.category |
In cooperation with | Université de Lorraine, France | uulm.cooperation |
DOI of original publication | 10.3389/fnins.2020.559858 | dc.relation1.doi |
Source - Title of source | Frontiers in Neuroscience | source.title |
Source - Publisher | Frontiers Media | source.publisher |
Source - Volume | 14 | source.volume |
Source - Year | 2020 | source.year |
Source - Article number | 559858 | source.articleNumber |
Source - ISSN | 1662-4548 | source.identifier.issn |
Source - eISSN | 1662-453X | source.identifier.eissn |
Open Access | DOAJ Gold | uulm.OA |
WoS | 000595120300001 | uulm.identifier.wos |
Bibliography | uulm | uulm.bibliographie |
Is Supplemented By | https://www.frontiersin.org/articles/10.3389/fnins.2020.559858/full#supplementary-material | dc.relation.isSupplementedBy |
Open access funding | Open access funding by Universität Ulm | uulm.OAfunding |