
Author: Lindig-León, Cecilia
Author: Rimbert, Sébastien
Author: Bougrain, Laurent
Date of accession: 2021-06-10T10:02:04Z
Available in OPARU since: 2021-06-10T10:02:04Z
Date of first publication: 2020-11-19
Abstract: Motor imagery (MI) allows the design of self-paced brain–computer interfaces (BCIs), which can potentially afford intuitive and continuous interaction. However, implementing non-invasive MI-based BCIs with more than three commands is still a difficult task. First, the number of MIs available for decoding different actions is limited by the constraint of maintaining adequate spacing among the corresponding sources, since electroencephalography (EEG) activity from nearby regions may add up. Second, EEG produces a rather noisy image of brain activity, which results in poor classification performance. Here, we propose a solution to the limited number of identifiable motor activities by using combined MIs (i.e., MIs involving two or more body parts at the same time), and we propose two new multilabel uses of the common spatial pattern (CSP) algorithm to optimize the signal-to-noise ratio, namely the MC2CMI and MC2SMI approaches. We recorded EEG signals from seven healthy subjects during an 8-class experiment comprising the rest condition and all possible combinations of the left hand, right hand, and feet. The proposed multilabel approaches convert the original 8-class problem into a set of three binary problems to facilitate the use of the CSP algorithm. In the MC2CMI method, each binary problem groups into one class all the MIs engaging one of the three selected body parts, while all MIs not engaging that body part form the second class. In this way, for each binary problem, the CSP algorithm produces features that determine whether the specific body part is engaged in the task. Finally, the three sets of features are merged to predict the user's intention by applying an 8-class linear discriminant analysis. The MC2SMI method is quite similar; the only difference is that none of the combined MIs are considered during the training phase, which drastically shortens the calibration time. For all subjects, both the MC2CMI and the MC2SMI approaches reached higher accuracy than the classic pairwise (PW) and one-vs.-all (OVA) methods. Our results show that, when brain activity is properly modulated, multilabel approaches represent a very interesting solution for increasing the number of commands, and thus for providing better interaction.
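
The abstract describes a concrete pipeline: split the 8-class problem into three binary problems (one per body part), run CSP on each, merge the resulting features, and classify with an 8-class LDA. The Python sketch below illustrates that decomposition under stated assumptions; it is not the authors' implementation. The bit-mask label encoding (bit 0: left hand, bit 1: right hand, bit 2: feet, so label 0 is rest and label 7 combines all three parts), the use of MNE's CSP, the number of CSP components, and the function names are all illustrative choices.

import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_multilabel_csp(X, y, n_components=4):
    """Fit one binary CSP per body part, then an 8-class LDA on merged features.

    X: EEG epochs as an array of shape (n_epochs, n_channels, n_times).
    y: integer labels 0-7, read here as bit masks over the three body parts
       (an assumed encoding; bit 0: left hand, bit 1: right hand, bit 2: feet).
    """
    csps, feature_blocks = [], []
    for part in range(3):
        # Binary relabeling: 1 if this body part is engaged in the MI, else 0.
        y_bin = (y >> part) & 1
        csp = CSP(n_components=n_components)
        feature_blocks.append(csp.fit_transform(X, y_bin))
        csps.append(csp)
    # Merge the three binary feature sets and fit the 8-class decision stage.
    lda = LinearDiscriminantAnalysis()
    lda.fit(np.hstack(feature_blocks), y)
    return csps, lda

def predict_multilabel(csps, lda, X):
    """Predict one of the 8 classes from the merged binary CSP features."""
    features = np.hstack([csp.transform(X) for csp in csps])
    return lda.predict(features)

Under this encoding, the MC2SMI variant would restrict the training set to the rest condition and the three single-part MIs (labels 1, 2, and 4), relying on the binary structure to generalize to combined MIs at prediction time.
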
Language: en
Publisher: Universität Ulm
License: CC BY 4.0 International
Link to license text: https://creativecommons.org/licenses/by/4.0/
Keyword: combined motor imageries
Keyword: multilabel classification
Keyword: common spatial pattern (CSP)
Dewey Decimal Group: DDC 000 / Computer science, information & general works
Dewey Decimal Group: DDC 150 / Psychology
LCSH: Brain-computer interfaces
Title: Multiclass classification based on combined motor imageries
Resource type: Scientific article
SWORD Date: 2020-12-09T19:31:04Z
Version: publishedVersion
DOI: http://dx.doi.org/10.18725/OPARU-37997
URN: http://nbn-resolving.de/urn:nbn:de:bsz:289-oparu-38059-2
GND: Elektroencephalogramm
Faculty: Fakultät für Ingenieurwissenschaften, Informatik und Psychologie
Institution: Interdisziplinäres Neurowissenschaftliches Forschungszentrum der Universität Ulm (NCU)
Peer review: yes
DCMI Type: Collection
Category: Publications
In cooperation with: Université de Lorraine, France
DOI of original publication: 10.3389/fnins.2020.559858
Source - Title: Frontiers in Neuroscience
Source - Publisher: Frontiers Media
Source - Volume: 14
Source - Year: 2020
Source - Article number: 559858
Source - ISSN: 1662-4548
Source - eISSN: 1662-453X
Open Access: DOAJ Gold
WoS: 000595120300001
Bibliography: uulm
Is Supplemented By: https://www.frontiersin.org/articles/10.3389/fnins.2020.559858/full#supplementary-material
Open access funding: Open-access funding by Universität Ulm