
A Spatially-Coded Visual Brain-Computer Interface for Flexible Visual Spatial Information Decoding



Author(s): Chen, JJ (Chen, Jingjing); Wang, YJ (Wang, Yijun); Maye, A (Maye, Alexander); Hong, B (Hong, Bo); Gao, XR (Gao, Xiaorong); Engel, AK (Engel, Andreas K.); Zhang, D (Zhang, Dan)

Source: IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING Volume: 29 Pages: 926-933 DOI: 10.1109/TNSRE.2021.3080045 Published: 2021

Abstract: Conventional visual BCIs, in which control channels are tagged with stimulation patterns to elicit distinguishable brain patterns, have made impressive progress in terms of information transfer rates (ITRs). However, less development has been seen with respect to user experience and the complexity of the technical setup. The requirement to tag each target with a unique stimulus substantially limits the flexibility of conventional visual BCI systems. The present study therefore proposes a method for flexibly decoding targets in the environment. A BCI speller with thirteen symbols drawn on paper was developed. The symbols were interspersed with four flickers of distinct frequencies, but the user did not have to gaze at the flickers; rather, subjects could spell a sequence by looking at the symbols on the paper. In a cue-guided spelling task, the average offline and online accuracies for 13 subjects reached 89.3 +/- 7.3% and 90.3 +/- 6.9%, corresponding to ITRs of 43.0 +/- 7.4 bit/min and 43.8 +/- 6.8 bit/min. In an additional free-spelling task with seven of the thirteen subjects, an accuracy of 92.3 +/- 3.1% and an ITR of 45.6 +/- 3.3 bit/min were achieved. Analysis of a simulated online system showed that an average ITR of 105.8 bit/min could be reached by reducing the epoch duration from 4 s to 1 s. Reliable BCI control is thus possible by gazing at targets in the environment instead of at dedicated stimuli that encode control channels. The proposed method can drastically reduce the technical effort required for visual BCIs and thereby advance their applications outside the laboratory.
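As a rough sanity check on the reported numbers, the standard Wolpaw ITR formula can be applied to the cue-guided condition (13 targets, ~89.3% accuracy, 4-s epochs). This is a sketch under the assumption that the authors used the Wolpaw definition; the abstract does not state their exact calculation (e.g. whether gaze-shift intervals were included), so small discrepancies are expected.

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, epoch_s: float) -> float:
    """Wolpaw ITR: bits per selection, scaled by selections per minute.

    Valid for 0 < accuracy < 1 and n_targets >= 2.
    """
    p, n = accuracy, n_targets
    bits_per_selection = (
        math.log2(n)
        + p * math.log2(p)
        + (1 - p) * math.log2((1 - p) / (n - 1))
    )
    return bits_per_selection * 60.0 / epoch_s

# Offline cue-guided condition from the abstract:
# 13 targets, 89.3% mean accuracy, 4-s epoch duration.
print(round(itr_bits_per_min(13, 0.893, 4.0), 1))  # ~42.4 bit/min
```

The result (~42.4 bit/min) is close to the reported 43.0 +/- 7.4 bit/min, consistent with a per-epoch Wolpaw calculation; shortening the epoch to 1 s scales the same bit rate by a factor of four, in line with the simulated-online figure of 105.8 bit/min once the accompanying accuracy change is accounted for.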

Accession Number: WOS:000655243100001

PubMed ID: 33983885

Author Identifiers:

Author        Web of Science ResearcherID    ORCID Number
Zhang, Dan    -                              0000-0002-7592-3200
Hong, Bo      -                              0000-0003-2900-6791

ISSN: 1534-4320

eISSN: 1558-0210

Full Text:

