TY - JOUR
T1 - Machine-vision fused brain machine interface based on dynamic augmented reality visual stimulation
AU - Zhang, Deyu
AU - Liu, Siyu
AU - Wang, Kai
AU - Zhang, Jian
AU - Chen, Duanduan
AU - Zhang, Yilong
AU - Nie, Li
AU - Yang, Jiajia
AU - Shinntarou, Funabashi
AU - Wu, Jinglong
AU - Yan, Tianyi
N1 - Funding Information:
Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. Funding: National Natural Science Foundation of China (http://dx.doi.org/10.13039/501100001809), grants 61727807, 82071912, U20A20191; National Key R&D Program of China, grant 2018YFC0115400; Beijing Municipal Science and Technology Commission (http://dx.doi.org/10.13039/501100009592), grant Z191100010618004. © 2021 The Author(s). Published by IOP Publishing Ltd.
Publisher Copyright:
© 2021 The Author(s). Published by IOP Publishing Ltd.
PY - 2021/10
Y1 - 2021/10
N2 - Objective. Brain-machine interfaces (BMIs) translate human intent into machine actions, and the visual stimulation (VS) paradigm is one of the most widely used of these approaches. Although VS-based BMIs have a relatively high information transfer rate (ITR), it remains difficult for BMIs to control machines in dynamic environments (for example, grabbing a moving object or targeting a walking person). Approach. In this study, we developed a BMI based on augmented reality VS (AR-VS). The proposed VS was generated dynamically from machine vision, and human intent was interpreted with a dynamic decision time interval approach. A robot coordinating a task system and a self-motion system was controlled quickly and flexibly by the proposed paradigm. Methods. Objects in the scene were first recognized by machine vision and tracked by optical flow, and the AR-VS was generated from the objects' parameters. The number and distribution of the VS targets were determined by the recognized objects. Electroencephalogram (EEG) features corresponding to the VS and to human intent were collected with a dry-electrode EEG cap and classified by the filter bank canonical correlation analysis method. Key parameters of the AR-VS, including the effects of VS size, frequency, and dynamic object moving speed, as well as the ITR and the performance of the BMI-controlled robot, were analyzed. Conclusion and significance. The ITR of the proposed AR-VS paradigm for nine healthy subjects was 36.3 ± 20.1 bits min⁻¹. In the online robot control experiment, brain-controlled hybrid tasks combining self-motion and object grabbing were completed 64% faster than with the traditional steady-state visual evoked potential paradigm. The proposed AR-VS paradigm could be optimized and adopted in other VS-based BMIs, such as the P300, omitted stimulus potential, and miniature event-related potential paradigms, for better performance in dynamic environments.
KW - augmented reality
KW - brain-machine interfaces
KW - robot control
UR - http://www.scopus.com/inward/record.url?scp=85118603540&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85118603540&partnerID=8YFLogxK
U2 - 10.1088/1741-2552/ac2c9e
DO - 10.1088/1741-2552/ac2c9e
M3 - Article
C2 - 34607320
AN - SCOPUS:85118603540
SN - 1741-2560
VL - 18
JO - Journal of Neural Engineering
JF - Journal of Neural Engineering
IS - 5
M1 - 056061
ER -