TY - JOUR
T1 - Co-stimulation-removed audiovisual semantic integration and modulation of attention
T2 - An event-related potential study
AU - Xi, Yang
AU - Li, Qi
AU - Gao, Ning
AU - Li, Guangjian
AU - Lin, Weihong
AU - Wu, Jinglong
N1 - Funding Information:
This research received funding from the National Natural Science Foundation of China (grant numbers 61773076 and 61806025), the Jilin Scientific and Technological Development Program (grant numbers 20190302072GX and 20180519012JH), the Scientific Research Project of the Jilin Provincial Department of Education during the 13th Five-Year Plan Period (grant number JJKH20190597KJ), and the Health and Family Planning Commission of Jilin Province (grant number 3D518NP13428).
Publisher Copyright:
© 2020 Elsevier B.V.
PY - 2020/5
Y1 - 2020/5
N2 - The integration of multisensory objects containing semantic information involves both low-level co-stimulation processing and high-order semantic integration. To investigate audiovisual semantic integration, we used bimodal stimuli (AV, simultaneous presentation of an auditory sound and a visual picture; An, simultaneous presentation of an auditory sound and a visual noise; Vn, simultaneous presentation of a visual picture and an auditory noise; Fn, simultaneous presentation of an auditory noise and a visual noise) to remove the effect of co-stimulation integration and isolate high-order semantic integration. Electroencephalography, with its high temporal resolution, was used to examine the neural mechanisms associated with co-stimulation-removed audiovisual semantic integration under attended and unattended conditions. By comparing the (AV + Fn) and (An + Vn) responses, we identified three effects related to co-stimulation-removed audiovisual semantic integration. In the attended condition, two semantic integration effects were observed: over bilateral occipito-temporal regions at 220–240 ms and over the frontal region at 560–600 ms. In the unattended condition, only one semantic integration effect was observed, over the centro-frontal region at 340–360 ms. These effects reflect the semantic integration of pictures and sounds after removal of the co-stimulation caused by spatiotemporal consistency. Moreover, the differences in the temporal and spatial distributions of these effects imply distinct neural mechanisms underlying attended and unattended semantic integration. In the attended condition, audiovisual semantic information was initially integrated based on semantic congruency (220–240 ms) and then reanalyzed according to the current task (560–600 ms), a goal-driven process influenced by top-down attention. In contrast, in the unattended condition, no attentional resources were allocated, and semantic integration (340–360 ms) was an unconscious, automatic process.
AB - The integration of multisensory objects containing semantic information involves both low-level co-stimulation processing and high-order semantic integration. To investigate audiovisual semantic integration, we used bimodal stimuli (AV, simultaneous presentation of an auditory sound and a visual picture; An, simultaneous presentation of an auditory sound and a visual noise; Vn, simultaneous presentation of a visual picture and an auditory noise; Fn, simultaneous presentation of an auditory noise and a visual noise) to remove the effect of co-stimulation integration and isolate high-order semantic integration. Electroencephalography, with its high temporal resolution, was used to examine the neural mechanisms associated with co-stimulation-removed audiovisual semantic integration under attended and unattended conditions. By comparing the (AV + Fn) and (An + Vn) responses, we identified three effects related to co-stimulation-removed audiovisual semantic integration. In the attended condition, two semantic integration effects were observed: over bilateral occipito-temporal regions at 220–240 ms and over the frontal region at 560–600 ms. In the unattended condition, only one semantic integration effect was observed, over the centro-frontal region at 340–360 ms. These effects reflect the semantic integration of pictures and sounds after removal of the co-stimulation caused by spatiotemporal consistency. Moreover, the differences in the temporal and spatial distributions of these effects imply distinct neural mechanisms underlying attended and unattended semantic integration. In the attended condition, audiovisual semantic information was initially integrated based on semantic congruency (220–240 ms) and then reanalyzed according to the current task (560–600 ms), a goal-driven process influenced by top-down attention. In contrast, in the unattended condition, no attentional resources were allocated, and semantic integration (340–360 ms) was an unconscious, automatic process.
KW - Attention
KW - Audiovisual stimuli
KW - Co-stimulation-removed
KW - ERP
KW - Semantic integration
UR - http://www.scopus.com/inward/record.url?scp=85079600957&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85079600957&partnerID=8YFLogxK
U2 - 10.1016/j.ijpsycho.2020.02.009
DO - 10.1016/j.ijpsycho.2020.02.009
M3 - Article
C2 - 32061614
AN - SCOPUS:85079600957
VL - 151
SP - 7
EP - 17
JO - International Journal of Psychophysiology
JF - International Journal of Psychophysiology
SN - 0167-8760
ER -