Most models in cognitive and computational neuroscience trained on one
subject do not generalize to other subjects due to individual differences. An
ideal individual-to-individual neural converter would generate the neural signals of one subject from those of another, overcoming the problem of individual differences for cognitive and computational models. In
this study, we propose a novel individual-to-individual EEG converter, called
EEG2EEG, inspired by generative models in computer vision. We used the THINGS EEG2 dataset to train and test 72 independent EEG2EEG models, one for each of the 72 ordered pairs among 9 subjects. Our results demonstrate that EEG2EEG is able to
effectively learn the mapping of neural representations in EEG signals from one
subject to another and achieve high conversion performance. Additionally, the
generated EEG signals contain clearer representations of visual information
than can be obtained from real data. This method establishes a novel and
state-of-the-art framework for neural conversion of EEG signals, which can
realize a flexible and high-performance mapping from individual to individual
and provide insight for both neural engineering and cognitive neuroscience.

Comment: Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci 2023)
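As a rough illustration of the pairwise setup described in the abstract (not the authors' released code), the sketch below enumerates the 9 x 8 = 72 ordered subject pairs and fits one independent converter per pair. The subject count and pair structure come from the abstract; the data shapes, the `fit_linear_converter` helper, and the ridge-regularized linear map are placeholder assumptions standing in for the actual EEG2EEG model.

```python
# Minimal sketch of the pairwise individual-to-individual setup (assumed, not the
# authors' code): 9 subjects yield 9 * 8 = 72 ordered (source, target) pairs, and
# one independent converter is fit per pair. A ridge-regularized linear map is
# used here only as a stand-in for the EEG2EEG model.
from itertools import permutations

import numpy as np

N_SUBJECTS = 9      # subjects in the THINGS EEG2 experiments (from the abstract)
N_TRIALS = 500      # placeholder number of shared stimulus trials
N_FEATURES = 128    # placeholder: flattened channels x time points


def fit_linear_converter(x_src, x_tgt, alpha=1.0):
    """Fit a ridge-regularized linear map from source-subject to target-subject EEG."""
    gram = x_src.T @ x_src + alpha * np.eye(x_src.shape[1])
    return np.linalg.solve(gram, x_src.T @ x_tgt)  # weights: (features, features)


rng = np.random.default_rng(0)
# Placeholder data: per-subject EEG responses to the same stimuli.
eeg = {s: rng.standard_normal((N_TRIALS, N_FEATURES)) for s in range(N_SUBJECTS)}

converters = {}
for src, tgt in permutations(range(N_SUBJECTS), 2):  # 72 ordered pairs
    converters[(src, tgt)] = fit_linear_converter(eeg[src], eeg[tgt])

print(len(converters))  # -> 72
```

The paper's converter itself is a model inspired by generative approaches in computer vision; the linear map above only illustrates how the 72 pairwise models are organized across subjects.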