Abstract:
The brain–machine interface has been an integral component of the metaverse since the latter's inception. In his classic science fiction novella "True Names," the American mathematician and computer science professor Vernor Vinge describes a virtual world that can be accessed and experienced
via a brain–machine interface. Building on this idea, Neal Stephenson's science fiction novel "Snow Crash" formally proposed the concept of a metaverse: a virtual world constructed by humans using digital technology that can be mapped onto and interact with the real world. Large companies such as Meta, Apple, Sony, Microsoft, and Samsung have launched new metaverse-related hardware and software products, and Chinese technology giants such as Tencent, Alibaba, and Baidu have also entered the metaverse, confirming its future development potential and commercial value. Goldman Sachs estimates that trillions of dollars will be invested in metaverse development over the next few years. As the focus of metaverse research shifts toward content exchange and social interaction, overcoming the current bottlenecks in audiovisual media interaction has become urgent, and the brain–computer interface is one of the candidate solutions. Brain–computer interfaces are becoming increasingly complex; as physiological signal acquisition tools, they have demonstrated indispensable application potential in many areas of the metaverse. Non-invasive brain–computer interfaces are easy to obtain and offer good performance and accuracy, making them the preferred method for detecting brain signals. The electroencephalogram (EEG), in particular, is a physiological signal well suited to reflecting a person's psychological state. By reading and categorizing relevant papers from databases including Web of Science, CNKI, IEL, and the ACM Digital Library; investigating the products and functional parameters of Neuralink, Synchron, OpenBCI, and Emotiv; studying three application scenarios, namely generative art in the art metaverse, serious games for medicine and healthcare in the medical metaverse, and brain–machine interfaces for virtual human expression synthesis in the social metaverse; and surveying existing commercial products and patents (MindWave Mobile, GVS, Galea), this paper discusses the challenges and potential problems that brain–computer interfaces may face as they become widely used, drawing parallels with the development of network security, neural security, and bioethics. Furthermore, it explores possible in-depth and diverse future applications of brain–computer interfaces, such as the use of sensory simulation technology to reproduce smell, taste, and touch, and the use of motor imagery to help disabled people participate in the metaverse.