    Complex singularities and PDEs

    In this paper we give a review of the computational methods used to characterize the complex singularities developed by some relevant PDEs. We begin by reviewing the singularity tracking method based on the analysis of the Fourier spectrum. We then introduce other methods generally used to detect hidden singularities. In particular, we show some applications of the Padé approximation, of the Kida method, and of the Borel-Pólya method. We apply these techniques to the study of singularity formation in some nonlinear dispersive and dissipative one-dimensional PDEs, in the 2D Prandtl equation, in the 2D KP equation, and in the Navier-Stokes equations for high-Reynolds-number incompressible flows in the case of interaction with rigid boundaries.
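
    The singularity-tracking method mentioned above can be illustrated numerically: for a periodic function whose nearest complex singularity lies at a distance delta from the real axis, the Fourier coefficients decay asymptotically like |c_k| ~ C k^(-alpha) exp(-delta k), so fitting log|c_k| against k recovers both the width delta of the analyticity strip and the algebraic exponent alpha that characterizes the singularity type. The Python sketch below applies this fit to a synthetic signal with a known pole; it is an illustration of the general technique under these assumptions, not the implementation used in the paper.

    import numpy as np

    # Synthetic periodic signal with a known complex singularity:
    # u(x) = 1/(a - cos x) has poles at x = +/- i*arccosh(a), so the width
    # of the analyticity strip is delta = arccosh(a).
    a = 1.5
    delta_exact = np.arccosh(a)

    N = 512
    x = 2 * np.pi * np.arange(N) / N
    u = 1.0 / (a - np.cos(x))

    # Fourier coefficients; keep positive wavenumbers only.
    c = np.fft.rfft(u) / N
    k = np.arange(1, N // 2)
    amp = np.abs(c[1:N // 2])

    # Fit the asymptotic model log|c_k| = log C - alpha*log k - delta*k by least
    # squares over an intermediate range of k (avoiding low-k transients and the
    # round-off floor of the spectrum).
    mask = (k > 5) & (amp > 1e-13)
    A = np.column_stack([np.ones(mask.sum()), -np.log(k[mask]), -k[mask]])
    coef, *_ = np.linalg.lstsq(A, np.log(amp[mask]), rcond=None)
    logC, alpha, delta = coef

    print(f"estimated delta = {delta:.4f}  (exact {delta_exact:.4f})")
    print(f"estimated algebraic exponent alpha = {alpha:.3f}")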

    Analysis of complex singularities in high-Reynolds-number Navier-Stokes solutions

    Numerical solutions of the laminar Prandtl boundary-layer and Navier-Stokes equations are considered for the case of two-dimensional uniform flow past an impulsively started circular cylinder. We show how Prandtl's solution develops a finite-time separation singularity. The Navier-Stokes solution, on the other hand, is characterized by the presence of two kinds of viscous-inviscid interaction that can be detected by analyzing the enstrophy and the pressure gradient on the wall. Moreover, we apply the complex singularity tracking method to the Prandtl and Navier-Stokes solutions and analyze these interactions from a different perspective.

    Cooperative particle filtering for tracking ERP subcomponents from multichannel EEG

    In this study, we propose a novel method to investigate P300 variability over different trials. The method incorporates spatial correlation between EEG channels to form a cooperative coupled particle filtering method that tracks the P300 subcomponents, P3a and P3b, over trials. Using state space systems, the amplitude, latency, and width of each subcomponent are modeled as the main underlying parameters. With four electrodes, two coupled Rao-Blackwellised particle filter pairs are used to recursively estimate the system state over trials. A number of physiological constraints are also imposed to avoid generating invalid particles in the estimation process. Motivated by the bilateral symmetry of ERPs over the brain, the channels further share their estimates with their neighbors and combine the received information to obtain a more accurate and robust solution. The proposed algorithm is capable of estimating the P300 subcomponents in single trials and outperforms its non-cooperative counterpart.
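
    As a rough illustration of the state-space idea described above, the sketch below runs a plain bootstrap particle filter that tracks the amplitude, latency and width of a single Gaussian-shaped subcomponent across trials, using synthetic single-channel data. The Gaussian waveform model, the random-walk dynamics and all noise levels are assumptions made for this example; the coupled Rao-Blackwellised filters and the cross-channel cooperation of the proposed method are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(0)
    fs, T = 250, 0.8                      # sampling rate (Hz), epoch length (s)
    t = np.arange(int(fs * T)) / fs

    def erp(t, amp, lat, width):
        # Gaussian-shaped ERP subcomponent (hypothetical waveform model).
        return amp * np.exp(-0.5 * ((t - lat) / width) ** 2)

    # Simulate slowly drifting true parameters over trials and noisy epochs.
    n_trials = 60
    base = np.array([5.0, 0.35, 0.05])    # amplitude, latency (s), width (s)
    drift = np.cumsum(rng.normal(0, [0.05, 0.004, 0.001], (n_trials, 3)), axis=0)
    true = base + drift
    obs = np.array([erp(t, *p) + rng.normal(0, 1.0, t.size) for p in true])

    # Bootstrap particle filter over trials: state = (amplitude, latency, width).
    n_p = 2000
    particles = base + rng.normal(0, [1.0, 0.05, 0.01], (n_p, 3))
    q_std = np.array([0.1, 0.01, 0.003])  # random-walk process noise (assumed)
    obs_std = 1.0                          # assumed measurement noise

    estimates = np.zeros((n_trials, 3))
    for k in range(n_trials):
        particles += rng.normal(0, q_std, particles.shape)       # propagate
        particles[:, 0] = np.clip(particles[:, 0], 0.1, None)    # simple
        particles[:, 2] = np.clip(particles[:, 2], 0.01, 0.2)    # constraints
        pred = erp(t[None, :], particles[:, :1], particles[:, 1:2], particles[:, 2:3])
        logw = -0.5 * np.sum((obs[k] - pred) ** 2, axis=1) / obs_std ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates[k] = w @ particles                              # posterior mean
        particles = particles[rng.choice(n_p, n_p, p=w)]          # resample

    print("final estimate:", np.round(estimates[-1], 3))
    print("true parameters:", np.round(true[-1], 3))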

    A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey

    The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and be able to interact with and control the virtual world around them. Thus, a human-centric design is a crucial element of the Metaverse. The human users are not only the central entity but also the source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By directly communicating with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine system, operating at the speed of thought. BCI technologies can enable various innovative applications for the Metaverse through this neural pathway, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communication. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss the current challenges of the Metaverse that can potentially be addressed by BCI, such as motion sickness when users experience virtual environments or the negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called the Human Digital Twin, in which digital twins can create an intelligent and interactive avatar from the user's brain signals. We also present the challenges and potential solutions in synchronizing and communicating between virtual and physical entities in the Metaverse.

    Understanding minds in real-world environments: toward a mobile cognition approach

    This work is supported by a scholarship from the University of Stirling and a research grant from SINAPSE (Scottish Imaging Network: A Platform for Scientific Excellence). There is a growing body of evidence that important aspects of human cognition have been marginalized, or overlooked, by traditional cognitive science. In particular, the use of laboratory-based experiments in which stimuli are artificial, and response options are fixed, inevitably results in findings that are less ecologically valid in relation to real-world behavior. In the present review we highlight the opportunities provided by a range of new mobile technologies that allow traditionally lab-bound measurements to now be collected during natural interactions with the world. We begin by outlining the theoretical support that mobile approaches receive from the development of embodied accounts of cognition, and we review the widening evidence that illustrates the importance of examining cognitive processes in their context. As we acknowledge, in practice, the development of mobile approaches brings with it fresh challenges, and will undoubtedly require innovation in paradigm design and analysis. If successful, however, the mobile cognition approach will offer novel insights in a range of areas, including understanding the cognitive processes underlying navigation through space and the role of attention during natural behavior. We argue that the development of real-world mobile cognition offers both increased ecological validity, and the opportunity to examine the interactions between perception, cognition and action, rather than examining each in isolation.

    Enhancing brain-computer interfacing through advanced independent component analysis techniques

    A brain-computer interface (BCI) is a direct communication system between a brain and an external device in which messages or commands sent by an individual do not pass through the brain's normal output pathways but are instead detected from brain signals. Some severe motor impairments, such as Amyotrophic Lateral Sclerosis, head trauma, spinal injuries and other diseases, may cause patients to lose their muscle control and become unable to communicate with the outside environment. Currently no effective cure or treatment has been found for these diseases, so using a BCI system to rebuild the communication pathway becomes a possible alternative solution. Among the different types of BCI, electroencephalogram (EEG) based BCIs are becoming popular due to EEG's fine temporal resolution, ease of use, portability and low set-up cost. However, EEG's susceptibility to noise is a major obstacle to developing a robust BCI. Signal processing techniques such as coherent averaging, filtering, FFT and AR modelling are used to reduce the noise and extract components of interest. However, these methods process the data in the observed mixture domain, which mixes components of interest with noise. This limitation means that the extracted EEG signals may still contain noise residue or, conversely, that the removed noise may still contain part of the EEG signal. Independent Component Analysis (ICA), a Blind Source Separation (BSS) technique, is able to extract relevant information from noisy signals and separate the underlying sources into independent components (ICs). The most common assumption of the ICA method is that the source signals are unknown and statistically independent; through this assumption, ICA is able to recover the source signals. Since the ICA concept appeared in the fields of neural networks and signal processing in the 1980s, many ICA applications in telecommunications, biomedical data analysis, feature extraction, speech separation, time-series analysis and data mining have been reported in the literature. In this thesis several ICA techniques are proposed to address two major issues for BCI applications: reducing the recording time needed in order to speed up the signal processing, and reducing the number of recording channels whilst improving the final classification performance, or at least keeping it at its current level. These improvements would make BCI a more practical prospect for everyday use. The thesis first defines BCI and the diverse BCI models based on different control patterns. After the general idea of ICA is introduced, along with some modifications to ICA, several new ICA approaches are proposed. The practical work in this thesis starts with preliminary analyses of the Southampton BCI pilot datasets, using basic and then advanced signal processing techniques. The proposed ICA techniques are then presented using a multi-channel event-related potential (ERP) based BCI. Next, the ICA algorithm is applied to a multi-channel spontaneous activity based BCI. The final ICA approach examines the possibility of using ICA based on just one or a few channel recordings in an ERP based BCI. The novel ICA approaches for BCI systems presented in this thesis show that ICA is able to accurately and repeatedly extract the relevant information buried within noisy signals, and that the signal quality is enhanced so that even a simple classifier can achieve good classification accuracy.
    In the ERP based BCI application, after multichannel ICA, data averaged over just eight epochs can achieve 83.9% classification accuracy, whilst the data obtained by coherent averaging reaches only 32.3% accuracy. In the spontaneous activity based BCI, the multi-channel ICA algorithm can effectively extract discriminatory information from two types of single-trial EEG data; the classification accuracy is improved by about 25% on average compared to the performance on the unpreprocessed data. The single-channel ICA technique on the ERP based BCI produces much better results than those obtained with a lowpass filter, and an appropriate number of averages improves the signal-to-noise ratio of the P300 activity, which helps to achieve better classification. These advantages will lead to a reliable and practical BCI for use outside of the clinical laboratory.
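
    The ICA step at the heart of the thesis can be illustrated with a toy example: mix a few synthetic sources into several "electrode" channels, unmix them with an off-the-shelf ICA implementation, and keep only the component that resembles the ERP. The sketch below uses scikit-learn's FastICA on purely synthetic data; the source waveforms, mixing matrix and channel count are invented for the illustration, and the thesis's own ICA variants and BCI pipelines are not reproduced here.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    fs, n = 250, 1000
    t = np.arange(n) / fs

    # Hypothetical sources: an ERP-like bump, mains-like interference, broadband noise.
    p300  = np.exp(-0.5 * ((t - 2.0) / 0.05) ** 2)
    mains = 0.8 * np.sin(2 * np.pi * 50 * t)
    noise = 0.5 * rng.standard_normal(n)
    S = np.c_[p300, mains, noise]

    # Mix the sources into four "electrodes" and add a little sensor noise.
    A = rng.uniform(0.2, 1.0, (4, 3))
    X = S @ A.T + 0.05 * rng.standard_normal((n, 4))

    # Unmix with FastICA and pick the component most correlated with the ERP waveform.
    ica = FastICA(n_components=3, random_state=0)
    ics = ica.fit_transform(X)            # shape (n_samples, n_components)
    corr = [abs(np.corrcoef(ic, p300)[0, 1]) for ic in ics.T]
    best = int(np.argmax(corr))
    print(f"component {best} matches the ERP, |corr| = {corr[best]:.2f}")

    # Reconstruct the channels keeping only the ERP-related component (denoising).
    keep = np.zeros_like(ics)
    keep[:, best] = ics[:, best]
    X_clean = ica.inverse_transform(keep)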

    A Novel Analysis of Performance Classification and Workload Prediction Using Electroencephalography (EEG) Frequency Data

    Across the DOD, each task an operator is presented with has some level of difficulty associated with it. This level of difficulty over the course of the task is known as workload, and the operator faces varying levels of workload as he or she attempts to complete the task. The focus of the research presented in this thesis is to determine whether those changes in workload can be predicted, and whether individuals can be classified based on performance, in order to prevent an increase in workload that would cause a decline in performance on a given task. Despite many efforts to predict workload and classify individuals with machine learning, the classification and predictive ability of electroencephalography (EEG) frequency data has not been explored at the level of individual EEG frequency bands. In a 711th HPW/RCHP Human Universal Measurement and Assessment Network (HUMAN) Lab study, 14 subjects were asked to complete two tasks over 16 scenarios while their physiological data, including EEG frequency data, were recorded to capture the physiological changes their bodies went through over the course of the experiment. The research presented in this thesis focuses on EEG frequency data and its ability to predict task performance and changes in workload. Several machine learning techniques are explored before a final technique is chosen. This thesis contributes research to the medical and machine learning fields regarding the classification and workload prediction efficacy of EEG frequency data. Specifically, it presents a novel investigation of five EEG frequency bands and their individual abilities to predict task performance and workload. It was discovered that using the gamma EEG band, and all EEG bands combined, to predict task performance resulted in average classification accuracies greater than 90%.
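
    A common way to turn EEG recordings into features for this kind of analysis is to compute the power in the canonical frequency bands (delta, theta, alpha, beta, gamma) and feed those band powers to a standard classifier. The sketch below does this on synthetic single-channel epochs with scipy and scikit-learn; the band boundaries, the synthetic "high-workload" gamma bump and the random-forest classifier are assumptions made for illustration and do not reflect the HUMAN Lab dataset or the pipeline used in the thesis.

    import numpy as np
    from scipy.signal import welch
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    fs = 256
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}

    def band_powers(epoch, fs):
        # Mean Welch PSD in each canonical EEG band for a single-channel epoch.
        f, pxx = welch(epoch, fs=fs, nperseg=fs)
        return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands.values()]

    # Synthetic stand-in for labelled epochs: "high-workload" epochs get extra
    # gamma-band power (purely illustrative, not real data).
    def make_epoch(high_workload):
        x = rng.standard_normal(2 * fs)
        if high_workload:
            t = np.arange(2 * fs) / fs
            x += 0.8 * np.sin(2 * np.pi * 38 * t)
        return x

    y = rng.integers(0, 2, 200)
    X = np.array([band_powers(make_epoch(label), fs) for label in y])

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())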