11,793 research outputs found

    Collaborative Brain-Computer Interface for Human Interest Detection in Complex and Dynamic Settings

    Humans can fluidly adapt their interest in complex environments in ways that machines cannot. Here, we lay the groundwork for a real-world system that passively monitors and merges neural correlates of visual interest across team members via a Collaborative Brain-Computer Interface (cBCI). When group interest is detected and co-registered in time and space, it can be used to model the task relevance of items in a dynamic, natural environment. Previous work in cBCIs focuses on static stimuli, stimulus- or response-locked analyses, and often within-subject and within-experiment model training. The contributions of this work are twofold. First, we test the utility of a cBCI in a scenario that more closely resembles natural conditions, where subjects visually scanned a video for target items in a virtual environment. Second, we use an experiment-agnostic deep learning model to account for the real-world use case where no training set exists that exactly matches the end-user's task and circumstances. With our approach, we show improved performance as the number of subjects in the cBCI ensemble grows, and the potential to reconstruct ground-truth target occurrence in an otherwise noisy and complex environment. Comment: 6 pages, 6 figures
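    The pooling step described above can be pictured with a minimal sketch. The abstract does not specify the fusion rule, so the function below simply averages per-subject "interest" scores across viewers; the function name, the per-frame probability interface, and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' pipeline): pooling per-subject interest
# scores in a collaborative BCI ensemble. Assumes each subject's model already
# emits a per-frame probability of visual interest; the group-level signal is
# taken to be the mean across the subjects included in the ensemble.
import numpy as np

def group_interest(per_subject_scores: np.ndarray, n_subjects: int) -> np.ndarray:
    """per_subject_scores: (subjects, frames) per-frame interest probabilities.
    Returns the pooled score for the first n_subjects viewers."""
    return per_subject_scores[:n_subjects].mean(axis=0)

# Toy example: 8 simulated viewers, 1000 frames, a target around frame 500.
rng = np.random.default_rng(0)
truth = np.zeros(1000)
truth[490:510] = 1.0
scores = 0.3 * truth + rng.normal(0.5, 0.2, size=(8, 1000))
for n in (1, 3, 8):
    pooled = group_interest(scores, n)
    # agreement with ground truth typically rises as the ensemble grows
    print(n, round(float(np.corrcoef(pooled, truth)[0, 1]), 2))
```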

    Towards the automated localisation of targets in rapid image-sifting by collaborative brain-computer interfaces

    The N2pc is a lateralised Event-Related Potential (ERP) that signals a shift of attention towards the location of a potential object of interest. We propose a single-trial target-localisation collaborative Brain-Computer Interface (cBCI) that exploits this ERP to automatically approximate the horizontal position of targets in aerial images. Images were presented by means of the rapid serial visual presentation technique at rates of 5, 6 and 10 Hz. We created three different cBCIs and tested a participant selection method in which groups are formed according to the similarity of participants’ performance. The N2pc elicited in our experiments contains information about the position of the target along the horizontal axis. Moreover, combining information from multiple participants provides absolute median improvements in the area under the receiver operating characteristic curve of up to 21% (for groups of size 3) with respect to single-user BCIs. These improvements are larger when groups are formed by participants with similar individual performance, and much of this effect can be explained using simple theoretical models. Our results suggest that BCIs for automated triaging can be improved by integrating two classification systems: one devoted to target detection and another devoted to detecting the attentional shifts associated with lateral targets.
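    The participant-selection idea can be illustrated with the short sketch below. It uses toy data and a plain score average rather than the paper's actual classifiers: participants are sorted by their single-user AUC, cut into groups of similar performers, and their per-trial scores are pooled before computing the group AUC. Function names and parameters are assumptions for illustration.

```python
# Minimal sketch (toy data, assumed pooling rule) of grouping participants by
# similar single-user performance and combining their classifier scores.
import numpy as np
from sklearn.metrics import roc_auc_score

def similar_performance_groups(individual_aucs, group_size):
    """Sort participants by single-user AUC and cut into consecutive groups."""
    order = np.argsort(individual_aucs)
    return [order[i:i + group_size]
            for i in range(0, len(order) - group_size + 1, group_size)]

def group_auc(scores, labels, members):
    """scores: (participants, trials); labels: (trials,). Pool by averaging."""
    return roc_auc_score(labels, scores[members].mean(axis=0))

# Toy data: 9 participants with increasing signal quality, 200 trials.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 200)
quality = np.linspace(0.2, 1.0, 9)
scores = quality[:, None] * (labels - 0.5) + rng.normal(0, 1, (9, 200))
aucs = [roc_auc_score(labels, s) for s in scores]
for members in similar_performance_groups(aucs, 3):
    print(sorted(members.tolist()), round(group_auc(scores, labels, members), 2))
```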

    A Collaborative BCI Trained to Aid Group Decisions in a Visual Search Task Works Well with Similar Tasks

    This study tests the possibility of using collaborative brain-computer interfaces (cBCIs) trained with EEG data collected during a decision task to enhance group performance in similar tasks.

    Hybrid Collaborative Brain-Computer Interfaces to Augment Group Decision Making

    Collaborative brain-computer interfaces (cBCIs) have recently been used for neuroergonomics applications, such as improving low-level group decision-making. This chapter describes a hybrid cBCI to augment group performance in two realistic target-detection tasks: visual search, where participants had to spot a polar bear in an Arctic image including many penguins, and speech perception, where volunteers listened to audio recordings affected by noise and had to decide whether or not a target word was uttered. The cBCI aggregates individual behavioral responses according to confidence estimates obtained from neural signals and response times. Results indicate that the cBCI augments group performance in both tasks over traditional groups making decisions with a standard majority vote. Also, cBCI groups were superior to non-BCI groups that used confidence values reported by the participants to weigh decisions in visual search, although the opposite was true in speech perception.
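    The confidence-weighted aggregation at the heart of this approach can be sketched as follows. The decision encoding, function names, and the numbers in the example are assumptions for illustration; how confidence is actually derived from EEG and response times is left to the chapter.

```python
# Minimal sketch (assumed interface, not the chapter's implementation) of a
# confidence-weighted majority vote: each member's binary decision
# (+1 "target", -1 "no target") is weighted by a confidence estimate obtained
# elsewhere from neural signals and response times.
import numpy as np

def weighted_group_decision(decisions: np.ndarray, confidences: np.ndarray) -> int:
    """decisions: (members,) in {-1, +1}; confidences: (members,) in [0, 1]."""
    return int(np.sign(np.sum(confidences * decisions)))

def majority_decision(decisions: np.ndarray) -> int:
    return int(np.sign(decisions.sum()))

# Example: two confident correct members outweigh three unsure incorrect ones.
decisions = np.array([+1, +1, -1, -1, -1])
confidences = np.array([0.9, 0.8, 0.3, 0.2, 0.25])
print(majority_decision(decisions))                       # -1 (simple majority)
print(weighted_group_decision(decisions, confidences))    # +1 (confidence-weighted)
```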

    Enhancement of group perception via a collaborative brain-computer interface

    Objective: We aimed at improving group performance in a challenging visual search task via a hybrid collaborative brain-computer interface (cBCI). Methods: Ten participants individually undertook a visual search task where a display was presented for 250 ms, and they had to decide whether a target was present or not. Local temporal correlation common spatial pattern (LTCCSP) was used to extract neural features from response- and stimulus-locked EEG epochs. The resulting feature vectors were extended by including response times and features extracted from eye movements. A classifier was trained to estimate the confidence of each group member. cBCI-assisted group decisions were then obtained using a confidence-weighted majority vote. Results: Participants were combined in groups of different sizes to assess the performance of the cBCI. Results show that LTCCSP neural features, response times, and eye movement features significantly improve the accuracy of the cBCI over what we achieved with previous systems. For most group sizes, our hybrid cBCI yields group decisions that are significantly better than majority-based group decisions. Conclusion: The visual task considered here was much harder than a task we used in previous research. However, thanks to a range of technological enhancements, our cBCI has delivered a significant improvement over group decisions made by a standard majority vote. Significance: With previous cBCIs, groups may perform better than single non-BCI users. Here, cBCI-assisted groups are more accurate than identically sized non-BCI groups. This paves the way to a variety of real-world applications of cBCIs where reducing decision errors is vital.
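    The hybrid feature pipeline can be pictured with the sketch below. LTCCSP itself is not re-implemented; randomly generated neural features stand in for it, and a plain logistic regression serves as the confidence estimator. All variable names and parameters are illustrative assumptions, not the authors' code.

```python
# Minimal sketch under stated assumptions: per-trial EEG-derived features are
# extended with response time and eye-movement features, and a classifier's
# predicted probability serves as that member's decision confidence.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials = 300
correct = rng.integers(0, 2, n_trials)                                # 1 = member answered correctly
neural = correct[:, None] * 0.8 + rng.normal(0, 1, (n_trials, 12))    # stand-in for LTCCSP features
rt = 0.6 - 0.2 * correct + rng.normal(0, 0.1, n_trials)               # faster responses when correct
eye = rng.normal(0, 1, (n_trials, 4))                                 # e.g. fixation/saccade summaries

X = np.column_stack([neural, rt, eye])
clf = LogisticRegression(max_iter=1000).fit(X[:200], correct[:200])
confidence = clf.predict_proba(X[200:])[:, 1]                         # per-trial confidence in [0, 1]
print(confidence[:5].round(2))
```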

    Innovative Technologies for Global Space Exploration

    Under the direction of NASA's Exploration Systems Mission Directorate (ESMD), Directorate Integration Office (DIO), The Tauri Group with NASA's Technology Assessment and Integration Team (TAIT) completed several studies and white papers that identify novel technologies for human exploration. These studies provide technical inputs to space exploration roadmaps, identify potential organizations for exploration partnerships, and detail crosscutting technologies that may meet some of NASA's critical needs. The studies are supported by a relational database of more than 400 externally funded technologies relevant to current exploration challenges. The identified technologies can be integrated into existing and developing roadmaps to leverage external resources, thereby reducing the cost of space exploration. This approach to identifying potential spin-in technologies and partnerships could apply to other national space programs, as well as to international and multi-government activities. This paper highlights innovative technologies and potential partnerships from economic sectors that historically are less connected to space exploration. It includes breakthrough concepts that could have a significant impact on space exploration and discusses the role of breakthrough concepts in technology planning. Technologies and partnerships are drawn from NASA's Technology Horizons and Technology Frontiers game-changing and breakthrough technology reports, as well as from the External Government Technology Dataset, briefly described in the paper. The paper highlights example novel technologies that could be spun in from government and commercial sources, including virtual worlds, synthetic biology, and human augmentation. It considers how these technologies can impact space exploration and discusses ongoing activities for planning and preparing them.

    Collaborative Brain-Computer Interfaces in Rapid Image Presentation and Motion Pictures

    The last few years have seen an increase in brain-computer interface (BCI) research for the able-bodied population. One of these new branches involves collaborative BCIs (cBCIs), in which information from several users is combined to improve the performance of a BCI system. This thesis focuses on cBCIs with the aim of increasing understanding of how they can be used to improve the performance of single-user BCIs based on event-related potentials (ERPs). The objectives are: (1) to study and compare different methods of creating groups using exclusively electroencephalography (EEG) signals, (2) to develop a theoretical model to establish where the highest gains may be expected from creating groups, and (3) to analyse the information that can be extracted by merging signals from multiple users. For this, two scenarios involving real-world stimuli (images presented at high rates and movies) were studied. The first scenario consisted of a visual search task in which images were presented at high frequencies. Three modes of combining EEG recordings from different users were tested to improve the detection of different ERPs, namely the P300 (associated with the presence of events of interest) and the N2pc (associated with shifts of attention). We showed that the detection and localisation of targets can improve significantly when information from multiple viewers is combined. In the second scenario, feature movies were introduced to study, through cBCI techniques, variations in ERPs in response to cuts. A distinct, previously unreported ERP appears in relation to such cuts, the amplitude of which is not modulated by visual effects such as the low-level properties of the frames surrounding the discontinuity. However, significant variations that depended on the movie were found. We hypothesise that these techniques can be used to build on the attentional theory of cinematic continuity by providing an extra source of information: the brain.
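    The cut-locked analysis in the second scenario can be illustrated with a short sketch: epochs are extracted around each movie cut, averaged within a viewer, and then averaged across viewers to obtain a grand-average ERP. The epoch lengths, sampling assumptions, and injected deflection are illustrative only and do not reproduce the thesis pipeline.

```python
# Minimal sketch (toy data, assumed epoch lengths): cut-locked ERP estimation
# from continuous EEG given the sample indices of movie cuts.
import numpy as np

def cut_locked_erp(eeg: np.ndarray, cut_samples, pre=50, post=200) -> np.ndarray:
    """eeg: (channels, samples). Returns the average (channels, pre+post) epoch."""
    epochs = [eeg[:, c - pre:c + post] for c in cut_samples
              if c - pre >= 0 and c + post <= eeg.shape[1]]
    return np.mean(epochs, axis=0)

# Toy data: 3 viewers, 32 channels, 10 cuts in a 10,000-sample recording.
rng = np.random.default_rng(3)
cuts = rng.integers(500, 9500, 10)
viewer_erps = []
for _ in range(3):
    eeg = rng.normal(0, 1, (32, 10_000))
    for c in cuts:                          # inject a small deflection after each cut
        eeg[:, c:c + 60] += 0.5
    viewer_erps.append(cut_locked_erp(eeg, cuts))
grand_average = np.mean(viewer_erps, axis=0)  # (channels, pre + post)
print(grand_average.shape)
```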

    Intelligent Computing: The Latest Advances, Challenges and Future

    Computing is a critical driving force in the development of human civilization. In recent years, we have witnessed the emergence of intelligent computing, a new computing paradigm that is reshaping traditional computing and promoting the digital revolution in the era of big data, artificial intelligence, and the Internet of Things with new computing theories, architectures, methods, systems, and applications. Intelligent computing has greatly broadened the scope of computing, extending it from traditional computing on data to increasingly diverse computing paradigms such as perceptual intelligence, cognitive intelligence, autonomous intelligence, and human-computer fusion intelligence. Intelligence and computing have long followed different paths of evolution and development but have become increasingly intertwined in recent years: intelligent computing is not only intelligence-oriented but also intelligence-driven. Such cross-fertilization has prompted the emergence and rapid advancement of intelligent computing. Intelligent computing is still in its infancy, and an abundance of innovations in its theories, systems, and applications is expected to occur soon. We present the first comprehensive survey of the literature on intelligent computing, covering its theoretical fundamentals, the technological fusion of intelligence and computing, important applications, challenges, and future perspectives. We believe that this survey is highly timely and will provide a comprehensive reference and valuable insights into intelligent computing for academic and industrial researchers and practitioners.

    Group Augmentation in Realistic Visual-Search Decisions via a Hybrid Brain-Computer Interface.

    Groups have increased sensing and cognition capabilities that typically allow them to make better decisions. However, factors such as communication biases and time constraints can lead to less-than-optimal group decisions. In this study, we use a hybrid Brain-Computer Interface (hBCI) to improve the performance of groups undertaking a realistic visual-search task. Our hBCI extracts neural information from EEG signals and combines it with response times to build an estimate of the decision confidence. This estimate is used to weigh individual responses, resulting in improved group decisions. We compare the performance of hBCI-assisted groups with that of non-BCI groups using standard majority voting and of non-BCI groups using weighted voting based on reported decision confidence. We also investigate the impact on group performance of a computer-mediated form of communication between members. Results across three experiments suggest that the hBCI provides significant advantages over non-BCI decision methods in all cases. We also found that our form of communication increases individual error rates by almost 50% compared to non-communicating observers, which also results in worse group performance. Communication also makes reported confidence uncorrelated with decision correctness, thereby nullifying its value in weighing votes. In summary, the best decisions are achieved by hBCI-assisted, non-communicating groups.
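    The comparison between decision rules can be pictured with a small simulation. Individual accuracy, confidence distributions, and group size below are illustrative assumptions rather than the study's data; reported confidence is drawn independently of correctness to mirror the communication effect described above, while the simulated hBCI confidence tracks correctness.

```python
# Minimal simulation sketch (assumed parameters) comparing three group-decision
# rules: standard majority, weighting by reported confidence, and weighting by
# hBCI-estimated confidence, for groups of 5 observers on a binary task.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_members = 2000, 5
truth = rng.choice([-1, 1], n_trials)
correct = rng.random((n_trials, n_members)) < 0.70            # ~70% individual accuracy
decisions = np.where(correct, truth[:, None], -truth[:, None])
reported = rng.random((n_trials, n_members))                   # uncorrelated with correctness
hbci = np.clip(0.5 + 0.3 * correct + rng.normal(0, 0.1, correct.shape), 0, 1)

def accuracy(weights):
    group = np.sign((weights * decisions).sum(axis=1))
    return (group == truth).mean()

print("majority            ", round(accuracy(np.ones_like(decisions, dtype=float)), 3))
print("reported confidence ", round(accuracy(reported), 3))
print("hBCI confidence     ", round(accuracy(hbci), 3))
```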