    Enhancement of group perception via a collaborative brain-computer interface

    Objective: We aimed at improving group performance in a challenging visual search task via a hybrid collaborative brain-computer interface (cBCI). Methods: Ten participants individually undertook a visual search task in which a display was presented for 250 ms and they had to decide whether a target was present or not. Local temporal correlation common spatial pattern (LTCCSP) was used to extract neural features from response- and stimulus-locked EEG epochs. The resulting feature vectors were extended by including response times and features extracted from eye movements. A classifier was trained to estimate the confidence of each group member. cBCI-assisted group decisions were then obtained using a confidence-weighted majority vote. Results: Participants were combined in groups of different sizes to assess the performance of the cBCI. Results show that LTCCSP neural features, response times, and eye-movement features significantly improve the accuracy of the cBCI over that achieved with previous systems. For most group sizes, our hybrid cBCI yields group decisions that are significantly better than majority-based group decisions. Conclusion: The visual task considered here was much harder than the task we used in previous research. However, thanks to a range of technological enhancements, our cBCI delivered a significant improvement over group decisions made by a standard majority vote. Significance: With previous cBCIs, groups may perform better than single non-BCI users. Here, cBCI-assisted groups are more accurate than identically sized non-BCI groups. This paves the way to a variety of real-world applications of cBCIs where reducing decision errors is vital.
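
    A minimal sketch of the confidence-weighted majority vote described above (Python; the exact weighting scheme, the tie-break rule, and the assumption that confidences lie in [0, 1] are illustrative choices, not necessarily the authors' formulation):

        import numpy as np

        def confidence_weighted_vote(decisions, confidences):
            """Combine binary decisions (+1 = target present, -1 = absent)
            by weighting each member's vote with the classifier's confidence
            estimate; the group decision is the sign of the weighted sum."""
            decisions = np.asarray(decisions, dtype=float)
            confidences = np.asarray(confidences, dtype=float)
            score = np.sum(decisions * confidences)
            if score == 0.0:          # tie: fall back to an unweighted majority
                score = np.sum(decisions)
            return 1 if score >= 0 else -1

        # A confident minority can overturn two low-confidence votes:
        print(confidence_weighted_vote([+1, -1, -1], [0.9, 0.2, 0.3]))  # -> 1

    With equal confidences this reduces to a standard majority vote, which is the baseline the abstract compares against.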

    Improved targeting through collaborative decision-making and brain computer interfaces

    This paper reports a first step toward a brain-computer interface (BCI) for collaborative targeting. Specifically, we explore, from a broad perspective, how the collaboration of a group of people can improve performance on a simple target identification task. To this end, we asked a group of people to identify the location and color of a sequence of targets appearing on the screen, and measured the time and accuracy of the responses. The individual results are compared to a collective identification result determined by simple majority voting, with a random choice in case of a draw. The results are promising, as the identification becomes significantly more reliable even with this simple voting scheme and a small number of people (either odd or even) involved in the decision. In addition, the paper briefly analyzes the role of brain-computer interfaces in collaborative targeting, extending the targeting task by using a BCI instead of a mechanical response.
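
    A minimal sketch of the majority-voting rule used here (Python; function and variable names are ours, for illustration only):

        import random

        def majority_vote(responses):
            """Majority vote over categorical responses (e.g. target
            location/color labels), with a random choice in case of a
            draw, so it works for both odd and even group sizes."""
            counts = {}
            for r in responses:
                counts[r] = counts.get(r, 0) + 1
            top = max(counts.values())
            winners = [r for r, c in counts.items() if c == top]
            return random.choice(winners)

        print(majority_vote(["red", "red", "blue", "blue"]))  # draw -> random pick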

    CES-533: Analysis of the Event-related Potentials induced by cuts in feature movies and evaluation of the possibility of using such ERPs for understanding the effects of cuts on viewers

    In this paper, we analyse the Event-Related Potentials (ERPs) produced by cuts where the scenes before and after the cut are narratively related. In tests with 6 participants and 930 cuts from 5 Hollywood feature movies, we found that cuts produce a large negative ERP with an onset 100 ms after the cut and a duration of 600 ms, distributed over a very large region of the scalp. The real-world nature of the stimuli makes it hard to characterise the effects of cuts on a trial-by-trial basis. However, we found that aggregating data across all electrodes and averaging the ERPs elicited by cuts across all participants (a technique we borrowed from collaborative brain-computer interfaces) produced more reliable information. In particular, we were able to reveal a relationship between the length of shots and the amplitude of the corresponding ERP, with longer shots producing larger amplitudes. We also found that amplitudes vary across and within movies, most likely as a consequence of movie directors and editors using different cutting techniques. In the future, we will explore the possibility of turning these findings into a collaborative brain-computer interface for aiding test screenings by evaluating whether specific cuts have their intended effect on viewers.
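
    The aggregation-then-averaging step described above might look roughly as follows (Python; the array layout, sampling rate, and epoch counts are hypothetical, not taken from the paper):

        import numpy as np

        def grand_average_erp(epochs):
            """Average cut-locked EEG epochs over electrodes, trials, and
            participants to obtain a single grand-average ERP waveform.

            epochs: array of shape (participants, trials, electrodes, samples),
                    each epoch time-locked to a cut."""
            return np.asarray(epochs).mean(axis=(0, 1, 2))

        # Hypothetical dimensions: 6 participants, 155 cuts each, 64 electrodes,
        # 350 samples per epoch (700 ms at 500 Hz).
        epochs = np.random.randn(6, 155, 64, 350)
        erp = grand_average_erp(epochs)   # shape: (350,)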

    From Big Data to Big Displays: High-Performance Visualization at Blue Brain

    Blue Brain has pushed high-performance visualization (HPV) to complement its HPC strategy since its inception in 2007. In 2011, this strategy was accelerated to develop innovative visualization solutions through increased funding and strategic partnerships with other research institutions. We present the key elements of this HPV ecosystem, which integrates C++ visualization applications with novel collaborative display systems. We show how our strategy of transforming visualization engines into services enables a variety of use cases, not only for integration with high-fidelity displays, but also to build service-oriented architectures, to link into web applications, and to provide remote services to Python applications. Comment: ISC 2017 Visualization at Scale workshop.

    The Case for a Mixed-Initiative Collaborative Neuroevolution Approach

    It is clear that current attempts at using algorithms to create artificial neural networks have had mixed success at best when it comes to creating large networks and/or complex behavior. This should not be unexpected, as creating an artificial brain is essentially a design problem. Human design ingenuity still surpasses computational design for most tasks in most domains, including architecture, game design, and authoring literary fiction. This leads us to ask what the best way is to combine human and machine design capacities when designing artificial brains. Both have their strengths and weaknesses: for example, humans are much too slow to manually specify thousands of neurons, let alone the billions of neurons in a human brain, but on the other hand they can rely on a vast repository of common-sense understanding and design heuristics that help them perform a much better guided search of the design space than an algorithm can. Therefore, in this paper we argue for a mixed-initiative approach to collaborative online brain building and present first results toward this goal. Comment: Presented at WebAL-1: Workshop on Artificial Life and the Web 2014 (arXiv:1406.2507).

    "Involving Interface": An Extended Mind Theoretical Approach to Roboethics

    In 2008, the authors held Involving Interface, a lively interdisciplinary event focusing on issues of biological, sociocultural, and technological interfacing (see Acknowledgments). Inspired by discussions at this event, in this article we further discuss the value of input from neuroscience for developing robots and machine interfaces, and the value of philosophy, the humanities, and the arts for identifying persistent links between human interfacing and broader ethical concerns. The importance of ongoing interdisciplinary debate and public communication on scientific and technical advances is also highlighted. Throughout, the authors explore the implications of the extended mind hypothesis for notions of moral accountability and robotics.

    Heterogeneous data fusion for brain psychology applications

    This thesis aims to apply Empirical Mode Decomposition (EMD), Multiscale Entropy (MSE), and collaborative adaptive filters to the monitoring of different brain consciousness states. Both block-based and online approaches are investigated, and a possible extension to the monitoring and identification of electromyographic (EMG) states is provided. Firstly, EMD is employed as a multiscale, time-frequency, data-driven tool to decompose a signal into a number of band-limited oscillatory components; its data-driven nature makes EMD an ideal candidate for the analysis of nonlinear and non-stationary data. This methodology is further extended to process multichannel real-world data by making use of recent theoretical advances in complex and multivariate EMD. It is shown that this can be used to robustly measure higher-order features in multichannel recordings that indicate 'QBD'. In the next stage, analysis is performed in an information-theoretic setting on multiple scales in time, using MSE. This offers an insight into the complexity of real-world recordings. The results of the MSE analysis and the corresponding statistical analysis show a clear difference in MSE between patients in different brain consciousness states. Finally, an online method for the assessment of the underlying signal nature is studied. This method is based on a collaborative adaptive filtering approach and is shown to be able to approximately quantify the degree of signal nonlinearity, sparsity, and non-circularity relative to the constituent subfilters. To further illustrate the usefulness of the proposed data-driven multiscale signal processing methodology, the final case study considers a human-robot interface based on multichannel EMG analysis. A preliminary analysis shows that the same methodology applied to the analysis of brain cognitive states gives robust and accurate results. The analysis, simulations, and the scope of applications presented suggest great potential for the proposed multiscale data processing framework for feature extraction in multichannel data analysis. Directions for future work include further development of real-time feature-map approaches and their use across brain-computer and brain-machine interface applications.
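
    As a rough illustration of the MSE step mentioned above (Python; a simplified sample-entropy estimator with tolerance r = 0.15 * std, chosen for brevity; the thesis's EMD and collaborative-filtering stages are not reproduced here):

        import numpy as np

        def coarse_grain(x, scale):
            """Non-overlapping averaging at a given scale: the standard
            first step of multiscale entropy (MSE)."""
            n = len(x) // scale
            return x[:n * scale].reshape(n, scale).mean(axis=1)

        def sample_entropy(x, m=2, r_factor=0.15):
            """Simplified sample entropy SampEn(m, r) with r = r_factor * std(x)."""
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()

            def matches(length):
                t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
                d = np.abs(t[:, None] - t[None, :]).max(axis=2)  # Chebyshev distance
                return (d <= r).sum() - len(t)                   # exclude self-matches

            b, a = matches(m), matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        def mse_curve(x, max_scale=10):
            """Entropy of the coarse-grained series at scales 1..max_scale."""
            return [sample_entropy(coarse_grain(x, s)) for s in range(1, max_scale + 1)]

        print(mse_curve(np.random.randn(1000), max_scale=5))

    Comparing such curves between recordings is one simple way to expose the complexity differences between consciousness states that the thesis reports.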