
    Attention Allocation Aid for Visual Search

    This paper outlines the development and testing of a novel, feedback-enabled attention allocation aid (AAAD), which uses real-time physiological data to improve human performance in a realistic sequential visual search task. By optimizing over search duration, the aid improves efficiency while preserving decision accuracy as the operator identifies and classifies targets within simulated aerial imagery. Specifically, using experimental eye-tracking data and measurements of target detectability across the human visual field, we develop functional models of detection accuracy as a function of search time, number of eye movements, scan path, and image clutter. These models are then used by the AAAD, in conjunction with real-time eye position data, to make probabilistic estimates of attained search accuracy and to recommend that the observer either move on to the next image or continue exploring the present one. An experimental evaluation in a scenario motivated by human supervisory control in surveillance missions confirms the benefits of the AAAD. Comment: To be presented at the ACM CHI conference in Denver, Colorado in May 201
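    The stop/continue recommendation described in this abstract can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's actual model: the saturating accuracy curve, the parameter values, and the function names are invented for exposition only.

```python
import math

def estimated_accuracy(search_time_s, n_fixations,
                       a_max=0.95, tau=4.0, per_fixation_gain=0.01):
    """Toy model: detection accuracy rises with search time (saturating
    exponential) and with the number of eye fixations, capped at a_max."""
    base = a_max * (1.0 - math.exp(-search_time_s / tau))
    return min(a_max, base + per_fixation_gain * n_fixations)

def recommend(search_time_s, n_fixations, threshold=0.85):
    """Recommend moving to the next image once the estimated attained
    accuracy clears the threshold; otherwise keep exploring."""
    acc = estimated_accuracy(search_time_s, n_fixations)
    return "move on" if acc >= threshold else "keep searching"
```

    In the real system the accuracy model is fitted from experimental eye-tracking data and also depends on scan path and image clutter; this sketch only shows the decision structure.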

    User-centered visual analysis using a hybrid reasoning architecture for intensive care units

    One problem with Intensive Care Unit information systems is that, in some cases, they produce a very dense display of data. To keep the growing volume of data readable and to preserve an overview, special features are required (e.g., data prioritization, clustering, and selection mechanisms) together with analytical methods (e.g., temporal data abstraction, principal component analysis, and event detection). This paper addresses the problem of improving the integration of visual and analytical methods in medical monitoring systems. We present a knowledge- and machine-learning-based approach to support the knowledge discovery process with appropriate analytical and visual methods, and discuss its potential benefit to the development of user interfaces for intelligent monitors that can assist with the detection and explanation of new, potentially threatening medical events. The proposed hybrid reasoning architecture provides an interactive graphical user interface for adjusting the parameters of the analytical methods based on the user's task at hand. The action sequences the user performs on the graphical user interface are consolidated in a dynamic knowledge base with specific hybrid reasoning that integrates symbolic and connectionist approaches. These acquired sequences of expert knowledge can facilitate knowledge emergence during similar experiences and positively impact the monitoring of critical situations. The resulting graphical user interface, incorporating user-centered visual analysis, is exploited to facilitate the natural and effective representation of clinical information for patient care.

    Object Segmentation in Images using EEG Signals

    This paper explores the potential of brain-computer interfaces (BCIs) for segmenting objects in images. Our approach centers on designing an effective method for displaying image parts to users such that they generate measurable brain reactions. When an image region, specifically a block of pixels, is displayed, we estimate the probability that the block contains the object of interest using a score based on EEG activity. After several such blocks are displayed, the resulting probability map is binarized and combined with the GrabCut algorithm to segment the image into object and background regions. This study shows that BCI and simple EEG analysis are useful for locating object boundaries in images. Comment: This is a preprint version prior to submission for peer review of the paper accepted to the 22nd ACM International Conference on Multimedia (November 3-7, 2014, Orlando, Florida, USA) for the High Risk High Reward session. 10 page
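    The binarization step described here can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold and function name are assumptions, and only the label values follow OpenCV's GrabCut convention (0 = background, 1 = foreground, 2 = probable background, 3 = probable foreground).

```python
def eeg_seed_mask(block_probs, threshold=0.5):
    """Binarize a per-block EEG probability map into GrabCut-style seed
    labels: blocks at or above the threshold become 'probable foreground'
    (3), the rest 'probable background' (2)."""
    GC_PR_BGD, GC_PR_FGD = 2, 3  # OpenCV's probable-bg / probable-fg labels
    return [[GC_PR_FGD if p >= threshold else GC_PR_BGD for p in row]
            for row in block_probs]
```

    A mask like this (upsampled to pixel resolution) could then seed OpenCV's `cv2.grabCut` in mask-initialization mode to produce the final object/background segmentation.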

    Enhancement of group perception via a collaborative brain-computer interface

    Objective: We aimed to improve group performance in a challenging visual search task via a hybrid collaborative brain-computer interface (cBCI). Methods: Ten participants individually undertook a visual search task in which a display was presented for 250 ms and they had to decide whether a target was present. Local temporal correlation common spatial pattern (LTCCSP) was used to extract neural features from response- and stimulus-locked EEG epochs. The resulting feature vectors were extended with response times and features extracted from eye movements. A classifier was trained to estimate the confidence of each group member. cBCI-assisted group decisions were then obtained using a confidence-weighted majority vote. Results: Participants were combined into groups of different sizes to assess the performance of the cBCI. Results show that LTCCSP neural features, response times, and eye movement features significantly improve the accuracy of the cBCI over previous systems. For most group sizes, our hybrid cBCI yields group decisions that are significantly better than majority-based group decisions. Conclusion: The visual task considered here was much harder than the task used in our previous research. However, thanks to a range of technological enhancements, our cBCI delivered a significant improvement over group decisions made by a standard majority vote. Significance: With previous cBCIs, groups may perform better than single non-BCI users. Here, cBCI-assisted groups are more accurate than identically sized non-BCI groups. This paves the way to a variety of real-world applications of cBCIs where reducing decision errors is vital.
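    The confidence-weighted majority vote at the core of this cBCI can be sketched in one function. The encoding below (decisions as +1/-1, confidences in [0, 1], ties resolved toward "present") is an illustrative assumption; the paper's confidences come from a classifier trained on the neural, response-time, and eye-movement features.

```python
def weighted_group_decision(decisions, confidences):
    """Confidence-weighted majority vote: each member's decision
    (+1 = target present, -1 = absent) is weighted by that member's
    estimated confidence, and the sign of the total wins."""
    score = sum(d * c for d, c in zip(decisions, confidences))
    return 1 if score >= 0 else -1
```

    With equal confidences this reduces to a standard majority vote; the benefit reported in the abstract comes from down-weighting low-confidence members so a single confident observer can outvote several uncertain ones.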

    Collaborative Brain-Computer Interface for Human Interest Detection in Complex and Dynamic Settings

    Humans can fluidly adapt their interest in complex environments in ways that machines cannot. Here, we lay the groundwork for a real-world system that passively monitors and merges neural correlates of visual interest across team members via a collaborative brain-computer interface (cBCI). When group interest is detected and co-registered in time and space, it can be used to model the task relevance of items in a dynamic, natural environment. Previous work on cBCIs focuses on static stimuli, stimulus- or response-locked analyses, and often within-subject and within-experiment model training. The contributions of this work are twofold. First, we test the utility of a cBCI in a scenario that more closely resembles natural conditions, where subjects visually scanned a video for target items in a virtual environment. Second, we use an experiment-agnostic deep learning model to account for the real-world use case where no training set exists that exactly matches the end-user's task and circumstances. With our approach, we show improved performance as the number of subjects in the cBCI ensemble grows, as well as the potential to reconstruct ground-truth target occurrence in an otherwise noisy and complex environment. Comment: 6 pages, 6 figure
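    The ensemble-merging idea — interest scores from more subjects yielding a cleaner group signal — can be sketched as simple averaging of time-aligned per-subject scores. This is a deliberately simplified assumption for illustration; the paper uses an experiment-agnostic deep learning model, not a plain mean, and the threshold here is invented.

```python
def group_interest(subject_scores, threshold=0.6):
    """Merge time-aligned per-subject interest scores (one list of
    per-frame scores per subject) by averaging across subjects, then
    flag frames where mean group interest clears the threshold."""
    n = len(subject_scores)
    frames = zip(*subject_scores)  # iterate frame by frame across subjects
    return [sum(frame) / n >= threshold for frame in frames]
```

    Averaging across subjects suppresses uncorrelated single-subject noise, which is one intuition for why performance improves as the ensemble grows.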

    How do interactive tabletop systems influence collaboration?

    This paper examines the usefulness of interactive tabletop systems and whether and how they influence collaboration. We chose a creative problem-solving activity, brainstorming, as an application framework to test several collaborative media: pen-and-paper tools, the ‘‘around-the-table’’ form factor, the digital tabletop interface, and the attractiveness of interaction styles. Eighty subjects in total (20 groups of four members) participated in the experiments. The evaluation criteria were task performance, collaboration patterns (especially equity of contributions), and users’ subjective experience. The ‘‘around-the-table’’ form factor, which is hypothesized to promote social comparison, increased performance and improved collaboration through an increase in equity. Moreover, the attractiveness of the tabletop device improved subjective experience and increased motivation to engage in the task. However, designing for attractiveness is highly challenging, since overly attractive interfaces may distract users from the task.