8 research outputs found

    Exploring EEG for Object Detection and Retrieval

    This paper explores the potential of brain-computer interfaces (BCI) as a relevance feedback mechanism in content-based image retrieval. We investigate whether it is possible to capture useful EEG signals to detect whether relevant objects are present in a dataset of realistic and complex images. We perform several experiments using a rapid serial visual presentation (RSVP) of images at different rates (5 Hz and 10 Hz) with 8 users having different degrees of familiarity with BCI and the dataset. We then use the feedback from the BCI and mouse-based interfaces to retrieve localized objects in a subset of TRECVid images. We show that it is indeed possible to detect such objects in complex images and, moreover, that users with previous knowledge of the dataset or experience with RSVP outperform the others. When users have limited time to annotate the images (100 seconds in our experiments), both interfaces are comparable in performance. Comparing our best users in a retrieval task, we found that EEG-based relevance feedback outperforms mouse-based feedback. The realistic and complex image dataset differentiates our work from previous studies on EEG for image retrieval.
    Comment: This preprint is the full version of a short paper accepted at the ACM International Conference on Multimedia Retrieval (ICMR) 2015 (Shanghai, China).
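    The presentation rates above fix a per-image display budget. A minimal sketch (pure Python; the function name is illustrative, not from the paper) of how many stimuli an RSVP stream fits into a fixed annotation window:

```python
def rsvp_schedule(rate_hz, total_seconds):
    """Return onset times (in seconds) of images shown in an RSVP
    stream at a fixed presentation rate within a time budget."""
    interval = 1.0 / rate_hz
    n_images = int(total_seconds * rate_hz)
    return [i * interval for i in range(n_images)]

# At 5 Hz each image is on screen for 200 ms; at 10 Hz, for 100 ms.
# A 100-second annotation budget (as in the experiments) covers:
onsets_5hz = rsvp_schedule(5, 100)    # 500 images
onsets_10hz = rsvp_schedule(10, 100)  # 1000 images
```

    This makes the trade-off explicit: doubling the rate doubles coverage of the dataset but halves the exposure time per image.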

    Object Segmentation in Images using EEG Signals

    This paper explores the potential of brain-computer interfaces for segmenting objects in images. Our approach centers on designing an effective method for displaying image parts to users such that they generate measurable brain reactions. When an image region, specifically a block of pixels, is displayed, we estimate the probability of the block containing the object of interest using a score based on EEG activity. After several such blocks are displayed, the resulting probability map is binarized and combined with the GrabCut algorithm to segment the image into object and background regions. This study shows that BCI and simple EEG analysis are useful in locating object boundaries in images.
    Comment: This is a preprint version prior to submission for peer-review of the paper accepted to the 22nd ACM International Conference on Multimedia (November 3-7, 2014, Orlando, Florida, USA) for the High Risk High Reward session. 10 pages.
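    The binarization step can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the threshold value is an assumption. The output uses OpenCV's GrabCut label convention (0 = sure background, 1 = sure foreground, 2 = probable background, 3 = probable foreground), so the mask could then seed `cv2.grabCut` in `cv2.GC_INIT_WITH_MASK` mode:

```python
import numpy as np

# Label values follow OpenCV's GrabCut convention:
# cv2.GC_PR_BGD == 2 (probable background), cv2.GC_PR_FGD == 3 (probable foreground)
GC_PR_BGD, GC_PR_FGD = 2, 3

def eeg_map_to_grabcut_seed(prob_map, threshold=0.5):
    """Binarize a per-block EEG probability map into a GrabCut seed mask."""
    seed = np.where(np.asarray(prob_map) >= threshold, GC_PR_FGD, GC_PR_BGD)
    return seed.astype(np.uint8)

prob = np.array([[0.1, 0.8],
                 [0.4, 0.9]])
seed = eeg_map_to_grabcut_seed(prob)
# seed == [[2, 3], [2, 3]]; this mask could then be passed to
# cv2.grabCut(img, seed, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
```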

    Improving object segmentation by using EEG signals and rapid serial visual presentation

    This paper extends our previous work on the potential of EEG-based brain-computer interfaces to segment salient objects in images. The proposed system analyzes the Event Related Potentials (ERP) generated by the rapid serial visual presentation of windows on the image. Detection of the P300 signal allows estimating a saliency map of the image, which is used to seed a semi-supervised object segmentation algorithm. Thanks to the new contributions presented in this work, the average Jaccard index improved from 0.47 to 0.66 on our publicly available dataset of images, object masks and captured EEG signals. This work also studies alternative architectures to the original one, the impact of object occupation in each image window, and a more robust evaluation based on statistical analysis and a weighted F-score.
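    The Jaccard index used for evaluation is the intersection-over-union of the predicted and ground-truth object masks; a minimal sketch:

```python
import numpy as np

def jaccard_index(pred_mask, gt_mask):
    """Intersection-over-union of two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

pred = np.array([[1, 1], [0, 0]])
gt   = np.array([[1, 0], [0, 0]])
print(jaccard_index(pred, gt))  # 0.5 (1 shared pixel / 2 in the union)
```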

    Enhancement of group perception via a collaborative brain-computer interface

    Objective: We aimed to improve group performance in a challenging visual search task via a hybrid collaborative brain-computer interface (cBCI). Methods: Ten participants individually undertook a visual search task where a display was presented for 250 ms, and they had to decide whether a target was present or not. Local temporal correlation common spatial pattern (LTCCSP) was used to extract neural features from response- and stimulus-locked EEG epochs. The resulting feature vectors were extended by including response times and features extracted from eye movements. A classifier was trained to estimate the confidence of each group member. cBCI-assisted group decisions were then obtained using a confidence-weighted majority vote. Results: Participants were combined in groups of different sizes to assess the performance of the cBCI. Results show that LTCCSP neural features, response times, and eye movement features significantly improve the accuracy of the cBCI over what we achieved with previous systems. For most group sizes, our hybrid cBCI yields group decisions that are significantly better than majority-based group decisions. Conclusion: The visual task considered here was much harder than a task we used in previous research. However, thanks to a range of technological enhancements, our cBCI has delivered a significant improvement over group decisions made by a standard majority vote. Significance: With previous cBCIs, groups may perform better than single non-BCI users. Here, cBCI-assisted groups are more accurate than identically sized non-BCI groups. This paves the way to a variety of real-world applications of cBCIs where reducing decision errors is vital.
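    The confidence-weighted majority vote at the core of the cBCI can be sketched as follows. This is an illustrative reconstruction: the classifier-derived confidence estimates from the paper are replaced here by plain numbers.

```python
def weighted_majority_vote(decisions, confidences):
    """Combine individual yes/no decisions (+1 = target present,
    -1 = target absent) into a group decision, weighting each
    member's vote by an estimate of their confidence."""
    score = sum(d * c for d, c in zip(decisions, confidences))
    return 1 if score >= 0 else -1

# Three low-confidence members say "present", two confident members
# say "absent"; the weighted vote sides with the confident minority:
decisions   = [+1, +1, +1, -1, -1]
confidences = [0.2, 0.3, 0.1, 0.9, 0.8]
print(weighted_majority_vote(decisions, confidences))  # -1
```

    With all confidences equal this reduces to a standard majority vote, which is exactly the baseline the paper compares against.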

    Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction:a review

    Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness and the wide availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce study analyses and results. In order to perform an optimized classification and report a proper description of the results, it is necessary to have a comprehensive critical overview of the applications of SVM. The aim of this paper is to provide a review of the usage of SVM in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details concerning the reviewed papers are listed in tables, and statistics of SVM use in the literature are presented. The suitability of SVM for HCI is discussed, and critical comparisons with other classifiers are reported.
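    At prediction time, an SVM labels a feature vector by the sign of a kernel expansion over its support vectors. A minimal sketch of this decision rule with an RBF kernel follows; the support vectors, dual coefficients and bias are illustrative placeholders, not values from any reviewed study.

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: k(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def svm_predict(x, support_vectors, dual_coefs, bias, gamma=1.0):
    """SVM decision rule: sign of sum_i (alpha_i * y_i) * k(sv_i, x) + b."""
    score = sum(c * rbf_kernel(sv, x, gamma)
                for sv, c in zip(support_vectors, dual_coefs)) + bias
    return 1 if score >= 0 else -1

# Toy two-support-vector model separating two EEG-like feature points:
svs   = [[0.0, 0.0], [1.0, 1.0]]
coefs = [-1.0, 1.0]   # each entry is alpha_i * y_i for one support vector
bias  = 0.0
print(svm_predict([0.9, 0.9], svs, coefs, bias))  # 1
```

    The paper's point about reproducibility maps directly onto this sketch: the kernel choice, `gamma`, and the regularization used during training are exactly the parameters that must be reported for a study to be replicable.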

    An analysis of EEG signals present during target search

    Recent proof-of-concept research has appeared highlighting the applicability of Brain Computer Interface (BCI) technology to utilise a subject's visual system to classify images. This technique involves classifying a user's EEG (electroencephalography) signals as they view images presented on a screen. The premise is that images (targets) that arouse a subject's attention generate distinct brain responses, and these brain responses can then be used to label the images. Research thus far in this domain has focused on examining the tasks and paradigms that can be used to elicit these neurologically informative signals from images, and the correlates of human perception that modulate them. While success has been shown in detecting these responses in high-speed presentation paradigms, there is still an open question as to which search tasks can ultimately benefit from an EEG-based BCI system. In this thesis we explore: (1) the neural signals present during visual search tasks that require eye movements, and how they inform us of the possibilities for BCI applications utilising eye tracking and EEG in combination with each other, (2) how temporal characteristics of eye movements can give an indication of the suitability of a search task to being augmented by an EEG-based BCI system, (3) the characteristics of a number of paradigms that can be used to elicit informative neural responses to drive image search BCI applications. In this thesis we demonstrate that EEG signals can be used in a discriminative manner to label images. In addition, we find, in certain instances, that signals derived from sources such as eye movements can yield significantly more discriminative information.