    Primitive Shape Imagery Classification from Electroencephalography

    Introduction: Brain-computer interfaces (BCIs) augment traditional interfaces for human-computer interaction and provide alternative communication devices that enable the physically impaired to work. Imagined object/shape classification from electroencephalography (EEG) may lead, for example, to enhanced tools for fields such as engineering, design, and the visual arts. Evidence to support such a proposition from non-invasive neuroimaging has to date come mainly from functional magnetic resonance imaging (fMRI) [1], indicating that visual perception and mental imagery show similar brain activity patterns [2] and that, although the primary visual cortex has an important role in mental imagery and perception, the occipitotemporal cortex also encodes sensory, semantic, and emotional properties during shape imagery [3]. Here we investigate whether five imagined primitive shapes (sphere, cone, pyramid, cylinder, cube) can be classified from EEG using filter bank common spatial patterns (FBCSP) [4].
    Material, Methods, and Results: Ten healthy volunteers (8 males and 2 females, aged 26-44) participated in a single-session study (three runs, four blocks/run, 30 trials/block, i.e., six repetitions of five primitive shapes in random order). Trials lasted 7 s, as shown in Fig. 1, and ended with an auditory tone. Thirty EEG channels were recorded with a g.BSamp EEG system using active electrodes (g.tec, Austria). EEG channels with high-level noise were removed. Signals were band-pass filtered in six non-overlapping, 4 Hz-wide bands covering the 4-40 Hz frequency range. FBCSP-based feature extraction and mutual information (MI) based feature selection provided input features for 2-class classification using linear discriminant analysis (LDA), separately for each target shape versus the rest. The final 5-class decision was made by comparing the signed distance to the 2-class discriminant hyperplane across the five binary classifiers, as shown in Fig. 1. Classifiers were trained on two runs and tested on the one unseen run (i.e., 3-fold cross-validation). A Wilcoxon non-parametric test was used to verify that the difference in decoding accuracy (DA) between the end of the resting period (-1 s) and the maximal peak accuracy during the shape imagery task (0-3 s) is significant (p<0.001). Fig. 1 shows the between-subject average time-varying classification accuracies with standard deviation (shaded area).
    Discussion: The results indicate that shape imagery provides separability, with accuracy significantly higher than the ~20% chance level observed prior to the display period and a maximum accuracy of 34%. In [5], classification of five imagined primitive and complex shapes with 44% accuracy is reported using a 14-channel Emotiv headset. The difference in performance may be influenced by the EEG recording (the EEG in [5] appears to have different dynamics, with significant mean shifts); that study also had more sessions/trials and applied ICA for noise removal, and its participants had design experience whilst ours did not. Improvement of our methods is required to achieve a higher accuracy rate. It is unclear whether online feedback during shape imagery training and learning will impact performance; a multisession online study with feedback is the next step in this research.
    Significance: To the best of our knowledge, this is only the second study of shape imagery classification from EEG.
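    The one-vs-rest decision rule described in this abstract (pick the shape whose binary LDA hyperplane yields the largest signed distance) can be sketched as follows. The weight vectors, biases, and feature values here are hypothetical stand-ins for trained FBCSP/LDA outputs, not the study's actual parameters.

    ```python
    import numpy as np

    def one_vs_rest_decision(features, weights, biases):
        """Choose the class whose target-vs-rest hyperplane gives the largest
        signed distance to the feature point (the 5-class decision rule).

        features: (n_features,) FBCSP feature vector for one trial
        weights:  (n_classes, n_features) LDA weight vectors
        biases:   (n_classes,) LDA intercepts
        """
        # Signed distance from the point to each class's separating hyperplane
        dists = (weights @ features + biases) / np.linalg.norm(weights, axis=1)
        return int(np.argmax(dists))

    # Toy example: 5 shape classes, 4 features, made-up classifier weights
    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 4))
    b = np.zeros(5)
    x = 2.0 * W[2]  # a feature point lying along class 2's weight direction
    shapes = ["sphere", "cone", "pyramid", "cylinder", "cube"]
    print(shapes[one_vs_rest_decision(x, W, b)])
    ```

    Because the distances are normalized by each hyperplane's weight norm, the toy trial is assigned to the class whose weight vector it is aligned with (index 2 here).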

    Integrating Multiple Sketch Recognition Methods to Improve Accuracy and Speed

    Sketch recognition is the computer understanding of hand-drawn diagrams. Recognizing sketches instantaneously is necessary to build responsive interfaces with real-time feedback. Various techniques exist to quickly recognize sketches into ten or twenty classes. However, for much larger datasets of sketches drawn from a large number of classes, these existing techniques can take an extended period of time to accurately classify an incoming sketch and require significant computational overhead. Thus, to make classification of large datasets feasible, we propose using multiple stages of recognition. In the initial stage, gesture-based feature values are calculated and the trained model is used to classify the incoming sketch. Sketches with an accuracy below a threshold value go through a second stage of geometric recognition techniques. In this second stage, the sketch is segmented and sent to shape-specific recognizers. The sketches are matched against predefined shape descriptions, and confidence values are calculated. The system outputs a list of classes the sketch could be classified as, along with the accuracy and precision for each sketch. This process significantly reduces the time taken to classify such huge datasets of sketches and increases both the accuracy and precision of the recognition.
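    The staged pipeline in this abstract (a fast gesture-feature classifier first, with slower shape-specific geometric recognizers only when confidence is low) can be sketched as a simple cascade. The function names, threshold, and scores below are illustrative assumptions, not the authors' actual API.

    ```python
    def classify_cascade(sketch, fast_model, geometric_recognizers, threshold=0.8):
        """Two-stage sketch recognition: try the cheap gesture-feature model
        first and fall back to shape-specific geometric recognizers when its
        confidence falls below the threshold."""
        label, confidence = fast_model(sketch)  # stage 1: gesture-based features
        if confidence >= threshold:
            return label, confidence
        # Stage 2: segment and match against predefined shape descriptions
        scores = {name: rec(sketch) for name, rec in geometric_recognizers.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]

    # Toy stand-ins: a low-confidence fast stage forces the geometric stage
    fast = lambda s: ("arrow", 0.50)
    geo = {"circle": lambda s: 0.30, "square": lambda s: 0.90}
    result = classify_cascade("stroke-data", fast, geo)  # -> ("square", 0.9)
    ```

    The cascade only pays the cost of the geometric recognizers on the hard cases, which is where the reported speedup over running every shape recognizer on every sketch comes from.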


    Mining Human Shape Perception with Role Playing Games

    'Games with a purpose' is a paradigm where games are designed to computationally capture the essence of the underlying collective human conscience or common sense that plays a major role in decision-making. This human computing method ensures spontaneous participation of players who, as a byproduct of playing, provide useful data that is impossible to generate computationally and extremely difficult to collect through extensive surveys. In this paper we describe a game that allows us to collect data on human perception of character body shapes. The paper describes the experimental setup, related game design constraints, art creation, and data analysis. In our interactive role-playing detective game titled Villain Ville, players are asked to characterize different versions of full-body color portraits of three villain characters. They are later supposed to correctly match their character-trait ratings to a set of characters represented only with outlines of primitive vector shapes. By transferring human intelligence tasks into core game-play mechanics, we have successfully managed to collect motivated data. Preliminary analysis of game data generated by 50 secondary school students shows a convergence to some common perception associations between role, physicality, and personality. We hope to harness this game to discover perception for a wide variety of body shapes to build up an intelligent shape-trait-role model, with applications in tutored drawing, procedural character geometry creation, and intelligent retrieval.

    NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities

    We present Neural Signal Operated Intelligent Robots (NOIR), a general-purpose, intelligent brain-robot interface system that enables humans to command robots to perform everyday activities through brain signals. Through this interface, humans communicate their intended objects of interest and actions to the robots using electroencephalography (EEG). Our novel system demonstrates success in an expansive array of 20 challenging, everyday household activities, including cooking, cleaning, personal care, and entertainment. The effectiveness of the system is improved by its synergistic integration of robot learning algorithms, allowing for NOIR to adapt to individual users and predict their intentions. Our work enhances the way humans interact with robots, replacing traditional channels of interaction with direct, neural communication. Project website: https://noir-corl.github.io/