    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Thesis (Ph.D.)--Boston University

    Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proffers a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires merging two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design.

    The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve real-time performance on a laptop computer. The BCI was also developed in Python, an open-source programming language that combines ease of programming with effective handling of the hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI setup was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%.

    The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A proposed blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geo-location, with the ultimate goal of coupling sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
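    As a hedged illustration of the real-time SSVEP classification the abstract describes, the sketch below scores a short EEG window by its spectral power at each candidate stimulation frequency and picks the strongest response. The sampling rate, stimulation frequencies, band width, and function names are assumptions made for illustration; this is not the Unlock Project's actual code.

```python
# Minimal sketch of four-choice SSVEP classification via spectral power.
# All names, frequencies, and parameters are illustrative assumptions;
# they are not taken from the Unlock Project software.
import numpy as np
from scipy.signal import welch

FS = 256                               # assumed EEG sampling rate (Hz)
STIM_FREQS = [12.0, 13.0, 14.0, 15.0]  # assumed stimulation frequencies

def classify_ssvep(eeg_window: np.ndarray) -> int:
    """Return the index of the stimulation frequency with the most
    spectral power in a 1-D EEG window (e.g., one occipital channel)."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=len(eeg_window))
    scores = []
    for f in STIM_FREQS:
        # Sum power near the target frequency and its second harmonic,
        # since SSVEP responses typically show harmonic structure.
        band = (np.abs(freqs - f) < 0.5) | (np.abs(freqs - 2 * f) < 0.5)
        scores.append(psd[band].sum())
    return int(np.argmax(scores))

# Example: two seconds of synthetic EEG dominated by the 14 Hz target.
t = np.arange(2 * FS) / FS
window = np.sin(2 * np.pi * 14.0 * t) + 0.5 * np.random.randn(t.size)
print(classify_ssvep(window))  # expected output: 2
```

    In practice, multi-channel SSVEP systems often replace this single-channel power comparison with canonical correlation analysis against reference sinusoids, but the decision rule, picking the stimulation frequency that best explains the recorded signal, is the same.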

    Utilizing Visual Attention and Inclination to Facilitate Brain-Computer Interface Design in an Amyotrophic Lateral Sclerosis Sample

    Individuals with amyotrophic lateral sclerosis (ALS) lose motor control and, in some cases, speech. A brain-computer interface (BCI) provides a means of communication through nonmuscular control. Visual BCIs have shown the highest potential compared with other modalities; nonetheless, visual attention concepts are largely ignored during the development of BCI paradigms. Additionally, individual performance differences and personal preferences are not considered in paradigm development; the traditional method for discovering the best paradigm for an individual user is trial and error. Visual attention research and personal preference provide the building blocks and guidelines for developing a successful paradigm. This study examines a BCI-based visual attention assessment in an ALS sample. The assessment takes into account the individual's visual attention characteristics, performance, and personal preference to select a paradigm. The resulting paradigm is optimized to the individual and then tested online against the traditional row-column paradigm. The optimal paradigm showed superior performance and preference scores over row-column, indicating that a BCI must be calibrated to individual differences in order to obtain the best paradigm for an end user.
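    The selection step described above can be pictured as a scoring rule over candidate paradigms that blends measured performance with stated preference. The sketch below is a minimal illustration under assumed field names and weights; the dissertation's actual assessment procedure is richer and is not reproduced here.

```python
# Illustrative sketch of picking a BCI paradigm for one user from
# measured accuracy and stated preference. Field names and weights
# are assumptions; the study's assessment is not reproduced here.
from dataclasses import dataclass

@dataclass
class ParadigmResult:
    name: str
    accuracy: float    # offline classification accuracy, 0..1
    preference: float  # user preference rating rescaled to 0..1

def select_paradigm(results: list[ParadigmResult],
                    w_acc: float = 0.7, w_pref: float = 0.3) -> str:
    """Choose the paradigm maximizing a weighted blend of performance
    and personal preference, replacing trial-and-error selection."""
    best = max(results, key=lambda r: w_acc * r.accuracy + w_pref * r.preference)
    return best.name

candidates = [
    ParadigmResult("row-column", accuracy=0.78, preference=0.40),
    ParadigmResult("checkerboard", accuracy=0.85, preference=0.70),
]
print(select_paradigm(candidates))  # expected output: checkerboard
```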

    IMAGINE Final Report

    Dublin City University video track experiments for TREC 2002

    Dublin City University participated in the Feature Extraction task and the Search task of the TREC-2002 Video Track. In the Feature Extraction task, we submitted three features: Face, Speech, and Music. In the Search task, we developed an interactive video retrieval system that incorporated the 40 hours of the video search test collection and supported user searching using our own feature extraction data along with the donated feature data and ASR transcripts from other Video Track groups. This video retrieval system allows a user to specify a query based on the 10 features and the ASR transcript; the query result is a ranked list of videos that can be further browsed at the shot level. To evaluate the usefulness of feature-based querying, we developed a second system interface that provides only ASR transcript-based querying, and we conducted an experiment with 12 test users to compare the two systems. Results were submitted to NIST, and we are currently conducting further analysis of user performance with the two systems.
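    As a rough illustration of the feature-plus-transcript querying described above, the sketch below ranks shots by a weighted combination of per-shot feature confidences and a term-overlap score against the ASR transcript. The feature names, weights, and scoring scheme are assumptions for illustration, not the DCU system's implementation.

```python
# Hedged sketch of ranking video shots by combining per-shot feature
# confidences (e.g., Face, Speech, Music) with an ASR-transcript term
# match. Feature names, weights, and scoring are assumptions for
# illustration, not the DCU TREC-2002 implementation.
from collections import Counter

def text_score(query_terms: list[str], transcript: str) -> float:
    """Count how often the query terms occur in the shot's transcript."""
    counts = Counter(transcript.lower().split())
    return float(sum(counts[t] for t in query_terms))

def rank_shots(shots, query_terms, wanted_features,
               w_text: float = 1.0, w_feat: float = 2.0) -> list[str]:
    """Each shot is (shot_id, transcript, {feature_name: confidence});
    returns shot ids ordered by combined text and feature evidence."""
    scored = []
    for shot_id, transcript, features in shots:
        feat = sum(features.get(f, 0.0) for f in wanted_features)
        score = w_text * text_score(query_terms, transcript) + w_feat * feat
        scored.append((score, shot_id))
    return [sid for _, sid in sorted(scored, reverse=True)]

shots = [
    ("v1_s3", "the president gave a speech today", {"Face": 0.9, "Speech": 0.8}),
    ("v2_s1", "instrumental music plays over credits", {"Music": 0.95}),
]
print(rank_shots(shots, ["speech"], ["Face"]))  # expected: ['v1_s3', 'v2_s1']
```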

    Using multimedia interfaces for speech therapy

    The role of avatars in e-government interfaces

    This paper investigates the use of avatars to communicate live messages in e-government interfaces. A comparative study is presented that evaluates the contribution of multimodal metaphors (including avatars) to the usability of e-government interfaces and to user trust. The communication metaphors evaluated included text, earcons, recorded speech, and avatars. The experimental platform involved two interface versions tested with a sample of 30 users. The results demonstrated that the use of multimodal metaphors in an e-government interface can significantly enhance usability and increase users' trust in the interface. A set of design guidelines for the use of multimodal metaphors in e-government interfaces was also produced.