
    Brain-Switches for Asynchronous Brain–Computer Interfaces: A Systematic Review

    A brain–computer interface (BCI) has been extensively studied as a novel communication system that allows disabled people to communicate using their brain activity. An asynchronous BCI system is more realistic and practical than a synchronous one in that BCI commands can be generated whenever the user wants. However, the relatively low performance of asynchronous BCI systems is problematic because redundant BCI commands are required to correct false-positive operations. To significantly reduce the number of false-positive operations, a two-step approach has been proposed that uses a brain-switch to first determine whether the user intends to operate the asynchronous BCI system before the system itself is engaged. This study presents a systematic review of state-of-the-art brain-switch techniques and future research directions. To this end, we reviewed brain-switch research articles published from 2000 to 2019 in terms of their (a) neuroimaging modality, (b) paradigm, (c) operation algorithm, and (d) performance.
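    The two-step idea described above can be sketched as a simple gate-then-decode pipeline: a brain-switch first decides whether the user intends to issue a command at all, and the main decoder runs only on the windows that pass the gate. All function names, the toy feature (mean activation over a window), and the threshold below are illustrative assumptions, not details from the reviewed systems.

```python
def brain_switch(window, threshold=0.6):
    """Toy intention detector: fires when mean 'activation' over a
    signal window exceeds a threshold (placeholder feature)."""
    return sum(window) / len(window) > threshold

def decode_command(window):
    """Toy command decoder, consulted only when the switch fires."""
    return "LEFT" if window[-1] < window[0] else "RIGHT"

def asynchronous_bci(windows):
    """Two-step pipeline: gate each window, decode only gated windows."""
    commands = []
    for w in windows:
        if brain_switch(w):                      # step 1: does the user want to act?
            commands.append(decode_command(w))   # step 2: which command?
        else:
            commands.append(None)                # idle: no false-positive output
    return commands

idle = [0.1, 0.2, 0.1, 0.2]      # below threshold -> ignored
active = [0.9, 0.8, 0.9, 0.7]    # above threshold -> decoded
print(asynchronous_bci([idle, active]))  # [None, 'LEFT']
```

    The gate is what distinguishes the asynchronous design: without it, every window would be decoded and idle periods would generate spurious commands.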

    Augmenting Sensorimotor Control Using “Goal-Aware” Vibrotactile Stimulation during Reaching and Manipulation Behaviors

    We describe two sets of experiments that examine the ability of vibrotactile encoding of simple position error and combined object states (calculated from an optimal controller) to enhance performance of reaching and manipulation tasks in healthy human adults. The goal of the first experiment (tracking) was to follow a moving target with a cursor on a computer screen. Visual and/or vibrotactile cues were provided in this experiment, and vibrotactile feedback was redundant with visual feedback in that it did not encode any information above and beyond what was already available via vision. After only 10 minutes of practice using vibrotactile feedback to guide performance, subjects tracked the moving target with response latency and movement accuracy values approaching those observed under visually guided reaching. Unlike previous reports on multisensory enhancement, combining vibrotactile and visual feedback of performance errors conferred neither positive nor negative effects on task performance. In the second experiment (balancing), vibrotactile feedback encoded a corrective motor command as a linear combination of object states (derived from a linear-quadratic regulator implementing a trade-off between kinematic and energetic performance) to teach subjects how to balance a simulated inverted pendulum. Here, the tactile feedback signal differed from visual feedback in that it provided information that was not readily available from visual feedback alone. Immediately after applying this novel “goal-aware” vibrotactile feedback, time to failure was improved by a factor of three. Additionally, the effect of vibrotactile training persisted after the feedback was removed. 
    These results suggest that vibrotactile encoding of appropriate combinations of state information may be an effective form of augmented sensory feedback that can be applied, among other purposes, to compensate for lost or compromised proprioception as commonly observed, for example, in stroke survivors.
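    The "goal-aware" signal described above can be sketched as a linear state-feedback law whose output is mapped onto a vibrotactile channel. The gains, state values, and the (site, intensity) encoding below are illustrative assumptions; the paper's actual LQR gains and pendulum dynamics are not reproduced here.

```python
def corrective_command(angle, angular_velocity, k=(12.0, 3.0)):
    """State-feedback law u = -(k1*theta + k2*theta_dot), the kind of
    linear combination of object states an LQR would produce."""
    k1, k2 = k
    return -(k1 * angle + k2 * angular_velocity)

def vibrotactile_encoding(u, u_max=10.0):
    """Map the command to a (site, intensity) pair: the stimulated side
    gives the sign of the correction, intensity its magnitude (0..1)."""
    site = "left" if u < 0 else "right"
    intensity = min(abs(u) / u_max, 1.0)
    return site, intensity

# Pendulum leaning slightly right while already swinging back:
u = corrective_command(angle=0.1, angular_velocity=-0.2)
print(vibrotactile_encoding(u))
```

    Because the encoding combines position and velocity, it can fall silent when the pendulum is off-center but already correcting itself, which is information a raw position-error display would not convey.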

    Defining brain–machine interface applications by matching interface performance with device requirements

    Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for identifying effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, again in terms of throughput and latency. Then device requirements are matched with the performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications. © 2007 Elsevier B.V. All rights reserved.
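    The matching step described above reduces to a feasibility check: an interface can drive a device when its throughput meets the device's requirement and its latency stays within the device's tolerance. The figures below (bits/s, ms) are illustrative placeholders, not values from the paper.

```python
def feasible(interface, device):
    """An interface can drive a device if it is both fast enough
    (throughput) and responsive enough (latency)."""
    return (interface["throughput_bps"] >= device["min_throughput_bps"]
            and interface["latency_ms"] <= device["max_latency_ms"])

# Hypothetical figures for illustration only:
noninvasive_bmi = {"throughput_bps": 0.5, "latency_ms": 2000}
domotics_switch = {"min_throughput_bps": 0.1, "max_latency_ms": 5000}
robotic_arm = {"min_throughput_bps": 5.0, "max_latency_ms": 200}

print(feasible(noninvasive_bmi, domotics_switch))  # True: modest demands
print(feasible(noninvasive_bmi, robotic_arm))      # False: needs far more throughput
```

    This mirrors the paper's conclusion pattern: low-demand devices (domotics) pass, while high-bandwidth devices (robotic arms) fail for current non-invasive interfaces.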

    Three levels of metric for evaluating wayfinding

    Three levels of virtual environment (VE) metric are proposed, based on: (1) users’ task performance (time taken, distance traveled and number of errors made), (2) physical behavior (locomotion, looking around, and time and error classification), and (3) decision making (i.e., cognitive) rationale (think aloud, interview and questionnaire). Examples of the use of these metrics are drawn from a detailed review of research into VE wayfinding. A case study from research into the fidelity required for efficient VE wayfinding is presented, showing the unsuitability in some circumstances of common task-performance metrics such as time and distance, and the benefits to be gained by making fine-grained analyses of users’ behavior. Taken as a whole, the article highlights the range of techniques that have been successfully used to evaluate wayfinding and explains in detail how some of these techniques may be applied.

    Using neurophysiological signals that reflect cognitive or affective state: Six recommendations to avoid common pitfalls

    Estimating cognitive or affective state from neurophysiological signals, and designing applications that make use of this information, requires expertise in many disciplines such as neurophysiology, machine learning, experimental psychology, and human factors. This makes it difficult to perform research that is strong in all its aspects, and to judge a study or application on its merits. On the occasion of the special topic “Using neurophysiological signals that reflect cognitive or affective state”, we here summarize frequently occurring pitfalls and recommendations on how to avoid them, both for authors (researchers) and readers. They relate to defining the state of interest, the neurophysiological processes expected to be involved in that state, confounding factors, inadvertently “cheating” with classification analyses, insight into what underlies successful state estimation, and finally, the added value of neurophysiological measures in the context of an application. We hope that this paper will support the community in producing high-quality studies and well-validated, useful applications.

    A Scenario Analysis of Wearable Interface Technology Foresight

    Although the importance and value of wearable interfaces have gradually been recognized, the related technologies and the priority of adopting them have so far not been clearly identified. To fill this gap, this paper focuses on the technology planning strategy of organizations that have an interest in developing or adopting wearable interface technologies. Based on a scenario analysis approach, a technology planning strategy is proposed. In this analysis, thirty wearable interface technologies are classified into six categories, and the importance and risk factors of these categories are then evaluated under two possible scenarios. The main research findings include the discovery that most brain-based wearable interface technologies are rated high-to-medium importance and high risk in all scenarios, and that scenario changes have less impact on voice-based and gesture-based wearable interface technologies. These results provide a reference for organizations and vendors interested in adopting or developing wearable interface technologies.

    A new multisensor software architecture for movement detection: Preliminary study with people with cerebral palsy

    A five-layered software architecture translating movements into mouse clicks has been developed and tested on an Arduino platform with two different sensors: an accelerometer and a flex sensor. The architecture comprises low-pass and derivative filters, an unsupervised classifier that adapts continuously to the strength of the user's movements, and a finite state machine that sets up a timer to prevent involuntary movements from triggering false positives. Four people without disabilities and four people with cerebral palsy (CP) took part in the experiments. People without disabilities obtained an average of 100% and 99.3% in precision and true positive rate (TPR) respectively, and there were no statistically significant differences among sensor types and placements. In the same experiment, people with disabilities obtained 97.9% and 100% in precision and TPR respectively. However, these results worsened when subjects used the system to access a communication board: 89.6% and 94.8% respectively. With their usual method of access (an adapted switch) they obtained a precision and TPR of 86.7% and 97.8% respectively. For 3 out of 4 participants with disabilities, our system detected the movement faster than the switch. For subjects with CP, the accelerometer was the easiest to use because it is more sensitive to gross motor motion than the flex sensor, which requires more complex movements. A final survey showed that 3 out of 4 participants with disabilities would prefer to use this new technology instead of their traditional method of access.
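    The layered pipeline described above can be sketched in a few lines: low-pass filtering, a derivative stage, a threshold that adapts to the user's movement strength, and a state-machine timer that only emits a click once motion has been sustained. The filter constant, adaptation rule, and hold length below are illustrative assumptions, not the paper's actual parameters.

```python
def detect_clicks(samples, alpha=0.3, k=2.0, hold=3):
    """Return the sample indices at which a deliberate movement is
    translated into a mouse click."""
    smoothed, prev = 0.0, 0.0
    mean_mag = 0.1       # running estimate of the user's movement strength
    armed_for = 0        # FSM timer: consecutive supra-threshold samples
    clicks = []
    for i, x in enumerate(samples):
        smoothed = alpha * x + (1 - alpha) * smoothed       # low-pass layer
        velocity = smoothed - prev                          # derivative layer
        prev = smoothed
        mean_mag = 0.95 * mean_mag + 0.05 * abs(velocity)   # adapt to strength
        if velocity > k * mean_mag:                         # classifier layer
            armed_for += 1
            if armed_for == hold:    # FSM: motion sustained long enough
                clicks.append(i)     # emit exactly one click per movement
        else:
            armed_for = 0            # brief spike: suppressed, no click

    return clicks

print(detect_clicks([0.0] * 20 + [10.0] * 10))  # sustained push -> one click
print(detect_clicks([0.0] * 20 + [10.0] + [0.0] * 10))  # one-sample spike -> none
```

    The adaptive threshold is what lets the same code serve users with very different movement amplitudes, while the hold timer plays the role of the paper's involuntary-movement filter.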