
    Steering a Tractor by Means of an EMG-Based Human-Machine Interface

    An electromyographic (EMG)-based human-machine interface (HMI) is a communication pathway between a human and a machine that operates through the acquisition and processing of EMG signals. This article explores the use of EMG-based HMIs in the steering of farm tractors. An EPOC, a low-cost human-computer interface (HCI) from Emotiv, was employed. By means of 14 saline sensors, this device measures and processes EMG and electroencephalographic (EEG) signals from the scalp of the driver. In our tests, the HMI took into account only the detection of four trained muscular events on the driver's scalp: eyes looking to the right with the jaw opened, eyes looking to the right with the jaw closed, eyes looking to the left with the jaw opened, and eyes looking to the left with the jaw closed. The EMG-based HMI guidance was compared with manual guidance and with autonomous GPS guidance. A driver tested the three guidance systems along three different trajectories: a straight line, a step, and a circle. The accuracy of the EMG-based HMI guidance was lower than that of manual guidance, which in turn was lower than that of the autonomous GPS guidance; the computed standard deviations of the error with respect to the desired trajectory on the straight line were 16 cm, 9 cm, and 4 cm, respectively. Since the standard deviations of the manual and the EMG-based HMI guidance differed by only 7 cm, a difference that is not relevant in agricultural steering, it can be concluded that it is possible to steer a tractor with an EMG-based HMI with almost the same accuracy as with manual steering.
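    The article does not spell out how the four detected events are mapped onto steering actions. The sketch below is a minimal illustration, under assumed semantics, of how such a mapping could look, with eye direction selecting the turn side and the jaw state modulating the turn intensity; the names SteeringCommand, EVENT_TO_COMMAND, and steering_command are hypothetical and not taken from the article.

```python
# Hypothetical mapping of the four trained EMG events (eye direction x jaw state)
# to discrete steering commands. The convention that jaw state controls turn
# intensity is an assumption for illustration only.
from enum import Enum

class SteeringCommand(Enum):
    SOFT_LEFT = "soft_left"
    HARD_LEFT = "hard_left"
    SOFT_RIGHT = "soft_right"
    HARD_RIGHT = "hard_right"

# (eyes_direction, jaw_open) -> steering command
EVENT_TO_COMMAND = {
    ("left", False): SteeringCommand.SOFT_LEFT,
    ("left", True): SteeringCommand.HARD_LEFT,
    ("right", False): SteeringCommand.SOFT_RIGHT,
    ("right", True): SteeringCommand.HARD_RIGHT,
}

def steering_command(eyes_direction: str, jaw_open: bool) -> SteeringCommand:
    """Translate a detected muscular event into a steering command."""
    return EVENT_TO_COMMAND[(eyes_direction, jaw_open)]

if __name__ == "__main__":
    print(steering_command("right", True))  # SteeringCommand.HARD_RIGHT
```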

    Biosignal‐based human–machine interfaces for assistance and rehabilitation: a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs that take such biosignals as inputs to control various applications. This survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, in order to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application into six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. Studies focusing on robotic control, prosthetic control, and gesture recognition show a moderate increase over the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance; however, they also increase the complexity of HMIs, so their usefulness should be carefully evaluated for the specific application.

    Navigation assistant with a laser pointer for driving robotized wheelchairs

    Advisor: Eric Rohmer. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: Assistive robotics solutions help people recover the mobility and autonomy lost in their daily lives. This document presents a low-cost navigation assistant designed to let people paralyzed from the neck down drive a robotized wheelchair using a combination of head posture and facial expressions (smile and eyebrows up) to send commands to the chair. The assistant provides two navigation modes: manual and semi-autonomous. In manual navigation, a regular webcam with the OpenFace algorithm detects the user's head orientation and facial expressions (smile, eyebrows up) to compose commands and act directly on the wheelchair movements (stop, go forward, turn right, turn left). In the semi-autonomous mode, the user controls a pan-tilt laser with his/her head to point at the desired destination on the ground and validates it with the eyebrows-up command, which makes the robotized wheelchair perform a rotation followed by a linear displacement to the chosen target. Although the assistant needs improvement, the results showed that this solution may be a promising technology for people paralyzed from the neck down to control a robotized wheelchair. Master's degree in Electrical Engineering (Computer Engineering). CAPE
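    The semi-autonomous maneuver described above reduces to a rotation toward the pointed target followed by a straight displacement. The sketch below only illustrates that geometry under assumed conventions (planar pose, heading in radians); it is not the dissertation's actual controller, and the function name rotate_then_translate is hypothetical.

```python
# Minimal sketch of the "rotate, then translate" maneuver toward a laser-pointed
# target. Pose representation and units are assumptions for illustration.
import math

def rotate_then_translate(x, y, heading_rad, target_x, target_y):
    """Return (rotation in radians, forward distance in meters) that brings a
    wheelchair at (x, y) with the given heading onto the pointed target."""
    dx, dy = target_x - x, target_y - y
    desired_heading = math.atan2(dy, dx)
    # Wrap the heading difference to [-pi, pi] so the chair takes the short turn.
    rotation = math.atan2(math.sin(desired_heading - heading_rad),
                          math.cos(desired_heading - heading_rad))
    distance = math.hypot(dx, dy)
    return rotation, distance

if __name__ == "__main__":
    rot, dist = rotate_then_translate(0.0, 0.0, 0.0, 1.0, 1.0)
    print(f"rotate {math.degrees(rot):.1f} deg, then move {dist:.2f} m")
```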

    EEG-TCNet: An Accurate Temporal Convolutional Network for Embedded Motor-Imagery Brain-Machine Interfaces

    In recent years, deep learning (DL) has contributed significantly to the improvement of motor-imagery brain-machine interfaces (MI-BMIs) based on electroencephalography (EEG). While achieving high classification accuracy, DL models have also grown in size, requiring a vast amount of memory and computational resources. This poses a major challenge to an embedded BMI solution that guarantees user privacy, reduced latency, and low power consumption by processing the data locally. In this paper, we propose EEG-TCNet, a novel temporal convolutional network (TCN) that achieves outstanding accuracy while requiring few trainable parameters. Its low memory footprint and low computational complexity for inference make it suitable for embedded classification on resource-limited devices at the edge. Experimental results on the BCI Competition IV-2a dataset show that EEG-TCNet achieves 77.35% classification accuracy in 4-class MI. By finding the optimal network hyperparameters per subject, we further improve the accuracy to 83.84%. Finally, we demonstrate the versatility of EEG-TCNet on the Mother of All BCI Benchmarks (MOABB), a large-scale test benchmark containing 12 different EEG datasets with MI experiments. The results indicate that EEG-TCNet successfully generalizes beyond a single dataset, outperforming the current state-of-the-art (SoA) on MOABB by a meta-effect of 0.25. Comment: 8 pages, 6 figures, 5 tables.
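    The paper's specific architecture is not reproduced here; the sketch below only illustrates the basic building unit of a TCN of this kind, a causal, dilated 1-D convolutional residual block, in PyTorch. Layer sizes, activation, dropout rate, and the class name CausalTCNBlock are assumptions, not the authors' EEG-TCNet code.

```python
# Illustrative causal, dilated temporal-convolution residual block (generic TCN unit).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalTCNBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 4, dilation: int = 1,
                 dropout: float = 0.3):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left-only padding keeps the convolution causal
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.norm1 = nn.BatchNorm1d(channels)
        self.norm2 = nn.BatchNorm1d(channels)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feature maps, time samples)
        out = F.pad(x, (self.pad, 0))                      # pad only on the left (past)
        out = self.dropout(F.elu(self.norm1(self.conv1(out))))
        out = F.pad(out, (self.pad, 0))
        out = self.dropout(F.elu(self.norm2(self.conv2(out))))
        return F.elu(out + x)                              # residual connection

if __name__ == "__main__":
    feats = torch.randn(2, 16, 128)                        # (batch, feature maps, time)
    block = CausalTCNBlock(channels=16, kernel_size=4, dilation=2)
    print(block(feats).shape)                              # torch.Size([2, 16, 128])
```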