21 research outputs found

    Robot Control Using Anticipatory Brain Potentials (Upravljanje robotom pomoću anticipacijskih potencijala mozga)

    Biomedical engineering has recently made advances in using brain potentials to control physical devices, in particular robots. This paper focuses on controlling robots using anticipatory brain potentials. An oscillatory brain potential generated in the CNV (Contingent Negative Variation) flip-flop paradigm is used to trigger a sequence of robot behaviors. An experimental illustration is given in which two robotic arms, driven by a brain expectancy potential oscillation, cooperatively solve the well-known Towers of Hanoi problem.
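    The two-arm demonstration rests on the classic recursive decomposition of the Towers of Hanoi, which flattens into a fixed queue of moves; each detected brain-potential oscillation can then advance the queue by one step. A minimal sketch of that idea (function and peg names are illustrative, not from the paper):

    ```python
    def hanoi_moves(n, source="A", target="C", spare="B"):
        """Flatten the recursive Towers of Hanoi solution into an
        ordered list of (disk, from_peg, to_peg) moves."""
        if n == 0:
            return []
        return (hanoi_moves(n - 1, source, spare, target)
                + [(n, source, target)]
                + hanoi_moves(n - 1, spare, target, source))

    # Each detected oscillation could trigger the next queued move,
    # dispatched to whichever robot arm services the source peg.
    moves = hanoi_moves(3)
    print(len(moves))  # 2**3 - 1 = 7 moves
    ```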

    Robotic Platforms for Assistance to People with Disabilities

    People with congenital and/or acquired disabilities account for a large number of dependent persons today. Robotic platforms to help people with disabilities are being developed with the aim of providing both rehabilitation treatment and assistance to improve their quality of life. High demand for robotic platforms that provide assistance during rehabilitation is expected, given the strain the COVID-19 pandemic has placed on health systems worldwide: countries face major challenges in ensuring the health and autonomy of their disabled populations, and robotic platforms are necessary to provide assistance and rehabilitation for disabled people in the current global situation. The capacity of robotic platforms in this area must be continuously improved to benefit the healthcare sector in terms of chronic disease prevention, assistance, and autonomy. For this reason, research on human–robot interaction in these robotic assistance environments must grow and advance, because this topic demands sensitive and intelligent robotic platforms equipped with complex sensory systems, high handling functionality, safe control strategies, and intelligent computer vision algorithms. This Special Issue has published eight papers covering recent advances in the field of robotic platforms to assist disabled people in daily or clinical environments. The papers address innovative solutions in this field, including affordable assistive robotic devices, new techniques in computer vision for intelligent and safe human–robot interaction, and advances in mobile manipulators for assistive tasks

    EMG-based eye gestures recognition for hands free interfacing

    This study investigates the use of an electromyography (EMG) based device to recognize five eye gestures and classify them for hands-free interaction with different applications. The eye gestures proposed in this work include long blinks, rapid blinks, right winks, left winks, and squints (frowns). The MUSE headband, originally a brain-computer interface (BCI) that measures electroencephalography (EEG) signals, is used in our study to record EMG signals behind the earlobes via two smart rubber sensors and at the forehead via two further electrodes. The signals are treated as EMG because they capture physical muscular activity, which EEG-focused studies regard as artifacts. The experiment was conducted on 15 randomly selected participants (12 males and 3 females), and each session was videotaped for re-evaluation. The experiment began with a calibration phase in which each gesture was recorded three times per participant, guided by a voice-narration program developed to unify test conditions and time intervals across subjects. In this study, a dynamic sliding window with segmented packets is designed to process and analyze the data faster, and to classify gestures flexibly regardless of how their duration varies from one user to another. Additionally, a thresholding algorithm is used to extract features for all the gestures. Rapid blinks and squints achieved high F1 scores of 80.77% and 85.71% with the trained thresholds, and 87.18% and 82.12% with the default (manually adjusted) thresholds. Long blinks, rapid blinks, and left winks were more accurate with the manually adjusted thresholds, while squints and right winks performed better with the trained thresholds.
    Further improvements were proposed, and some were tested, particularly after reviewing the participants' actions in the video recordings to enhance the classifier. The most common irregularities encountered are discussed in this study so that similar future studies can address them before conducting experiments. Several applications need minimal physical or hand interaction, and this study was originally part of a project at the HCI Lab, University of Stuttgart, to enable hands-free switching between the RGB, thermal, and depth cameras integrated into an augmented reality device designed to increase firefighters' visual capabilities in the field
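    The combination of a sliding window and per-gesture amplitude thresholds described above can be sketched as follows; the window parameters, amplitude bands, and gesture names are invented for illustration and are not the study's actual values:

    ```python
    import numpy as np

    def detect_gesture(window, thresholds):
        """Classify one EMG window by comparing its peak amplitude
        against per-gesture bands (hypothetical values)."""
        peak = np.max(np.abs(window))
        for gesture, (lo, hi) in thresholds.items():
            if lo <= peak < hi:
                return gesture
        return None

    def sliding_windows(signal, size, step):
        """Yield overlapping segments so gestures of varying duration
        are caught regardless of where they start."""
        for start in range(0, len(signal) - size + 1, step):
            yield signal[start:start + size]

    # Hypothetical amplitude bands per gesture (arbitrary units)
    thresholds = {"long_blink": (0.5, 1.0), "squint": (1.0, 2.0)}
    signal = np.concatenate([np.zeros(50), 0.8 * np.ones(20), np.zeros(50)])
    hits = [g for w in sliding_windows(signal, size=20, step=10)
            if (g := detect_gesture(w, thresholds))]
    print(hits)  # → ['long_blink', 'long_blink', 'long_blink']
    ```

    Overlapping windows mean one gesture is seen several times; a real classifier would merge consecutive identical detections into a single event.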

    Robotic Vehicle Control Using Brain Computer Interface

    A Brain Computer Interface (BCI) is a device that lets the brain interact with an external device or computer. Its operating principle is based on electroencephalography (EEG). Under the influence of external stimuli, the human brain generates responses in distinct areas, and these responses appear in the EEG signals captured from the corresponding electrode positions on the scalp of the human subject. Depending on the periodic nature of the stimuli, a response appears in the EEG signal as a feature in the time domain or in the frequency domain. These features are detected and classified, and a control signal for an external device is then generated. This enables the subject to control an external device directly from the brain, using the signals generated in response to stimulation. BCIs have shown promising applications in aiding patients with locked-in syndrome, Spinal Cord Injury (SCI), Acute Inflammatory Demyelinating Polyradiculoneuropathy (AIDP), and Amyotrophic Lateral Sclerosis (ALS); until now, these patients have needed human assistance to communicate. BCIs also hold promise for wheelchair control, where such patients would be able to drive electric wheelchairs. In this work, a working model of a Brain Computer Interface has been developed using a PowerLab 16/35 and an ML-138 bio-amplifier. The BCI is based on the Steady State Visually Evoked Potential (SSVEP), a response generated in the subject's visual cortex when the subject is exposed to a flickering light source. A model robotic platform has also been controlled using the detected SSVEP signal
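    The core of SSVEP decoding is matching the dominant EEG frequency against the known flicker frequencies of the stimuli. A minimal sketch of that frequency-matching idea (a real pipeline would add band-pass filtering, harmonics, or canonical correlation analysis; all values here are illustrative):

    ```python
    import numpy as np

    def ssvep_frequency(eeg, fs, candidates, tol=0.5):
        """Return the candidate flicker frequency whose spectral
        power (within +/- tol Hz) is highest."""
        power_spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        band_power = {f: power_spectrum[np.abs(freqs - f) <= tol].sum()
                      for f in candidates}
        return max(band_power, key=band_power.get)

    # Simulated occipital signal dominated by a 10 Hz flicker response
    fs = 256
    t = np.arange(0, 4, 1 / fs)
    rng = np.random.default_rng(0)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))
    print(ssvep_frequency(eeg, fs, candidates=[8, 10, 13]))  # → 10
    ```

    Each candidate frequency would correspond to one flickering target (e.g. forward, left, right), so the detected frequency maps directly to a robot command.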

    The potential of error-related potentials. Analysis and decoding for control, neuro-rehabilitation and motor substitution

    Brain-machine interfaces (BMIs) allow the decoding of cortical activation patterns from the user's brain to provide people with severely limited mobility, whether due to an accident or a neurodegenerative disease, a way to establish a direct connection between their brain and a device. In this sense, BMIs based on noninvasive recordings, such as the electroencephalogram (EEG), have offered these users new opportunities to regain control over activities of daily life that they could not perform otherwise, especially in the areas of communication and control of their environment.
    Over the past years and with the latest technological advancements, devices have grown significantly in complexity, expanding the number of possibilities to control sophisticated robotic devices, prostheses with numerous degrees of freedom, or even to apply compound patterns of electrical stimulation to a subject's own paralyzed extremities to execute precise movements. However, the bandwidth of communication between brain and devices is still very limited, both in terms of the number of neural commands and the speed at which they can be decoded, and thus relying solely on neural signals does not guarantee accurate control. In order to benefit from these technologies, the field of BMIs adopted the well-known approach of shared control. This strategy intends to create a cooperative system between the user and an intelligent device, liberating the user from the burdensome parts of the task without losing the feeling of being in control. Here, users only need to focus their attention on high-level commands (e.g. choosing the final destination to reach, or a specific item to grab) while the intelligent agent resolves low-level problems (e.g. trajectory planning, obstacle avoidance, etc.) to perform the designated task in the optimal way. In particular, this thesis revolves around a high-level cognitive neural signal originated by the mismatch between the expectations of the user and the actual actions executed by the intelligent devices. These signals, denoted error-related potentials (ErrPs), are thought of as a natural way to intercommunicate our brain with machines; users thus only need to monitor the actions of a device and mentally assess whether it is behaving correctly or not. This can be seen as a way to supervise the device's behavior, in which the decoding of these mental assessments provides the device with feedback directly related to its performance of a given task, so it can learn and adapt to the user's preferences.
    Since the ErrP neural response is associated with an exogenous event (the device committing an erroneous action), most previous works have attempted to distinguish whether an action is correct or erroneous by exploiting discrete events under well-controlled scenarios. This thesis presents the first attempt to shift towards asynchronous settings focused on tasks related to the augmentation of motor capabilities, with the objective of developing interfaces for users with limited mobility. In this type of setup, two important challenges arise: correct or erroneous events are not clearly defined, and users have to continuously evaluate the executed task while classification of EEG signals is performed asynchronously. As a result, the decoders have to constantly deal with background EEG activity, which typically leads to a large number of misdetections of error signatures. To overcome these challenges, this thesis addresses two main lines of work. First, it explores the neurophysiology of the evoked neural signatures associated with the perception of errors during the interactive use of a BMI in continuous and more realistic scenarios. Two studies were performed to find alternative features based on the frequency domain as a way of dealing with the high variability of EEG signals. The results revealed that there exists a stable pattern, represented as theta oscillations, that enhances generalization during classification. Also, state-of-the-art machine learning techniques were used to apply transfer learning to asynchronously discriminate errors when they were introduced in a gradual fashion and the onset that triggers the ErrPs is not known in advance. Furthermore, neurophysiology analyses shed some light on the underlying cognitive mechanisms that elicit ErrPs during continuous tasks, suggesting the existence of neural models in our brain that accumulate evidence and only take a decision upon reaching a certain threshold.
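    The evidence-accumulation interpretation suggested by these analyses can be sketched as a leaky accumulator that integrates per-window classifier scores and commits to a decision only once a threshold is crossed (a hypothetical sketch of the general mechanism, not the thesis' actual decoder; parameter values are invented):

    ```python
    def accumulate_decision(scores, threshold=3.0, leak=0.9):
        """Leaky accumulator over per-window evidence scores.
        Positive scores favour 'error', negative favour 'correct';
        a decision is made only when |state| reaches the threshold."""
        state = 0.0
        for t, s in enumerate(scores):
            state = leak * state + s  # old evidence decays, new evidence adds
            if abs(state) >= threshold:
                return ("error" if state > 0 else "correct", t)
        return ("undecided", len(scores) - 1)

    # Weak, noisy evidence for an error accumulates until the
    # threshold is reached at the fifth window (index 4).
    print(accumulate_decision([0.5, 0.8, 0.7, 1.0, 0.9, 1.1]))  # → ('error', 4)
    ```

    The leak term keeps isolated noisy windows from triggering a decision, which is one way to handle the background EEG activity mentioned above.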
    Secondly, this thesis evaluates the implementation of these error-related potentials in three user-oriented applications. These studies not only explore how to maximize the decoding performance of ErrP signatures but also investigate the underlying neural mechanisms and how different factors affect the elicited signals. The first application presents a new way to guide a mobile robot moving in a continuous environment using only error potentials as feedback, which could be used for the direct control of assistive devices. For this purpose, we propose an algorithm based on policy matching for inverse reinforcement learning to infer the user's goal from brain signals. The second application contemplates the first steps towards a hybrid BMI for grasping, oriented to assist people who have lost motor functionality of their upper limb. This BMI combines the decoding of the type of grasp from low-frequency EEG signals with error-related potentials elicited as the result of monitoring an erroneous grasp. The results show that ErrPs are elicited in combination with motor signatures from the low-frequency spectrum originated from single-repetition grasping tasks, and evaluate how different design factors (such as the speed of the stimuli, the type of grasp, or the mental task) impact the morphology of the subsequently evoked ErrP. The third application investigates the neural correlates and the underlying cognitive processes associated with somatosensory mismatches produced by unexpected disturbances during neuromuscular electrical stimulation of a user's arm. This study simulates possible errors that occur during neurorehabilitation therapy, in which the simultaneous activation of afferent stimulation while the subjects are concentrating on performing a motor task is crucial for optimal recovery.
    The results showed that errors may increase the subject's attention on the task and trigger learning mechanisms that could, at the same time, promote motor neuroplasticity. In summary, throughout this thesis, several experimental paradigms have been designed to improve the understanding of how error-related potentials are generated during the interactive use of BMIs in user-oriented applications. Different methods have been proposed to shift from time-locked to asynchronous settings, both in terms of decoding and of perception of the erroneous events; and three applications related to the augmentation of motor capabilities have been explored, in which ErrPs can be used for device control, motor substitution, and neurorehabilitation.

    Co-adaptive control strategies in assistive Brain-Machine Interfaces

    A large number of people with severe motor disabilities cannot access any of the available control inputs of current assistive products, which typically rely on residual motor functions. These patients are therefore unable to fully benefit from existing assistive technologies, including communication interfaces and assistive robotics. In this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a potential non-invasive solution for exploiting a non-muscular channel for communication and control of assistive robotic devices, such as a wheelchair, a telepresence robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations, such as lack of precision, robustness, and comfort, which prevent their practical implementation in assistive technologies. The goal of this PhD research is to produce scientific and technical developments that advance the state of the art of assistive interfaces and service robotics based on BMI paradigms. Two main research paths towards the design of effective control strategies were considered in this project. The first is the design of hybrid systems, based on combining the BMI with gaze control, a long-lasting motor function in many paralyzed patients; this approach increases the degrees of freedom available for control. The second approach consists in the inclusion of adaptive techniques in the BMI design. This makes it possible to transform robotic tools and devices into active assistants able to co-evolve with the user and learn new rules of behavior to solve tasks, rather than passively executing external commands. Following these strategies, the contributions of this work can be categorized by the type of mental signal exploited for control. 
    These include: 1) the use of active signals for the development and implementation of hybrid eye-tracking and BMI control policies, for both communication and control of robotic systems; 2) the exploitation of passive mental processes to increase the adaptability of an autonomous controller to the user's intention and psychophysiological state, in a reinforcement learning framework; 3) the integration of active and passive brain control signals, to achieve adaptation within the BMI architecture at the level of feature extraction and classification

    Eye-gaze interaction techniques for use in online games and environments for users with severe physical disabilities.

    Multi-User Virtual Environments (MUVEs) and Massively Multiplayer Online Games (MMOGs) are a popular, immersive genre of computer game. For some disabled users, eye gaze offers the only input modality with the potential for sufficiently high bandwidth to support the range of time-critical interaction tasks required to play. Although there has been much research into gaze interaction techniques for computer interaction over the past twenty years, much of it has focused on 2D desktop application control. Some work has investigated the use of gaze interaction as an additional input device for gaming, but very little has addressed using gaze on its own. Further, configuring these techniques usually requires expert knowledge, often beyond the capabilities of a parent, carer, or support worker. The work presented in this thesis addresses these issues through the investigation of novel gaze-only interaction techniques. These aim to enable at least a beginner level of game play, together with a means of adapting the techniques to suit an individual. To achieve this, a collection of novel gaze-based interaction techniques has been evaluated through empirical studies. These have been encompassed within an extensible software architecture that has been made available for free download. Further, a metric of reliability is developed that, when used within a specially designed diagnostic test, allows an interaction technique to be adapted to suit an individual. Methods of selecting interaction techniques based on the game task are also explored, and a novel methodology based on expert task analysis is developed to aid selection

    Heterogeneous recognition of bioacoustic signals for human-machine interfaces

    Human-machine interfaces (HMIs) provide a communication pathway between man and machine. Not only do they augment existing pathways; they can substitute for, or even bypass, these pathways where functional motor loss prevents the use of standard interfaces. This is especially important for individuals who rely on assistive technology in their everyday life. Utilising bioacoustic activity can lead to an assistive HMI concept that is unobtrusive, minimally disruptive, and cosmetically appealing to the user. However, due to the complexity of the signals, bioacoustic activity remains relatively underexplored in the HMI field. This thesis investigates extracting and decoding volition from bioacoustic activity with the aim of generating real-time commands. The developed framework is a systemisation of various processing blocks enabling the mapping of continuous signals into M discrete classes. Class-independent extraction efficiently detects and segments the continuous signals, while class-specific extraction exemplifies each pattern set using a novel template-creation process that is stable under permutations of the data set. These templates are utilised by a generalised single-channel discrimination model, whereby each signal is template-aligned prior to classification. The real-time decoding subsystem uses a multichannel heterogeneous ensemble architecture that fuses the output from a diverse set of these individual discrimination models. This enhances classification performance by elevating both sensitivity and specificity, with the increased specificity due to a natural rejection capacity based on a non-parametric majority vote. Such a strategy is useful when analysing signals with diverse characteristics, when false positives are prevalent and have strong consequences, and when limited training data are available. The framework has been developed with generality in mind and with wide applicability to a broad spectrum of biosignals. 
    The processing system has been demonstrated on real-time decoding of tongue-movement ear pressure signals using both single- and dual-channel setups, including in-depth evaluation of these methods in both offline and online scenarios. During online evaluation, a stimulus-based test methodology was devised, while representative interference was used to contaminate the decoding process in a relevant and realistic fashion. The results of this research provide a strong case for the utility of such techniques in real-world applications of human-machine communication using impulsive bioacoustic signals, and biosignals in general
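    The non-parametric majority vote with a rejection capacity that underpins the ensemble's specificity can be sketched as follows (the class labels and agreement quota are illustrative assumptions, not values from the thesis):

    ```python
    from collections import Counter

    def ensemble_vote(predictions, min_agreement=0.6):
        """Fuse per-model predictions by majority vote. If no class
        reaches the agreement quota, reject the sample (return None);
        rejection of ambiguous inputs is what lifts specificity."""
        votes = Counter(predictions)
        label, count = votes.most_common(1)[0]
        if count / len(predictions) >= min_agreement:
            return label
        return None

    print(ensemble_vote(["left", "left", "left", "right", "rest"]))  # → left
    print(ensemble_vote(["left", "right", "rest", "left", "right"]))  # → None
    ```

    Tuning the agreement quota trades sensitivity against specificity, which matters when false positives have strong consequences, as noted above.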