99 research outputs found

    Closed-Loop, Open-Source Electrophysiology

    Multiple extracellular microelectrodes (multi-electrode arrays, or MEAs) effectively record rapidly varying neural signals and can also be used for electrical stimulation. Multi-electrode recording can serve as artificial output (efferents) from a neural system, while complex spatially and temporally targeted stimulation can serve as artificial input (afferents) to the neuronal network. Multi-unit or local field potential (LFP) recordings can not only be used to control real-world artifacts, such as prostheses, computers or robots, but can also trigger or alter subsequent stimulation. Real-time feedback stimulation may serve to modulate or normalize aberrant neural activity, to induce plasticity, or to provide artificial sensory input. Despite promising closed-loop applications, commercial electrophysiology systems do not yet take advantage of the bidirectional capabilities of multi-electrode arrays, especially for use in freely moving animals. We addressed this lack of tools for closing the loop with NeuroRighter, an open-source system comprising recording hardware, stimulation hardware, and control software with a graphical user interface. The integrated system is capable of multi-electrode recording and simultaneous patterned microstimulation (triggered by recordings) with minimal stimulation artifact. The potential applications of closed-loop systems as research tools and clinical treatments are broad; we provide one example in which epileptic activity recorded by a multi-electrode probe triggers targeted stimulation, via that probe, in freely moving rodents.
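The detect-and-stimulate cycle described above can be sketched in a few lines. The threshold-crossing rule and all names here are illustrative assumptions, not the NeuroRighter API: a real detector for epileptiform activity would be far more sophisticated, but the control flow (scan recording buffer, fire stimulation command per detection) is the same.

```python
# Minimal closed-loop sketch: recorded samples are scanned for upward
# threshold crossings (a stand-in for event detection), and each
# detection triggers a stimulation callback.

def detect_events(samples, threshold):
    """Return indices where the signal crosses the threshold upward."""
    events = []
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            events.append(i)
    return events

def closed_loop_step(samples, threshold, stimulate):
    """One recording buffer in; zero or more stimulation commands out."""
    events = detect_events(samples, threshold)
    for i in events:
        stimulate(channel=0, sample_index=i)  # hypothetical stim call
    return len(events)

# Example: two upward crossings of the 0.5 threshold trigger two pulses.
pulses = []
n = closed_loop_step([0.0, 0.8, 0.1, 0.9, 0.2], 0.5,
                     lambda channel, sample_index: pulses.append(sample_index))
```

In a real system the callback would enqueue a patterned microstimulation command on the probe, and artifact suppression would blank the recording channels around each pulse.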

    Neuroengineering Tools/Applications for Bidirectional Interfaces, Brain–Computer Interfaces, and Neuroprosthetic Implants – A Review of Recent Progress

    The main focus of this review is to provide a holistic, amalgamated overview of the most recent human in vivo techniques for implementing brain–computer interfaces (BCIs), bidirectional interfaces, and neuroprosthetics. Neuroengineering is providing new methods for tackling current difficulties; however, neuroprosthetics have been studied for decades. Recent progress permits the design of better systems with higher accuracy, repeatability, and robustness. Bidirectional interfaces integrate recording and the relaying of information from and to the brain for the development of BCIs. The concepts of non-invasive and invasive recording of brain activity are introduced, including classical and innovative techniques such as electroencephalography and near-infrared spectroscopy. The problem of gliosis and solutions for (semi-)permanent implant biocompatibility, such as innovative implant coatings, materials, and shapes, are then discussed. Implant power and the transmission of implant data through implanted pulse generators and wireless telemetry are considered. How sensation can be relayed back to the brain, by methods such as micro-stimulation and transcranial magnetic stimulation, to increase integration of neuroengineered systems with the body is then addressed. The neuroprosthetic section discusses some of the various types and how they operate. Visual prosthetics are discussed and the three types, dependent on implant location, are examined. Auditory prosthetics, whether cochlear or cortical, are then addressed, followed by replacement hand and limb prosthetics. The review closes with sections on the control of wheelchairs, computers, and robotics directly from brain activity as recorded by non-invasive and invasive techniques.

    Novel Bidirectional Body-Machine Interface to Control Upper Limb Prosthesis

    Objective. The journey of a bionic prosthetic user is characterized by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb constitutes the premise for mitigating the risk of its abandonment through the continuous use of the device. To achieve such a result, different aspects must be considered for making the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving amputees’ quality of life using upper limb prostheses. Still, as artificial proprioception is essential to perceive the prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has been recently introduced and is a requirement of utmost importance in developing prosthetic hands. Indeed, closing the control loop between the user and a prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness. Approach. To achieve the objectives of this thesis work, I developed a modular and scalable software and firmware architecture to control the Hannes prosthetic multi-Degree of Freedom (DoF) system and to fit all users’ needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. 
However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I started by investigating different strategies to produce a more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed over the skin. Secondly, I developed a vibrotactile system implementing haptic feedback to restore proprioception and create a bidirectional connection between the user and the prosthesis. Similarly, I implemented object stiffness detection to restore tactile sensation, connecting the user with the external world. This closed loop between EMG control and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface with a strong impact on amputees' daily life. For each of these three activities: (i) implementation of robust pattern recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study in which data from healthy subjects and amputees were collected, in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects' ability to use the prosthesis by means of the F1-score parameter (offline) and the Target Achievement Control (TAC) test (online). With this test, I analyzed the error rate, path efficiency, and time efficiency in completing different tasks. Main results. Among the several Pattern Recognition methods tested, Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1-score (99%, robustness), and the minimum number of electrodes needed for its functioning was determined to be four in the offline analyses. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency).
Finally, the online implementation allowed the subject to simultaneously control the DoFs of the Hannes prosthesis in a bioinspired, human-like way. In addition, I performed further tests with the same NLR-based control endowed with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual and no audio feedback). These results demonstrated an improvement in the controllability of the system with an impact on user experience. Significance. The obtained results confirmed the hypothesis that the implemented closed-loop approach improves the robustness and efficiency of prosthetic control. The bidirectional communication between the user and the prosthesis can restore the lost sensory functionality, with promising implications for direct translation into clinical practice.
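The offline evaluation metric cited above (the F1-score per gesture class) can be sketched as follows. This is the standard macro-averaged F1 computation, not the thesis' exact pipeline, and the gesture labels are invented for illustration.

```python
# Macro-averaged F1-score over gesture classes: per-class precision and
# recall are combined, then averaged with equal weight per class.

def f1_per_class(true, pred, cls):
    tp = sum(1 for t, p in zip(true, pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(true, pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(true, pred) if t == cls and p != cls)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(true, pred):
    classes = sorted(set(true))
    return sum(f1_per_class(true, pred, c) for c in classes) / len(classes)

# Example: one confusion between "close" and "open" lowers the score.
true = ["open", "close", "open", "rotate"]
pred = ["open", "open", "open", "rotate"]
```

Macro averaging weights each gesture equally, which is why a single missed minority class ("close" above) pulls the score down sharply.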

    Future developments in brain-machine interface research

    Neuroprosthetic devices based on brain-machine interface technology hold promise for the restoration of body mobility in patients suffering from devastating motor deficits caused by brain injury, neurologic diseases and limb loss. During the last decade, considerable progress has been achieved in this multidisciplinary research, mainly in the brain-machine interface that enacts upper-limb functionality. However, a considerable number of problems need to be resolved before fully functional limb neuroprostheses can be built. To move towards developing neuroprosthetic devices for humans, brain-machine interface research has to address a number of issues related to improving the quality of neuronal recordings, achieving stable, long-term performance, and extending the brain-machine interface approach to a broad range of motor and sensory functions. Here, we review the future steps that are part of the strategic plan of the Duke University Center for Neuroengineering, and its partners, the Brazilian National Institute of Brain-Machine Interfaces and the École Polytechnique Fédérale de Lausanne (EPFL) Center for Neuroprosthetics, to bring this new technology to clinical fruition.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMIs) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion-tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for the capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference; the thesis established that arm pose also changes the measured signal. This thesis therefore introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI use outside a clinical environment. Applications in robot teleoperation, in both real-world and virtual environments, were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of the intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
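The fusion idea above (concatenating IMU-derived kinematic features with MMG-derived muscle features into one vector before classification) can be sketched with a dependency-free stand-in classifier. The nearest-centroid rule below replaces the LDA/SVM used in the thesis purely to keep the example self-contained; feature values and gesture labels are invented.

```python
# Gesture classification on fused IMU + MMG feature vectors using a
# nearest-centroid rule (a simple stand-in for LDA/SVM).

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: {gesture_label: [fused feature vectors]} -> centroids."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, fused):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], fused))

# Example: each vector is 2 IMU features followed by 2 MMG features.
model = train({
    "fist": [[0.1, 0.0, 0.9, 0.8], [0.2, 0.1, 1.0, 0.7]],
    "open": [[0.1, 0.0, 0.1, 0.2], [0.0, 0.1, 0.2, 0.1]],
})
```

Because both modalities live in one vector, a pose-dependent shift in the MMG features can be compensated by the IMU features during training, which is the intuition behind the robustness claim above.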

    Egocentric Computer Vision and Machine Learning for Simulated Prosthetic Vision

    Current visual prostheses can provide visual perception to people with certain forms of blindness. By bypassing the damaged part of the visual pathway, electrical stimulation of the retina or the nervous system elicits point-like percepts known as "phosphenes". Owing to physiological and technological limitations, the information patients receive has very low resolution and a reduced field of view and dynamic range, seriously affecting the person's ability to recognize and navigate unknown environments. In this context, the inclusion of new computer vision techniques is an active and open key topic. In this thesis we focus especially on the problem of developing techniques to enhance the visual information received by the implanted patient, and we propose different simulated prosthetic vision systems for experimentation. First, we combined the output of two convolutional neural networks to detect informative structural edges and object silhouettes. We show how different scenes and objects can be recognized quickly even under the restricted conditions of prosthetic vision. Our method is well suited to indoor scene understanding compared with the traditional image-processing methods used in visual prostheses. Second, we present a new virtual reality system for more realistic simulated prosthetic vision environments using panoramic scenes, which allows us to systematically study object search and recognition performance. Panoramic scenes let subjects feel immersed by perceiving the full 360-degree scene. In the third contribution we show how an augmented reality navigation system for prosthetic vision aids navigation performance by reducing the time and distance needed to reach targets, and even significantly reduces the number of obstacle collisions. Using a path-planning algorithm, the system routes the subject along a shorter, obstacle-free path. This work is currently under review. In the fourth contribution, we evaluate visual acuity by measuring the influence of the field of view relative to spatial resolution in visual prostheses through a head-mounted display. To this end, we use simulated prosthetic vision in a virtual reality environment to simulate the real-life experience of using a retinal prosthesis. This work is currently under review. Finally, we propose a Spiking Neural Network (SNN) model that relies on biologically plausible mechanisms and uses an unsupervised learning scheme to obtain better computational algorithms and improve the performance of current visual prostheses. The proposed SNN model can use the downsampled signal from the information-processing unit of retinal prostheses, bypassing retinal image analysis, to provide useful information to the blind. This work is currently in preparation.
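The core step of simulated prosthetic vision is collapsing a camera image into a coarse grid of phosphene intensities. A minimal sketch of that downsampling, assuming simple block averaging and an illustrative grid size (real simulators add phosphene shape, dropout, and dynamic-range limits):

```python
# Collapse a grayscale image into a rows x cols grid of mean intensities,
# one value per simulated phosphene.

def phosphene_grid(image, rows, cols):
    """image: 2D list of grayscale values -> rows x cols mean intensities."""
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [image[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

# Example: a 4x4 image reduced to a 2x2 phosphene map.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
```

Preprocessing the image first (e.g. with the edge/silhouette networks described above) decides which information survives this drastic reduction, which is why the choice of computer vision front-end matters so much.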

    Deep learning and feature engineering techniques applied to the myoelectric signal for accurate prediction of movements

    Pattern recognition techniques on the myoelectric (EMG) signal are employed in the development of robotic prostheses, adopting several Artificial Intelligence (AI) approaches. This thesis proposes to solve the EMG pattern recognition problem through the optimized adoption of deep learning techniques. To this end, it developed an approach that extracts features a priori to feed classifiers that supposedly do not need this step. The study integrated the BioPatRec platform (for advanced prosthesis study and development) with two classification algorithms (Convolutional Neural Network and Long Short-Term Memory) in a hybrid way, where the input provided to the network already contains features that describe the movement (level of muscle activation, magnitude, amplitude, power, and others). Thus, the signal is treated as a time series instead of an image, which allows us to eliminate a set of points irrelevant to the classifier, making the information more expressive. Next, the methodology produced software that implements the introduced concept on a Graphics Processing Unit (GPU) in parallel; this increment allowed the classification model to combine high precision with a training time of under 1 second. The parallelized model was called BioPatRec-Py and employed feature engineering techniques that made the network input more homogeneous, reducing variability and noise and standardizing the distribution. The research obtained satisfactory results and surpassed the other classification algorithms in most of the evaluated experiments. The work also performed a statistical analysis of the outcomes and fine-tuned the hyperparameters of each network. Ultimately, BioPatRec-Py provided a generic model: the network was trained globally across individuals, allowing the creation of a standardized approach with an average accuracy of 97.83%.

    Information transmission in normal vision and optogenetically resensitised dystrophic retinas

    PhD Thesis. The retina is a sophisticated image-processing machine, transforming the visual scene as detected by the photoreceptors into a pattern of action potentials that is sent to the brain by the retinal ganglion cells (RGCs), where it is further processed to help us understand and navigate the world. Understanding this encoding process is important on a number of levels. First, it informs the study of upstream visual processing by elucidating the signals higher visual areas receive as input and how they relate to the outside world. Second, it is important for the development of treatments for retinal blindness, such as retinal prosthetics. In this thesis, I present work using multielectrode array (MEA) recordings of RGC populations from ex-vivo retinal wholemounts to study various aspects of retinal information processing. My results fall into two main themes. In the first part, in collaboration with Dr Geoffrey Portelli and Dr Pierre Kornprobst of INRIA, I use flashed gratings of varying spatial frequency and phase to compare different coding strategies that the retina might use. These results show that information is encoded synergistically by pairs of neurons and that, of the codes tested, a Rank Order Code based on the relative order of firing of the first spikes of a population of neurons following a stimulus provides information about the stimulus faster and more efficiently than other codes. In the later parts, I use optogenetic stimulation of RGCs in congenitally blind retinas to study how visual information is corrupted by the spontaneous hyperactivity that arises as a result of photoreceptor degeneration. I show that by dampening this activity with the gap junction blocker meclofenamic acid, I can improve the signal-to-noise ratio, spatial acuity and contrast sensitivity of prosthetically evoked responses. Taken together, this work provides important insights for the future development of retinal prostheses.
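The Rank Order Code compared above reads the stimulus out of the relative order in which neurons fire their first spike after stimulus onset. A minimal sketch, with invented spike latencies and a simple concordant-pair match against stored template orders as one possible decoder (not the specific decoder used in the thesis):

```python
# Rank Order Code sketch: rank neurons by first-spike latency, then pick
# the stimulus whose stored rank order agrees on the most neuron pairs.

def rank_order(first_spike_times):
    """Neuron indices sorted by first-spike latency (earliest first)."""
    return sorted(range(len(first_spike_times)),
                  key=lambda i: first_spike_times[i])

def decode(first_spike_times, templates):
    order = rank_order(first_spike_times)
    rank = {n: r for r, n in enumerate(order)}
    def score(template):
        trank = {n: r for r, n in enumerate(template)}
        pairs = [(a, b) for a in rank for b in rank if a < b]
        return sum((rank[a] < rank[b]) == (trank[a] < trank[b])
                   for a, b in pairs)
    return max(templates, key=lambda stim: score(templates[stim]))

# Example: first-spike latencies (ms) for 4 neurons after a grating flash,
# matched against two hypothetical stimulus templates.
templates = {"vertical": [2, 0, 1, 3], "horizontal": [3, 1, 0, 2]}
stim = decode([12.0, 5.5, 8.0, 20.0], templates)
```

Because the readout needs only the order of the very first spikes, it is available as soon as the fastest neurons fire, which is the source of the speed advantage reported above.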