
    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators can function with complete autonomy, so some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness, and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent/commands. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time.
    This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real-time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
    It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis establishes that arm pose also changes the measured signal. It therefore introduces a new method of fusing IMU and MMG data to provide classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb that is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to significantly impact the quality of life of prosthetic users and others.
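    As an illustration of the classification stage described above, here is a minimal sketch of training LDA and SVM classifiers on windowed MMG features with scikit-learn. The feature layout, window counts, and random stand-in data are assumptions for illustration, not the thesis's actual pipeline.

```python
# Hedged sketch: LDA vs. SVM gesture classification on MMG feature windows.
# Random data stands in for real extracted features (e.g. RMS per channel).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, n_features, n_gestures = 600, 6 * 4, 12  # 6 MMG channels x 4 assumed features, 12 gestures
X = rng.normal(size=(n_windows, n_features))        # stand-in for real MMG feature windows
y = rng.integers(0, n_gestures, size=n_windows)     # stand-in gesture labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", clf.score(X_test, y_test))
```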

    Putting artificial intelligence into wearable human-machine interfaces – towards a generic, self-improving controller

    The standard approach to creating a machine learning based controller is to provide users with a number of gestures that they need to make; record multiple instances of each gesture using specific sensors; extract the relevant sensor data and pass it through a supervised learning algorithm until the algorithm can successfully identify the gestures; and map each gesture to a control signal that performs a desired outcome. This approach is both inflexible and time-consuming. The primary contribution of this research was to investigate a new approach to putting artificial intelligence into wearable human-machine interfaces by creating a Generic, Self-Improving Controller. It was shown to learn two user-defined static gestures with an accuracy of 100% in less than 10 samples per gesture; three in less than 20 samples per gesture; and four in less than 35 samples per gesture. Pre-defined dynamic gestures were more difficult to learn: it learnt two with an accuracy of 90% in less than 6,000 samples per gesture, and four with an accuracy of 70% after 50,000 samples per gesture. The research has resulted in a number of additional contributions:
    • The creation of a source-independent hardware data capture, processing, fusion and storage tool for standardising the capture and storage of historical copies of data captured from multiple different sensors.
    • An improved Attitude and Heading Reference System (AHRS) algorithm for calculating orientation quaternions that is five orders of magnitude more precise.
    • The reformulation of the regularised TD learning algorithm; the reformulation of the TD learning algorithm applied to the artificial neural network back-propagation algorithm; and the combination of the two reformulations into a new, regularised TD learning algorithm applied to the artificial neural network back-propagation algorithm.
    • The creation of a Generic, Self-Improving Predictor that can use different learning algorithms, and a Flexible Artificial Neural Network.
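    The thesis's reformulated, regularised TD algorithm for neural networks is not reproduced here; the sketch below shows only the textbook idea it builds on: a TD(0) update with linear function approximation plus an L2 regularisation penalty. The feature map, constants, and stand-in transitions are all illustrative assumptions.

```python
# Hedged sketch of regularised TD(0) with linear function approximation.
import numpy as np

rng = np.random.default_rng(1)
n_features = 8
w = np.zeros(n_features)               # value-function weights
alpha, gamma, lam = 0.01, 0.95, 1e-4   # step size, discount, L2 penalty

def features(state):
    # Stand-in feature map; a real controller would encode sensor data here.
    return np.tanh(state)

state = rng.normal(size=n_features)
for step in range(1000):
    next_state = rng.normal(size=n_features)   # stand-in transition
    reward = float(next_state.sum() > 0)       # stand-in reward signal
    phi, phi_next = features(state), features(next_state)
    td_error = reward + gamma * w @ phi_next - w @ phi
    w += alpha * (td_error * phi - lam * w)    # TD(0) update with L2 regularisation
    state = next_state
```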

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices below USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features that allow them to be used in everyday life and in clinical applications where gold-standard equipment is both too expensive and too time-consuming to use. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.

    Generalized Linear Quaternion Complementary Filter for Attitude Estimation from Multi-Sensor Observations: An Optimization Approach

    Focusing on generalized sensor combinations, this paper deals with the attitude estimation problem using a linear complementary filter. The quaternion observation model is obtained via a gradient descent algorithm (GDA). An additive measurement model is then established according to the derived results. The filter is named the generalized complementary filter (GCF), in which the observation model is simplified to its limit as a linear one, quite different from previously reported brute-force computation results. Moreover, we prove that representative derivative-based optimization algorithms are essentially equivalent to each other. Derivations are given to establish the state model based on the quaternion kinematic equation. The proposed algorithm is validated under several experimental conditions, involving a free-living environment, harsh external field disturbances, and an aerial flight test aided by robotic vision. Using specially designed experimental devices, data acquisition and algorithm computations are performed to compare accuracy, robustness, time consumption, etc. with representative methods. The results show that the proposed filter not only gives fast, accurate and stable estimates for various sensor combinations, but also produces robust attitude estimation in harsh situations, e.g. irregular magnetic distortion. Note to Practitioners: Multi-sensor attitude estimation is a crucial technique in robotic devices. Many existing methods focus on the orientation fusion of specific sensor combinations. In this paper we make the problem more abstract. The results given in this paper are very general and can significantly decrease space consumption and computational burden without losing the original estimation accuracy. Such performance will benefit robotic platforms requiring flexible and easy-to-tune attitude estimation in the future.
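    For orientation, the following is a minimal sketch of a quaternion complementary filter of the general kind discussed: gyro propagation through the quaternion kinematic equation, blended with an accelerometer-derived correction. A Mahony-style error term stands in for the paper's GDA-derived observation model, which is not reproduced; the gain, conventions (body-to-world, [w, x, y, z] order), and data handling are assumptions.

```python
# Hedged sketch of a quaternion complementary filter update step.
import numpy as np

def quat_mult(p, q):
    # Hamilton product of two quaternions in [w, x, y, z] order.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def step(q, gyro, acc, dt, kp=1.0):
    # Gravity direction predicted by q (assumes q maps body -> world).
    w, x, y, z = q
    g_pred = np.array([2*(x*z - w*y), 2*(w*x + y*z), w*w - x*x - y*y + z*z])
    # Complementary (Mahony-style) error: measured vs. predicted gravity.
    err = np.cross(acc / np.linalg.norm(acc), g_pred)
    # Propagate the corrected rate via  q_dot = 0.5 * q (x) [0, omega].
    omega = gyro + kp * err
    q = q + 0.5 * dt * quat_mult(q, np.array([0.0, *omega]))
    return q / np.linalg.norm(q)
```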

    Design and implementation of resilient attitude estimation algorithms for aerospace applications

    Satellite attitude estimation is a critical component of satellite attitude determination and control systems, relying on highly accurate sensors such as IMUs, star trackers, and sun sensors. However, the complex space environment can cause sensor performance degradation or even failure. To address this issue, fault detection, isolation, and recovery (FDIR) systems are necessary. This thesis presents a novel approach to satellite attitude estimation that utilizes an Inertial Navigation System (INS) to achieve high accuracy with a low computational load. The algorithm is based on a two-layer Kalman filter, which incorporates the quaternion estimator (QUEST) algorithm, the factored quaternion algorithm (FQA), linear interpolation (LERP) algorithms, and a Kalman filter (KF). Moreover, the thesis proposes an FDIR system for the INS that can detect and isolate faults and recover the system safely. This system includes two-layer fault detection with isolation and two-layered recovery, utilizing an Adaptive Unscented Kalman Filter (AUKF), the QUEST algorithm, residual generators, Radial Basis Function (RBF) neural networks, and an adaptive complementary filter (ACF). The two fault detection layers aim to isolate and identify faults while decreasing the rate of false alarms. An FPGA-based FDIR system is also designed and implemented to reduce latency while maintaining normal resource consumption. Finally, a Fault Tolerance Federated Kalman Filter (FTFKF) is proposed to fuse the outputs from the INS and the CNS to achieve high-precision and robust attitude estimation. The findings of this study provide a solid foundation for the development of FDIR systems for various applications such as robotics, autonomous vehicles, and unmanned aerial vehicles, particularly satellite attitude estimation. The proposed INS-based approach with the FDIR system has demonstrated high accuracy, fault tolerance, and a low computational load, making it a promising solution for satellite attitude estimation in harsh space environments.
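    To make the first layer of such a pipeline concrete: vector observations (e.g. sun and magnetometer directions) are converted into an attitude quaternion. The sketch below uses Davenport's q-method, which solves the same Wahba problem that QUEST approximates; the measurement vectors and weights are illustrative, and the thesis's second-layer Kalman fusion is not reproduced.

```python
# Hedged sketch: attitude quaternion from weighted vector observations
# via Davenport's q-method (the optimisation problem behind QUEST).
import numpy as np

def davenport_q(body_vecs, ref_vecs, weights):
    # Attitude profile matrix B and Davenport's 4x4 K matrix.
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    S, sigma = B + B.T, np.trace(B)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    q = vecs[:, -1]               # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)  # optimal quaternion, [x, y, z, w] order

# Example: identical body and reference vectors give the identity attitude,
# i.e. [0, 0, 0, 1] up to sign.
b = [np.array([0.0, 0.0, 1.0]), np.array([0.6, 0.0, 0.8])]
r = [np.array([0.0, 0.0, 1.0]), np.array([0.6, 0.0, 0.8])]
print(davenport_q(b, r, [0.5, 0.5]))
```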

    A human motion measurement method for programming by demonstration

    Programming by demonstration (PbD) is an intuitive approach to impart a task to a robot from one or several demonstrations by a human teacher. The acquisition of the demonstrations involves solving the correspondence problem when the teacher and the learner differ in sensing and actuation. Kinesthetic guidance is widely used to perform demonstrations. With such a method, the robot is manipulated by the teacher and the demonstrations are recorded by the robot's encoders. In this way, the correspondence problem is trivial, but the teacher's dexterity is impaired, which may impact the PbD process. Methods that are more practical for the teacher usually require the identification of some mappings to solve the correspondence problem. The demonstration acquisition method is based on a compromise between the difficulty of identifying these mappings, the level of accuracy of the recorded elements, and the user-friendliness and convenience for the teacher. This thesis proposes an inertial human motion tracking method based on inertial measurement units (IMUs) for PbD for pick-and-place tasks. Compared to kinesthetic guidance, IMUs are convenient and easy to use but can offer only limited accuracy. Their potential for PbD applications is investigated. To estimate the trajectory of the teacher's hand, 3 IMUs are placed on the arm segments (arm, forearm and hand) to estimate their orientations. A specific method is proposed to partially compensate the well-known drift of the sensor orientation estimate around the gravity direction by exploiting the particular configuration of the demonstration. This method, called heading reset, is based on the assumption that the sensor passes through its original heading with stationary phases several times during the demonstration. The heading reset is implemented in an integration and vector observation algorithm. Several experiments illustrate its advantages. A comprehensive inertial human hand motion tracking (IHMT) method for PbD is then developed. It includes an initialization procedure to estimate the orientation of each sensor with respect to the corresponding human arm segment and the initial orientation of the sensor with respect to the teacher-attached frame. The procedure involves a rotation and a static position of the extended arm. The measurement system is thus robust with respect to the positioning of the sensors on the segments. A procedure for estimating the position of the human teacher relative to the robot and a calibration procedure for the parameters of the method are also proposed. Finally, the error of the human hand trajectory is measured experimentally and found to lie between 28.5 mm and 61.8 mm. The mappings to solve the correspondence problem are identified. Unfortunately, the observed level of accuracy of this IHMT method is not sufficient for a PbD process. In order to reach the necessary level of accuracy, a method is proposed to correct the hand trajectory obtained by IHMT using vision data. A vision system presents a certain complementarity with inertial sensors. For the sake of simplicity and robustness, the vision system tracks only the objects, not the teacher. The correction is based on so-called Positions Of Interest (POIs) and involves 3 steps: the identification of the POIs in the inertial and vision data, the pairing of the hand POIs to object POIs that correspond to the same action in the task, and finally, the correction of the hand trajectory based on the pairs of POIs.
    The complete method for demonstration acquisition is experimentally evaluated in a full PbD process. This experiment reveals the advantages of the proposed method over kinesthetic guidance in the context of this work.
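    The heading reset idea lends itself to a short sketch: when the sensor is detected as stationary near its original heading, the accumulated yaw drift is discarded. The stationarity test, thresholds, and function shape below are assumptions, not the thesis's exact algorithm.

```python
# Hedged sketch of a heading (yaw) reset during detected stationary phases.
import numpy as np

def maybe_reset_heading(yaw_est, gyro, acc, yaw_ref=0.0,
                        gyro_thresh=0.02, acc_thresh=0.05, yaw_window=0.15):
    # Stationary if the gyro rate is small and the accelerometer reads ~1 g.
    stationary = (np.linalg.norm(gyro) < gyro_thresh and
                  abs(np.linalg.norm(acc) - 9.81) < acc_thresh)
    # Only reset when the estimate is already close to the original heading.
    near_start = abs(yaw_est - yaw_ref) < yaw_window
    if stationary and near_start:
        return yaw_ref   # snap heading back, zeroing the accumulated drift
    return yaw_est
```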

    Development of MEMS-based IMU for position estimation: comparison of sensor fusion solutions

    With the surge of inexpensive, widely accessible, and precise Micro-Electro-Mechanical Systems (MEMS) in recent years, inertial systems tracking movement have become ubiquitous. Contrary to Global Positioning System (GPS)-based positioning, Inertial Navigation Systems (INS) are intrinsically unaffected by signal jamming, blockage susceptibilities, and spoofing. Measurements from inertial sensors are also acquired at elevated sampling rates and may be numerically integrated to estimate position and orientation. These measurements are precise on a small time scale but gradually accumulate errors over extended periods. Combining multiple inertial sensors in a method known as sensor fusion makes it possible to produce a more consistent and dependable understanding of the system, decreasing accumulated errors. Several sensor fusion algorithms appear in the literature aimed at estimating the Attitude and Heading Reference System (AHRS) of a rigid body with respect to a reference frame. This work describes the development and implementation of a low-cost, multi-purpose INS for position and orientation estimation. Additionally, it presents an experimental comparison of a series of sensor fusion solutions, benchmarking their performance in estimating the position of a moving object. Results show a correlation between which sensors are trusted by the algorithm and how well it performs at estimating position. The Mahony, SAAM and Tilt algorithms had the best overall position estimation performance.
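    For context, the position estimate such a comparison relies on is the classic strapdown integration: accelerometer samples are rotated into the world frame using the AHRS quaternion, gravity is removed, and the result is integrated twice. The sketch below assumes a body-to-world quaternion in [w, x, y, z] order and abstracts away the quaternion source (Mahony, SAAM, Tilt, ...).

```python
# Hedged sketch of strapdown position estimation from an AHRS quaternion stream.
import numpy as np

def rotate(q, v):
    # Rotate vector v by unit quaternion q = [w, x, y, z] (body -> world).
    w, r = q[0], q[1:]
    return v + 2.0 * np.cross(r, np.cross(r, v) + w * v)

def integrate(quats, accels, dt, g=np.array([0.0, 0.0, 9.81])):
    pos, vel, path = np.zeros(3), np.zeros(3), []
    for q, a in zip(quats, accels):
        a_world = rotate(q, a) - g   # specific force in world frame, gravity removed
        vel = vel + a_world * dt     # first integration: velocity
        pos = pos + vel * dt         # second integration: position
        path.append(pos.copy())
    return np.array(path)            # drifts over time, as the abstract notes
```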

    Embarking on the Autonomous Journey: A Strikingly Engineered Car Control System Design

    This thesis develops an autonomous car control system with a Raspberry Pi. Two predictive models are implemented: a convolutional neural network (CNN) using machine learning and an input-based decision tree model using sensor data. The Raspberry Pi module controls the car hardware and acquires real-time camera data with OpenCV. A dedicated web server and event stream processor process data in real time using the trained neural network model, facilitating real-time decision-making. Unity and a Meta Quest 2 VR headset provide the VR interface, while a generic DIY kit from Amazon and the Raspberry Pi provide the car hardware inputs. This research demonstrates the potential of VR in automotive communication, enhancing autonomous car testing and user experience.
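    As an illustration only: a small CNN of the kind the abstract describes could map camera frames to discrete driving commands. The architecture, input size, and three-way command output below are assumptions; the thesis's actual model is not specified here.

```python
# Hedged sketch of a camera-to-command CNN (assumed architecture and shapes).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(66, 200, 3)),  # assumed camera frame size
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 5, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. left / straight / right
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```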