290 research outputs found

    Fast human motion prediction for human-robot collaboration with wearable interfaces

    In this paper, we aim to improve human motion prediction during human-robot collaboration in industrial facilities by exploiting contributions from both physical and physiological signals. Improved human-machine collaboration could prove useful in several areas, and it is crucial for interacting robots to understand human movement as early as possible to avoid accidents and injuries. With this in mind, we propose a novel human-robot interface capable of anticipating the user's intention while performing reaching movements on a workbench, in order to plan the action of a collaborative robot. The proposed interface can find many applications in the Industry 4.0 framework, where autonomous and collaborative robots will be an essential part of innovative facilities. Motion intention prediction and motion direction prediction levels were developed to improve detection speed and accuracy. A Gaussian Mixture Model (GMM) was trained on IMU and EMG data following an evidence accumulation approach to predict reaching direction. Novel dynamic stopping criteria are proposed to flexibly adjust the trade-off between early anticipation and accuracy according to the application. The outputs of the two predictors are used as external inputs to a Finite State Machine (FSM) that controls the behaviour of a physical robot according to the user's action or inaction. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of 94.3 ± 2.9% within 160.0 ± 80.0 ms of movement onset
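The evidence-accumulation idea with a dynamic stopping criterion can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two-class setup, the per-frame log-likelihood values and the 0.95 confidence threshold are all assumptions chosen for the example.

```python
import math

def accumulate_until_confident(log_likelihood_stream, threshold=0.95):
    """Accumulate per-class log-likelihoods frame by frame and stop as
    soon as one class's posterior exceeds `threshold` (dynamic stopping)."""
    cum, best, t = None, None, 0
    for t, frame in enumerate(log_likelihood_stream, start=1):
        # running sum of evidence per class
        cum = list(frame) if cum is None else [c + f for c, f in zip(cum, frame)]
        # softmax over accumulated evidence -> posterior per class
        m = max(cum)
        exps = [math.exp(c - m) for c in cum]
        total = sum(exps)
        post = [e / total for e in exps]
        best = max(range(len(post)), key=post.__getitem__)
        if post[best] >= threshold:
            return best, t  # decision and number of frames consumed
    return best, t  # fall back to the final estimate
```

Raising the threshold trades earlier anticipation for accuracy, which is the knob the abstract's stopping criteria adjust.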

    Comparison of machine learning algorithms for EMG signal classification

    The use of muscle activation signals in the control loop of biomechatronic systems is extremely important for effective and stable control. One method used for this purpose is motion classification from electromyography (EMG) signals, which reflect muscle activation. Classifying these signals, with their variable amplitude and frequency, is a difficult process. Moreover, EMG signal characteristics change over time depending on the person, the task and its duration. Various artificial-intelligence-based methods are used for movement classification, one of which is machine learning. In this study, a total of 24 models from 6 main machine learning algorithms were used for motion classification. With these models, 7 wrist movements (rest, grip, flexion, extension, radial deviation, ulnar deviation, expanded palm) were classified. Tests were carried out on 8 channels of EMG data taken from 4 subjects. Classification performances were compared in terms of classification accuracy and training time. According to the simulation results, the Bagged Trees ensemble model showed the highest classification performance, with an average classification accuracy of 98.55%
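The abstract does not list the features fed to the classifiers; a hedged sketch of three time-domain features commonly extracted from a windowed EMG channel before classification (mean absolute value, waveform length, zero crossings) might look like this:

```python
def emg_features(window):
    """Return (MAV, WL, ZC) for one channel window of EMG samples.
    These are standard time-domain EMG features, assumed here for
    illustration; the paper's exact feature set is not specified."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n                         # mean absolute value
    wl = sum(abs(window[i + 1] - window[i]) for i in range(n - 1))  # waveform length
    zc = sum(1 for i in range(n - 1) if window[i] * window[i + 1] < 0)  # zero crossings
    return mav, wl, zc
```

One such feature triple per channel, concatenated over 8 channels, would give a 24-dimensional vector per window for any of the compared classifiers.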

    SEMG based intention identification of complex hand motion using nonlinear time series analysis

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when a task is complex, it is difficult to program a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans adapt to uncertainties easily, most of the time the skill-based performances that rely on their tacit knowledge cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some movements are unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, allowing it to be transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from human teacher to robot, using probabilistic encoding approaches to model observations and state-transition uncertainties. In close or actual contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising means to record state-action examples without restricting the human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system.
The data fusion method was able to overcome the occlusion and frame-flipping problems of the two-camera Vicon setup and the drift problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves Mean Square Error (MSE) tracking accuracy by between 0.8° and 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method of indicating contact feedback during manual manipulations. A muscle-force model using a Time-Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater-winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimates and the motion trajectories, a Hidden Markov Model (HMM) based approach was used as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions, allowing a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to reproduce motions using the learned state-action policy. To simplify the validation procedure, additional demonstrations from the teacher, rather than the robot, were used to verify the reproduction performance of the policy, assuming the human teacher and robot learner to be physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests.
The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where human skilled performance is required to cope robustly with various uncertainties during task execution
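The GMR step used for motion reproduction can be sketched in its simplest form: condition each Gaussian component on the input (here a scalar time index) and blend the resulting conditional means by the components' responsibilities. The one-input/one-output setup and the parameter values are illustrative assumptions, not the thesis's fitted models.

```python
import math

def gmr(t, components):
    """Gaussian Mixture Regression for one input and one output dimension.
    components: list of (weight, mu_t, mu_x, var_t, cov_tx) tuples."""
    resp, cond_means = [], []
    for w, mu_t, mu_x, var_t, cov_tx in components:
        # responsibility: weighted marginal likelihood of t under this component
        lik = w * math.exp(-(t - mu_t) ** 2 / (2 * var_t)) / math.sqrt(2 * math.pi * var_t)
        resp.append(lik)
        # conditional mean of the output given t (Gaussian conditioning formula)
        cond_means.append(mu_x + cov_tx / var_t * (t - mu_t))
    z = sum(resp)
    return sum(r / z * m for r, m in zip(resp, cond_means))
```

Sweeping `t` over the demonstration's duration yields a smooth reference trajectory blended from the learned components.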

    Novel Methods for Surface EMG Analysis and Exploration Based on Multi-Modal Gaussian Mixture Models

    This paper introduces a new method for data analysis of animal muscle activation during locomotion. It is based on fitting Gaussian mixture models (GMMs) to surface EMG (sEMG) data. This approach enables researchers to isolate parts of the overall muscle activation within locomotion EMG data. Furthermore, it provides new opportunities for analysis and exploration of sEMG data by using the resulting Gaussian modes as atomic building blocks for a hierarchical clustering. In our experiments, composite peak models representing the general activation pattern per sensor location (one sensor on the long back muscle, three sensors on the gluteus muscle on each body side) were identified per individual for all 14 horses during walk and trot. We thereby show the applicability of the method for identifying composite peak models, which describe the activation of different muscles throughout cycles of locomotion
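In the spirit of the composite peak models described above, an activation profile over a normalised stride cycle can be represented as a sum of Gaussian modes. The mode parameters below are illustrative, not fitted values from the study:

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian probability density evaluated at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def composite_peaks(x, modes):
    """Activation at stride phase x in [0, 1], as a weighted sum of
    Gaussian modes; modes: list of (weight, mu, sigma)."""
    return sum(w * gaussian(x, mu, s) for w, mu, s in modes)
```

Each fitted mode (weight, position, width) then serves as an atomic building block, e.g. as a point for the hierarchical clustering the paper describes.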

    EXPERIMENTAL-COMPUTATIONAL ANALYSIS OF VIGILANCE DYNAMICS FOR APPLICATIONS IN SLEEP AND EPILEPSY

    Epilepsy is a neurological disorder characterized by recurrent seizures. Sleep problems can co-occur with epilepsy and adversely affect seizure diagnosis and treatment. In fact, the relationship between sleep and seizures in individuals with epilepsy is a complex one: seizures disturb sleep, and sleep deprivation aggravates seizures. Antiepileptic drugs may also impair sleep quality at the cost of controlling seizures. In general, particular vigilance states may inhibit or facilitate seizure generation, and changes in vigilance state can affect the predictability of seizures. A clear understanding of sleep-seizure interactions will therefore benefit epilepsy care providers and improve quality of life in patients. Notable progress in neuroscience research, particularly on sleep and epilepsy, has been achieved through experimentation on animals. Experimental models of epilepsy provide the opportunity to explore or even manipulate the sleep-seizure relationship in order to decipher different aspects of their interactions. Important in this process is the development of techniques for modeling and tracking sleep dynamics using electrophysiological measurements. In this dissertation, experimental and computational approaches are proposed for modeling vigilance dynamics, and their utility is demonstrated in nonepileptic control mice. The general framework of hidden Markov models is used to automatically model and track sleep state and dynamics from electrophysiological as well as novel motion measurements. In addition, a closed-loop sensory stimulation technique is proposed that, in conjunction with this model, provides the means to concurrently track and modulate vigilance dynamics in animals. The feasibility of the proposed techniques for modeling and altering sleep is demonstrated for experimental applications related to epilepsy.
Finally, preliminary data from a mouse model of temporal lobe epilepsy are used to suggest applications of these techniques and directions for future research. The methodologies developed here have clear implications for the design of intelligent neuromodulation strategies for clinical epilepsy therapy
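Tracking a hidden vigilance state from observable measurements, as the hidden Markov model framework above does, can be sketched with the standard Viterbi decoder. The two-state wake/sleep model, the discrete "high"/"low" activity observations and all probabilities below are toy assumptions for illustration, not the dissertation's parameters.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for a discrete observation sequence."""
    # V[t][s] = (probability of best path ending in s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prev, p = max(((r, V[-1][r][0] * trans_p[r][s]) for r in states),
                          key=lambda kv: kv[1])
            row[s] = (p * emit_p[s][o], prev)
        V.append(row)
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(V) - 1, 0, -1):
        last = V[t][last][1]
        path.append(last)
    return path[::-1]
```

In a closed-loop setting, the decoded (or forward-filtered) state estimate would gate when sensory stimulation is delivered.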

    Subject-Independent Frameworks for Robotic Devices: Applying Robot Learning to EMG Signals

    The capability of humans and robots cooperating has increased interest in controlling robotic devices by means of physiological human signals. To achieve this goal, it is crucial to capture the human intention of movement and to translate it into a coherent robot action. Up to now, the classical approach when considering physiological signals, and in particular EMG signals, has been to focus on the specific subject performing the task, owing to the great complexity of these signals. This thesis aims to expand the state of the art by proposing a general subject-independent framework, able to extract the common constraints of human movement by looking at several demonstrations by many different subjects. The variability introduced into the system by multiple demonstrations from many different subjects allows the construction of a robust model of human movement, able to cope with small variations and signal deterioration. Furthermore, the obtained framework can be used by any subject with no need for long training sessions. The signals undergo an accurate preprocessing phase to remove noise and artefacts. Following this procedure, we are able to extract significant information to be used in online processes. Human movement can be estimated using well-established statistical methods from Robot Programming by Demonstration applications; in particular, the input can be modelled using a Gaussian Mixture Model (GMM). The performed movement can be continuously estimated with a Gaussian Mixture Regression (GMR) technique, or it can be identified among a set of possible movements with a Gaussian Mixture Classification (GMC) approach. We improved the results by incorporating prior information in the model, in order to enrich the knowledge of the system. In particular, we considered the hierarchical information provided by a quantitative taxonomy of hand grasps.
Thus, we developed the first quantitative taxonomy of hand grasps considering both muscular and kinematic information from 40 subjects. The results proved the feasibility of a subject-independent framework, even when considering physiological signals, like EMG, from a large number of participants. The proposed solution has been used in two different kinds of applications: (I) the control of prosthetic devices, and (II) an Industry 4.0 facility, allowing humans and robots to work alongside each other or to cooperate. Indeed, a crucial aspect of making humans and robots work together is their mutual knowledge and anticipation of each other's tasks, and physiological signals can provide a cue even before the movement has started. In this thesis we also propose an application of Robot Programming by Demonstration in a real industrial facility, in order to optimize the production of electric motor coils. The task was part of the European Robotic Challenge (EuRoC), and the goal was divided into phases of increasing complexity. The solution exploits machine learning algorithms, like GMM, and its robustness was assured by considering demonstrations of the task from many subjects. We have been able to apply an advanced research topic in a real factory, achieving promising results
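The GMC step mentioned above amounts to scoring a feature vector under one GMM per movement class and picking the class with the highest likelihood. A minimal sketch with one-dimensional features and toy parameters (the class names and values are assumptions, not the thesis's models):

```python
import math

def gmm_pdf(x, components):
    """Likelihood of scalar x under a 1-D GMM; components: (weight, mu, sigma)."""
    return sum(w * math.exp(-(x - mu) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
               for w, mu, s in components)

def classify(x, class_models):
    """Gaussian Mixture Classification: pick the class whose GMM
    assigns the highest likelihood to x."""
    return max(class_models, key=lambda c: gmm_pdf(x, class_models[c]))
```

In a subject-independent setting, each class GMM would be fitted on demonstrations pooled across many subjects, so the decision rule stays unchanged for a new user.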

    Patterns in Motion - From the Detection of Primitives to Steering Animations

    In recent decades, the world of technology has developed rapidly. Illustrative of this trend is the growing number of affordable methods for recording new and bigger data sets. The resulting masses of multivariate and high-dimensional data represent a new challenge for research and industry. This thesis is dedicated to the development of novel methods for processing multivariate time series data, thus meeting this Data Science related challenge. This is done by introducing a range of different methods designed to deal with time series data. The variety of methods reflects the different requirements and the typical stages of data processing, ranging from pre-processing to post-processing and data recycling. Many of the techniques introduced work in a general setting. However, various types of motion recordings of human and animal subjects were chosen as representatives of multivariate time series. The different data modalities include Motion Capture data, accelerations, gyroscopes, electromyography, depth data (Kinect) and animated 3D meshes. It is the goal of this thesis to provide a deeper understanding of working with multivariate time series by taking the example of multivariate motion data. In order to maintain an overview of the matter, the thesis follows a basic general pipeline. This pipeline was developed as a guideline for time series processing and is the first contribution of this work. Each part of the thesis represents one important stage of this pipeline, which can be summarized under the topics segmentation, analysis and synthesis. Specific examples of different data modalities, processing requirements and methods to meet them are discussed in the chapters of the respective parts. One important contribution of this thesis is a novel method for temporal segmentation of motion data.
It is based on the idea of self-similarities within motion data and is capable of unsupervised segmentation of a range of motion data into distinct activities and motion primitives. The examples concerned with the analysis of multivariate time series reflect the role of data analysis in different interdisciplinary contexts and also the variety of requirements that comes with collaboration with other sciences. These requirements are directly connected to current challenges in data science. Finally, the problem of synthesis of multivariate time series is discussed using a graph-based example and examples related to rigging or steering of meshes. Synthesis is an important stage in data processing because it creates new data from existing data in a controlled way. This makes it possible to exploit existing data sets and to access more condensed data, thus providing feasible alternatives to otherwise time-consuming manual processing
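The self-similarity idea behind the segmentation method can be illustrated by the frame-to-frame distance matrix of a motion sequence: blocks of low distance along the diagonal indicate sustained or repeated activities, and block boundaries suggest segment cuts. This is a bare sketch of the underlying structure, not the thesis's full algorithm.

```python
def self_similarity(frames):
    """Pairwise Euclidean distance matrix of a sequence of feature vectors
    (e.g. joint angles per frame); low-distance blocks mark self-similar
    stretches of the motion."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [[dist(a, b) for b in frames] for a in frames]
```

A segmentation algorithm would then operate on this matrix, e.g. by detecting the block boundaries, rather than on the raw frames.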