40 research outputs found

    A survey of the application of soft computing to investment and financial trading


    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans can adapt to uncertainties easily, most of the time the skill-based performance that relates to their tacit knowledge cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some motions are unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, which would then allow it to be transferred to the robot. This thesis aims to reduce robot programming effort significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework to transfer skills from a human teacher to a robot, using probabilistic encoding approaches to model observations and state-transition uncertainties. In close or actual contact manipulation tasks, it is difficult to reliably record the state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as promising devices to record the state-action examples without restricting the human experts during the skilled execution of their tasks.
    Firstly, to track human motion accurately and reliably in a defined three-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system. The data fusion method was able to overcome the occlusion and frame-flipping problems in the two-camera Vicon setup and the drifting problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy by 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method to indicate contact feedback during manual manipulations. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimation and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions, allowing a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy.
To simplify the validation procedure, instead of using the robot, additional demonstrations from the teacher were used to verify the reproduction performance of the policy, by assuming that the human teacher and the robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the reproduced motions from GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where skilled human performance is required to cope robustly with various uncertainties during task execution.
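The muscle-force model described above maps windows of sEMG samples to a contact-force estimate. Below is a minimal sketch of such a time-delayed network, using 1-D convolutions as the tapped delay lines; the channel count, window length and layer sizes are illustrative assumptions rather than the values used in the thesis.

```python
# Sketch (assumed architecture): a TDNN-style regressor from multi-channel
# sEMG windows to a scalar contact-force estimate.
import torch
import torch.nn as nn

class EMGForceTDNN(nn.Module):
    def __init__(self, n_channels: int = 8, window: int = 50):
        super().__init__()
        # 1-D convolutions over time act as the tapped delay lines of a TDNN.
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, dilation=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),          # scalar force estimate (N)
        )

    def forward(self, emg: torch.Tensor) -> torch.Tensor:
        # emg: (batch, channels, window) of rectified/filtered sEMG
        return self.net(emg)

model = EMGForceTDNN()
dummy = torch.randn(4, 8, 50)          # 4 windows of 8-channel sEMG
print(model(dummy).shape)              # -> torch.Size([4, 1])
```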

    Multi-modal EMG-based hand gesture classification for the control of a robotic prosthetic hand

    Upper-limb myoelectric prosthesis control utilises electromyography (EMG) signals as input and applies statistical and machine learning techniques to intuitively identify the user’s intended grasp. Surface EMG signals recorded with electrodes attached to the user’s skin have been successfully used for prosthesis control in controlled lab conditions for decades. However, due to the stochastic and non-stationary nature of the EMG signal, clinical use of pattern recognition myoelectric control in everyday life conditions is limited. This thesis presents an extensive literature review of the main causes of EMG signal drift over time, ways of detecting such drift and possible techniques to counteract its effects in upper-limb prosthesis applications. Three approaches are investigated to provide more robust classification performance under conditions of EMG signal drift: improving the classifier, incorporating extra sensory modalities, and utilising transfer learning techniques to improve between-subject classification performance. Linear Discriminant Analysis (LDA) is the baseline algorithm in myoelectric grasp classification applications, providing good performance with low computational requirements. However, it assumes Gaussian class distributions with a shared covariance, and its performance relies on hand-engineered features. Deep Neural Networks (DNNs) have the advantage of learning the features while training the classifier. In this thesis, two deep learning models have been implemented for grasp classification of EMG signals, achieving better performance than the baseline LDA algorithm. Moreover, deep neural networks provide a convenient basis for transferring learned knowledge and improving the adaptation capabilities of the classifier. An adaptation approach is suggested and tested on the inter-subject classification task, demonstrating better performance when utilising pre-trained neural networks. Finally, research has suggested that adding extra sensory modalities alongside EMG, such as Inertial Measurement Unit (IMU) data, improves classification performance compared with training on EMG data alone. In this thesis, ways of incorporating different sensory modalities are suggested for both the LDA classifier and the DNNs, demonstrating the benefit of a multi-modal grasp classifier. The Edinburgh Centre for Robotics and EPSRC.
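As an illustration of the multi-modal baseline discussed above, the sketch below concatenates simple time-domain EMG features with IMU statistics and trains a scikit-learn LDA grasp classifier on synthetic data; the feature set, window shapes and class count are assumptions for demonstration only.

```python
# Sketch (assumed features/shapes): multi-modal EMG+IMU features fed to LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def emg_features(window: np.ndarray) -> np.ndarray:
    # window: (samples, channels) raw sEMG; classic time-domain features
    mav = np.mean(np.abs(window), axis=0)                 # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)  # waveform length
    return np.concatenate([mav, wl])

def imu_features(window: np.ndarray) -> np.ndarray:
    # window: (samples, axes) accelerometer/gyro; mean and variance per axis
    return np.concatenate([window.mean(axis=0), window.var(axis=0)])

def fused_features(emg_win: np.ndarray, imu_win: np.ndarray) -> np.ndarray:
    # Multi-modal fusion by simple feature concatenation.
    return np.concatenate([emg_features(emg_win), imu_features(imu_win)])

# Toy training data: 200 windows, 8 EMG channels, 6 IMU axes, 4 grasp classes.
rng = np.random.default_rng(0)
X = np.stack([fused_features(rng.standard_normal((200, 8)),
                             rng.standard_normal((200, 6))) for _ in range(200)])
y = rng.integers(0, 4, size=200)

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))
```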

    Computational Intelligence in Electromyography Analysis

    Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG may be used clinically for the diagnosis of neuromuscular problems and for assessing biomechanical and motor control deficits and other functional disorders. Furthermore, it can be used as a control signal for interfacing with orthotic and/or prosthetic devices or other rehabilitation aids. This book presents an updated overview of signal processing applications and recent developments in EMG from a number of diverse aspects and various applications in clinical and experimental research. It provides readers with a detailed introduction to EMG signal processing techniques and applications, while presenting several new results and explanations of existing algorithms. This book is organized into 18 chapters, covering current theoretical and practical approaches to EMG research.
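As a concrete example of the kind of signal conditioning covered throughout the book, the sketch below implements a common EMG processing chain (band-pass filtering, full-wave rectification and a low-pass envelope) with SciPy; the sampling rate and cut-off frequencies are typical textbook choices rather than values prescribed by any particular chapter.

```python
# Sketch (assumed sampling rate and cut-offs): a standard EMG envelope pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sampling rate in Hz (assumed)

def emg_envelope(raw: np.ndarray, fs: float = FS) -> np.ndarray:
    # 1) Band-pass 20-450 Hz to remove motion artefacts and high-frequency noise.
    b, a = butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw)
    # 2) Full-wave rectification.
    rectified = np.abs(filtered)
    # 3) Low-pass at 6 Hz to obtain the linear envelope.
    b, a = butter(4, 6 / (fs / 2), btype="low")
    return filtfilt(b, a, rectified)

t = np.arange(0, 2, 1 / FS)
raw = np.random.default_rng(1).standard_normal(t.size)   # stand-in for raw EMG
print(emg_envelope(raw).shape)                            # (2000,)
```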

    Biomechatronics: Harmonizing Mechatronic Systems with Human Beings

    This eBook provides a comprehensive treatise on modern biomechatronic systems centred around human applications. A particular emphasis is given to exoskeleton designs for assistance and training with advanced interfaces for human-machine interaction. Some of these designs are validated with experimental results, which the reader will find informative as building blocks for designing such systems. This eBook is ideally suited to those researching in the biomechatronics area with biofeedback applications, or those involved in high-end research on man-machine interfaces. It may also serve as a textbook for biomechatronic design at postgraduate level.

    Optimized Biosignals Processing Algorithms for New Designs of Human Machine Interfaces on Parallel Ultra-Low Power Architectures

    The aim of this dissertation is to explore Human Machine Interfaces (HMIs) in a variety of biomedical scenarios. The research addresses typical challenges in wearable and implantable devices for diagnostic, monitoring, and prosthetic purposes, suggesting a methodology for tailoring such applications to cutting-edge embedded architectures. The main challenge is enhancing high-level applications, including Machine Learning (ML) algorithms, by using parallel programming and specialized hardware to improve performance. The majority of these algorithms are computationally intensive, posing significant challenges for deployment on embedded devices, which have several limitations in terms of memory size, maximum operating frequency, and battery life. The proposed solutions take advantage of a Parallel Ultra-Low Power (PULP) architecture, tailoring the processing to specific target architectures and heavily optimizing execution by exploiting software and hardware resources. The thesis starts by describing a methodology that can be considered a guideline for efficiently implementing algorithms on embedded architectures. This is followed by several case studies in the biomedical field, starting with a hand gesture recognition application based on the Hyperdimensional Computing algorithm, which allows fast on-chip re-training, together with a comparison against a state-of-the-art Support Vector Machine (SVM); a Brain-Computer Interface (BCI) to detect the brain's response to a visual stimulus follows in the manuscript. Furthermore, a seizure detection application is also presented, exploring different solutions for the dimensionality reduction of the input signals. The last part is dedicated to an exploration of typical modules for the development of optimized ECG-based applications.
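To make the Hyperdimensional Computing case study more tangible, the sketch below shows a bare-bones bipolar HD classifier for gesture windows: channel and amplitude-level hypervectors are bound element-wise, bundled into class prototypes, and queried by similarity. The dimensionality, channel count and encoding scheme are illustrative assumptions, not the dissertation's actual design.

```python
# Sketch (assumed encoding): hyperdimensional computing for gesture windows.
import numpy as np

D = 10_000        # hypervector dimensionality
N_CH = 8          # sEMG channels (assumed)
N_LEVELS = 16     # quantisation levels for channel amplitude
rng = np.random.default_rng(0)

channel_hv = rng.choice([-1, 1], size=(N_CH, D))     # random channel identities
level_hv = rng.choice([-1, 1], size=(N_LEVELS, D))   # random amplitude levels

def encode(window: np.ndarray) -> np.ndarray:
    # window: (samples, N_CH); quantise each channel's mean amplitude, bind it
    # to its channel identity, and bundle all channels into one hypervector.
    amp = np.abs(window).mean(axis=0)
    levels = np.clip((amp * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    bound = channel_hv * level_hv[levels]            # element-wise binding
    return np.sign(bound.sum(axis=0))                # bundling + bipolarisation

def train(windows, labels, n_classes):
    # Class prototypes are the bundled (summed, then sign-thresholded) encodings.
    protos = np.zeros((n_classes, D))
    for w, y in zip(windows, labels):
        protos[y] += encode(w)
    return np.sign(protos)

def predict(protos, window):
    # Nearest prototype by dot-product similarity.
    sims = protos @ encode(window)
    return int(np.argmax(sims))

# Toy usage: two gesture classes with random windows.
wins = [rng.standard_normal((200, N_CH)) for _ in range(20)]
labs = [i % 2 for i in range(20)]
protos = train(wins, labs, n_classes=2)
print(predict(protos, wins[0]))
```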

    Silent Speech Interfaces for Speech Restoration: A Review

    This work was supported in part by the Agencia Estatal de Investigacion (AEI) under Grant PID2019-108040RB-C22/AEI/10.13039/501100011033. The work of Jose A. Gonzalez-Lopez was supported in part by the Spanish Ministry of Science, Innovation and Universities under a Juan de la Cierva-Incorporation Fellowship (IJCI-2017-32926). This review summarises the status of silent speech interface (SSI) research. SSIs rely on non-acoustic biosignals generated by the human body during speech production to enable communication whenever normal verbal communication is not possible or not desirable. In this review, we focus on the first case and present the latest SSI research aimed at providing new alternative and augmentative communication methods for persons with severe speech disorders. SSIs can employ a variety of biosignals to enable silent communication, such as electrophysiological recordings of neural activity, electromyographic (EMG) recordings of vocal tract movements, or the direct tracking of articulator movements using imaging techniques. Depending on the disorder, some sensing techniques may be better suited than others to capture speech-related information. For instance, EMG and imaging techniques are well suited for laryngectomised patients, whose vocal tract remains almost intact although they are unable to speak after the removal of the vocal folds, but they fail for severely paralysed individuals. From the biosignals, SSIs decode the intended message using automatic speech recognition or speech synthesis algorithms. Despite considerable advances in recent years, most present-day SSIs have only been validated in laboratory settings for healthy users. Thus, as discussed in this paper, a number of challenges remain to be addressed in future research before SSIs can be promoted to real-world applications. If these issues can be addressed successfully, future SSIs will improve the lives of persons with severe speech impairments by restoring their communication capabilities.

    Smart Sensors for Healthcare and Medical Applications

    This book focuses on new sensing technologies, measurement techniques, and their applications in medicine and healthcare. Specifically, the book briefly describes the potential of smart sensors in the aforementioned applications, collecting 24 articles selected and published in the Special Issue “Smart Sensors for Healthcare and Medical Applications”. We proposed this topic, being aware of the pivotal role that smart sensors can play in the improvement of healthcare services in both acute and chronic conditions, as well as in prevention for a healthy life and active aging. The articles selected in this book cover a variety of topics related to the design, validation, and application of smart sensors to healthcare.