19 research outputs found

    Multimodal human hand motion sensing and analysis - a review


    User Experience Enhanced Interface and Controller Design for Human-Robot Interaction

    Robotic technologies have developed rapidly in recent years across various fields, such as medical services, industrial manufacturing and aerospace. Despite this rapid development, how to deal effectively with uncertain environments during human-robot interaction remains unresolved. Current artificial intelligence (AI) technology does not enable robots to fulfil complex tasks without human guidance. Thus teleoperation, in which a human operator remotely controls a robot, is indispensable in many scenarios and is an important and useful research tool. This thesis focuses on the design of a user experience (UX) enhanced robot controller and of human-robot interaction interfaces that aim to give human operators an immersive perception of teleoperation. Several works have been carried out to achieve this goal. First, to control a telerobot smoothly, a customised variable gain control method is proposed in which the stiffness of the telerobot varies with the muscle activation level extracted from signals collected by surface electromyography (sEMG) devices. Second, two main works are conducted to improve the user-friendliness of the interaction interfaces. One incorporates force feedback into the framework, providing operators with haptic feedback while remotely manipulating target objects; given the high cost of force sensors, a haptic force estimation algorithm is proposed so that a force sensor is no longer needed. The other develops a visual servo control system in which a stereo camera mounted on the head of a dual-arm robot offers operators a real-time view of the working situation. To compensate for internal and external uncertainties and accurately track the stereo camera's view angles along planned trajectories, a deterministic learning technique is utilised, which enables reuse of the knowledge learnt before the current dynamics change and thus increases learning efficiency. Third, instead of sending commands to the telerobots via joysticks, keyboards or demonstrations, the telerobots are controlled directly by the upper limb motion of the human operator. An algorithm is designed that utilises motion signals from an inertial measurement unit (IMU) sensor to capture the operator's upper limb motion. The skeleton of the operator is detected by a Kinect V2 and then transformed and mapped into the joint positions of the controlled robot arm. In this way, the operator's upper limb motion signals act as reference trajectories for the telerobots. A neural network (NN) based trajectory controller is also designed to track the generated reference trajectories. Fourth, to further enhance the operator's sense of immersion during teleoperation, virtual reality (VR) techniques are incorporated so that the operator can interact with and adjust the robots more easily and accurately from the robot's perspective. Comparative experiments have been performed to demonstrate the effectiveness of the proposed design scheme, and tests with human subjects were carried out to evaluate the interface design
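    As a minimal sketch of the variable gain idea described above: the muscle activation level can be estimated as a normalised low-pass-filtered envelope of the rectified sEMG signal and mapped onto a stiffness gain. The sampling rate, filter cutoff, normalisation constant and stiffness bounds below are illustrative assumptions, not values from the thesis.

        # Sketch of sEMG-driven variable stiffness control. Assumptions
        # (not from the thesis): 1 kHz sampling, 2 Hz envelope cutoff,
        # linear activation-to-stiffness mapping, and a pre-recorded
        # maximum-contraction envelope value for normalisation.
        import numpy as np
        from scipy.signal import butter, filtfilt

        FS = 1000.0                  # sEMG sampling rate in Hz (assumed)
        MVC_ENVELOPE = 0.8           # envelope at maximum contraction (assumed)
        K_MIN, K_MAX = 50.0, 400.0   # stiffness bounds in N/m (illustrative)

        def activation_level(semg: np.ndarray) -> np.ndarray:
            """Estimate muscle activation in [0, 1] from raw sEMG."""
            rectified = np.abs(semg - np.mean(semg))       # remove offset, rectify
            b, a = butter(2, 2.0 / (FS / 2), btype="low")  # 2 Hz envelope filter
            envelope = filtfilt(b, a, rectified)
            return np.clip(envelope / MVC_ENVELOPE, 0.0, 1.0)

        def variable_stiffness(semg: np.ndarray) -> np.ndarray:
            """Map activation linearly onto a stiffness gain for the telerobot."""
            return K_MIN + (K_MAX - K_MIN) * activation_level(semg)

    In use, the resulting gain would feed the proportional (stiffness) term of an impedance controller, so that harder muscle contraction stiffens the telerobot.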

    Rehabilitation Engineering

    Population ageing has major consequences and implications for all areas of daily life, as well as for economic growth, savings, investment and consumption, labour markets, pensions, property and intergenerational care. Health and related care, family composition and lifestyle, housing and migration are also affected. Given the rapid increase in population ageing and the further increase expected in the coming years, an important problem to be faced is the corresponding increase in chronic illness, disability and loss of functional independence endemic to the elderly (WHO 2008). For this reason, novel methods of rehabilitation and care management are urgently needed. This book covers many rehabilitation support systems and robots developed for the upper limbs, the lower limbs and the visually impaired. Beyond the upper limb, lower-limb research is also discussed, such as a motorised footrest for electric-powered wheelchairs and a standing assistance device

    Computational Intelligence in Electromyography Analysis

    Electromyography (EMG) is a technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG may be used clinically for the diagnosis of neuromuscular problems and for assessing biomechanical and motor control deficits and other functional disorders. Furthermore, it can be used as a control signal for interfacing with orthotic and/or prosthetic devices or other rehabilitation aids. This book presents an updated overview of signal-processing applications and recent developments in EMG from a number of diverse aspects and various applications in clinical and experimental research. It provides readers with a detailed introduction to EMG signal processing techniques and applications, while presenting several new results and explanations of existing algorithms. The book is organized into 18 chapters, covering current theoretical and practical approaches of EMG research
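    As a concrete illustration of the kind of EMG signal processing such chapters cover, the sketch below computes four classic time-domain features (mean absolute value, root mean square, waveform length, zero crossings) over sliding windows, as commonly done before classification. The window length, overlap and zero-crossing dead-band are illustrative choices, not prescriptions from the book.

        # Sketch of classic time-domain EMG feature extraction over
        # sliding windows; parameter values are illustrative only.
        import numpy as np

        def emg_features(x: np.ndarray, win: int = 256, step: int = 128,
                         zc_thresh: float = 0.01) -> np.ndarray:
            """Return an (n_windows, 4) array of [MAV, RMS, WL, ZC] features."""
            feats = []
            for start in range(0, len(x) - win + 1, step):
                w = x[start:start + win]
                mav = np.mean(np.abs(w))            # mean absolute value
                rms = np.sqrt(np.mean(w ** 2))      # root mean square
                wl = np.sum(np.abs(np.diff(w)))     # waveform length
                # zero crossings, ignoring low-amplitude noise
                zc = np.sum((w[:-1] * w[1:] < 0) &
                            (np.abs(w[:-1] - w[1:]) > zc_thresh))
                feats.append([mav, rms, wl, zc])
            return np.array(feats)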

    Modelling and analysis of hand motion in everyday activities with application to prosthetic hand technology

    Upper-limb prostheses are either too expensive for many consumers or offer a greatly simplified choice of actions; this research aims to enable an improvement in the quality of life for recipients of these devices. Previous attempts at determining the hand shapes performed during activities of daily living (ADL) have studied a limited range of tasks and recorded limited data. To avoid these limitations, motion capture systems and machine learning techniques have been utilised throughout this study. A portable motion capture system, built around a Leap Motion controller (LMC), captured natural hand motions during modern ADL. Furthering the use of these data, a method applying optimisation techniques alongside a musculoskeletal model of the hand is proposed for predicting muscle excitations from kinematic data. The LMC was also employed in a device (AirGo) created to measure joint angles, aiming to improve joint angle measurement in hand clinics. Hand movements of 22 participants were recorded during ADL over 111 hours and 20 minutes, providing a taxonomy of 40 and 24 hand shapes for the left and right hands, respectively. The predicted muscle excitations produced joint angles with an average correlation of 0.58 with those of the desired hand shapes. AirGo has been successfully employed within a hand therapy clinic to measure the digit angles of 11 patients. A taxonomy of the hand shapes used in modern ADL is presented, highlighting the hand shapes currently most appropriate to consider during upper-limb prosthesis development. A method for predicting the muscle excitations of the hand from kinematic data is introduced and applied to data collected during ADL. AirGo offered improved repeatability over the traditional devices used for such measurements, with greater ease of use
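    A minimal sketch of the optimisation idea: given target joint angles from motion capture, search for muscle excitations that a musculoskeletal forward model maps to matching angles. The forward_model below is a hypothetical linear placeholder standing in for the thesis's actual hand model, and the bounds and regularisation weight are assumptions for illustration.

        # Sketch of predicting muscle excitations from kinematic data by
        # optimisation against a musculoskeletal forward model. The
        # forward_model is a hypothetical placeholder, not the model
        # used in the thesis.
        import numpy as np
        from scipy.optimize import minimize

        N_MUSCLES = 4
        # Hypothetical moment-arm-style mapping for illustration only.
        A = np.random.default_rng(0).uniform(-1, 1, (3, N_MUSCLES))

        def forward_model(excitations: np.ndarray) -> np.ndarray:
            """Placeholder: map muscle excitations to 3 joint angles (rad)."""
            return A @ excitations

        def predict_excitations(target_angles: np.ndarray,
                                reg: float = 1e-2) -> np.ndarray:
            """Find excitations in [0, 1] whose predicted angles match targets."""
            def cost(e):
                err = forward_model(e) - target_angles
                return err @ err + reg * (e @ e)  # tracking error + effort penalty
            x0 = np.full(N_MUSCLES, 0.5)
            res = minimize(cost, x0, bounds=[(0.0, 1.0)] * N_MUSCLES)
            return res.x

        # Usage: e = predict_excitations(np.array([0.4, -0.1, 0.9]))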

    A survey of the application of soft computing to investment and financial trading


    Evaluating EEG–EMG Fusion-Based Classification as a Method for Improving Control of Wearable Robotic Devices for Upper-Limb Rehabilitation

    Musculoskeletal disorders are the biggest cause of disability worldwide, and wearable mechatronic rehabilitation devices have been proposed for treatment. However, before widespread adoption, improvements in user control and system adaptability are required. User intention should be detected intuitively, and user-induced changes in system dynamics should be unobtrusively identified and corrected. Developments often focus on model-dependent nonlinear control theory, which is challenging to implement for wearable devices. One alternative is to incorporate bioelectrical signal-based machine learning into the system, allowing simpler controller designs to be augmented by supplemental brain (electroencephalography/EEG) and muscle (electromyography/EMG) information. To better extract user intention, sensor fusion techniques have been proposed to combine EEG and EMG; however, further development is required to extend the capabilities of EEG–EMG fusion beyond basic motion classification. To this end, the goals of this thesis were to investigate expanded methods of EEG–EMG fusion and to develop a novel control system based on the incorporation of EEG–EMG fusion classifiers. A dataset of EEG and EMG signals was collected during dynamic elbow flexion–extension motions and used to develop EEG–EMG fusion models to classify task weight as well as motion intention. A variety of fusion methods were investigated, such as Weighted Average decision-level fusion (83.01 ± 6.04% accuracy) and Convolutional Neural Network-based input-level fusion (81.57 ± 7.11% accuracy), demonstrating that EEG–EMG fusion can classify more indirect tasks. A novel control system, referred to as a Task Weight Selective Controller (TWSC), was implemented using a Gain Scheduling-based approach dictated by external load estimations from an EEG–EMG fusion classifier. To improve system stability, classifier prediction debouncing was also proposed to reduce misclassifications through filtering. Performance of the TWSC was evaluated using a purpose-built upper-limb brace simulator. Due to simulator limitations, no significant difference in error was observed between the TWSC and PID control; however, the results did demonstrate the feasibility of prediction debouncing, which provided smoother device motion. Continued development of the TWSC and of EEG–EMG fusion techniques will ultimately result in wearable devices that adapt to changing loads more effectively, improving the user experience during operation
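    A hedged sketch of the two ideas named above: decision-level Weighted Average fusion of class probabilities from separate EEG and EMG classifiers, and prediction debouncing that only commits to a new class after it has been seen several times in a row. The fusion weight and debounce count are illustrative assumptions, not the thesis's tuned values.

        # Sketch of (1) weighted-average decision-level EEG-EMG fusion
        # and (2) prediction debouncing; parameters are illustrative.
        import numpy as np

        def fuse_decisions(p_eeg: np.ndarray, p_emg: np.ndarray,
                           w_eeg: float = 0.4) -> int:
            """Fuse per-class probability vectors; return the fused class index."""
            p = w_eeg * p_eeg + (1.0 - w_eeg) * p_emg
            return int(np.argmax(p))

        class Debouncer:
            """Commit to a new prediction only after n_consistent repeats."""
            def __init__(self, n_consistent: int = 3):
                self.n = n_consistent
                self.current = None    # last committed class
                self.candidate = None  # class awaiting confirmation
                self.count = 0

            def update(self, pred: int) -> int:
                if pred == self.current:
                    self.candidate, self.count = None, 0
                elif pred == self.candidate:
                    self.count += 1
                    if self.count >= self.n:
                        self.current, self.candidate, self.count = pred, None, 0
                else:
                    self.candidate, self.count = pred, 1
                return self.current if self.current is not None else pred

    Per classification window, something like cls = deb.update(fuse_decisions(p_eeg, p_emg)) would then select the controller gain schedule, with the debouncer filtering out isolated misclassifications.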

    Proceedings of the Scientific-Practical Conference "Research and Development - 2016"

    talent management; sensor arrays; automatic speech recognition; dry separation technology; oil production; oil waste; laser technology

    Towards adaptive and autonomous humanoid robots: from vision to actions

    Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these scenarios create a clear need to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of the research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach requires only a few training images (it was tested with 5 to 10 images per experiment) and is fast, scalable and robust. Additionally, it can generate human-readable programs that can be further customised and tuned. While CGP-IP is a supervised learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs of concept that integrate the motion and action sides. First, reactive reaching and grasping is shown: the robot avoids obstacles detected in the visual stream while reaching for the intended target object. This integration also enables use of the robot in non-static environments, i.e. the reach is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second proof of concept highlights the capabilities of these frameworks by improving visual detection through object manipulation actions
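    To make the training loop concrete, the sketch below shows one way to evaluate a candidate image-processing program against a small set of labelled images, as in evolutionary image processing: the candidate produces a binary mask, scored here with the Matthews correlation coefficient against the ground-truth mask. The threshold "program" is a trivial stand-in for an evolved CGP graph, and the scoring choice is an assumption for illustration rather than a restatement of CGP-IP's exact fitness function.

        # Sketch of fitness evaluation for evolved image-processing
        # programs on a handful of labelled images. The candidate
        # program here is a trivial threshold stand-in.
        import numpy as np

        def mcc(pred: np.ndarray, truth: np.ndarray) -> float:
            """Matthews correlation coefficient for two boolean masks."""
            tp = np.sum(pred & truth)
            tn = np.sum(~pred & ~truth)
            fp = np.sum(pred & ~truth)
            fn = np.sum(~pred & truth)
            denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
            return 0.0 if denom == 0 else float((tp * tn - fp * fn) / denom)

        def candidate_program(image: np.ndarray, thresh: float = 0.5) -> np.ndarray:
            """Stand-in for an evolved program: simple intensity threshold."""
            return image > thresh

        def fitness(images, masks) -> float:
            """Average MCC over the (small) training set; higher is better."""
            return float(np.mean([mcc(candidate_program(im), gt)
                                  for im, gt in zip(images, masks)]))

    An evolutionary loop would mutate the candidate program and keep the highest-fitness variant; with only 5 to 10 training images, as reported above, each evaluation stays cheap.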