    Multimodal human hand motion sensing and analysis - a review

    Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data

    Object manipulation actions represent an important share of the Activities of Daily Living (ADLs). In this work, we study how to enable service robots to use human multi-modal data to understand object manipulation actions, and how they can recognize such actions when humans perform them during human-robot collaboration tasks. The multi-modal data in this study consist of videos, hand motion data, applied forces as represented by the pressure patterns on the hand, and measurements of the bending of the fingers, collected as human subjects performed manipulation actions. We investigate two different approaches. In the first, we show that the multi-modal signal (motion, finger bending, and hand pressure) generated by an action can be decomposed into a set of primitives that can be seen as its building blocks. These primitives are used to define 24 multi-modal primitive features. The primitive features can in turn be used as an abstract representation of the multi-modal signal and employed for action recognition. In the second approach, visual features are extracted from the data using a pre-trained image-classification deep convolutional neural network and subsequently used to train a classifier. We also investigate whether adding data from other modalities produces a statistically significant improvement in classifier performance. We show that both approaches produce comparable performance, which implies that image-based methods can successfully recognize human actions during human-robot collaboration. On the other hand, for providing training data from which the robot can learn how to perform object manipulation actions, multi-modal data offers a better alternative.
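
    A minimal sketch of the second approach described above, assuming a PyTorch/torchvision setup: a pre-trained image-classification CNN is used as a fixed feature extractor, and a conventional classifier is trained on the resulting visual features. The ResNet-18 backbone and the SVM classifier are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: pre-trained CNN as a fixed visual feature extractor,
# followed by a conventional classifier. ResNet-18 and the SVM are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pre-trained CNN with its classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def visual_features(frames):
    """Map a list of PIL video frames to one CNN feature vector per frame."""
    batch = torch.stack([preprocess(f) for f in frames])
    with torch.no_grad():
        return backbone(batch).numpy()

# Classification on the extracted features (X: feature vectors, y: action labels):
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# predictions = clf.predict(X_test)
```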

    Bio-signal based control in assistive robots: a survey

    Recently, bio-signal based control has been gradually deployed in biomedical devices and assistive robots to improve the quality of life of disabled and elderly people; among these signals, electromyography (EMG) and electroencephalography (EEG) are the most widely used. This paper reviews the deployment of these bio-signals in state-of-the-art control systems. The main aim of this paper is to describe the techniques used for (i) collecting EMG and EEG signals and dividing them into segments (data acquisition and data segmentation stage), (ii) extracting the important information and removing redundant data from the EMG and EEG segments (feature extraction stage), and (iii) identifying categories from the relevant data obtained in the previous stage (classification stage). Furthermore, this paper presents a summary of applications controlled through these two bio-signals and some research challenges in the creation of such control systems. Finally, a brief conclusion is presented.
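
    As an illustration of the three stages this survey describes, here is a minimal EMG sketch assuming NumPy and scikit-learn: overlapping windows for segmentation, classic time-domain features (mean absolute value, zero crossings, waveform length, common choices in the EMG literature) for feature extraction, and an off-the-shelf classifier for classification. The window sizes, feature set, and LDA classifier are assumptions for illustration, not prescriptions from the survey.

```python
# Hedged sketch of a three-stage EMG pipeline: segmentation, feature
# extraction, classification. Window length and features are common
# choices in the EMG literature, assumed here for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def segment(signal, win=200, step=100):
    """Slice a 1-D EMG signal into overlapping windows (segmentation stage)."""
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

def features(windows):
    """Classic time-domain features per window (feature extraction stage)."""
    mav = np.mean(np.abs(windows), axis=1)                        # mean absolute value
    zc = np.sum(np.diff(np.sign(windows), axis=1) != 0, axis=1)   # zero crossings
    wl = np.sum(np.abs(np.diff(windows, axis=1)), axis=1)         # waveform length
    return np.column_stack([mav, zc, wl])

# Classification stage: fit a classifier on labelled feature vectors.
# clf = LinearDiscriminantAnalysis().fit(features(segment(train_signal)), labels)
# predicted = clf.predict(features(segment(test_signal)))
```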

    Automatic Recognition of Concurrent and Coupled Human Motion Sequences

    We developed methods and algorithms for all parts of a motion recognition system, i.e., feature extraction, motion segmentation and labeling, motion primitive and context modeling, and decoding. We collected several datasets to compare our proposed methods with the state of the art in human motion recognition. The main contributions of this thesis are a structured functional motion decomposition and a flexible, scalable motion recognition system suitable for a humanoid robot.

    Learning object, grasping and manipulation activities using hierarchical HMMs

    This article presents a probabilistic algorithm for representing and learning complex manipulation activities performed by humans in everyday life. The work builds on the multi-level Hierarchical Hidden Markov Model (HHMM) framework, which allows longer-term complex manipulation activities to be decomposed into layers of abstraction whose building blocks are simpler action modules called action primitives. In this way, human task knowledge can be synthesised in a compact, effective representation suitable, for instance, for subsequent transfer to a robot for imitation. The main contribution is the use of a robust framework capable of dealing with the uncertainty and incomplete data inherent to these activities, and the ability to represent behaviours at multiple levels of abstraction for enhanced task generalisation. Activity data from 3D video sequencing of humans manipulating different everyday objects are used for evaluation. A comparison with a mixed generative-discriminative hybrid model, HHMM/SVM (support vector machine), is also presented to highlight the benefit of the proposed approach against comparable state-of-the-art techniques. © 2014 Springer Science+Business Media New York
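
    To make the layered idea concrete, a heavily simplified, flattened sketch follows, assuming hmmlearn: one HMM per action primitive at the lower layer, and a crude sequence-matching stand-in for the top-level activity model. The library, primitive names, and two-layer scoring scheme are assumptions for illustration; the paper's multi-level HHMM is considerably richer than this stand-in.

```python
# Hedged, flattened sketch of a two-layer activity recognizer: lower-level
# HMMs score action primitives over motion features; an upper layer matches
# the decoded primitive sequence against each activity's expected ordering.
import numpy as np
from hmmlearn.hmm import GaussianHMM

# One HMM per action primitive (primitive names are hypothetical); each
# model is assumed to have been fit() on that primitive's feature sequences.
primitive_models = {
    name: GaussianHMM(n_components=3, covariance_type="diag")
    for name in ("reach", "grasp", "lift")
}

def label_segment(segment):
    """Lower layer: assign a segment the primitive whose HMM scores it best.
    `segment` is an (n_samples, n_features) array of motion features."""
    return max(primitive_models, key=lambda n: primitive_models[n].score(segment))

def recognize_activity(segments, activity_grammars):
    """Upper layer: match the decoded primitive sequence against each
    activity's expected primitive ordering (a crude stand-in for the
    paper's top-level HMM)."""
    decoded = tuple(label_segment(s) for s in segments)
    return max(activity_grammars,
               key=lambda a: sum(p == q for p, q in zip(decoded, activity_grammars[a])))

# Usage (hypothetical): activity_grammars = {"pour": ("reach", "grasp", "lift")}
# activity = recognize_activity(segmented_demo, activity_grammars)
```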