469 research outputs found

    Myoelectric forearm prostheses: State of the art from a user-centered perspective

    User acceptance of myoelectric forearm prostheses is currently low. Awkward control, lack of feedback, and difficult training are cited as primary reasons. Recently, researchers have focused on exploiting the new possibilities offered by advancements in prosthetic technology. Alternatively, researchers could focus on prosthesis acceptance by developing functional requirements based on activities users are likely to perform. In this article, we describe the process of determining such requirements and then the application of these requirements to evaluating the state of the art in myoelectric forearm prosthesis research. As part of a needs assessment, a workshop was organized involving clinicians (representing end users), academics, and engineers. The resulting needs included an increased number of functions, lower reaction and execution times, and intuitiveness of both control and feedback systems. Reviewing the state of the art of research in the main prosthetic subsystems (electromyographic [EMG] sensing, control, and feedback) showed that modern research prototypes only partly fulfill the requirements. We found that focus should be on validating EMG-sensing results with patients, improving simultaneous control of wrist movements and grasps, deriving optimal parameters for force and position feedback, and taking into account the psychophysical aspects of feedback, such as intensity perception and spatial acuity.
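
    As a purely illustrative aside (not from the article), the sketch below shows the kind of windowed time-domain EMG features that myoelectric controllers commonly feed to a grasp classifier; the channel count, window length, and function names are assumptions.

```python
import numpy as np

def extract_features(emg_window):
    """Classic time-domain EMG features for one analysis window
    (channels x samples): mean absolute value, zero crossings,
    and waveform length, all commonly used in myoelectric control."""
    mav = np.mean(np.abs(emg_window), axis=1)
    zc = np.sum(np.diff(np.sign(emg_window), axis=1) != 0, axis=1)
    wl = np.sum(np.abs(np.diff(emg_window, axis=1)), axis=1)
    return np.concatenate([mav, zc, wl])

# Illustrative use: 8 hypothetical EMG channels, 200 ms windows at 1 kHz.
fs, window_ms = 1000, 200
emg = np.random.randn(8, fs)                   # stand-in for recorded EMG
window = emg[:, : int(fs * window_ms / 1000)]  # first 200 ms analysis window
features = extract_features(window)            # would feed a grasp classifier
print(features.shape)                          # (24,) = 3 features x 8 channels
```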

    Multimodal human hand motion sensing and analysis - a review


    A complex network approach to stylometry

    Statistical methods have been widely employed to study the fundamental properties of language. In recent years, methods from complex and dynamical systems have proved useful for creating several language models. Despite the large number of studies devoted to representing texts with physical models, only a few have shown how the properties of the underlying physical systems can be employed to improve the performance of natural language processing tasks. In this paper, I address this problem by devising complex network methods that are able to improve the performance of current statistical methods. Using a fuzzy classification strategy, I show that the topological properties extracted from texts complement the traditional textual description. In several cases, the performance obtained with hybrid approaches outperformed the results obtained when only traditional or networked methods were used. Because the proposed model is generic, the framework devised here could be straightforwardly used to study similar textual applications where the topology plays a pivotal role in the description of the interacting agents.
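
    As a hedged illustration of the networked text representation described in this abstract (my own sketch, not the paper's code), the snippet below builds a word co-occurrence network and extracts a few topological measurements that could be concatenated with traditional frequency features in a hybrid classifier; networkx, the window size, and the particular measurements are assumptions.

```python
import networkx as nx

def cooccurrence_network(tokens, window=2):
    """Word adjacency network: link words that co-occur within a
    sliding window, as in complex-network models of text."""
    g = nx.Graph()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            g.add_edge(tokens[i], tokens[j])
    return g

def topological_features(g):
    """Network measurements that can complement word-frequency features."""
    degrees = [d for _, d in g.degree()]
    return {
        "mean_degree": sum(degrees) / len(degrees),
        "clustering": nx.average_clustering(g),
        "density": nx.density(g),
    }

tokens = "the quick brown fox jumps over the lazy dog".split()
print(topological_features(cooccurrence_network(tokens)))
```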

    Programming-by-demonstration and adaptation of robot skills by fuzzy-time-modeling

    Proceedings of: 2011 IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiS 2011 MDCM), April 11-15, 2011, Paris (France). Complex robot tasks can be partitioned into motion primitives or robot skills that can be directly learned and recognized through Programming-by-Demonstration (PbD), in which a human operator demonstrates a set of reference skills. Robot motions are recorded by a data-capturing system and modeled by a specific fuzzy clustering and modeling technique in which skill models take time instants as inputs and operator actions as outputs. In the recognition phase, the robot identifies the skill shown by the operator in a novel test demonstration. Skill models are updated online during the execution of skills using the Broyden update formula. This method is extended to fuzzy models, in particular time-cluster models. The updated model is used for further executions of the same skill.
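
    The following is a minimal, hypothetical sketch of the kind of time-cluster skill model and Broyden-style online update the abstract describes: each time cluster carries a local linear model from time instants to operator actions, and a rank-one secant correction adjusts the most active cluster when a new observation arrives. The cluster count, membership width, and all names are my assumptions, not the authors' implementation.

```python
import numpy as np

def memberships(t, centers, sigma=0.1):
    """Gaussian membership of a time instant in each time cluster."""
    w = np.exp(-0.5 * ((t - centers) / sigma) ** 2)
    return w / w.sum()

def predict(t, centers, slopes, offsets):
    """Time-cluster skill model: membership-weighted local linear models
    mapping a time instant to an operator-action vector."""
    w = memberships(t, centers)
    return (w[:, None] * (slopes * t + offsets)).sum(axis=0)

def broyden_update(slope, dt, dy):
    """Rank-one (secant) Broyden correction of one local slope so the
    model reproduces the newly observed change in the action."""
    return slope + (dy - slope * dt) / dt

# Hypothetical skill with 5 time clusters and 3-DOF operator actions.
centers = np.linspace(0.0, 1.0, 5)
slopes = np.zeros((5, 3))
offsets = np.random.randn(5, 3) * 0.01

t_prev = 0.40
y_prev = predict(t_prev, centers, slopes, offsets)
t_obs, y_obs = 0.45, np.array([0.10, 0.00, -0.05])  # new observation
k = int(np.argmax(memberships(t_obs, centers)))     # most active cluster
slopes[k] = broyden_update(slopes[k], t_obs - t_prev, y_obs - y_prev)
print(predict(t_obs, centers, slopes, offsets))
```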

    Adaptive fuzzy Gaussian mixture models for shape approximation in Robot Grasping

    Robotic grasping has always been a challenging task for both service and industrial robots. The ability to plan grasps for novel objects is necessary for a robot to autonomously perform grasps in unknown environments. In this work, we consider the task of grasp planning for a parallel gripper grasping a novel object, given an RGB image and its corresponding depth image taken from a single view. We show that this problem can be simplified by modeling a novel object as a set of simple shape primitives, such as ellipses. We adopt fuzzy Gaussian mixture models (GMMs) to approximate the shapes of novel objects. With the obtained GMM, we decompose the object into several ellipses, with each ellipse corresponding to a grasping rectangle. After comparing the grasp quality of these rectangles, we obtain the most suitable part for the gripper to grasp. Extensive experiments on a real robotic platform demonstrate that our algorithm enables the robot to grasp a variety of novel objects with good grasp quality and computational efficiency.
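
    As a rough illustration of the shape-approximation step described above (a sketch under my own assumptions, using a standard Gaussian mixture from scikit-learn in place of the paper's fuzzy GMM), each fitted component's mean and covariance can be read off as an ellipse whose orientation and axes suggest a candidate grasping rectangle.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def decompose_into_ellipses(points, n_components=3):
    """Approximate a 2-D object point set with Gaussian components; each
    component's mean and covariance define an ellipse that can be turned
    into a candidate grasping rectangle for a parallel gripper."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(points)
    ellipses = []
    for mean, cov in zip(gmm.means_, gmm.covariances_):
        eigvals, eigvecs = np.linalg.eigh(cov)              # ascending order
        angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])  # major-axis angle
        length, width = 2.0 * np.sqrt(eigvals[::-1])        # major, minor
        ellipses.append({"center": mean, "angle": angle,
                         "length": length, "width": width})
    return ellipses

# Stand-in for pixel coordinates of a segmented object in the depth image.
rng = np.random.default_rng(0)
obj = np.vstack([rng.normal([40.0, 40.0], [12.0, 4.0], (300, 2)),
                 rng.normal([80.0, 42.0], [5.0, 15.0], (300, 2))])
for e in decompose_into_ellipses(obj, n_components=2):
    print(e["center"].round(1), round(float(np.degrees(e["angle"])), 1))
```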

    Predicting human intention in visual observations of hand/object interactions

    The main contribution of this paper is a probabilistic method for predicting human manipulation intention from image sequences of human-object interaction. Predicting intention amounts to inferring the imminent manipulation task when the human hand is observed to have stably grasped the object. Inference is performed by means of a probabilistic graphical model that encodes object grasping tasks over the 3D state of the observed scene. The 3D state is extracted from RGB-D image sequences by a novel vision-based, markerless hand-object 3D tracking framework. To deal with the high-dimensional state space and mixed data types (discrete and continuous) involved in grasping tasks, we introduce a generative vector quantization method using mixture models and self-organizing maps. This yields a compact model for encoding grasping actions that is capable of handling uncertain and partial sensory data. Experiments showed that the model, trained on simulated data, provides a potent basis for accurate goal inference from partial and noisy observations of actual real-world demonstrations. We also show a grasp selection process, guided by the inferred human intention, to illustrate the use of the system for goal-directed grasp imitation.
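
    To make the quantization-plus-inference idea concrete, here is a minimal sketch under my own assumptions: KMeans stands in for the paper's SOM/mixture-model vector quantization, and a naive Bayes-style pooling of symbol likelihoods stands in for the full graphical model. Task names, feature dimensions, and data are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Discretise continuous hand-object features into symbols, then infer the
# task from how often each symbol occurs under each task.
rng = np.random.default_rng(1)
tasks = ["pour", "hand_over"]
train = {"pour": rng.normal(0.0, 1.0, (200, 6)),
         "hand_over": rng.normal(1.5, 1.0, (200, 6))}

quantizer = KMeans(n_clusters=16, n_init=10,
                   random_state=1).fit(np.vstack(list(train.values())))

def symbol_likelihoods(features_by_task, n_symbols=16):
    """P(symbol | task) with add-one smoothing."""
    like = {}
    for task, feats in features_by_task.items():
        counts = np.bincount(quantizer.predict(feats), minlength=n_symbols) + 1.0
        like[task] = counts / counts.sum()
    return like

likelihood = symbol_likelihoods(train)

def infer_intention(observation_window):
    """Posterior over tasks from a window of (possibly partial) observations."""
    symbols = quantizer.predict(observation_window)
    log_post = {t: np.log(1.0 / len(tasks)) + np.log(likelihood[t][symbols]).sum()
                for t in tasks}
    norm = np.logaddexp.reduce(list(log_post.values()))
    return {t: float(np.exp(lp - norm)) for t, lp in log_post.items()}

print(infer_intention(rng.normal(0.0, 1.0, (10, 6))))  # typically favours "pour"
```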

    Grasps recognition and evaluation of stroke patients for supporting rehabilitation therapy

    Stroke survivors often suffer impairments of the wrist and hand. Robot-mediated rehabilitation techniques have been proposed as a way to enhance conventional therapy, which is based on intensive repeated movements. Amongst the activities of daily living, grasping is one of the most recurrent. Our aim is to incorporate the detection of grasps into the machine-mediated rehabilitation framework so that they can be used in interactive therapeutic games. In this study, we developed and tested a method based on support vector machines for recognizing various grasp postures performed while wearing a passive exoskeleton for hand and wrist rehabilitation after stroke. The experiment was conducted with ten healthy subjects and eight stroke patients performing the grasping gestures. The method was tested in terms of accuracy and robustness with respect to intersubject variability and differences between grasps. Our results show reliable recognition while also indicating that the recognition accuracy can be used to assess the patients' ability to consistently repeat the gestures. Additionally, a grasp quality measure was proposed to assess the capability of stroke patients to perform grasp postures in a way similar to healthy people. These two measures can potentially be used to complement other upper limb motion tests.
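
    Below is a hedged sketch (not the authors' code) of what SVM-based grasp recognition and a simple quality score could look like: a classifier trained on synthetic stand-ins for exoskeleton joint-angle features, with the classifier's confidence reused as an illustrative grasp-quality measure. The feature layout, labels, and data are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for joint-angle features recorded through the passive
# exoskeleton: three grasp postures (labels 0, 1, 2), 12 features each.
rng = np.random.default_rng(2)
X_healthy = np.vstack([rng.normal(mu, 0.2, (50, 12)) for mu in (0.0, 1.0, 2.0)])
y_healthy = np.repeat([0, 1, 2], 50)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_healthy, y_healthy)

def grasp_quality(sample, intended_grasp):
    """Illustrative quality score: the classifier's confidence that a posture
    matches the intended grasp, relative to the healthy training data.
    (Labels are 0..2, so they index predict_proba columns directly.)"""
    return float(clf.predict_proba(sample.reshape(1, -1))[0][intended_grasp])

patient_trial = rng.normal(0.9, 0.4, 12)   # noisier attempt at grasp 1
print(clf.predict(patient_trial.reshape(1, -1))[0],
      round(grasp_quality(patient_trial, 1), 2))
```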

    Learning to Assist Bimanual Teleoperation using Interval Type-2 Polynomial Fuzzy Inference

    Assisting humans in collaborative tasks is a promising application for robots; however, effective assistance remains challenging. In this paper, we propose a method for providing intuitive robotic assistance based on learning from natural human limb coordination. To encode the coupling between multiple limb motions, we use a novel interval type-2 (IT2) polynomial fuzzy inference scheme for modeling trajectory adaptation. The associated polynomial coefficients are estimated using a modified recursive least-squares method with a dynamic forgetting factor. We propose to employ a Gaussian process to produce robust predictions of human motion, thereby addressing the uncertainty and measurement noise of the system caused by interactive environments. Experimental results on two types of interaction task demonstrate the effectiveness of this approach, which achieves high accuracy in predicting assistive limb motion and enables humans to perform bimanual tasks using only one limb.
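
    To make the coefficient-estimation step concrete, here is a minimal sketch (my own, under stated assumptions) of recursive least squares with a forgetting factor fitting polynomial coefficients that map one limb's motion to the other's. The paper's dynamic forgetting factor and IT2 fuzzy blending are replaced by a fixed factor and a single polynomial model, and all names and data are illustrative.

```python
import numpy as np

def poly_features(x, degree=2):
    """Polynomial regressor vector [1, x, x^2, ...] for one scalar input."""
    return np.array([x ** d for d in range(degree + 1)])

class ForgettingRLS:
    """Recursive least squares with a (fixed) forgetting factor: tracks the
    polynomial coefficients coupling one limb's motion to the other's."""
    def __init__(self, n_params, lam=0.98):
        self.theta = np.zeros(n_params)   # coefficient estimate
        self.P = np.eye(n_params) * 1e3   # inverse-covariance proxy
        self.lam = lam                    # forgetting factor

    def update(self, phi, y):
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain vector
        self.theta += k * (y - phi @ self.theta)            # correct estimate
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Illustrative coupling: the assisted limb follows a quadratic function of
# the leading limb's position, learned online from streaming samples.
rls = ForgettingRLS(n_params=3)
rng = np.random.default_rng(3)
for t in np.linspace(0, 1, 200):
    lead = np.sin(2 * np.pi * t)
    follow = 0.5 + 0.8 * lead - 0.3 * lead ** 2 + rng.normal(0, 0.01)
    theta = rls.update(poly_features(lead), follow)
print(theta.round(2))   # approaches [0.5, 0.8, -0.3] as samples stream in
```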