3,420 research outputs found

    A dynamic model for real-time tracking of hands in bimanual movements

    The problem of hand tracking in the presence of occlusion is addressed. In bimanual movements the hands tend to be synchronised effortlessly. Different aspects of this synchronisation form the basis of our approach to tracking the hands. The spatial synchronisation in bimanual movements is modelled by the position, and the temporal synchronisation by the velocity and acceleration, of each hand. Based on this dynamic model, we introduce algorithms for occlusion detection and hand tracking.
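
    The abstract does not give the model's exact parameterisation, so the following is a minimal sketch of the kind of per-hand constant-acceleration Kalman filter it describes; the frame rate, noise levels, and occlusion gate are illustrative assumptions:

```python
# A minimal sketch, assuming a per-hand state [x, y, vx, vy, ax, ay];
# frame rate, noise levels, and the occlusion gate are illustrative,
# not the paper's exact parameterisation.
import numpy as np

DT = 1.0 / 30.0  # assumed frame interval

def transition_matrix(dt=DT):
    F = np.eye(6)
    for i in range(2):               # x and y components
        F[i, i + 2] = dt             # position <- velocity
        F[i, i + 4] = 0.5 * dt ** 2  # position <- acceleration
        F[i + 2, i + 4] = dt         # velocity <- acceleration
    return F

class HandFilter:
    def __init__(self, x0, y0, q=1e-2, r=4.0):
        self.x = np.array([x0, y0, 0.0, 0.0, 0.0, 0.0])
        self.P = np.eye(6) * 10.0
        self.F = transition_matrix()
        self.H = np.zeros((2, 6))
        self.H[0, 0] = self.H[1, 1] = 1.0  # we only observe (x, y)
        self.Q = np.eye(6) * q             # process noise
        self.R = np.eye(2) * r             # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                  # predicted (x, y)

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

def occlusion_started(pred_left, pred_right, n_blobs, gate=20.0):
    # Hypothetical test: fewer skin blobs than hands while the predicted
    # hand positions fall within a small gate of each other.
    return n_blobs < 2 and np.linalg.norm(pred_left - pred_right) < gate
```

    In use, one filter per hand would be predicted every frame and updated with that hand's blob centroid whenever it is visible; the occlusion test fires when the skin-colour blobs merge near the predicted meeting point.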

    Hand tracking and bimanual movement understanding

    Bimanual movements are a subset of human movements in which the two hands move together in order to perform a task or imply a meaning. A bimanual movement appearing in a sequence of images must be understood in order to enable computers to interact with humans in a natural way. This problem includes two main phases: hand tracking and movement recognition. We approach the problem of hand tracking from a neuroscience point of view. First, the hands are extracted and labelled by colour detection and blob analysis algorithms. In the presence of the two hands, one hand may occlude the other occasionally. Therefore, hand occlusions must be detected in an image sequence. A dynamic model is proposed to model the movement of each hand separately. Using this model in a Kalman filtering process, the exact starting and end points of hand occlusions are detected. We exploit neuroscience phenomena to understand the behaviour of the hands during occlusion periods. Based on this, we propose a general hand tracking algorithm to track and reacquire the hands over a movement including hand occlusion. The advantages of the algorithm and its generality are demonstrated in the experiments. In order to recognise the movements, we first recognise the movement of a hand. Using statistical pattern recognition methods (such as Principal Component Analysis and Nearest Neighbour), the static shape of each hand appearing in an image is recognised. A graph-matching algorithm and Discrete Hidden Markov Models (DHMM), as two spatio-temporal pattern recognition techniques, are investigated for recognising a dynamic hand gesture. For recognising bimanual movements we consider two general forms of these movements: single and concatenated periodic. We introduce three Bayesian networks for recognising the movements. The networks are designed to recognise and combine the gestures of the hands in order to understand the whole movement. Experiments on different types of movement demonstrate the advantages and disadvantages of each network.
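
    The static hand-shape stage lends itself to a compact sketch: flattened hand silhouettes are projected into a PCA subspace and classified with a nearest-neighbour rule. The data layout and component count below are assumptions for illustration, not the thesis's exact settings:

```python
# Sketch of the static hand-shape stage described in the abstract:
# project flattened hand silhouettes into a PCA subspace and classify
# with a nearest-neighbour rule. Data layout and component count are
# assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def train_shape_classifier(images, labels, n_components=20):
    # images: (n_samples, height * width) flattened hand silhouettes
    clf = make_pipeline(PCA(n_components=n_components),
                        KNeighborsClassifier(n_neighbors=1))
    clf.fit(images, labels)
    return clf

# usage sketch:
#   clf = train_shape_classifier(X_train, y_train)
#   shape = clf.predict(new_silhouette.reshape(1, -1))
```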

    Bayesian fusion of hidden Markov models for understanding bimanual movements

    Understanding hand and body gestures is part of a wide spectrum of current research in computer vision and human-computer interaction. One component of this is the recognition of movements in which the two hands move simultaneously to perform a task or imply a meaning. We present a Bayesian network for fusing hidden Markov models in order to recognise a bimanual movement. A bimanual movement is tracked and segmented by a tracking algorithm. Hidden Markov models are assigned to the segments in order to learn and recognise the partial movement within each segment. A Bayesian network fuses the HMMs in order to perceive the movement of the two hands as a single entity.
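
    The abstract leaves the network's structure open; a simple stand-in for such a fusion step is to combine the per-segment HMM log-likelihoods under a conditional-independence assumption:

```python
# Stand-in for the paper's fusion step: combine per-segment HMM
# log-likelihoods into a posterior over bimanual movement classes,
# assuming the segments are conditionally independent given the class.
# The paper's actual Bayesian network is not specified in the abstract.
import numpy as np

def fuse_segment_scores(loglik, prior=None):
    # loglik: (n_segments, n_classes) HMM log-likelihoods, one row per
    # tracked segment (e.g. the partial movements of each hand)
    logp = loglik.sum(axis=0)
    if prior is not None:
        logp = logp + np.log(prior)
    logp -= logp.max()          # stabilise the exponentiation
    p = np.exp(logp)
    return p / p.sum()          # posterior over movement classes
```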

    Proprioceptive perception of phase variability

    Previous work has established that judgments of relative phase variability of 2 visually presented oscillators covary with mean relative phase. Ninety degrees is judged to be more variable than 0° or 180°, independently of the actual level of phase variability. Judged levels of variability also increase at 180°. This pattern of judgments matches the pattern of movement coordination results. Here, participants judged the phase variability of their own finger movements, which they generated by actively tracking a manipulandum moving at 0°, 90°, or 180°, and with 1 of 4 levels of phase variability. Judgments covaried as an inverted U-shaped function of mean relative phase. With an increase in frequency, 180° was judged more variable whereas 0° was not. Higher frequency also reduced discrimination of the levels of phase variability. This matching of the proprioceptive and visual results, and of both to movement results, supports the hypothesized role of online perception in the coupling of limb movements. Differences in the 2 cases are discussed as due primarily to the different sensitivities of the systems to the information.
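
    The abstract does not spell out how relative phase and its variability are computed; a standard coordination-dynamics recipe, given here purely as an illustration, extracts instantaneous phases with a Hilbert transform and summarises their difference with circular statistics:

```python
# Standard way to quantify relative phase between two oscillating
# signals and its variability; the abstract does not give the authors'
# exact computation, so treat this as a generic illustration.
import numpy as np
from scipy.signal import hilbert

def relative_phase_stats(x1, x2):
    phi1 = np.angle(hilbert(x1 - x1.mean()))   # instantaneous phases
    phi2 = np.angle(hilbert(x2 - x2.mean()))
    z = np.exp(1j * (phi1 - phi2)).mean()      # mean resultant vector
    mean_phase = np.degrees(np.angle(z))       # e.g. near 0, 90 or 180
    circ_sd = np.degrees(np.sqrt(-2.0 * np.log(np.abs(z))))
    return mean_phase, circ_sd                 # variability as circular SD
```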

    Learning Task Priorities from Demonstrations

    Bimanual operations in humanoids offer the possibility to carry out more than one manipulation task at the same time, which in turn introduces the problem of task prioritization. We address this problem from a learning-from-demonstration perspective, by extending the Task-Parameterized Gaussian Mixture Model (TP-GMM) to Jacobian and null space structures. The proposed approach is tested on bimanual skills but can be applied in any scenario where the prioritization between potentially conflicting tasks needs to be learned. We evaluate the proposed framework in two different tasks with humanoids requiring the learning of priorities, and in a loco-manipulation scenario, showing that the approach can be exploited to learn the prioritization of multiple tasks in parallel. Comment: Accepted for publication in IEEE Transactions on Robotics.
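
    The learned TP-GMM machinery is involved, but the Jacobian/null-space structure it builds on is classical. A minimal sketch of strict two-task prioritisation, with placeholder Jacobians and task velocities rather than the paper's learned quantities:

```python
# Classical strict two-task prioritisation via null-space projection,
# the structure the TP-GMM extension builds on. J1, J2, dx1, dx2 are
# placeholders for a concrete robot model and learned task velocities.
import numpy as np

def prioritized_joint_velocities(J1, dx1, J2, dx2):
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # null-space projector of task 1
    # The secondary task only uses motion that leaves task 1 undisturbed.
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ (J1_pinv @ dx1))
    return J1_pinv @ dx1 + dq2
```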

    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Most current CAD systems support only the two most common input devices: a mouse and a keyboard, which impose a limit on the degree of interaction that a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects; one hand is used to orient the object while the other hand is used to perform some operation on the object. The same approach could be applied to computer modelling in the conceptual phase of the design process. A designer can rotate and position an object with one hand, and manipulate the shape (deform it) with the other hand. Accordingly, the 3D object can be easily and intuitively changed through interactive manipulation of both hands. The research investigates the manipulation and creation of free-form geometries through the use of interactive interfaces with multiple input devices. First the creation of the 3D model will be discussed, and several different types of models will be illustrated. Furthermore, different tools that allow the user to control the 3D model interactively will be presented. Three experiments were conducted using different interactive interfaces; two bi-manual techniques were compared with the conventional one-handed approach. Finally, it will be demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
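
    As a purely hypothetical illustration of the division of labour the abstract describes (none of the names or payload formats below come from the work itself), a two-device event loop might route one hand's input to orientation and the other's to deformation:

```python
# Hypothetical event-loop skeleton for the described division of labour:
# the non-dominant device orients the model while the dominant device
# deforms it. All names and payload formats here are invented.
import numpy as np

class Model:
    def __init__(self, control_points):
        self.points = np.asarray(control_points, dtype=float)  # (n, 3)
        self.rotation = np.eye(3)

    def orient(self, dR):                 # non-dominant hand: rotate
        self.rotation = dR @ self.rotation

    def deform(self, index, delta):       # dominant hand: drag a point
        # map the world-space drag into the object's current frame
        self.points[index] += self.rotation.T @ np.asarray(delta, dtype=float)

def handle_events(model, events):
    for hand, payload in events:          # merged stream from two devices
        if hand == "nondominant":
            model.orient(payload)         # payload: 3x3 incremental rotation
        else:
            model.deform(*payload)        # payload: (point index, xyz delta)
```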

    Joint action goals reduce visuomotor interference effects from a partner’s incongruent actions

    Joint actions often require agents to track others’ actions while planning and executing physically incongruent actions of their own. Previous research has indicated that this can lead to visuomotor interference effects when it occurs outside of joint action. How is this avoided or overcome in joint actions? We hypothesized that when joint action partners represent their actions as interrelated components of a plan to bring about a joint action goal, each partner’s movements need not be represented in relation to distinct, incongruent proximal goals. Instead, they can be represented in relation to a single proximal goal, especially if the movements are, or appear to be, mechanically linked to a more distal joint action goal. To test this, we implemented a paradigm in which participants produced finger movements that were either congruent or incongruent with those of a virtual partner, and either with or without a joint action goal (the joint flipping of a switch, which turned on two light bulbs). Our findings provide partial support for the hypothesis that visuomotor interference effects can be reduced when two physically incongruent actions are represented as mechanically interdependent contributions to a joint action goal.

    A Multi-Robot Cooperation Framework for Sewing Personalized Stent Grafts

    This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design, which includes: a (personalized) mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of patients, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module plays the role of coordinating multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially for flexible production of customized products and where bimanual or multi-robot cooperation is required. Comment: 10 pages, 12 figures, accepted by IEEE Transactions on Industrial Informatics. Key words: modularity, medical device customization, multi-robot system, robot learning, visual servoing, robot sewing.
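
    The abstract says the demonstrations were encoded using a statistical model without naming it; a common choice in learning-by-demonstration is a Gaussian mixture over (time, pose) queried by Gaussian mixture regression, sketched here under that assumption:

```python
# One common reading of "encoded using a statistical model": fit a GMM
# over (time, pose) samples pooled from several demonstrations, then
# retrieve a reference trajectory by Gaussian mixture regression (GMR).
# The abstract does not name the model, so this is an assumption.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_demo_model(demos, n_components=8):
    # demos: list of (T_i, 1 + d) arrays; column 0 is time, the rest pose
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(np.vstack(demos))
    return gmm

def gmr(gmm, t):
    """Condition the joint GMM on time t; return the expected pose."""
    mu, sig, w = gmm.means_, gmm.covariances_, gmm.weights_
    K = len(w)
    # responsibility of each component for this time instant
    h = np.array([w[k] * np.exp(-0.5 * (t - mu[k, 0]) ** 2 / sig[k, 0, 0])
                  / np.sqrt(sig[k, 0, 0]) for k in range(K)])
    h /= h.sum()
    # per-component conditional mean of pose given time
    cond = np.array([mu[k, 1:] + sig[k, 1:, 0] / sig[k, 0, 0] * (t - mu[k, 0])
                     for k in range(K)])
    return h @ cond   # (d,) reference pose at time t
```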
