
    Port-Hamiltonian modeling for soft-finger manipulation

    In this paper, we present a port-Hamiltonian model of a multi-fingered robotic hand with soft finger pads as it grasps and manipulates an object. The algebraic constraints of the interconnected systems are represented by a geometric object called a Dirac structure. This provides a powerful way to describe the non-contact-to-contact transition and contact viscoelasticity using the concepts of energy flows and power-preserving interconnections. Building on the port-based model, an Intrinsically Passive Controller (IPC) is used to control the internal forces. Simulation results validate the model and demonstrate the effectiveness of the port-based approach.
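
    A standard reference point for this formalism (a generic input-state-output port-Hamiltonian system, not the paper's constrained hand model, which routes the algebraic contact constraints through the Dirac structure) is

        \dot{x} = [J(x) - R(x)] \frac{\partial H}{\partial x} + g(x)\,u,
        \qquad
        y = g(x)^{\top} \frac{\partial H}{\partial x},

    where H(x) is the total stored energy, J(x) = -J(x)^{\top} encodes the power-preserving interconnection, R(x) = R(x)^{\top} \succeq 0 encodes dissipation such as contact viscoelasticity, and (u, y) is the external power port. The resulting passivity property \dot{H} \leq u^{\top} y is exactly what an Intrinsically Passive Controller exploits.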

    On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state of the art will encourage and guide these communities to continue to interact and make progress in these important areas.
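
    To make the under- vs. over-determined distinction concrete (a textbook illustration, not an equation from the paper): joint torques and muscle tensions in a limb are related by

        \tau = R(q)\, f, \qquad f \geq 0,

    where \tau \in \mathbb{R}^{n} collects the n joint torques, f \in \mathbb{R}^{m} the m muscle tensions, and R(q) is the n \times m moment-arm matrix at posture q. Because limbs typically have m > n, infinitely many muscle coordination patterns realize the same torque (an under-determined, over-actuated map), even while the mechanical requirements of a contact task can simultaneously over-determine which torques are feasible at all.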

    Design and Evaluation of Menu Systems for Immersive Virtual Environments

    Interfaces for system control tasks in virtual environments (VEs) have not been extensively studied. This paper focuses on various types of menu systems to be used in such environments. We describe the design of the TULIP menu, a menu system using Pinch Gloves™, and compare it to two common alternatives: floating menus and pen-and-tablet menus. These three menus were compared in an empirical evaluation. The pen-and-tablet menu was found to be significantly faster, while users had a preference for TULIP. Subjective discomfort levels were also higher with the floating and pen-and-tablet menus.
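
    As a rough sketch of the interaction idea behind a TULIP-style menu (the class, method names, and three-items-plus-"more" layout below are illustrative assumptions, not the authors' implementation): menu items are assigned to the index, middle, and ring fingers, a pinch of a finger against the thumb selects that finger's item, and the pinky acts as a "more" slot that pages through the remaining items.

        # Hypothetical sketch of a TULIP-style pinch-to-item mapping.
        class TulipMenu:
            FINGERS = ("index", "middle", "ring")   # one visible item per finger

            def __init__(self, items):
                self.items = list(items)
                self.page = 0

            def visible(self):
                start = self.page * len(self.FINGERS)
                return self.items[start:start + len(self.FINGERS)]

            def on_pinch(self, finger):
                """Handle a thumb-to-finger pinch event."""
                if finger == "pinky":               # 'more' slot: next page
                    pages = -(-len(self.items) // len(self.FINGERS))
                    self.page = (self.page + 1) % pages
                    return None
                idx = self.FINGERS.index(finger)
                shown = self.visible()
                return shown[idx] if idx < len(shown) else None

    For example, with items ["Undo", "Copy", "Paste", "Delete"], pinching the index finger returns "Undo", and a pinky pinch pages to the view containing "Delete".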

    Stable Prehensile Pushing: In-Hand Manipulation with Alternating Sticking Contacts

    This paper presents an approach to in-hand manipulation planning that exploits the mechanics of alternating sticking contacts. In particular, we consider the problem of manipulating a grasped object using external pushes for which the pusher sticks to the object. Given the physical properties of the object, the friction coefficients at the contacts, and a desired regrasp on the object, we propose a sampling-based planning framework that builds a pushing strategy by concatenating different feasible stable pushes to achieve the desired regrasp. An efficient dynamics formulation allows us to plan in-hand manipulations 100-1000 times faster than our previous work, which builds on a complementarity formulation. Experimental observations of the generated plans show that the object moves precisely in the grasp as expected by the planner. Video summary: youtu.be/qOTKRJMx6Ho. Comment: IEEE International Conference on Robotics and Automation 201
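
    The concatenation idea can be sketched as a tree search over sampled pushes; everything below is a schematic stand-in with hypothetical function names, where is_stable plays the role of the paper's sticking-contact condition and propagate the role of its dynamics formulation.

        import random

        def plan_regrasp(start, goal_test, sample_push, propagate, is_stable,
                         max_iters=10000):
            """Grow a tree of object poses reachable by stable pushes."""
            tree = {start: (None, None)}            # pose -> (parent, push)
            for _ in range(max_iters):
                pose = random.choice(list(tree))    # pick an explored pose
                push = sample_push()                # candidate external push
                if not is_stable(pose, push):
                    continue                        # pusher would slip: reject
                new_pose = propagate(pose, push)
                if new_pose in tree:
                    continue
                tree[new_pose] = (pose, push)
                if goal_test(new_pose):             # desired regrasp reached
                    plan, node = [], new_pose
                    while tree[node][0] is not None:
                        parent, p = tree[node]
                        plan.append(p)
                        node = parent
                    return list(reversed(plan))     # pushes in execution order
            return None                             # no plan within budget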

    Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data

    Object manipulation actions represent an important share of the Activities of Daily Living (ADLs). In this work, we study how to enable service robots to use human multi-modal data to understand object manipulation actions, and how they can recognize such actions when humans perform them during human-robot collaboration tasks. The multi-modal data in this study consists of videos, hand motion data, applied forces as represented by the pressure patterns on the hand, and measurements of the bending of the fingers, collected as human subjects performed manipulation actions. We investigate two different approaches. In the first, we show that the multi-modal signal (motion, finger bending, and hand pressure) generated by the action can be decomposed into a set of primitives that can be seen as its building blocks. These primitives are used to define 24 multi-modal primitive features. The primitive features can in turn be used as an abstract representation of the multi-modal signal and employed for action recognition. In the second approach, visual features are extracted from the data using a pre-trained image-classification deep convolutional neural network and are subsequently used to train a classifier. We also investigate whether adding data from other modalities produces a statistically significant improvement in classifier performance. We show that both approaches produce comparable performance. This implies that image-based methods can successfully recognize human actions during human-robot collaboration. On the other hand, in order to provide training data from which the robot can learn how to perform object manipulation actions, multi-modal data provide a better alternative.
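
    A minimal sketch of the second approach's evaluation logic, under the assumption (mine, not the paper's) that per-clip CNN features and the extra-modality descriptors have already been extracted into arrays:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # Placeholder files: pooled features from a pre-trained image CNN,
        # motion/bending/pressure descriptors, and action labels.
        X_visual = np.load("visual_features.npy")     # (n_clips, d_visual)
        X_extra = np.load("other_modalities.npy")     # (n_clips, d_extra)
        y = np.load("action_labels.npy")              # (n_clips,)

        clf = SVC(kernel="rbf")
        visual_only = cross_val_score(clf, X_visual, y, cv=5)
        fused = cross_val_score(clf, np.hstack([X_visual, X_extra]), y, cv=5)

        # A paired significance test on the two score vectors is one way to
        # ask whether adding the other modalities actually helps.
        print(visual_only.mean(), fused.mean())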