
    Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data

    Object manipulation actions represent an important share of the Activities of Daily Living (ADLs). In this work, we study how to enable service robots to use human multi-modal data to understand object manipulation actions, and how they can recognize such actions when humans perform them during human-robot collaboration tasks. The multi-modal data in this study consists of videos, hand motion data, applied forces as represented by the pressure patterns on the hand, and measurements of the bending of the fingers, collected as human subjects performed manipulation actions. We investigate two different approaches. In the first, we show that the multi-modal signal (motion, finger bending, and hand pressure) generated by an action can be decomposed into a set of primitives that can be seen as its building blocks. These primitives are used to define 24 multi-modal primitive features. The primitive features can in turn be used as an abstract representation of the multi-modal signal and employed for action recognition. In the second approach, visual features are extracted from the data using a pre-trained image classification deep convolutional neural network and are subsequently used to train a classifier. We also investigate whether adding data from other modalities produces a statistically significant improvement in classifier performance. We show that both approaches produce comparable performance. This implies that image-based methods can successfully recognize human actions during human-robot collaboration. On the other hand, in order to provide training data from which a robot can learn how to perform object manipulation actions, multi-modal data provides a better alternative.
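    The second, image-based pipeline described above (pre-trained CNN features feeding a classifier) can be sketched compactly. The snippet below is a hypothetical illustration, not the authors' code: the pre-trained CNN is replaced by synthetic Gaussian feature clusters, the classifier is a simple nearest-centroid rule, and all names and parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for pre-trained CNN visual features: each action
# class is modelled as a Gaussian cloud around a random centroid.
n_classes, n_per_class, dim = 3, 20, 16
centroids = rng.normal(size=(n_classes, dim))
X = np.concatenate(
    [c + 0.3 * rng.normal(size=(n_per_class, dim)) for c in centroids]
)
y = np.repeat(np.arange(n_classes), n_per_class)

# "Train" a nearest-centroid classifier on the visual features alone.
class_means = np.stack([X[y == k].mean(axis=0) for k in range(n_classes)])

def predict(samples):
    # Assign each sample to the class with the closest mean feature vector.
    d = np.linalg.norm(samples[:, None, :] - class_means[None, :, :], axis=-1)
    return d.argmin(axis=1)

accuracy = (predict(X) == y).mean()
```

    In this style of pipeline, adding another modality would amount to concatenating its feature vector onto the visual one before training, which is presumably how the statistical comparison in the abstract is set up.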

    On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state-of-the-art will encourage and guide these communities to continue to interact and make progress in these important areas

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.

    A Neural Circuit Model for Prospective Control of Interceptive Reaching

    Two prospective controllers of hand movements in catching -- both based on required velocity control -- were simulated. Under certain conditions, this required velocity control led to overshoots of the future interception point. These overshoots were absent in pertinent experiments. To remedy this shortcoming, the required velocity model was reformulated in terms of a neural network, the Vector Integration To Endpoint (VITE) model, to create a Required Velocity Integration To Endpoint model. Addition of a parallel relative velocity channel, resulting in the Relative and Required Velocity Integration To Endpoint model, provided a better account of the experimentally observed kinematics than the existing, purely behavioral models. Simulations of reaching to intercept decelerating and accelerating objects in the presence of background motion were performed to make distinct predictions for future experiments. Funding: Vrije Universiteit (Gerrit Jan van Ingen Schenau stipend of the Faculty of Human Movement Sciences); Royal Netherlands Academy of Arts and Sciences; Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409).
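    The core of the Vector Integration To Endpoint (VITE) circuit mentioned above can be written as two coupled equations: a difference-vector cell V tracks the gap between target T and present position P (dV/dt = gamma * (-V + T - P)), and P integrates the rectified, GO-gated output of V (dP/dt = G * [V]+). The sketch below is a minimal, illustrative Euler simulation of this simplified core only; it omits the required- and relative-velocity channels the abstract introduces, and its parameter values are arbitrary rather than fitted.

```python
# Minimal one-dimensional VITE (Vector Integration To Endpoint) sketch.
# A difference-vector cell V tracks the target-position gap (T - P), and
# the present position P integrates the rectified, GO-gated output of V.
gamma, G, dt = 4.0, 1.0, 0.001   # gamma = 4*G: critically damped, no overshoot
T = 0.5                          # target position
P, V = 0.0, 0.0                  # present position, difference-vector activity

for _ in range(10_000):          # 10 s of simulated movement
    dV = gamma * (-V + T - P)
    dP = G * max(V, 0.0)         # outflow command is rectified ([V]+)
    V += dt * dV
    P += dt * dP                 # P settles at T as V decays to zero
```

    With a larger GO gain G the same rectified dynamics become oscillatory and the hand position can pass the target before V is driven below zero, which is the kind of overshoot behavior the abstract is concerned with.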

    Evaluating rules of interaction for object manipulation in cluttered virtual environments

    A set of rules is presented for the design of interfaces that allow virtual objects to be manipulated in 3D virtual environments (VEs). The rules differ from other interaction techniques because they focus on the problems of manipulating objects in cluttered spaces rather than open spaces. Two experiments are described that evaluated the effect of different interaction rules on participants' performance in a task known as "the piano mover's problem." This task required participants to move a virtual human through parts of a virtual building while simultaneously manipulating a large virtual object held in the virtual human's hands, resembling the simulation of manual materials handling in a VE for ergonomic design. Throughout, participants viewed the VE on a large monitor, using an "over-the-shoulder" perspective. In the most cluttered VEs, the time that participants took to complete the task varied by up to 76% across different combinations of rules, indicating the need for flexible forms of interaction in such environments.

    A review and consideration on the kinematics of reach-to-grasp movements in macaque monkeys

    The neuronal mechanisms that underlie the control of reach-to-grasp movements in nonhuman primates, particularly macaques, have been widely studied. However, only a few kinematic descriptions of their prehensile actions are available. A thorough understanding of macaques' prehensile movements is critical in light of their role in biomedical research, both as valuable models for studying neuromotor disorders and brain mechanisms and for developing brain-machine interfaces to facilitate arm control. This article reviews the current state of knowledge on the kinematics of grasping movements that macaques perform in naturalistic, semi-naturalistic, and laboratory settings, to answer the following questions: Are kinematic signatures affected by the context within which the movement is performed? In what ways are the kinematics of human and macaque prehensile actions similar or dissimilar? Our analysis reflects the challenges involved in making comparisons across settings and species, given the heterogeneous picture in terms of the number of subjects, stimuli, conditions, and hands used. The kinematics of free-ranging macaques are characterized by distinctive features exhibited neither by macaques in laboratory settings nor by human subjects. The temporal incidence of key kinematic landmarks diverges significantly between species, indicating disparities in the overall organization of movement. Given such complexities, we attempt a synthesis of the extant body of evidence, with the aim of identifying the remaining gaps and suggesting directions for future research, toward an interpretation of movement kinematics that accounts for all settings and subjects.

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the "experimenter", and Mary, the "computational modeller". The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modelling and, conversely, the impact that computational modelling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.

    Tele-operated high speed anthropomorphic dextrous hands with object shape and texture identification

    This paper reports on the development of two tele-operated, high-speed, anthropomorphic dextrous robotic hands. The aim of developing these hands was to achieve a system that seamlessly interfaces between humans and robots. To provide sensory feedback to a remote operator, tactile sensors were developed to be mounted on the robotic hands. Two sensing systems were developed: the first is a skin sensor capable of shape reconstruction, placed on the palm of the hand to feed back the shape of grasped objects; the second is a highly sensitive tactile array for surface texture identification.