
    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
    Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
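    As a concrete illustration of the sample-and-rank pattern that runs through the surveyed methods, here is a minimal Python sketch. The uniform sampler and the distance-to-centroid scoring heuristic are illustrative assumptions only, standing in for the learned or analytic grasp-quality measures the survey actually covers:

        import numpy as np

        def sample_candidate_grasps(n, rng):
            """Sample n hypothetical grasp candidates: a position (x, y, z)
            and an approach angle. Real systems sample from object geometry
            or a learned proposal distribution."""
            return [{"position": rng.uniform(-0.1, 0.1, size=3),
                     "angle": rng.uniform(0.0, np.pi)}
                    for _ in range(n)]

        def score_grasp(grasp, object_center):
            """Toy ranking heuristic: prefer grasps near the object's
            center of mass. Data-driven methods replace this with a
            learned quality function."""
            return -np.linalg.norm(grasp["position"] - object_center)

        rng = np.random.default_rng(0)
        object_center = np.zeros(3)
        candidates = sample_candidate_grasps(100, rng)
        best = max(candidates, key=lambda g: score_grasp(g, object_center))
        print("best candidate:", best)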

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
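    The "de facto standard formulation" referenced above is maximum a posteriori (MAP) estimation over a factor graph. In standard notation (the symbols below are the common ones, not necessarily the paper's exact choices):

        \[
        \mathcal{X}^{*} = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
                        = \arg\max_{\mathcal{X}} \prod_{k} p(z_k \mid \mathcal{X}_k)
        \]

    Assuming Gaussian measurement noise, this reduces to a nonlinear least-squares problem:

        \[
        \mathcal{X}^{*} = \arg\min_{\mathcal{X}} \sum_{k} \left\| h_k(\mathcal{X}_k) - z_k \right\|_{\Omega_k}^{2}
        \]

    where \(\mathcal{X}\) stacks the robot poses and map variables, \(z_k\) are the measurements, \(\mathcal{X}_k\) are the variables each measurement involves, \(h_k\) are the measurement models, and \(\|\cdot\|_{\Omega_k}\) is the Mahalanobis norm with information matrix \(\Omega_k\).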

    Using haptic feedback in human swarm interaction

    A swarm of robots is a large group of individual agents that autonomously coordinate via local control laws. Their emergent behavior allows simple robots to accomplish complex tasks. Since missions may have complex objectives that change dynamically due to environmental and mission changes, human control of and influence over the swarm is needed. The field of Human Swarm Interaction (HSI) is young, with few user studies and even fewer papers focusing on giving non-visual feedback to the operator. The authors herein present a background on haptics in robotics and swarms, and two studies that explore conditions under which haptic feedback may be useful in HSI. The overall goal of the studies is to explore the effectiveness of haptic feedback in the presence of other visual stimuli about the swarm system. The findings show that giving feedback about nearby obstacles through a haptic device can improve performance, and that a combination of obstacle-force feedback via the visual and haptic channels provides the best performance.
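    A common way to render "feedback about nearby obstacles" on a haptic device is a potential-field repulsion summed over obstacles in range. The sketch below is not the paper's controller; the gains, radii, and 2-D setup are illustrative assumptions:

        import numpy as np

        def repulsive_force(operator_pos, obstacles, influence_radius=1.0, gain=2.0):
            """Sum inverse-distance repulsive forces from obstacles within
            range, potential-field style. The resulting vector would be
            rendered on the haptic device, pushing the operator's hand
            away from nearby obstacles."""
            force = np.zeros(2)
            for obs in obstacles:
                diff = operator_pos - obs
                dist = np.linalg.norm(diff)
                if 1e-6 < dist < influence_radius:
                    # Magnitude grows as the obstacle gets closer.
                    force += gain * (1.0 / dist - 1.0 / influence_radius) * diff / dist
            return force

        pos = np.array([0.0, 0.0])
        obstacles = [np.array([0.3, 0.1]), np.array([-0.8, 0.5])]
        print(repulsive_force(pos, obstacles))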

    Action in Mind: Neural Models for Action and Intention Perception

    To notice, recognize, and ultimately perceive others' actions, and to discern the intention behind those observed actions, is an essential skill for social communication and markedly improves the chances of survival. Encountering dangerous behavior, for instance from a person or an animal, requires an immediate and suitable reaction. In addition, as social creatures, we need to perceive, interpret, and correctly judge other individuals' actions as a fundamental skill for our social life. In other words, our survival and success in adaptive social behavior and nonverbal communication depend heavily on our ability to thrive in complex social situations. Notably, it has been shown that humans can spontaneously decode animacy and social interactions even from strongly impoverished stimuli, and that this ability is a fundamental part of human experience that develops early in infancy and is shared with other primates. In addition, it is well established that perceptual and motor representations of actions are tightly coupled and share common mechanisms. This coupling between action perception and action execution plays a critical role in action understanding, as postulated in various studies, and is potentially important for our social cognition. This interaction is likely mediated by action-selective neurons in the superior temporal sulcus (STS), premotor cortex, and parietal cortex. STS and the temporoparietal junction (TPJ) have also been identified as coarse neural substrates for the processing of social interaction stimuli. Despite this localization, the exact underlying neural circuits of this processing remain unclear. The aim of this thesis is to understand the neural mechanisms behind the coupling of action perception and action execution, and to investigate further how the human brain perceives different classes of social interactions. To achieve this goal, we first introduce a neural model that provides a unifying account for multiple experiments on the interaction between action execution and action perception. The model correctly reproduces the interactions between action observation and execution in several experiments and provides a link towards electrophysiologically detailed models of the relevant circuits. It might thus serve as a starting point for a detailed quantitative investigation of how motor plans interact with perceptual action representations at the level of single-cell mechanisms. Second, we present a simple neural model that reproduces some of the key observations from psychophysical experiments on the perception of animacy and social interactions from impoverished stimuli. Even in its simple form, the model shows that animacy and social interaction judgments might be derived in part from very elementary operations in hierarchical neural vision systems, without the need for sophisticated or accurate probabilistic inference.
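    The perception-execution coupling described above can be caricatured as two mutually excitatory firing-rate pools, one perceptual and one motor, so that activity in either primes the other. This is a conceptual sketch only, not the thesis's model; every equation and parameter here is an illustrative assumption:

        import numpy as np

        # Two firing-rate units: an action-observation (perceptual) pool and
        # an action-execution (motor) pool, coupled bidirectionally.
        tau = 0.05              # time constant in seconds (illustrative)
        w_pe, w_ep = 0.6, 0.6   # perception->execution and execution->perception weights
        dt, T = 0.001, 1.0

        def f(x):
            """Sigmoidal rate nonlinearity."""
            return 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))

        r_p, r_e = 0.0, 0.0
        for step in range(int(T / dt)):
            stim = 1.0 if step * dt < 0.5 else 0.0  # visual action stimulus for 0.5 s
            # The stimulus drives the perceptual pool; each pool excites the
            # other, so observing an action raises motor activity ("resonance").
            dr_p = (-r_p + f(stim + w_ep * r_e)) / tau
            dr_e = (-r_e + f(w_pe * r_p)) / tau
            r_p += dt * dr_p
            r_e += dt * dr_e

        print(f"final rates: perception={r_p:.3f}, execution={r_e:.3f}")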

    Human Motor Control and the Design and Control of Backdriveable Actuators for Human-Robot Interaction

    The design of the control and hardware systems for a robot intended for interaction with a human user can profit from a critical analysis of the human neuromotor system and human biomechanics. The primary observation to be made about the human control and "hardware" systems is that they work well together, perhaps because they were designed for each other. Despite the limited force production and elasticity of muscle, and despite slow information transmission, the sensorimotor system is adept at an impressive range of motor behaviors. In this thesis I present three explorations of the ways in which the human and hardware systems work together, hoping to inform the design of robots suitable for human-robot interaction. First, I used the serial reaction time (SRT) task with cuing from lights and motorized keys to assess the relative contribution of visual and haptic stimuli to the formation of motor and perceptual memories. Motorized keys were used to deliver brief pulse-like displacements to the resting fingers, with the expectation that the proximity and similarity of these cues to the response motor actions (finger-activated key-presses) would strengthen the motor memory trace in particular. Error-rate results demonstrate that haptic cues promote motor learning over perceptual learning. The second exploration involves the design of an actuator specialized for human-robot interaction. Like muscle, it features series elasticity and thus displays good backdrivability. The elasticity arises from the use of a compressible fluid, while hinged rigid plates convert fluid power into mechanical power. I also propose impedance control with dynamics compensation to further reduce the driving-point impedance; the controller is robust to a range of uncertainties. The third exploration involves human control in interaction with the environment. I propose a framework that accommodates delays and does not require an explicit model of the musculoskeletal system and environment. Instead, loads from the biomechanics and the coupled environment are estimated using the relationship between the motor command and its responses. Delays inherent in sensory feedback are accommodated by taking the form of a Smith predictor. Agreement between simulation results and empirical movements suggests that the framework is viable.
    PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/120675/1/gloryn_1.pd
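    The Smith predictor mentioned above is a classical structure for controlling a plant with a known delay: the controller acts on an internal delay-free model, and only the mismatch between the measurement and the delayed model output is fed back. A minimal discrete-time sketch, with an assumed integrating plant and gains chosen purely for illustration (not the thesis's parameters):

        from collections import deque

        # Assumed discrete-time integrating plant with input delay:
        #   y[k+1] = y[k] + b * u[k-d]
        b, d = 0.05, 20          # plant gain and input delay in samples (illustrative)
        Kp = 1.0                 # proportional controller gain (illustrative)
        setpoint, steps = 1.0, 400

        y = 0.0                          # true (delayed) plant output
        y_model = 0.0                    # internal delay-free model output
        u_hist = deque([0.0] * d)        # inputs from the last d steps
        y_model_hist = deque([0.0] * d)  # model outputs from the last d steps

        for k in range(steps):
            # Smith predictor feedback: the delay-free model output, corrected
            # by the mismatch between the measurement and the model output
            # from d steps ago.
            feedback = y_model + (y - y_model_hist[0])
            u = Kp * (setpoint - feedback)

            # The true plant sees the input from d steps ago.
            y = y + b * u_hist.popleft()
            u_hist.append(u)

            # The internal model sees the current input immediately.
            y_model_hist.popleft()
            y_model_hist.append(y_model)
            y_model = y_model + b * u

        print(f"output after {steps} steps: {y:.3f} (setpoint {setpoint})")

    With a perfect internal model, the delayed mismatch term vanishes and the loop behaves like the delay-free loop, which is the design's central idea.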