Recovering from External Disturbances in Online Manipulation through State-Dependent Revertive Recovery Policies
Robots are increasingly entering uncertain and unstructured environments.
Within these, robots are bound to face unexpected external disturbances like
accidental human or tool collisions. Robots must develop the capacity to
respond to unexpected events: not only identifying the sudden anomaly, but
also deciding how to handle it. In this work, we contribute a recovery
policy that allows a robot to recover from various anomalous scenarios across
different tasks and conditions in a consistent and robust fashion. The system
organizes tasks as a sequence of nodes composed of internal modules such as
motion generation and introspection. When an introspection module flags an
anomaly, the recovery strategy is triggered and reverts the task execution by
selecting a target node as a function of a state dependency chart. The new
skill allows the robot to overcome the effects of the external disturbance and
conclude the task. Our system recovers from accidental human and tool
collisions in a number of tasks. Of particular importance, we test the
robustness of the recovery system by triggering anomalies at each node in the
task graph, showing robust recovery everywhere in the task. We also trigger
multiple and repeated anomalies at each node, showing that the recovery
system consistently recovers anywhere in the task, even under strong and
pervasive anomalous conditions. Robust recovery systems
will be key enablers for long-term autonomy in robot systems. Supplemental info
including code, data, graphs, and result analysis can be found at [1]. Comment: 8 pages, 8 figures, 1 table
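The mechanism described above (a task organized as a sequence of nodes, an introspection module that flags anomalies, and a state dependency chart that selects the revert target) can be sketched as follows. This is an illustrative reduction, not the paper's implementation: the node names, the chart, and the retry loop are all hypothetical.

```python
# Illustrative sketch of a state-dependent revertive recovery policy.
# All node names and the dependency chart below are hypothetical.

# A task is a sequence of nodes, each combining motion generation with
# an introspection check that may flag an anomaly.
TASK_NODES = ["approach", "grasp", "lift", "transport", "place"]

# State dependency chart: when an anomaly is flagged at a node, revert
# execution to the node whose preconditions are assumed to still hold.
DEPENDENCY_CHART = {
    "approach": "approach",
    "grasp": "approach",
    "lift": "grasp",
    "transport": "grasp",
    "place": "transport",
}

def run_task(introspect, execute):
    """Execute nodes in order; on a flagged anomaly, revert per the chart."""
    i = 0
    while i < len(TASK_NODES):
        node = TASK_NODES[i]
        execute(node)
        if introspect(node):                  # introspection flags an anomaly
            target = DEPENDENCY_CHART[node]
            i = TASK_NODES.index(target)      # revert to the target node
        else:
            i += 1                            # node concluded normally
    return "task concluded"
```

Because reversion is a function of the current node only, the same policy applies regardless of where in the task the disturbance strikes, which is what makes per-node anomaly injection a natural robustness test.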
Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations. Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
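The survey's three-way taxonomy can be read as a dispatch on object familiarity. The sketch below is a hypothetical caricature (the similarity measure, thresholds, and strategy strings are invented for illustration); real systems use learned perceptual similarity, not string matching.

```python
# Hypothetical dispatch mirroring the known / familiar / unknown taxonomy.
# similarity() is a toy stand-in for a learned perceptual similarity score.

def similarity(obs, model):
    """Toy similarity score in [0, 1] between an observation and a model."""
    if obs == model:
        return 1.0
    if obs[:3] == model[:3]:   # crude proxy for "resembles a stored object"
        return 0.6
    return 0.0

def synthesize_grasp(obs, database):
    """Pick a grasp-synthesis strategy based on how familiar the object is."""
    score, best = max(((similarity(obs, m), m) for m in database),
                      default=(0.0, None))
    if score > 0.95:
        # Known object: recognize it, estimate its pose, reuse stored grasps.
        return f"reuse stored grasps for known object {best}"
    if score > 0.5:
        # Familiar object: transfer grasps from the most similar stored object.
        return f"transfer grasps from familiar object {best}"
    # Unknown object: extract local features indicative of good grasps.
    return "extract grasp-indicative features from unknown object"
```

The point of the dispatch is that each branch implies a different object representation and perceptual pipeline, which is exactly the structure the survey uses to organize the literature.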
The Meaning of Action: A Review on Action Recognition and Mapping
In this paper, we analyze the different approaches taken to date within the computer vision, robotics and artificial intelligence communities for the representation, recognition, synthesis and understanding of action. We deal with action at different levels of complexity and provide the reader with the necessary related literature references. We put the literature references further into context and outline a possible interpretation of action by taking into account the different aspects of action recognition, action synthesis and task-level planning.
The Development of Bio-Inspired Cortical Feature Maps for Robot Sensorimotor Controllers
Full version unavailable due to 3rd party copyright restrictions.
This project applies principles from the field of Computational Neuroscience to Robotics research, in particular to develop systems inspired by how nature solves sensorimotor coordination tasks. The overall aim has been to build a self-organising sensorimotor system using biologically inspired techniques based upon human cortical development, which can in the future be implemented in neuromorphic hardware. This can then deliver the benefits of low power consumption and real-time operation, but with flexible learning onboard autonomous robots. A core principle is the Self-Organising Feature Map, which is based upon the theory of how 2D maps develop in real cortex to represent complex information from the environment. A framework for developing feature maps for both motor and visual directional selectivity, representing eight different directions of motion, is described, as well as how they can be coupled together to make a basic visuomotor system. In contrast to many previous works which use artificially generated visual inputs (for example, image sequences of oriented moving bars or mathematically generated Gaussian bars), a novel feature of the current work is that the visual input is generated by a DVS 128 silicon retina camera, a neuromorphic device that produces spike events in a frame-free way. One of the main contributions of this work has been to develop a method of autonomous regulation of the map development process which adapts the learning dependent upon input activity. The main results show that distinct directionally selective maps for both the motor and visual modalities are produced under a range of experimental scenarios.
The adaptive learning process successfully controls the rate of learning in both motor and visual map development and is used to indicate when sufficient patterns have been presented, thus avoiding the need to define the quantity and range of training data in advance. The coupling training experiments show that the visual input learns to modulate the original motor map response, creating a new visual-motor topological map.
EPSRC, University of Plymouth Graduate School
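The core Self-Organising Feature Map idea can be reduced to a minimal sketch: a ring of eight units that become selective for eight motion directions. Everything here is a simplification for illustration; the thesis's actual maps, DVS spike-event inputs, and activity-dependent regulation of learning are far richer, and the scheduled decay below is only a crude stand-in for that adaptive regulation.

```python
# Minimal 1-D (ring) SOM sketch: eight units self-organise to represent
# eight directions of motion. Parameters and schedules are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 8, 2                            # 8 direction-selective units
weights = rng.normal(size=(n_units, dim))
weights /= np.linalg.norm(weights, axis=1, keepdims=True)

def train_step(x, lr, sigma):
    """One SOM update: pull the winner and its ring neighbours toward x."""
    winner = int(np.argmax(weights @ x))        # best-matching unit
    d = np.abs(np.arange(n_units) - winner)
    d = np.minimum(d, n_units - d)              # distance on the ring
    h = np.exp(-(d ** 2) / (2.0 * sigma ** 2))  # neighbourhood function
    weights[:] += lr * h[:, None] * (x - weights)
    weights[:] /= np.linalg.norm(weights, axis=1, keepdims=True)
    return winner

steps = 3000
for t in range(steps):
    frac = t / steps
    lr = 0.5 * (1 - frac) + 0.05 * frac         # scheduled decay: a stand-in
    sigma = 2.0 * (1 - frac) + 0.2 * frac       # for adaptive regulation
    angle = rng.integers(8) * np.pi / 4         # one of 8 motion directions
    train_step(np.array([np.cos(angle), np.sin(angle)]), lr, sigma)
```

After training, different units win for different input directions, which is the sense in which the map has become directionally selective; the thesis's adaptive rule would additionally decide, from input activity, when this organisation is sufficient and learning can stop.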