190 research outputs found

    Integrating Vision and Physical Interaction for Discovery, Segmentation and Grasping of Unknown Objects

    In this work, computer vision methods and the ability of humanoid robots to physically interact with their environment are used in close interplay to identify unknown objects, to separate them from the background and from other objects, and ultimately to grasp them. In the course of this interactive exploration, object properties such as appearance and shape are also determined.

    Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot

    Mobile manipulation tasks are among the key challenges in the field of search and rescue (SAR) robotics, requiring robots with flexible locomotion and manipulation abilities. Since the tasks are mostly unknown in advance, the robot has to adapt to a wide variety of terrains and workspaces during a mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and an anthropomorphic upper body to carry out complex tasks in environments too dangerous for humans. Due to its high number of degrees of freedom, controlling the robot with direct teleoperation approaches is challenging and exhausting. Supervised autonomy approaches are promising for increasing the quality and speed of control while keeping the flexibility to solve unknown tasks. We developed a set of operator assistance functionalities with different levels of autonomy to control the robot for challenging locomotion and manipulation tasks. The integrated system was evaluated in disaster response scenarios and showed promising performance. (In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018.)

    Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

    The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation. (35 pages, 13 figures.)
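
    The PPS graph is described concretely enough to sketch as a data structure. Below is a minimal, hypothetical Python sketch (using networkx and numpy); the class name, node attributes, and edge cost are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a PPS graph, assuming joint-angle states; the class
# name, attributes, and edge cost are illustrative, not from the paper.
import networkx as nx
import numpy as np

class PPSGraph:
    """Nodes: arm configurations; edges: movements observed to be safe."""

    def __init__(self):
        self.g = nx.Graph()

    def add_state(self, node_id, joint_angles, hand_in_image):
        # Pair a proprioceptive reading (joint angles) with the visual
        # percept of the hand at that configuration.
        self.g.add_node(node_id, q=np.asarray(joint_angles, float),
                        hand_uv=hand_in_image)

    def add_safe_move(self, a, b):
        # An edge records that moving directly between two stored
        # configurations caused no collision during exploration.
        cost = float(np.linalg.norm(self.g.nodes[a]["q"] - self.g.nodes[b]["q"]))
        self.g.add_edge(a, b, cost=cost)

    def safe_trajectory(self, start, goal):
        # A path through the graph is a trajectory composed entirely of
        # previously verified safe movements.
        return nx.shortest_path(self.g, start, goal, weight="cost")
```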

    Toward Robots with Peripersonal Space Representation for Adaptive Behaviors

    The abilities to adapt and act autonomously in an unstructured and human-oriented environment are vital for the next generation of robots, which aim to cooperate safely with humans. While this adaptability is natural and feasible for humans, it is still very complex and challenging for robots. Observations and findings from psychology and neuroscience on the development of the human sensorimotor system can inform the development of novel approaches to adaptive robotics. Among these is the formation of the representation of the space closely surrounding the body, the Peripersonal Space (PPS), from multisensory sources like vision, hearing, touch and proprioception, which helps to facilitate human activities within their surroundings. Taking inspiration from the virtual safety margin formed by the PPS representation in humans, this thesis first constructs an equivalent model of the safety zone for each body part of the iCub humanoid robot. This PPS layer serves as a distributed collision predictor, which translates visually detected objects approaching a robot's body parts (e.g., arm, hand) into probabilities of a collision between those objects and body parts. This leads to adaptive avoidance behaviors in the robot via an optimization-based reactive controller. Notably, this visual reactive control pipeline can also seamlessly incorporate tactile input to guarantee safety in both the pre- and post-collision phases of physical Human-Robot Interaction (pHRI). Concurrently, the controller is also able to take into account multiple targets (of manipulation reaching tasks) generated by a multiple Cartesian point planner. All components, namely the PPS, the multi-target motion planner (for manipulation reaching tasks), the reaching-with-avoidance controller and the human-centred visual perception, are combined harmoniously to form a hybrid control framework designed to provide safety for robots' interactions in a cluttered environment shared with human partners. Later, motivated by the development of manipulation skills in infants, in which multisensory integration is thought to play an important role, a learning framework is proposed to allow a robot to learn the processes of forming sensory representations, namely visuomotor and visuotactile, from its own motor activities in the environment. Both multisensory integration models are constructed with Deep Neural Networks (DNNs) in such a way that their outputs are represented in motor space to facilitate the robot's subsequent actions.
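
    The distributed collision predictor lends itself to a small illustration. The sketch below maps one approaching object to a collision probability for one body part; the falloff shape and all parameter values are invented for illustration and are not the thesis's actual model.

```python
# Hypothetical sketch of a per-body-part collision predictor: distance
# and approach speed -> probability. Falloff shape and constants are
# illustrative assumptions, not the thesis's learned PPS model.
import numpy as np

def collision_probability(obj_pos, obj_vel, part_pos,
                          margin=0.45, steepness=8.0):
    """Probability in [0, 1] that the object will hit the body part."""
    offset = np.asarray(part_pos, float) - np.asarray(obj_pos, float)
    dist = float(np.linalg.norm(offset))
    if dist > margin:
        return 0.0  # outside the safety margin around the body part
    # Only the velocity component directed at the body part matters.
    approach = max(0.0, float(np.dot(np.asarray(obj_vel, float), offset))
                   / (dist + 1e-9))
    # Closer objects raise the probability (logistic in distance)...
    proximity = 1.0 / (1.0 + np.exp(steepness * (dist - margin / 2)))
    # ...and faster approaches raise it further.
    return proximity * (1.0 - np.exp(-approach))
```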

    Visuomotor Coordination in Reach-To-Grasp Tasks: From Humans to Humanoids and Vice Versa

    Understanding the principles involved in visually-based coordinated motor control is one of the most fundamental and most intriguing research problems across a number of areas, including psychology, neuroscience, computer vision and robotics. Little is known about the computational functions that the central nervous system performs in order to provide a set of requirements for visually-driven reaching and grasping. Additionally, in spite of several decades of advances in the field, the abilities of humanoids to perform similar tasks remain modest when they must operate in unstructured and dynamically changing environments. More specifically, our first focus is understanding the principles involved in human visuomotor coordination. Few behavioral studies have considered visuomotor coordination in natural, unrestricted, head-free movements in complex scenarios such as obstacle avoidance. To fill this gap, we provide an assessment of visuomotor coordination when humans perform prehensile tasks with obstacle avoidance, an issue that has received far less attention. Namely, we quantify the relationships between the gaze and arm-hand systems, so as to inform robotic models, and we investigate how the presence of an obstacle modulates this pattern of correlations. Second, to complement these observations, we provide a robotic model of visuomotor coordination, with and without the presence of obstacles in the workspace. The parameters of the controller are estimated solely from the human motion capture data of our human study. This controller has a number of interesting properties. It provides an efficient way to control the gaze, arm and hand movements in a stable and coordinated manner. When facing perturbations while reaching and grasping, our controller adapts its behavior almost instantly, while preserving coordination between the gaze, arm, and hand. In the third part of the thesis, we review the neuroscientific literature on primates. We stress the view that the cerebellum uses the cortical reference frame representation. Taking this representation into account, the cerebellum performs closed-loop programming of multi-joint movements and synchronizes movement between the eye-head system, arm and hand. Based on this investigation, we propose a functional architecture of the cerebellar-cortical involvement. We derive a number of improvements to our visuomotor controller for obstacle-free reaching and grasping. Because this model is devised by carefully taking into account the neuroscientific evidence, we are able to provide a number of testable predictions about the functions of the central nervous system in visuomotor coordination. Finally, we tackle the flow of visuomotor coordination in the direction from the arm-hand system to the visual system. We develop two models of motor-primed attention for humanoid robots. Motor-priming of attention is a mechanism that prioritizes visual processing with respect to motor-relevant parts of the visual field. Recent studies in humans and monkeys have shown that the visual attention supporting natural behavior is not exclusively defined in terms of visual saliency in color or texture cues; rather, the reachable space and motor plans present the predominant source of this attentional modulation. Here, we show that motor-priming of visual attention can be used to efficiently distribute a robot's computational resources devoted to visual processing.
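
    One way to picture a coordinated gaze-arm-hand controller is as coupled first-order dynamical systems in which the arm tracks the gaze and the hand aperture tracks the arm's progress. The sketch below is a hypothetical rendering of that idea; the coupling form and gains are assumptions (the thesis fits its controller parameters to human motion-capture data, which this sketch does not do).

```python
# Hypothetical sketch: gaze, arm, and hand aperture as coupled first-order
# systems. Gains and couplings are assumptions; the thesis estimates its
# controller parameters from human motion-capture data.
import numpy as np

def step(gaze, arm, aperture, target, dt=0.01,
         k_gaze=4.0, k_arm=2.0, k_hand=3.0, max_aperture=0.08):
    # Gaze converges to the target first (fastest dynamics).
    gaze = gaze + dt * k_gaze * (target - gaze)
    # The arm tracks the current gaze point rather than the raw target,
    # so a perturbed target redirects gaze and arm in a coordinated way.
    arm = arm + dt * k_arm * (gaze - arm)
    # Hand pre-shaping: wide aperture while far away, closing on approach.
    dist = float(np.linalg.norm(target - arm))
    desired = max_aperture * np.tanh(10.0 * dist)
    aperture = aperture + dt * k_hand * (desired - aperture)
    return gaze, arm, aperture

# Example: 200 steps of a reach toward a fixed target.
gaze, arm, aperture = np.zeros(3), np.zeros(3), 0.0
target = np.array([0.4, 0.1, 0.2])
for _ in range(200):
    gaze, arm, aperture = step(gaze, arm, aperture, target)
```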

    Manipulation primitives: A paradigm for abstraction and execution of grasping and manipulation tasks

    Sensor-based reactive and hybrid approaches have proven a promising line of study for addressing imperfect knowledge in grasping and manipulation. However, reactive approaches are usually tightly coupled to a particular embodiment, making knowledge transfer difficult. This paper proposes a paradigm for modeling and executing reactive manipulation actions which makes knowledge transfer to different embodiments possible while retaining the reactive capabilities of the embodiments. The proposed approach extends the idea of control primitives coordinated by a state machine by introducing an embodiment-independent layer of abstraction. Abstract manipulation primitives constitute a vocabulary of atomic, embodiment-independent actions, which can be coordinated using state machines to describe complex actions. To obtain embodiment-specific models, the abstract state machines are automatically translated into embodiment-specific models, such that the full capabilities of each platform can be utilized. The strength of the manipulation primitives paradigm is demonstrated by developing a set of corresponding embodiment-specific primitives for object transport, including a complex reactive grasping primitive. The robustness of the approach is studied experimentally by emptying a box filled with several unknown objects. The embodiment independence is studied by performing a manipulation task on two different platforms using the same abstract description.
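
    The abstraction layer can be sketched directly: an abstract state machine over a vocabulary of primitives, bound to embodiment-specific implementations at translation time. The primitive names, platforms, and binding scheme below are illustrative assumptions, not the paper's actual system.

```python
# Hypothetical sketch of the embodiment-independent layer: an abstract
# state machine over primitive names, bound to platform-specific
# implementations at translation time. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class AbstractStateMachine:
    states: List[str]                        # abstract primitive per state
    transitions: Dict[Tuple[str, str], str]  # (state, event) -> next state

# Each embodiment supplies its own executable version of every primitive.
EMBODIMENTS: Dict[str, Dict[str, Callable[[], str]]] = {
    "parallel_gripper_arm": {
        "approach":  lambda: "cartesian approach along gripper z-axis",
        "grasp":     lambda: "close fingers until force threshold",
        "transport": lambda: "joint-space move to drop-off pose",
    },
    "anthropomorphic_hand_arm": {
        "approach":  lambda: "visually servoed approach, hand pre-shaped",
        "grasp":     lambda: "synergy-based closure with tactile stop",
        "transport": lambda: "compliant transport with slip detection",
    },
}

def translate(asm: AbstractStateMachine, platform: str):
    """Bind every abstract state to the platform's concrete primitive."""
    impl = EMBODIMENTS[platform]
    return {state: impl[state] for state in asm.states}

# The same abstract description runs on both platforms.
asm = AbstractStateMachine(
    states=["approach", "grasp", "transport"],
    transitions={("approach", "contact"): "grasp",
                 ("grasp", "stable"): "transport"})
concrete = translate(asm, "anthropomorphic_hand_arm")
```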

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. This remains a significant challenge, given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has garnered momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents exploiting this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries, (2) task specification with an unordered list of goal predicates, and (3) guidance of task recovery with implied scene geometries and trajectories via symmetry cues and configuration space abstraction. Empirical results from four user studies show that our interface was much preferred over the control condition, demonstrating high learnability and ease-of-use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
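
    Feature (2), task specification as an unordered list of goal predicates, is simple to illustrate. The predicate names and block labels below are hypothetical; only the idea of order-free goal satisfaction comes from the abstract.

```python
# Hypothetical illustration of feature (2): a task goal as an unordered
# set of predicates over a Blocks World scene. All names are invented.
goal = {
    ("on", "red_block", "blue_block"),
    ("on", "blue_block", "table"),
    ("clear", "red_block"),
}

def satisfied(goal, world_state):
    # The task is complete when every goal predicate holds in the
    # perceived scene, regardless of the order they were achieved in.
    return goal.issubset(world_state)

# Perceived scene after a successful stack; extra facts are harmless.
state = {("on", "red_block", "blue_block"),
         ("on", "blue_block", "table"),
         ("clear", "red_block"),
         ("gripper_empty",)}
assert satisfied(goal, state)
```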

    A Continuous Grasp Representation for the Imitation Learning of Grasps on Humanoid Robots

    Models and methods are presented which enable a humanoid robot to learn reusable, adaptive grasping skills. Mechanisms and principles in human grasp behavior are studied. The findings are used to develop a grasp representation capable of retaining specific motion characteristics and of adapting to different objects and tasks. Based on this representation, a framework is proposed which enables the robot to observe human grasping, learn grasp representations, and infer executable grasping actions.

    Cognitive Task Planning for Smart Industrial Robots

    This research work presents a novel Cognitive Task Planning framework for Smart Industrial Robots. The framework makes an industrial mobile manipulator robot cognitive by applying Semantic Web Technologies. It also introduces a novel Navigation Among Movable Obstacles (NAMO) algorithm for robots navigating and manipulating inside a factory. The objective of Industrie 4.0 is the creation of Smart Factories: modular plants equipped with cyber-physical systems able to extensively customize products under conditions of highly flexible mass production. Such systems should communicate and cooperate with each other and with humans in real time via the Internet of Things. They should intelligently adapt to their changing surroundings and autonomously navigate inside a factory while moving obstacles that occlude free paths, even when seen for the first time. Finally, in order to accomplish all these tasks efficiently, they should learn from their own actions and from those of other agents. Most existing industrial mobile robots navigate along pre-generated trajectories, following electrified wires embedded in the ground or lines painted on the floor. When no environment changes are expected and cycle times are critical, this kind of planning is functional. When workspaces and tasks change frequently, it is better to plan dynamically: robots should navigate autonomously without relying on modifications of their environments. Consider human behavior: humans reason about the environment and consider moving obstacles if a certain goal cannot be reached, or if moving objects may significantly shorten the path to it. This problem is known as Navigation Among Movable Obstacles and has mostly been studied in rescue robotics. This work transposes the problem to an industrial scenario and addresses its two main challenges: the high dimensionality of the state space and the treatment of uncertainty. The proposed NAMO algorithm aims to focus exploration on less explored areas; for this reason it extends the Kinodynamic Motion Planning by Interior-Exterior Cell Exploration (KPIECE) algorithm. The extension does not impose obstacle avoidance: it assigns an importance to each cell by combining the effort necessary to reach it with the effort needed to free it from obstacles. The resulting algorithm is scalable because it is independent of the size of the map and of the number, shape, and pose of the obstacles. It does not restrict the actions to be performed: the robot can both push and grasp every object. Currently, the algorithm assumes full world knowledge, but the environment is reconfigurable and the algorithm can easily be extended to solve NAMO problems in unknown environments. It handles sensor feedback and corrects for uncertainty. Robotics usually treats motion planning and manipulation as separate problems; NAMO forces their combined treatment by introducing the need to manipulate multiple, often unknown, objects while navigating. Adopting standard precomputed grasps is not sufficient to deal with the large variety of existing objects, so a Semantic Knowledge Framework is proposed in support of the algorithm, giving robots the ability to learn to manipulate objects and to disseminate the information gained during task execution. The Framework is composed of an Ontology and an Engine. The Ontology extends the IEEE Standard Ontologies for Robotics and Automation and contains descriptions of learned manipulation tasks and detected objects. It is accessible from any robot connected to the Cloud, and can be considered both a data store for the efficient and reliable execution of repetitive tasks and a Web-based repository for exchanging information between robots and speeding up the learning phase. No other manipulation ontology respecting the IEEE Standard exists and, standard aside, the proposed ontology differs from existing ones in the type of features stored and in the efficient way they can be accessed: through a very fast Cascade Hashing algorithm. The Engine computes and stores manipulation actions that are not yet present in the Ontology. It is based on Reinforcement Learning techniques that avoid massive training on large-scale databases and favor human-robot interaction. The overall system is flexible and easily adaptable to different robots operating in different industrial environments. It is characterized by a modular structure where each software block is completely reusable and based on the open-source Robot Operating System (ROS). Since not all industrial robot controllers are designed to be ROS-compliant, this thesis also presents the method adopted during this research to open industrial robot controllers and create a ROS-Industrial interface for them.
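
    The cell-importance idea behind the NAMO extension can be sketched compactly: each cell's exploration priority combines the effort to reach it, the effort to clear it of movable obstacles, and a bias toward rarely visited cells. The scoring function and weights below are illustrative assumptions, not the thesis's actual formula.

```python
# Hypothetical sketch of the NAMO cell-importance heuristic: priority
# combines reach effort, clearing effort, and a bias toward rarely
# visited cells (in the spirit of KPIECE's interior/exterior bias).
# The scoring function and weights are illustrative assumptions.
import numpy as np

def cell_importance(reach_cost, clear_cost, visits,
                    w_reach=1.0, w_clear=1.0):
    """Higher value -> expand this cell sooner."""
    effort = w_reach * reach_cost + w_clear * clear_cost
    novelty = 1.0 / (1.0 + visits)   # favor less-explored cells
    return novelty / (1.0 + effort)  # penalize hard-to-reach/-clear cells

def pick_cell(cells):
    # cells: list of (cell_id, reach_cost, clear_cost, visits)
    scores = [cell_importance(r, c, v) for _, r, c, v in cells]
    return cells[int(np.argmax(scores))][0]

# Example: cell "B" is cheap to free and barely explored, so it wins.
print(pick_cell([("A", 2.0, 5.0, 10), ("B", 3.0, 0.5, 1), ("C", 1.0, 4.0, 7)]))
```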

    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning of an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate its perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces, and to perform complex interaction with the human operator.