
    Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics

    Emerging trends in the neurosciences are providing converging evidence that cortical networks in predominantly motor areas are activated in several contexts related to ‘action’ that do not cause any overt movement. Indeed, for any complex body, human or embodied robot, inhabiting unstructured environments, the dual processes of shaping motor output during action execution and providing the self with information related to the feasibility, consequences and understanding of potential actions (of oneself or others) must seamlessly alternate during goal-oriented behaviors and social interactions. While prominent approaches like Optimal Control and Active Inference converge on the role of forward models, they diverge on the underlying computational basis. In this context, revisiting older ideas from motor control like the Equilibrium Point Hypothesis and synergy formation, this article offers an alternative perspective emphasizing the functional role of a ‘plastic, configurable’ internal representation of the body (body schema) as a critical link enabling the seamless continuum between motor control and motor imagery. With the central proposition that both “real and imagined” actions are consequences of an internal simulation process achieved through passive goal-oriented animation of the body schema, the computational/neural basis of muscleless motor synergies (and the ensuing simulated actions without movements) is explored. The rationale behind this perspective is articulated in the context of several interdisciplinary studies in the motor neurosciences (for example, intracranial depth recordings from the parietal cortex and fMRI studies highlighting a shared cortical basis for action execution, imagination and understanding), animal cognition (in particular, tool-use and neuro-rehabilitation experiments revealing how coordinated tools are incorporated as extensions of the body schema) and pertinent challenges in building cognitive robots that can seamlessly “act, interact, anticipate and understand” in unstructured natural living spaces.
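The "passive goal-oriented animation of the body schema" described in this abstract is in the spirit of passive-motion-paradigm models of reaching: a virtual force field pulls the simulated end effector toward a goal, and joint rotations follow via the Jacobian transpose, without any inverse kinematics or muscle commands. The following is a minimal illustrative sketch of that idea for a planar 2-link arm, not the article's actual model; all function and parameter names are assumptions.

```python
import numpy as np

def pmp_step(q, target, link_lengths, k_ext=1.0, admittance=0.1):
    """One relaxation step of a passive-motion-style animation of a
    2-link arm's body schema: a virtual force pulls the end effector
    toward the target, and joint angles yield to it via the Jacobian
    transpose (no inverse kinematics, no overt motor commands)."""
    l1, l2 = link_lengths
    # Forward kinematics of the internal body model.
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    # Virtual attractive force field toward the goal.
    force = k_ext * (np.asarray(target) - np.array([x, y]))
    # Jacobian of the end-effector position w.r.t. joint angles.
    J = np.array([
        [-l1 * np.sin(q[0]) - l2 * np.sin(q[0] + q[1]), -l2 * np.sin(q[0] + q[1])],
        [ l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),  l2 * np.cos(q[0] + q[1])],
    ])
    # "Passive" update: the schema deforms under the virtual force.
    return q + admittance * (J.T @ force)

# Imagined reach: animate the body schema without moving any muscle.
q = np.array([0.3, 0.5])
for _ in range(200):
    q = pmp_step(q, target=(1.2, 0.8), link_lengths=(1.0, 1.0))
```

Because the same relaxation can run with or without the motor output stage attached, it offers one concrete reading of how real and imagined actions could share a single internal simulation.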

    Cognitive Reasoning for Compliant Robot Manipulation

    Physically compliant contact is a major element for many tasks in everyday environments. A universal service robot that is utilized to collect leaves in a park, polish a workpiece, or clean solar panels requires the cognition and manipulation capabilities to facilitate such compliant interaction. Evolution equipped humans with advanced mental abilities to envision physical contact situations and their resulting outcome, dexterous motor skills to perform the actions accordingly, as well as a sense of quality to rate the outcome of the task. In order to achieve human-like performance, a robot must provide the necessary methods to represent, plan, execute, and interpret compliant manipulation tasks. This dissertation covers those four steps of reasoning in the concept of intelligent physical compliance. The contributions advance the capabilities of service robots by combining artificial intelligence reasoning methods and control strategies for compliant manipulation. A classification of manipulation tasks is conducted to identify the central research questions of the addressed topic. Novel representations are derived to describe the properties of physical interaction. Special attention is given to wiping tasks which are predominant in everyday environments. It is investigated how symbolic task descriptions can be translated into meaningful robot commands. A particle distribution model is used to plan goal-oriented wiping actions and predict the quality according to the anticipated result. The planned tool motions are converted into the joint space of the humanoid robot Rollin' Justin to perform the tasks in the real world. In order to execute the motions in a physically compliant fashion, a hierarchical whole-body impedance controller is integrated into the framework. The controller is automatically parameterized with respect to the requirements of the particular task. Haptic feedback is utilized to infer contact and interpret the performance semantically. 
Finally, the robot is able to compensate for possible disturbances as it plans additional recovery motions while effectively closing the cognitive control loop. Among other applications, the developed concept is applied in an actual space robotics mission, in which an astronaut aboard the International Space Station (ISS) commands Rollin' Justin to maintain a Martian solar panel farm in a mock-up environment. This application demonstrates the far-reaching impact of the proposed approach and the associated opportunities that emerge with the availability of cognition-enabled service robots.
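The particle distribution model mentioned above can be illustrated with a small sketch: dirt is represented as particles on the surface, a wiping stroke removes every particle within the tool radius of the planned path, and the predicted task quality is the fraction of particles removed. This is a hedged toy version under those assumptions, not the dissertation's actual planner; the raster path and the radius are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dirt as a particle distribution on a unit surface.
particles = rng.uniform(0.0, 1.0, size=(500, 2))

def wipe(particles, path, tool_radius=0.08):
    """Remove particles swept by a tool moving along a polyline path.
    The path is discretized into waypoints, which approximates the
    continuous sweep of the tool."""
    kept = np.ones(len(particles), dtype=bool)
    for waypoint in path:
        dist = np.linalg.norm(particles - waypoint, axis=1)
        kept &= dist > tool_radius
    return particles[kept]

# A planned raster (zig-zag) stroke covering the surface.
xs = np.linspace(0.05, 0.95, 10)
ys = np.linspace(0.05, 0.95, 10)
path = np.array([(x, y) for y in ys for x in xs])

remaining = wipe(particles, path)
quality = 1.0 - len(remaining) / len(particles)  # predicted task quality
```

Ranking candidate strokes by this predicted quality is one simple way a planner can choose goal-oriented wiping actions before executing them on the robot.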

    Longterm Generalized Actions for Smart, Autonomous Robot Agents

    Creating intelligent artificial systems, and in particular robots, that improve themselves just like humans do is one of the most ambitious goals in robotics and machine learning. The concept of robot experience has existed for some time now, but has not yet fully found its way into autonomous robots. This thesis is devoted both to analyzing the underlying requirements for enabling robot learning from experience and to actually implementing it on real robot hardware. For effective robot learning from experience I present and discuss three main requirements: (a) Clearly expressing what a robot should do, on a vague, abstract level: I introduce Generalized Plans as a means to express the intention rather than the actual action sequence of a task, removing as much task-specific knowledge as possible. (b) Defining, collecting, and analyzing robot experiences to enable robots to improve: I present Episodic Memories as a container for all collected robot experiences for any arbitrary task and create sophisticated action (effect) prediction models from them, allowing robots to make better decisions. (c) Properly abstracting from reality and dealing with failures in the domain they occurred in: I propose failure handling strategies and a failure taxonomy extensible through experience, and discuss the relationship between symbolic/discrete and subsymbolic/continuous systems in terms of robot plans interacting with real-world sensors and actuators. I concentrate on the domain of human-scale robot activities, specifically on doing household chores. Tasks in this domain offer many repeating patterns and are ideal candidates for abstracting, encapsulating, and modularizing robot plans into a more general form. This way, very similar plan structures are transformed into parameters that change the behavior of the robot while performing the task, making the plans more flexible. While performing tasks, robots encounter the same or similar situations over and over again. While humans are able to benefit from this and improve at what they do, robots in general lack this ability. This thesis presents techniques for collecting robot experiences and making them accessible to robots and outside observers alike, answering high-level questions such as “What are good spots to stand at for grasping objects from the fridge?” or “Which objects are especially difficult to grasp with two hands while they are in the oven?”. By structuring and tapping into a robot's memory, it can make more informed decisions that are based not on manually encoded information but on self-improved behavior. To this end, I present several experience-based approaches to improve a robot's autonomous decisions, such as parameter choices, during execution time. Robots that interact with the real world are bound to deal with unexpected events and must properly react to failures of any kind of action. I present an extensible failure model that suits the structure of Generalized Plans and Episodic Memories and make clear how each module should deal with its own failures rather than directly handing them up to a governing cognitive architecture. In addition, I make a distinction between discrete parameterizations of Generalized Plans and continuous low-level components, and show how to translate between the two.
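The episodic-memory idea above — logging each executed action with its parameterization and outcome, then ranking parameter choices by observed success — can be sketched in a few lines. This is a minimal illustrative data structure, not the thesis's actual Episodic Memories system; the action and parameter names are invented for the example.

```python
from collections import defaultdict
from statistics import mean

class EpisodicMemory:
    """Toy episodic memory: every executed action is recorded as an
    episode (parameters, success), and parameter choices are later
    ranked by their observed success rate."""

    def __init__(self):
        self.episodes = defaultdict(list)

    def record(self, action, parameters, success):
        self.episodes[action].append((parameters, success))

    def best_parameters(self, action):
        """Answer questions like 'what is a good spot to stand at?'
        by averaging outcomes over episodes with the same parameters."""
        by_params = defaultdict(list)
        for parameters, success in self.episodes[action]:
            by_params[parameters].append(1.0 if success else 0.0)
        return max(by_params, key=lambda p: mean(by_params[p]))

memory = EpisodicMemory()
memory.record("grasp-from-fridge", ("stand-left",), True)
memory.record("grasp-from-fridge", ("stand-left",), True)
memory.record("grasp-from-fridge", ("stand-center",), False)
print(memory.best_parameters("grasp-from-fridge"))  # ('stand-left',)
```

The same store can back more sophisticated prediction models; the point is only that decisions become grounded in collected experience rather than hand-coded values.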

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, initially examining the available technology and what is needed to analyze the scene: vision sensors. The first part of this thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing the 3D surface at a greater distance than traditional endoscopes. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference system and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is real-time knowledge of the pose of surgical tools with respect to the surgical camera and the underlying anatomy. Starting from the ORB-SLAM algorithm, we have modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM. Once the SLAM algorithm was proven usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups, which mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
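The final step above — using semantic segmentation to keep dynamic features out of the map — amounts to masking out keypoints that fall on dynamically labeled pixels before they feed tracking and mapping. The following is a hedged sketch of that filtering stage only, with a toy mask in place of a real segmentation network; it is not the thesis's actual pipeline.

```python
import numpy as np

def filter_static_features(keypoints, dynamic_mask):
    """Drop features lying on dynamically labeled pixels so that only
    static anatomy contributes to SLAM tracking and mapping.

    keypoints    : (N, 2) array of (row, col) pixel coordinates
    dynamic_mask : boolean image, True where the segmentation labeled
                   a dynamic object (e.g. a moving instrument)
    """
    rows = keypoints[:, 0].astype(int)
    cols = keypoints[:, 1].astype(int)
    static = ~dynamic_mask[rows, cols]
    return keypoints[static]

# Toy frame: a 100x100 image where an 'instrument' fills the left half.
mask = np.zeros((100, 100), dtype=bool)
mask[:, :50] = True
kps = np.array([[10, 20], [30, 70], [80, 45], [60, 90]], dtype=float)
static_kps = filter_static_features(kps, mask)  # keeps [30,70], [60,90]
```

In a real system the mask would come from a per-frame segmentation network, and the surviving keypoints would be handed to the ORB-SLAM front end unchanged.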


    High-precision grasping and placing for mobile robots

    This work presents a manipulation system for multiple labware items in life science laboratories using H20 mobile robots. The H20 robot is equipped with a Kinect V2 sensor to identify and estimate the position of the required labware on the workbench. Local feature recognition based on the SURF algorithm is used; the recognition process is performed both for the labware to be grasped and for the workbench holder. Different grippers and labware containers are designed to manipulate labware of different weights and to realize safe transportation.
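The SURF-based recognition mentioned above rests on matching local feature descriptors between a reference image of the labware and the camera frame. Below is a small sketch of that matching stage only — nearest-neighbour matching with Lowe's ratio test over precomputed descriptors — using random vectors in place of real SURF output (SURF extraction itself requires opencv-contrib and is not shown); it is illustrative, not the thesis's implementation.

```python
import numpy as np

def match_descriptors(desc_query, desc_scene, ratio=0.7):
    """Nearest-neighbour descriptor matching with Lowe's ratio test,
    as typically applied after SURF (or similar) descriptors are
    extracted from the labware template and the camera frame."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_scene - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Accept only clearly unambiguous matches.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Toy data: scene descriptors, with the query being noisy copies of two.
rng = np.random.default_rng(1)
scene = rng.normal(size=(20, 64))
query = scene[[3, 7]] + 0.01 * rng.normal(size=(2, 64))
matches = match_descriptors(query, scene)  # [(0, 3), (1, 7)]
```

The matched keypoint positions would then be used to estimate the labware pose on the workbench, e.g. via a homography or PnP step.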

    Intelligent Navigation Service Robot Working in a Flexible and Dynamic Environment

    Numerous sensor fusion techniques have been reported in the literature for a number of robotics applications, involving different sensors in different configurations. However, the case of food delivery has been largely overlooked. In restaurants and food delivery spots, the robot must bring food to the correct table without running into other robots or diners and without toppling over. In this project, a particular algorithm module has been proposed and implemented to enhance the robot's driving methodology and maximize its functionality, accuracy, and the food transfer experience. The emphasis has been on enhancing movement accuracy in reaching the targeted table from start to end. Four major elements were designed to complete this project: mechanical, electrical, electronic, and programming. Since the floor condition greatly affects the wheels and the selection of the turning angle, movement accuracy was improved during the project. The robot was successfully able to receive a command from the restaurant and deliver the food to the customers' tables while avoiding any obstacles on the way. The robot is equipped with two trays to mount the food and with well-configured voices to welcome and greet the customer. The performance was evaluated using routine robot movement tests. As part of this study, the designed wheeled service robot required a high-performance real-time processor; with an adequate processor, the experimental results showed a highly effective navigation methodology. The study concluded that a minimum number of sensors suffices if they are placed appropriately and used effectively on the robot's body, as navigation can be performed with a small set of sensors. An Arduino Due was used to provide a real-time operating system. It provided very successful data processing and transfer throughout regular operation. Furthermore, an easy-to-use application was developed to improve the user experience, so that the operator can interact directly with the robot via a special settings screen. Using this feature, it is possible to modify advanced settings such as voice commands or the IP address without having to return to the code.
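The link the abstract draws between wheel behavior, turning angle, and movement accuracy can be made concrete with a standard differential-drive odometry update: the two wheel speeds jointly determine forward velocity and turning rate, so slip on either wheel directly degrades heading. This is a generic textbook sketch, not the project's firmware; wheel base and speeds are illustrative values.

```python
import math

def drive_update(x, y, heading, v_left, v_right, wheel_base, dt):
    """Differential-drive odometry update. Forward speed is the mean
    of the wheel speeds; turning rate is their difference over the
    wheel base, which is why floor-dependent wheel slip directly
    affects heading accuracy."""
    v = (v_left + v_right) / 2.0             # forward speed (m/s)
    omega = (v_right - v_left) / wheel_base  # turning rate (rad/s)
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    heading += omega * dt
    return x, y, heading

# A straight segment followed by an in-place turn toward the next table.
pose = (0.0, 0.0, 0.0)
for _ in range(100):                          # 1 s straight at 0.5 m/s
    pose = drive_update(*pose, 0.5, 0.5, wheel_base=0.4, dt=0.01)
for _ in range(100):                          # 1 s in-place rotation
    pose = drive_update(*pose, -0.2, 0.2, wheel_base=0.4, dt=0.01)
```

On a microcontroller such as the Arduino Due the same update would run in the fixed-rate control loop, with wheel speeds read from encoders.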

    Design and Control of Robotic Hands (Progettazione e Controllo di Mani Robotiche)

    The application of dexterous robotic hands outside research laboratories has been limited by the intrinsic complexity that these devices present. This is directly reflected in an economically unreasonable cost and a low overall reliability. The research reported in this thesis shows how the problem of complexity in the design of robotic hands can be tackled by taking advantage of modern technologies (i.e. rapid prototyping), leading to innovative concepts for the design of the mechanical structure and the actuation and sensory systems. The solutions adopted drastically reduce prototyping and production costs and increase reliability by reducing the number of parts required and averaging their individual reliability factors. In order to derive guidelines for the design process, the problem of robotic grasping and manipulation by a dual arm/hand system has been reviewed. In this way, the requirements that should be fulfilled at the hardware level to guarantee successful execution of the task have been highlighted. The contribution of this research on the manipulation planning side focuses on the redundancy resolution that arises in the execution of a task by a dexterous arm/hand system. In the literature, the problem of coordinating arm and hand during manipulation of an object has been widely analyzed in theory, but it is often experimentally demonstrated only in simplified robotic setups. Our aim is to fill this gap and experimentally evaluate the approach in a complex system such as an anthropomorphic arm/hand system.
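Redundancy resolution in an arm/hand system is classically handled by combining the Jacobian pseudoinverse with a nullspace projector: the primary task velocity is realized exactly, while a secondary objective (e.g. a comfortable posture) acts only in the remaining degrees of freedom. This is a generic sketch of that standard scheme, not the thesis's specific method; the Jacobian and velocities below are arbitrary illustrative values.

```python
import numpy as np

def redundant_ik_step(J, task_velocity, secondary_velocity):
    """Resolve redundancy: the pseudoinverse realizes the task-space
    velocity, and the nullspace projector (I - J+ J) lets a secondary
    objective act without disturbing the primary task."""
    J_pinv = np.linalg.pinv(J)
    nullspace = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ task_velocity + nullspace @ secondary_velocity

# A 2-DOF task for a 4-DOF arm: two redundant DOFs remain for posture.
rng = np.random.default_rng(0)
J = rng.normal(size=(2, 4))           # illustrative task Jacobian
dq = redundant_ik_step(J, np.array([0.1, -0.2]), np.full(4, 0.05))
# J @ dq reproduces the commanded task velocity regardless of the
# secondary term, because the secondary term lives in the nullspace.
```

In a coordinated arm/hand setting the same structure lets the hand's grasp maintenance run as the primary task while the arm posture is optimized in the nullspace, or vice versa.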
