
    Causative role of left aIPS in coding shared goals during human-avatar complementary joint actions

    Successful motor interactions require agents to anticipate what a partner is doing in order to predictively adjust their own movements. Although the neural underpinnings of the ability to predict others' action goals have been well explored during passive action observation, no study has yet clarified any critical neural substrate supporting interpersonal coordination during active, non-imitative (complementary) interactions. Here, we combine non-invasive inhibitory brain stimulation (continuous Theta Burst Stimulation) with a novel human-avatar interaction task to investigate a causal role for higher-order motor cortical regions in supporting the ability to predict and adapt to others' actions. We demonstrate that inhibition of left anterior intraparietal sulcus (aIPS), but not ventral premotor cortex, selectively impaired individuals' performance during complementary interactions. Thus, in addition to coding observed and executed action goals, aIPS is crucial in coding 'shared goals', that is, integrating predictions about one's own and others' complementary actions.

    A Hierarchical Architecture for Flexible Human-Robot Collaboration

    This thesis is devoted to the design of a software architecture for Human-Robot Collaboration (HRC) that enhances robots' ability to work alongside humans. We propose FlexHRC, a hierarchical and flexible human-robot cooperation architecture specifically designed to provide collaborative robots with an extended degree of autonomy when supporting human operators in tasks with high variability. Along with FlexHRC, we introduce novel techniques for three interleaved levels, namely perception, representation, and action, each aimed at addressing specific traits of human-robot cooperation tasks. The Industry 4.0 paradigm emphasizes the crucial benefits that collaborative robots could bring to the whole production process. In this context, a yet unreached enabling technology is the design of robots able to deal at all levels with humans' intrinsic variability, which is not only a necessary element of a comfortable working experience for humans but also a precious capability for efficiently dealing with unexpected events. Moreover, flexible assembly of semi-finished products is one of the expected features of next-generation shop-floor lines. Currently, such flexibility rests on the shoulders of human operators, who are responsible for product variability and are therefore subject to potentially high stress levels and cognitive load when dealing with complex operations. At the same time, operations on the shop floor are still very structured and well defined. Collaborative robots have been designed to allow a transition of this burden from human operators to robots that are flexible enough to support them in high-variability tasks while those tasks unfold. As mentioned above, the FlexHRC architecture encompasses three levels: perception, representation, and action. The perception level relies on wearable sensors for human action recognition and on point cloud data for perceiving the objects in the scene. The action level comprises four components: a robot execution manager that decouples action planning from robot motion planning and maps symbolic actions to the robot controller command interface, a task priority framework to control the robot, a differential equation solver to simulate and evaluate the robot behaviour on the fly, and a randomised method for robot path planning. The representation level relies on AND/OR graphs for representing and reasoning upon human-robot cooperation models online, a task manager to plan, adapt, and make decisions about robot behaviour, and a knowledge base that stores cooperation and workspace information. We evaluated the FlexHRC functionalities against the desired application objectives. This evaluation is accompanied by several experiments, namely a collaborative screwing task, coordinated transportation of objects in a cluttered environment, a collaborative table assembly task, and object positioning tasks.
The main contributions of this work are: (i) the design and implementation of FlexHRC, which meets the functional requirements of shop-floor assembly applications, such as task- and team-level flexibility, scalability, adaptability, and safety, to name just a few; (ii) the development of the task representation, which integrates a hierarchical AND/OR graph whose online behaviour is formally specified using First Order Logic; (iii) an in-the-loop, simulation-based decision-making process for the operations of collaborative robots coping with the variability of human operator actions; (iv) robot adaptation to the human's on-the-fly decisions and actions via human action recognition; and (v) robot behaviour that is predictable to the human user thanks to the task-priority-based control framework, the introduced path planner, and the natural and intuitive communication of the robot with the human.
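    The AND/OR graph task representation described in this abstract can be illustrated with a minimal sketch. The node names, the two-branch table-leg example, and the AndOrGraph class below are hypothetical, intended only to show how alternative (OR) and conjunctive (AND) decompositions of a cooperation task might be encoded and checked; they are not FlexHRC's actual implementation.

```python
# Minimal, hypothetical sketch of an AND/OR graph for a human-robot
# cooperation task (node names and structure are illustrative only).
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    # Each hyperarc lists child names that must ALL be solved (AND);
    # several hyperarcs on one node are alternatives (OR).
    hyperarcs: list = field(default_factory=list)

class AndOrGraph:
    def __init__(self, nodes, root):
        self.nodes = {n.name: n for n in nodes}
        self.root = root

    def solved(self, name, done):
        """A leaf is solved when it appears in `done`; an inner node is
        solved when every child of at least one hyperarc is solved."""
        node = self.nodes[name]
        if not node.hyperarcs:
            return name in done
        return any(all(self.solved(c, done) for c in arc)
                   for arc in node.hyperarcs)

# Example: a table leg is assembled either by the robot screwing alone,
# or by the human holding the leg while the robot screws (two OR branches).
graph = AndOrGraph(
    nodes=[
        Node("leg_assembled", hyperarcs=[["robot_screws"],
                                         ["human_holds", "robot_screws"]]),
        Node("robot_screws"),
        Node("human_holds"),
    ],
    root="leg_assembled",
)
print(graph.solved("leg_assembled", done={"human_holds", "robot_screws"}))  # True
```

    In an online setting, a task manager would re-evaluate which hyperarcs are still feasible as human actions are recognised, which is one way the adaptability claimed above could be realised.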

    Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning

    Using touch devices to navigate in virtual 3D environments, such as computer-aided design (CAD) models or geographical information systems (GIS), is inherently difficult for humans, as the 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed, handcrafted interaction protocol that must be learned by the user. We propose to automatically learn a new interaction protocol that maps 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast number of interactions they often require, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting the 2D finger trajectories from the first agent and translating them into 3D operations. We restrict the learned 2D trajectories to be similar to a training set of collected human gestures by performing state representation learning prior to reinforcement learning. This state representation learning is addressed by projecting the gestures into a latent space learned by a variational auto-encoder (VAE). Comment: 17 pages, 8 figures. Accepted at The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2019 (ECMLPKDD 2019).
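    The division of labour between the two agents can be sketched as follows. Everything here is a hypothetical stand-in: the function names, the random stub policies, and the crude "drag direction becomes a 3D operation" rule are ours, used only to show the data flow (latent goal, 2D trajectory, 3D action) described in the abstract, not the paper's learned policies.

```python
# Hypothetical sketch of the two-agent setup: one agent produces 2D finger
# trajectories, the other interprets them as 3D operations.
import numpy as np

rng = np.random.default_rng(0)

def gesture_agent(latent_goal):
    """Emit a short 2D finger trajectory conditioned on a latent goal.
    In the paper this policy is constrained, via a VAE latent space, to
    stay close to recorded human gestures; here it is a random walk."""
    return latent_goal[:2] + 0.05 * rng.standard_normal((10, 2)).cumsum(axis=0)

def protocol_agent(trajectory_2d):
    """Map a 2D trajectory to a 3D camera operation (placeholder rule)."""
    dx, dy = trajectory_2d[-1] - trajectory_2d[0]
    return np.array([dx, dy, 0.0])

goal = rng.standard_normal(4)          # latent description of the intended view change
traj = gesture_agent(goal)             # simulated human input on the 2D surface
action_3d = protocol_agent(traj)       # interpreted 3D navigation action
print(traj.shape, action_3d)
```

    The point of the pairing is that the gesture agent substitutes for a human during training, so the protocol agent can consume far more interactions than real users could provide.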

    Exploring Natural User Abstractions For Shared Perceptual Manipulator Task Modeling & Recovery

    State-of-the-art domestic robot assistants are essentially autonomous mobile manipulators capable of exerting human-scale precision grasps. To maximize utility and economy, non-technical end-users would need to be nearly as efficient as trained roboticists in controlling and collaborating on manipulation task behaviors. However, this remains a significant challenge, given that many WIMP-style tools require superficial proficiency in robotics, 3D graphics, and computer science for rapid task modeling and recovery. Research on robot-centric collaboration has nonetheless gained momentum in recent years; robots now plan in partially observable environments that maintain geometries and semantic maps, presenting opportunities for non-experts to cooperatively control task behavior with autonomous-planning agents that exploit this knowledge. However, as autonomous systems are not immune to errors under perceptual difficulty, a human-in-the-loop is needed to bias autonomous planning towards recovery conditions that resume the task and avoid similar errors. In this work, we explore interactive techniques that allow non-technical users to model task behaviors and perceive cooperatively with a service robot under robot-centric collaboration. We evaluate stylus and touch modalities through which users can intuitively and effectively convey natural abstractions of high-level tasks, semantic revisions, and geometries about the world. Experiments are conducted with 'pick-and-place' tasks in an ideal 'Blocks World' environment using a Kinova JACO six-degree-of-freedom manipulator. Possibilities for the architecture and interface are demonstrated with the following features: (1) semantic 'Object' and 'Location' grounding that describes function and ambiguous geometries; (2) task specification with an unordered list of goal predicates; and (3) guiding task recovery with implied scene geometries and trajectories via symmetry cues and configuration-space abstraction. Empirical results from four user studies show that our interface was strongly preferred over the control condition, demonstrating high learnability and ease of use that enabled our non-technical participants to model complex tasks, provide effective recovery assistance, and exercise teleoperative control.
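    Feature (2), task specification as an unordered list of goal predicates, is the most compact of the three to illustrate. The predicate vocabulary ("on", "at", "clear"), the block and zone names, and the tiny checker below are hypothetical, shown only to convey the idea that the goal is a set of conditions to satisfy rather than a fixed sequence of steps.

```python
# Hypothetical sketch of an unordered goal-predicate specification for a
# Blocks World pick-and-place task (names and predicates are illustrative).
goal = {("on", "red_block", "blue_block"),
        ("at", "blue_block", "table_zone_A")}

def satisfied(goal_predicates, world_state):
    """The goal holds when every goal predicate is true in the current
    state, regardless of the order in which the robot achieved them."""
    return goal_predicates <= world_state

state = {("at", "blue_block", "table_zone_A"),
         ("on", "red_block", "blue_block"),
         ("clear", "red_block")}
print(satisfied(goal, state))  # True
```

    Because the specification is a set rather than a plan, the autonomous planner is free to choose (and, after a perception error, re-choose) the order of pick-and-place actions that reaches it.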

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
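    For context, the de-facto standard formulation referred to above is maximum a posteriori (MAP) estimation over a factor graph; under Gaussian noise assumptions it reduces to a nonlinear least-squares problem. The notation below is ours, not copied from the paper:

```latex
% MAP estimate of the robot trajectory and map X given measurements Z;
% h_k is the measurement model of factor k, z_k the measurement, and
% Omega_k its information matrix (the Mahalanobis norm weights the residual).
\mathcal{X}^{\star}
  = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
  = \operatorname*{arg\,min}_{\mathcal{X}} \sum_{k} \left\| h_k(\mathcal{X}_k) - z_k \right\|^{2}_{\Omega_k}
```

    Each factor couples only the small subset of variables involved in one measurement, which is what makes the problem sparse and tractable at large scale.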

    Dexterous Manipulation Graphs

    We propose the Dexterous Manipulation Graph as a tool to address in-hand manipulation and to reposition an object inside a robot's end-effector. This graph is used to plan a sequence of manipulation primitives so as to bring the object to the desired end pose. This sequence of primitives is then translated into motions of the robot that move the object held by the end-effector. We use a dual-arm robot with parallel grippers to test our method on a real system and show successful planning and execution of in-hand manipulation.
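    Planning over such a graph amounts to searching for a path whose edges are manipulation primitives. The sketch below is a deliberately simplified stand-in: the pose and primitive names are invented, the edge set is hand-written, and plain breadth-first search replaces whatever search and cost model the paper actually uses; it only shows how a primitive sequence falls out of a graph query.

```python
# Hypothetical sketch: nodes are in-hand object poses, edges are manipulation
# primitives; breadth-first search returns a primitive sequence to the goal.
from collections import deque

edges = {
    "pose_A": [("slide", "pose_B"), ("pivot", "pose_C")],
    "pose_B": [("regrasp", "pose_D")],
    "pose_C": [("slide", "pose_D")],
    "pose_D": [],
}

def plan(start, goal):
    queue, visited = deque([(start, [])]), {start}
    while queue:
        node, primitives = queue.popleft()
        if node == goal:
            return primitives
        for primitive, nxt in edges[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, primitives + [primitive]))
    return None  # goal pose unreachable from the start pose

print(plan("pose_A", "pose_D"))  # e.g. ['slide', 'regrasp']
```

    The returned primitive sequence would then be handed to the robot's motion layer, mirroring the abstract's separation between planning on the graph and executing motions with the dual-arm system.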

    The Evidence Hub: harnessing the collective intelligence of communities to build evidence-based knowledge

    Conventional document and discussion websites provide users with no help in assessing the quality or quantity of evidence behind any given idea. Moreover, the very meaning of what counts as evidence may not be unequivocally defined within a community, and may require deep understanding, common ground and debate. An Evidence Hub is a tool for pooling a community's collective intelligence about what constitutes evidence for an idea. It provides an infrastructure for debating and building evidence-based knowledge and practice. An Evidence Hub is best thought of as a filter onto other websites: a map that distills the most important issues, ideas and evidence from the noise by making clear why ideas and web resources may be worth further investigation. This paper describes the Evidence Hub concept and rationale, the breadth of user engagement, and the evolution of specific features, derived from our work with different community groups in the healthcare and educational sectors.