
    A Robot Model of OC-Spectrum Disorders: Design Framework, Implementation and First Experiments

    © 2019 Massachusetts Institute of Technology. Computational psychiatry is increasingly establishing itself as a valuable discipline for understanding human mental disorders. However, robot models and their potential for investigating embodied and contextual aspects of mental health have been, to date, largely unexplored. In this paper, we present an initial robot model of obsessive-compulsive (OC) spectrum disorders based on an embodied motivation-based control architecture for decision making in autonomous robots. The OC family of conditions is chiefly characterized by obsessions (recurrent, invasive thoughts) and/or compulsions (an urge to carry out certain repetitive or ritualized behaviors). The design of our robot model follows and illustrates a general design framework that we have proposed to ground research in robot models of mental disorders and to link it with existing methodologies in psychiatry, notably the design of animal models. To test and validate our model, we present and discuss initial experiments, results, and quantitative and qualitative analyses of the compulsive and obsessive elements of OC-spectrum disorders. While this initial stage of development only models basic elements of such disorders, our results already shed light on aspects of the underlying theoretical model that are not obvious from consideration of the model alone. (Peer reviewed)
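
    To make the motivation-based control idea concrete, here is a minimal Python sketch of how a compulsive pattern could emerge in such an architecture. All drive names, gains, and the damping parameter are hypothetical illustrations, not values from the paper: damping the satisfaction a behaviour delivers keeps its underlying drive high, so winner-take-all action selection keeps re-choosing the same behaviour, a toy analogue of compulsive repetition.

    # Hypothetical drives and behaviours; names and numbers are illustrative.
    drives = {"hunger": 0.2, "safety_anxiety": 0.8}
    behaviours = {"forage": "hunger", "check_door": "safety_anxiety"}

    def select_behaviour(drives):
        """Winner-take-all action selection: the most urgent drive wins."""
        urgent = max(drives, key=drives.get)
        return next(b for b, d in behaviours.items() if d == urgent)

    def execute(behaviour, drives, gain=0.5, compulsive_damping=0.9):
        """Executing a behaviour normally satisfies its drive; damping that
        satisfaction (a 'compulsive' profile) keeps the drive high."""
        d = behaviours[behaviour]
        drives[d] = max(drives[d] - gain * (1.0 - compulsive_damping), 0.0)

    for step in range(5):
        b = select_behaviour(drives)
        execute(b, drives)
        print(step, b, {k: round(v, 2) for k, v in drives.items()})
    # With damping near 1, "check_door" is re-selected on every step.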

    Perspective Taking Through Simulation

    Robots that operate among humans need to be able to attribute mental states in order to facilitate learning through imitation and collaboration. The success of the simulation theory approach for attributing mental states to another person relies on the ability to take the perspective of that person, typically by generating pretend states from that person’s point of view. In this paper, internal inverse and forward models are coupled to create simulation processes that may be used for mental state attribution: simulation of the visual process is used to attribute perceptions, and simulation of the motor control process is used to attribute potential actions. To demonstrate the approach, experiments are performed with a robot attributing perceptions and potential actions to a second robot.
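
    A rough sketch of the simulation-theory idea for perception attribution: generate a pretend state from the other agent's estimated pose, then run one's own forward visual model on it and read off the predicted percepts. The toy 2-D geometry, object names, and field-of-view test below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    # Toy 2-D world; object names and positions are illustrative assumptions.
    objects = {"ball": np.array([2.0, 1.0]), "box": np.array([-1.0, 3.0])}

    def forward_visual_model(pose, heading_deg, fov_deg=90.0):
        """Simulate perception from a (pretend) viewpoint: which objects
        fall inside that viewpoint's field of view?"""
        visible = []
        for name, pos in objects.items():
            v = pos - pose
            ang = np.degrees(np.arctan2(v[1], v[0])) - heading_deg
            ang = (ang + 180.0) % 360.0 - 180.0     # wrap to [-180, 180)
            if abs(ang) <= fov_deg / 2.0:
                visible.append(name)
        return visible

    # Perspective taking: a pretend state built from the *other* robot's
    # estimated pose and heading, fed through our own visual model.
    other_pose, other_heading = np.array([0.0, 0.0]), 30.0
    print("attributed percepts:", forward_visual_model(other_pose, other_heading))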

    Explainable shared control in assistive robotics

    Shared control plays a pivotal role in designing assistive robots that complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, creating a misalignment between the respective internal models of the robot and the human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency. There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot perspective, meanwhile, requires an awareness of human "intent", so a clustering framework built around a deep generative model is developed for human intention inference. Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair, and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent. This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality, and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots. (Open Access)
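
    For intuition, here is a minimal sketch of the two ingredients the thesis combines, intent inference and shared control, under strong simplifying assumptions: infer_intent below is a hand-rolled stand-in for the deep generative clustering model, the arbitration is plain linear blending, and the goal positions and input history are made up for illustration.

    import numpy as np

    def infer_intent(history, goal_candidates):
        """Stand-in for the thesis's deep generative clustering model:
        score each candidate goal by how well recent joystick inputs
        point towards it, and normalise into a confidence distribution."""
        scores = np.array([max(np.dot(u, g) for u in history) for g in goal_candidates])
        return np.exp(scores) / np.exp(scores).sum()

    def shared_control(u_human, u_robot, confidence):
        """Classic linear arbitration: the more confident the intent
        estimate, the more authority the assistance receives."""
        return confidence * u_robot + (1.0 - confidence) * u_human

    goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]      # e.g. door, desk
    history = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]    # recent inputs
    probs = infer_intent(history, goals)
    u_cmd = shared_control(history[-1], goals[int(probs.argmax())], probs.max())
    print("P(goals) =", probs.round(2), "blended command =", u_cmd.round(2))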

    Robot Models of Mental Disorders

    Alongside technological tools to support wellbeing and the treatment of mental disorders, models of these disorders can be invaluable tools to understand, support, and improve the treatment of these conditions. Robots can provide ecologically valid models that take into account embodiment-, interaction-, and context-related elements. Focusing on Obsessive-Compulsive spectrum disorders, in this paper we discuss some of the potential contributions of robot models and relate them to other models used in psychology and psychiatry, particularly animal models. We also present some initial recommendations for their meaningful design and rigorous use. (Final Accepted Version)

    Artificial Cognition for Social Human-Robot Interaction: An Implementation

    © 2017 The Authors. Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication that mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; and human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we finally show how explicit knowledge management, both symbolic and geometric, proves instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.
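
    One of the listed skills, maintaining possibly divergent knowledge models for multiple agents, can be illustrated with a toy per-agent fact store. The triple format, agent names, and example facts below are illustrative assumptions, not the authors' actual knowledge representation:

    # Minimal sketch of per-agent symbolic knowledge models, in the spirit
    # of "possibly divergent mental models"; format is illustrative only.
    class KnowledgeBase:
        def __init__(self):
            self.facts = set()          # (subject, predicate, object) triples

        def assert_fact(self, triple):
            self.facts.add(triple)

        def query(self, subject=None, predicate=None, obj=None):
            return [f for f in self.facts
                    if (subject is None or f[0] == subject)
                    and (predicate is None or f[1] == predicate)
                    and (obj is None or f[2] == obj)]

    # One model per agent: the robot's own beliefs, plus what it believes
    # the human currently knows (e.g. from visibility reasoning).
    models = {"robot": KnowledgeBase(), "human": KnowledgeBase()}
    models["robot"].assert_fact(("mug", "isOn", "shelf"))
    # The human did not see the mug being moved, so their model diverges:
    models["human"].assert_fact(("mug", "isOn", "table"))

    for agent, kb in models.items():
        print(agent, "believes:", kb.query(subject="mug"))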

    Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks

    Autonomous robots need to interact with unknown, unstructured and changing environments, constantly facing novel challenges. Continuous online adaptation for lifelong learning, and sample-efficient mechanisms to adapt to changes in the environment, the constraints, the tasks, or the robot itself, are therefore crucial. In this work, we propose a novel framework for probabilistic online motion planning with online adaptation, based on a bio-inspired stochastic recurrent neural network. By using learning signals which mimic the intrinsic motivation signal of cognitive dissonance, together with a mental replay strategy to intensify experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments in seconds. We evaluate our online planning and adaptation framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is demonstrated by learning unknown workspace constraints sample-efficiently from few physical interactions while following given waypoints. (Comment: accepted in Neural Networks)
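
    A stripped-down sketch of the two mechanisms in play: an intrinsic learning signal computed as prediction mismatch (standing in for the paper's cognitive-dissonance signal), and mental replay of stored transitions to intensify experience. The linear forward model, learning rate, and toy dynamics below are assumptions for illustration, not the paper's stochastic recurrent network:

    import numpy as np

    rng = np.random.default_rng(0)
    true_A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # unknown "world" dynamics
    W = rng.normal(scale=0.1, size=(2, 2))         # toy linear forward model
    replay_buffer, lr = [], 0.05

    def dissonance(x, y):
        """Intrinsic signal: mismatch between prediction and observation."""
        return float(np.linalg.norm(W @ x - y))

    def update(x, y):
        """One gradient step on the squared prediction error."""
        global W
        W -= lr * np.outer(W @ x - y, x)

    for step in range(200):
        x = rng.normal(size=2)
        y = true_A @ x                             # observed outcome
        replay_buffer.append((x, y))
        update(x, y)
        # Mental replay: revisit a few stored transitions to intensify them.
        for i in rng.choice(len(replay_buffer), size=min(4, len(replay_buffer)), replace=False):
            update(*replay_buffer[i])

    x_test = rng.normal(size=2)
    print("residual dissonance:", round(dissonance(x_test, true_A @ x_test), 4))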

    Eye Tracking as a Control Interface for Tele-Operation During a Visual Search Task

    This study examined the utility of eye tracking as a control method during tele-operation in a simulated task environment. Operators used a simulator to tele-operate a search robot using three different control methods: fully manual, hybrid, and eye-only. Using Endsley's (1995a) three-level SA model and a natural interface (e.g., eye tracking) as a more user-centered approach to tele-operation, the study collected objective electroencephalogram (EEG) and subjective (NASA-TLX) measures to reflect both workload and situation awareness during tele-operation. The results showed a significant reduction in mental workload, as reflected by the EEG measures. However, operators' perceived mental workload scores, as reflected by the TLX, increased significantly while using the natural interface. This difference in perceived mental workload was mirrored by a post hoc analysis in which frustration scores, also taken from the TLX, supported the initial finding of differences in perceived mental workload between the three conditions. The results of this study can be explained by both incomplete mental models of motor movements and differences in the affordances offered by the different control conditions. Additional considerations for system designers and future research are also discussed.

    Vision-based deep execution monitoring

    Execution monitoring of high-level robot actions can be effectively improved by visually monitoring the state of the world in terms of the preconditions and postconditions that hold before and after the execution of an action. Furthermore, a policy for choosing where to look, either to verify the relations that specify the pre- and postconditions or to refocus in case of a failure, can tremendously improve robot execution in an uncharted environment. Thanks to the remarkable results of deep learning, it is now possible to rely strongly on visual perception and so assume that the environment is observable. In this work we present visual execution monitoring for a robot executing tasks in an uncharted lab environment. The execution monitor interacts with the environment via a visual stream that uses two DCNNs to recognize the objects the robot has to deal with and manipulate, and a non-parametric Bayes estimation to discover the relations from the DCNN features. To recover from lack of focus and from failures due to missed objects, we resort to visual search policies learned via deep reinforcement learning.
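
    The monitoring loop itself can be sketched compactly: check an action's preconditions against detected relations, execute, then check the postconditions and trigger recovery on failure. Here detect_relations is a hand-written stub standing in for the paper's DCNN-plus-non-parametric-Bayes perception pipeline, and the action model is invented for illustration:

    def detect_relations(scene):
        """Stub for the visual stream: returns symbolic relations."""
        return set(scene)

    ACTIONS = {
        "pick(cup)": {
            "pre":  {("cup", "on", "table"), ("gripper", "free")},
            "post": {("cup", "in", "gripper")},
        },
    }

    def monitored_execute(action, scene, execute):
        model = ACTIONS[action]
        if not model["pre"] <= detect_relations(scene):
            return "precondition failure -> trigger visual search / refocus"
        scene = execute(scene)                   # act on the world
        if not model["post"] <= detect_relations(scene):
            return "postcondition failure -> recovery"
        return "success"

    scene = {("cup", "on", "table"), ("gripper", "free")}
    print(monitored_execute("pick(cup)", scene,
                            lambda s: {("cup", "in", "gripper")}))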

    Applications of Biological Cell Models in Robotics

    In this paper I present some of the most representative biological models applied to robotics. In particular, this work surveys models inspired by, or making use of concepts from, gene regulatory networks (GRNs): these networks describe the complex interactions that affect gene expression and, consequently, cell behaviour.
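
    As a flavour of what a GRN-style model looks like computationally, here is a tiny two-gene mutual-repression network integrated with Euler steps. The Hill-function form is standard in GRN modelling, but all parameters and the suggested mapping to robot behaviour are illustrative assumptions, not taken from the survey:

    import numpy as np

    # Two genes that repress each other; parameters are illustrative.
    def hill_repression(x, k=1.0, n=2):
        return 1.0 / (1.0 + (x / k) ** n)

    g = np.array([0.9, 0.1])        # expression levels of genes A and B
    dt, decay = 0.1, 1.0
    for _ in range(200):
        production = np.array([hill_repression(g[1]),   # B represses A
                               hill_repression(g[0])])  # A represses B
        g = g + dt * (production - decay * g)

    # The settled expression levels could then gate robot behaviours,
    # e.g. by mapping gene A's level to a controller gain.
    print("steady-state expression:", g.round(3))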