
    Social Cognition for Human-Robot Symbiosis—Challenges and Building Blocks

    The next generation of robot companions or robot working partners will need to satisfy social requirements somewhat similar to the famous laws of robotics envisaged by Isaac Asimov long ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a “positronic” replica of the human brain: most likely, the greater part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in the design of a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to the collaborative robots of the next decades: (1) an animated body schema; (2) an imitation machinery; (3) a motor-intentions machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We stress that our approach is entirely non-hierarchical: the five building blocks of the shared cognitive architecture are fully bi-directionally connected. For example, imitation and intentional processes require the “services” of the animated body schema, which, in turn, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system, and chunks of memory fragments may compete to participate in novel cooperative actions, and so on.
At the heart of the system are lifelong training and learning; however, unlike conventional learning paradigms in neural networks, where learning is passively imposed by an external agent, in symbiotic robots there is an element of free choice about what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what symbiotic robots need, but we believe it is a useful starting point for building a computational framework.

    Learning shared control by demonstration for personalized wheelchair assistance

    An emerging research problem in assistive robotics is the design of methodologies that allow robots to provide personalized assistance to users. For this purpose, we present a method to learn shared control policies from demonstrations offered by a human assistant. We train a Gaussian process (GP) regression model to continuously regulate the level of assistance between the user and the robot, given the user's previous and current actions and the state of the environment. The assistance policy is learned after only a single human demonstration, i.e., in one shot. Our technique is evaluated in a one-of-a-kind experimental study, where the machine-learned shared control policy is compared to human assistance. Our analyses show that our technique is successful in emulating human shared control, matching the location and amount of offered assistance on different trajectories. We observed that the effort requirements of the users were comparable between human-robot and human-human settings. Under the learned policy, the jerkiness of the user's joystick movements dropped significantly, despite a significant increase in the jerkiness of the robot assistant's commands. In terms of performance, even though the robotic assistance increased task completion time, the average distance to obstacles stayed in ranges similar to human assistance.
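The regulation idea above can be sketched with a minimal, numpy-only GP regressor; the feature layout (user command, distance to obstacle), the demonstration data, and the kernel length scale below are illustrative assumptions, not the paper's actual setup.

```python
# Minimal numpy-only sketch of one-shot GP regression for an assistance
# level; all data and parameters here are illustrative assumptions.
import numpy as np

def rbf(A, B, length_scale=0.5):
    """Squared-exponential kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

# One demonstration: features = [user joystick command, distance to obstacle],
# target = assistance level alpha in [0, 1] offered by the human assistant.
X_demo = np.array([[0.8, 2.0], [0.5, 1.0], [0.2, 0.4], [0.1, 0.2]])
y_demo = np.array([0.0, 0.3, 0.7, 0.9])

K = rbf(X_demo, X_demo) + 1e-6 * np.eye(len(X_demo))  # jitter for stability
weights = np.linalg.solve(K, y_demo)                  # "training" is one solve

def assistance_level(user_cmd, dist_to_obstacle):
    """Predict how much assistance to offer in the current state."""
    k_star = rbf(np.array([[user_cmd, dist_to_obstacle]]), X_demo)
    pred = (k_star @ weights)[0]
    return float(min(max(pred, 0.0), 1.0))

def shared_command(user_cmd, robot_cmd, dist_to_obstacle):
    """Blend user and robot commands with the learned assistance level."""
    a = assistance_level(user_cmd, dist_to_obstacle)
    return (1.0 - a) * user_cmd + a * robot_cmd
```

Near an obstacle the learned policy hands the robot more authority; far from obstacles the user's command passes through nearly unchanged.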

    Shared control strategies for automated vehicles

    Automated vehicles (AVs) have emerged as a technological solution to compensate for the shortcomings of manual driving. However, this technology is not yet mature enough to replace the driver completely, since doing so raises technical, social, and legal problems. Meanwhile, accidents continue to occur, and new technological solutions are needed to improve road safety. In this context, the shared-control approach, in which the driver remains in the control loop and, together with the automation, forms a well-coordinated team that collaborates continuously at the tactical and control levels of the driving task, is a promising solution for improving on manual driving performance by exploiting the latest advances in automated-driving technology. This strategy aims to promote the development of driver-assistance systems that are more advanced and more cooperative than those available in commercial vehicles. In this sense, automated vehicles will be the supervisors that drivers need, and not the other way around. This thesis addresses the topic of shared control in automated vehicles in depth, from both a theoretical and a practical perspective. First, a comprehensive review of the state of the art provides an overview of the concepts and applications that researchers have been working on over the last two decades. Then, a practical approach is adopted by developing a controller to assist the driver in the lateral control of the vehicle. This controller and its associated decision-making system (the Arbitration Module) are integrated into the overall automated-driving framework and validated on a simulation platform with real drivers. Finally, the developed controller is applied to two systems: the first assists a distracted driver, and the second implements a safety function for performing overtaking maneuvers on two-way roads. The thesis closes with the most relevant conclusions and future research perspectives for shared control in automated driving.
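The arbitration idea described above, with the Arbitration Module deciding how much lateral authority the automation takes, can be illustrated with a toy blending rule; the attention and lane-error signals and the take-over heuristic below are hypothetical, not the thesis's actual design.

```python
# Toy arbitration rule in the spirit of an Arbitration Module for shared
# lateral control: blend driver and automation steering torque by an
# authority factor. Signals and the heuristic are hypothetical.
def arbitrate(driver_torque, auto_torque, attention, lane_error, max_error=1.0):
    """attention in [0, 1]; lane_error in metres (signed)."""
    risk = min(abs(lane_error) / max_error, 1.0)
    # Automation gains authority when the driver is distracted or the
    # lateral error grows; otherwise the driver keeps control.
    authority = max(1.0 - attention, risk)
    return (1.0 - authority) * driver_torque + authority * auto_torque
```

With full attention and no lane error the driver's torque passes through untouched; as attention drops or the vehicle drifts, the automation's torque dominates.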

    Intention recognition for dynamic role exchange in haptic collaboration

    In human-computer collaboration involving haptics, a key issue that remains to be solved is establishing intuitive communication between the partners. Even though computers are widely used to aid human operators in teleoperation, guidance, and training, their ability to improve efficiency and effectiveness in dynamic tasks is limited because they lack the adaptability, versatility, and awareness of a human. We suggest that the communication between a human and a computer can be improved if it involves a decision-making process in which the computer is programmed to infer the intentions of the human operator and dynamically adjust the control levels of the interacting parties to facilitate a more intuitive interaction setup. In this paper, we investigate the utility of such a dynamic role exchange mechanism, where partners negotiate through the haptic channel to trade their control levels on a collaborative task. We examine the energy consumption, the work done on the manipulated object, and the joint efficiency, in addition to the task performance. We show that, compared to an equal-control condition, a role exchange mechanism improves task performance and the joint efficiency of the partners. We also show that augmenting the system with additional informative visual and vibrotactile cues, which are used to display the state of the interaction, allows the users to become aware of the underlying role exchange mechanism and utilize it in favor of the task. These cues also improve the users' sense of interaction and reinforce their belief that the computer aids in the execution of the task. © 2013 IEEE
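The energy and work measures examined above can be computed from logged force and velocity traces; the sketch below uses the standard definitions of mechanical work and expended energy, which may differ in detail from the paper's exact formulation.

```python
# Computing interaction measures of this kind from logged force and
# velocity traces, using standard mechanics definitions (an assumption;
# the paper's exact formulation may differ).
import numpy as np

def interaction_metrics(forces, velocities, dt):
    """forces, velocities: arrays of shape (T, dims); dt: sample period (s)."""
    power = np.sum(forces * velocities, axis=1)  # instantaneous power F . v
    work = np.sum(power) * dt                    # net work done on the object
    consumed = np.sum(np.abs(power)) * dt        # total energy expended
    efficiency = work / consumed if consumed > 0 else 0.0
    return work, consumed, efficiency
```

Forces that oppose the object's motion raise the consumed energy without raising the net work, so a well-coordinated dyad scores a higher efficiency than one whose partners fight each other.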

    A hierarchical sensorimotor control framework for human-in-the-loop robotic hands.

    Human manual dexterity relies critically on touch. Robotic and prosthetic hands are much less dexterous and make little use of the many tactile sensors available. We propose a framework modeled on the hierarchical sensorimotor controllers of the nervous system to link sensing to action in human-in-the-loop, haptically enabled artificial hands.

    Haptic negotiation and role exchange for collaboration in virtual environments

    We investigate how collaborative guidance can be realized in multi-modal virtual environments for dynamic tasks involving motor control. Haptic guidance in our context can be defined as any form of force/tactile feedback that the computer generates to help a user execute a task in a faster, more accurate, and subjectively more pleasing fashion. In particular, we are interested in determining guidance mechanisms that best facilitate task performance and arouse a natural sense of collaboration. We suggest that a haptic guidance system can be further improved if it is supplemented with a role exchange mechanism, which allows the computer to adjust the forces it applies to the user in response to his/her actions. Recent work on collaboration and role exchange has presented new perspectives on defining roles and interaction. However, existing approaches mainly focus on relatively basic environments whose state can be defined with a few parameters. We designed and implemented a complex and highly dynamic multimodal game for testing our interaction model. Since the state space of our application is complex, role exchange needs to be implemented carefully. We defined a novel negotiation process, which facilitates dynamic communication between the user and the computer and realizes the exchange of roles using a three-state finite state machine. Our preliminary results indicate that even though the negotiation and role exchange mechanism we adopted does not improve performance by every evaluation criterion, it introduces a more personal and human-like interaction model.
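A three-state machine of the kind described could look like the following sketch; the state names, the effort signal, and the thresholds are assumptions for illustration, not the authors' implementation.

```python
# Illustrative three-state negotiation machine for role exchange; state
# names, the effort signal, and thresholds are assumptions.
class RoleNegotiator:
    def __init__(self):
        self.state = "USER_LEADS"

    def step(self, user_effort, low=0.2, high=0.8):
        """Advance the machine given the user's normalized effort in [0, 1]."""
        if self.state == "USER_LEADS" and user_effort < low:
            self.state = "NEGOTIATING"    # user disengages: offer control
        elif self.state == "COMPUTER_LEADS" and user_effort > high:
            self.state = "NEGOTIATING"    # user pushes back: request control
        elif self.state == "NEGOTIATING":
            # negotiation resolves toward whoever asserts control
            self.state = "USER_LEADS" if user_effort > high else "COMPUTER_LEADS"
        return self.state
```

The intermediate negotiating state gives both parties a window to signal, through the haptic channel, who should lead next, rather than switching roles abruptly.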