
    Novel Methods for Human-Robot Shared Control in Collaborative Robotics

    Blended shared control is a method for continuously combining control inputs from traditional automatic control systems and human operators to control machines. An automatic control system generates control input based on feedback of measured signals, whereas a human operator generates control input based on experience, task knowledge, and awareness and sensing of the environment in which the machine is operating. Actively blending inputs from the automatic control agent and the human agent to jointly control machines is expected to exploit the unique strengths of both: the superior task-execution performance of automatic control based on sensed signals, and the situation awareness of a human in the loop who can handle safety concerns and environmental uncertainties. Shared control in this sense provides an alternative to full autonomy. Existing and future applications include automobiles, underwater vehicles, ships, airplanes, construction machines, space manipulators, surgical robots, and power wheelchairs, all of which are still mostly operated by humans for safety reasons. Developing machines for full autonomy requires not only advances in the machines themselves but also the ability to sense the environment by placing sensors in it; the latter can be very difficult in many of these applications due to uncertainty and changing conditions. Blended shared control, as a more practical alternative to full autonomy, keeps the human operator in the loop to initiate machine actions while automatic control provides real-time intelligent assistance. The focus of this work is the problem of how to blend the two inputs, and the development of the associated scientific tools to formalize and achieve blended shared control. Specifically, the following essential aspects are investigated:
    - Task learning: modeling a human-operated robotic task from demonstration as a set of subgoals, so that execution patterns are captured simply and provide a reference for intent prediction and automatic control generation.
    - Intent prediction: predicting the human operator's intent within the subgoal-model framework, encoding the probability that the operator is seeking a particular subgoal.
    - Input blending: generating automatic control input and dynamically combining it with the operator's input based on the prediction probability, while yielding full control authority to the operator in situations where he or she takes unexpected actions to avoid danger.
    - Subgoal adjustment: dynamically adjusting the learned, nominal task model to adapt to task changes, such as a change of target object, which would otherwise render the model learned from demonstration ineffective.
    This dissertation formalizes these notions and develops novel tools and algorithms for enabling blended shared control. To evaluate them, a scaled hydraulic excavator performing a typical trenching and truck-loading task is employed as a specific example, and experimental results are provided to corroborate the tools and methods. To extend the developed methods and further explore shared control in different applications, the dissertation also studies the collaborative operation of robot manipulators. Specifically, various operational interfaces are systematically designed; a hybrid force-motion controller is integrated with shared control in a mixed world-robot frame to facilitate human-robot collaboration; and a method is proposed that uses vision-based feedback to predict the operator's intent and provide shared-control assistance. These methods let human operators remotely control robotic manipulators effectively while receiving assistance from intelligent shared control. Several robotic manipulation experiments on different industrial robots corroborate the extended shared control methods.
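    As a rough illustration of the intent-prediction and input-blending steps described above, the sketch below pairs a Bayesian subgoal-probability update with a confidence-weighted blending law. It is a minimal sketch under assumed conventions: the function names, the linear blending law, and the override threshold are illustrative choices, not the dissertation's actual formulation.

```python
import numpy as np

def update_intent(prior, likelihood):
    """Bayesian update of the probability that the operator seeks each subgoal,
    given the likelihood of the latest operator input under each subgoal model."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

def blend_inputs(u_human, u_auto, p_intent, override_threshold=0.2):
    """Confidence-weighted blending of human and automatic control inputs.

    When no subgoal is predicted with sufficient confidence (e.g. the operator
    takes an unexpected evasive action), full control authority is yielded
    back to the human operator.
    """
    confidence = float(p_intent.max())
    if confidence < override_threshold:
        return u_human                      # full authority to the human
    alpha = confidence                      # blending coefficient in [0, 1]
    return alpha * u_auto + (1.0 - alpha) * u_human

# Example: three candidate subgoals, scalar joystick input
p = update_intent(np.array([1/3, 1/3, 1/3]), np.array([0.7, 0.2, 0.1]))
u = blend_inputs(u_human=0.4, u_auto=1.0, p_intent=p)
```

    The override branch reflects the abstract's requirement that full control authority returns to the operator whenever unexpected actions make every subgoal prediction weak.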

    High-speed robot control in complex environments

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1988. Bibliography: leaves 201-206. By Wyatt S. Newman.

    Scheduling and control in the batch process industry using hybrid knowledge based simulation

    This thesis addresses short-term scheduling and control in batch process plants. A batch process plant consists of individual plant items linked by a pipe network through which product is routed. The structure of the network and the valve arrangements that control routing severely constrain the availability of plant items for configuration into routes while the plant is operating. Current approaches to short-term scheduling rely on simplifying assumptions that ignore these constraints, leading to unrealistic and infeasible schedules. The work investigates techniques from Artificial Intelligence (AI) and Discrete Event Simulation (DES) to overcome these simplifying assumptions and develop good schedules that can actually be implemented in a plant. The main divisions of the work are:
    - a representation scheme for batch plant networks, with procedures for reasoning about the constraints imposed by their structure to infer the actual availability of plant items for routing at any time;
    - a dynamic rule-based route configuration procedure that takes these availability constraints into account;
    - an activity scheduling framework for batch plants built on the above;
    - a dynamic simulation model that accounts for finite capacity constraints in a batch plant; and
    - the integration of these elements in a hybrid structure that makes the best use of techniques from AI and DES.
    The representation scheme and reasoning procedures allow the simplifying assumptions of other approaches to be dropped, so the system produces good, feasible schedules. The hybrid structure is a practical basis for implementation and makes the best use of the available AI and DES techniques.
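    The routing-availability idea at the heart of this work can be sketched with a small graph model: plant items and valves as nodes, pipe segments as edges, and a route feasible only if every element along it is currently free. This is an assumed illustration in graph form only; the thesis's actual representation scheme is rule-based.

```python
import networkx as nx

def feasible_routes(plant, source, dest, busy):
    """Enumerate routes from source to dest through the pipe network,
    excluding plant items and valves locked by transfers already in
    progress (the availability constraint naive schedulers ignore)."""
    free = plant.subgraph(n for n in plant.nodes if n not in busy)
    return list(nx.all_simple_paths(free, source, dest))

# Example: reactor R1 can reach tank T1 via either of two valves
plant = nx.Graph([("R1", "V1"), ("V1", "T1"), ("R1", "V2"), ("V2", "T1")])
print(feasible_routes(plant, "R1", "T1", busy={"V1"}))  # [['R1', 'V2', 'T1']]
```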

    Mental-State Estimation, 1987

    Reports on the measurement and evaluation of the physiological and mental state of operators are presented.

    Human-Robot Interaction architecture for interactive and lively social robots

    International mention in the doctoral degree. Society is experiencing demographic changes that can result in an imbalance between the working-age and non-working-age populations. One of the solutions being considered to mitigate this problem is the introduction of robots in multiple sectors, including the service sector; but for this to be viable, robots need, among other capabilities, to be able to interact with humans successfully. In the context of applying social robots to elderly care, this thesis seeks to endow a social robot with the abilities required for natural human-robot interaction. The main objective is to contribute to the body of knowledge in Human-Robot Interaction with a new, platform-independent, modular approach that gives roboticists the tools required to develop applications involving interaction with humans. In particular, the thesis addresses three problems: (i) modelling interactions between a robot and a user; (ii) endowing the robot with the expressive capabilities required for successful communication; and (iii) giving the robot a lively appearance.
    The approach to dialogue modelling proposes designing dialogues as sequences of atomic interaction units called Communicative Acts (CAs). CAs can be parametrized at runtime to achieve different communicative goals and are equipped with mechanisms for handling some of the uncertainties that arise during interaction. The required CAs were identified from the combination of two dimensions: initiative (robot or user) and intention (retrieving or conveying information). CAs can be combined hierarchically into more complex, reusable structures. This simplifies the creation of new interactions, letting developers focus exclusively on designing the dialogue flow without re-implementing functionality common to all dialogues (such as error handling).
    The robot's expressiveness is based on a library of predefined multimodal gestures, or expressions, modelled as state machines. The module managing expressiveness receives requests to perform expressions, schedules their execution to avoid conflicts, loads them, and ensures that their execution completes without problems. The system can also generate expressions at runtime from a list of unimodal actions (an utterance, the motion of a joint, etc.). A key feature of the proposed architecture is the integration of modulation techniques that modify the robot's expressions at runtime. This lets the robot adapt its expressions to particular circumstances (which also increases the variability of its expressiveness) and use a limited number of gestures to display different internal states (such as its emotional state).
    Since being recognized as a living being is a prerequisite for engaging in social encounters, a lively appearance is a key requirement for fostering human-robot interaction, and two methods are proposed for it. The first generates actions across the robot's interfaces at intervals; the frequency and intensity of these actions are governed by a signal representing the robot's pulse, which can adapt to the interaction context or the robot's internal state. The second enriches verbal interaction by predicting the non-verbal expressions that should accompany the robot's utterances, based on the content of the message and the robot's communicative intention. A deep learning model receives the transcription of the robot's utterance, predicts which gestures should accompany it, and synchronizes them so that each gesture starts at the appropriate time. The model combines a Long Short-Term Memory network-based encoder with a Conditional Random Field to generate the sequence of gestures accompanying the utterance.
    All these elements form the core of a modular Human-Robot Interaction architecture that has been integrated into multiple platforms and tested under different conditions.
    Doctoral Programme in Electrical, Electronic and Automatic Engineering, Universidad Carlos III de Madrid. Committee: President: Fernando Torres Medina; Secretary: Concepción Alicia Monje Micharet; Member: Amirabdollahian Farshi.
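    The gesture-prediction model described above (an LSTM-based encoder feeding a Conditional Random Field) corresponds to a standard neural sequence-labelling architecture. The following PyTorch sketch, using the pytorch-crf package, is an assumed reconstruction: the token-level gesture tags, layer sizes, and single-layer LSTM are illustrative, not the dissertation's actual configuration.

```python
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class GestureTagger(nn.Module):
    """Predicts a gesture tag per word of the robot's utterance, so each
    selected gesture can be started when its word is spoken."""
    def __init__(self, vocab_size, num_gestures, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.emissions = nn.Linear(hidden_dim, num_gestures)
        self.crf = CRF(num_gestures, batch_first=True)

    def forward(self, tokens, tags=None):
        h, _ = self.encoder(self.embed(tokens))   # (batch, seq, hidden)
        e = self.emissions(h)                     # per-token gesture scores
        if tags is not None:
            return -self.crf(e, tags)   # training: negative log-likelihood
        return self.crf.decode(e)       # inference: best gesture tag sequence
```

    The CRF layer models dependencies between consecutive gesture tags, so the decoded sequence is globally consistent rather than a set of independent per-word choices.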

    Managing multiple goals in university-industry collaboration

    Doctoral thesis (PhD), Nord University, 2021.

    Multilevel goal management in toddlers’ and preschoolers’ action sequence planning and execution

    Action planning is the foundation of everyday behaviour, allowing us to interact with the world. Even infants are able to plan simple one- or two-step actions. In daily life, however, adults plan, execute, and control complex action sequences, often following goals at multiple levels or under multiple constraints. Even something as simple as making a cup of coffee in the morning involves following a goal hierarchy with a key goal, multiple subgoals, and action steps: the actor has to keep track of the key goal of making a cup of coffee throughout the task, while maintaining which subgoals and action steps have already been executed and which should be executed next. While previous studies suggest that the ability to plan and execute action sequences develops over the preschool years (Freier et al., 2017; Yanaoka & Saito, 2017; 2019), the exact underlying developmental mechanisms remain unclear. Both improvements in executive function and motor competence have been suggested as contributors to the development of action sequence planning and execution. The aim of this thesis is to understand action sequence development using ecologically valid paradigms and wearable equipment, allowing toddlers and children to act as they would in everyday life. Chapter 2 shows that the ability to plan simple alternating sequences and action sequences with goal constraints improves over toddlerhood and is related to working memory. Chapters 3 and 4 show that hierarchical planning of a Duplo-house building sequence improves over the preschool period and is related to updating and inhibition skills. Motion capture reveals that good planners show relative freezing of the non-reaching hand while executing a subgoal, suggesting greater cognitive focus on that subgoal. Chapter 4 uses fNIRS to show that the dorsolateral prefrontal cortex is more active at decision branch points (when a switch from one subgoal to another has to be made), but only in older children, suggesting that changes in the dorsolateral prefrontal cortex may be involved in the development of action sequence control during the preschool period. Lastly, Chapter 5 uses modelling to explore the novel hypothesis that action sequence development is related to immature action selection functionality in the basal ganglia. The modelling results show that, while the basal ganglia's action selection functionality may play some role, changes in goal representations and/or improvements in action planning in the prefrontal cortex are the main drivers of goal-directed action sequence development in the preschool years. Together, these chapters enrich our understanding of the development of planning, selection, and control of action sequences, and of their underlying cognitive and neural mechanisms, in toddlerhood and the preschool period in naturalistic settings.
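    The coffee-making example above describes a goal hierarchy: a key goal held throughout, ordered subgoals, and atomic action steps at the leaves. The following sketch illustrates only that hierarchical structure; it is not the computational model used in Chapter 5.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    subgoals: list["Goal"] = field(default_factory=list)  # empty => atomic action step

def execute(goal, perform):
    """Depth-first traversal: the key goal is maintained while each subgoal's
    action steps are completed in order, tracking what comes next."""
    if not goal.subgoals:
        perform(goal.name)  # atomic action step
        return
    for sub in goal.subgoals:
        execute(sub, perform)

coffee = Goal("make coffee", [
    Goal("boil water", [Goal("fill kettle"), Goal("switch kettle on")]),
    Goal("prepare cup", [Goal("add coffee grounds"), Goal("pour water")]),
])
execute(coffee, print)  # prints the four action steps in order
```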