5 research outputs found

    Progress and Prospects of the Human-Robot Collaboration

    Recent technological advances in the hardware design of robotic platforms have enabled the implementation of various control modalities for improved interaction with humans and unstructured environments. An important application area for the integration of robots with such advanced interaction capabilities is human-robot collaboration. This aspect carries high socio-economic impact and maintains the sense of purpose of the people involved, as the robots do not completely replace humans in the work process. The research community's recent surge of interest in this area has been devoted to implementing various methodologies that achieve intuitive and seamless human-robot-environment interaction by incorporating the collaborative partners' superior capabilities, e.g. the human's cognitive abilities and the robot's physical power generation capacity. The main purpose of this paper is to review the state of the art on intermediate (bi-directional) human-robot interfaces, robot control modalities, system stability, benchmarking and relevant use cases, and to extend views on the developments required in the realm of human-robot collaboration.

    Automated control interface for a robotic camera system for teletraumatology

    Nowadays, telemedicine is applied universally in several fields of medicine, such as radiology, pathology and psychiatry. Since 2004, the Centre hospitalier universitaire de Sherbrooke (CHUS), the Faculté de médecine et des sciences de la santé and the Faculté de génie of the Université de Sherbrooke have been developing a robotic camera system that allows a trauma specialist to interact remotely with a physician in the emergency room, in a teletraumatology context. This system requires the trauma specialist to control and position the cameras while observing the intervention. So that the specialist can concentrate as much as possible on the surgical intervention rather than on controlling the system, camera-positioning assistance would be useful. The objective of this project is to design an interface that positions the robotic cameras automatically while leaving the operator the possibility of moving them directly when needed. To this end, the automated control interface uses image-processing algorithms for tracking visual elements, detecting obstructions in the observed scene and approximating the three-dimensional coordinates of a point in the image. It offers two control modes: the operator either selects a zone of interest directly in the video view, or identifies a region of interest that the system tracks automatically, which allows, when needed, both arms to view the object simultaneously from two different viewpoints. With obstruction detection, the interface can automatically reposition the cameras to keep the zone or region of interest in view. Pre-clinical tests conducted at the Laboratoire de robotique intelligente, interactive et interdisciplinaire of the Université de Sherbrooke evaluate the effectiveness and relevance of the automated control interface.
Despite certain limitations inherent to the processing speed of the camera-positioning commands integrated into the system, the analysis of the results suggests that the automated control interface is user-friendly and reduces the operators' cognitive load. Performance with the automated control interface allowing selection of a zone of interest is higher than with the non-automated (so-called manual) control interface, which requires manual positioning of the robotic arms and cameras, whereas performance with the automated control interface allowing a region of interest to be selected and tracked simultaneously by both cameras is lower than with the manual control interface.
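The region-of-interest tracking and camera re-centring described in this abstract can be sketched in a few lines. This is a minimal, illustrative example only: the thesis does not specify its matching algorithm, so the sum-of-squared-differences template search and all function names below (`track_roi`, `pan_tilt_offset`) are assumptions, not the system's actual implementation.

```python
# Hypothetical sketch of an ROI-tracking camera assist: locate the
# operator-selected template in the current frame, then compute how far
# the camera must pan/tilt to re-centre it. Frames are 2D lists of
# grayscale intensities; the matching metric (SSD) is an assumption.

def ssd(patch_a, patch_b):
    """Sum of squared differences between two equally sized patches."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def extract(frame, top, left, h, w):
    """Cut an h x w patch out of the frame at (top, left)."""
    return [row[left:left + w] for row in frame[top:top + h]]

def track_roi(frame, template):
    """Return (top, left) of the best template match by exhaustive search."""
    h, w = len(template), len(template[0])
    best_score, best_pos = None, (0, 0)
    for top in range(len(frame) - h + 1):
        for left in range(len(frame[0]) - w + 1):
            score = ssd(extract(frame, top, left, h, w), template)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

def pan_tilt_offset(roi_pos, roi_size, frame_size):
    """Offset (rows, cols) that moves the tracked ROI to the frame centre."""
    roi_centre = (roi_pos[0] + roi_size[0] / 2, roi_pos[1] + roi_size[1] / 2)
    return (frame_size[0] / 2 - roi_centre[0],
            frame_size[1] / 2 - roi_centre[1])
```

A real system would use a proper tracker (e.g. one of OpenCV's `cv2.Tracker` implementations) and map the pixel offset through the camera model to joint commands for the robotic arms; the exhaustive search here only conveys the idea.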

    Intuitive Human-Robot Interaction by Intention Recognition


    Situation Adaptation: Information Acquisition, Human Behavior and its Determining Abilities

    The goal of the dissertation was to acquire knowledge (1) about how information in the environment is processed by an observer, (2) about the interaction between the processed information and the observer's behavior, (3) about the role of the familiarity of the environment, and (4) about the abilities determining information acquisition and human behavior in environments whose novelty decreases. For this purpose, a theoretical basis was provided discussing a continuum of information acquisition, starting from direct perception and ending with higher cognitive processes such as decision making and problem solving, and a continuum of human behavior, from exploratory, creative expressive behavior to direct activities that no longer require information processing. The cognitive processes underlying these continua were discussed, and on this basis abilities of the human being and characteristics of the situation were identified that were expected to influence the adaptation process, which is reflected in moving from the end of the continuum requiring a high level of information processing to the other end. A study was conducted at the Evangelische Stiftung Volmarstein to test the major assumptions underlying this theory. For this purpose, 16 physically disabled wheelchair users repeatedly executed unknown (gardening) tasks in environments unfamiliar to them. While executing these gardening tasks, their gaze behavior and their operations, actions, and activities were recorded. To test the influence of individual differences on the adaptation process, the intelligence and motor abilities of the participants were assessed with carefully selected tests.
The hypotheses concerning the cognitive processes proposed for situation adaptation in information acquisition and human behavior, and concerning the impact of individual differences on the situation-adaptation process, were analysed with general linear model analyses, polynomial tests and stability analyses. The results first confirmed the theoretically discussed cognitive processes underlying the continua of information acquisition and human behavior: initially, when confronted with a new situation, a cognitive or internal representation of the environment is built, which enables mentally simulating different sets of actions to achieve the goal in question. For this purpose, proximal variables are perceived and exploratory behavior is executed. After this internal representation has been built and different ways of achieving a goal have been tested, the information in the environment that points to an action for achieving the goal in a given situation is specified, and gaze durations on this anchor are prolonged. On the behavioral side, the adaptation process is characterized by a clustering of operations into actions. When the situation is familiar, behavior and eye movements are aligned: anticipatory behavior decreases, operation-independent gazes occur to update the internal representation, and operation-relevant gazes are executed to obtain feedback on the progress of the current operation. The impact of individual differences on the adaptation process was also supported: the predictive validity of intelligence for the adaptation process was demonstrated especially regarding the duration of the eye movements and the human behavior (i.e., the average duration of operations); however, no influence on the actual course (e.g., the total number of gazes or the number of task-related gazes) was revealed, except for the number of strategic changes performed.
This number of strategic changes in human behavior is further influenced by the psychomotor abilities, which also determine the individual differences regarding the number and duration of the operation-independent gazes. For all three effects, a theoretical, indirect explanation has been proposed: for participants with lower psychomotor abilities, it is more important to determine the best way of achieving the goal than for participants with greater ability levels. Further, participants with greater psychomotor abilities have less time for executing (long) operation-independent gazes to update the internal representation, as these gazes are expected to take place when the current operation no longer requires the actor's attention. On the basis of these results, conclusions were drawn regarding an assistance system for electrically powered wheelchairs, which should estimate its user's future intention either from the user's gaze behavior or from past behavior. These conclusions highlight the importance of considering individual differences in human-technology interaction and thus demonstrate that close cooperation between the disciplines of engineering and psychology is necessary.