31 research outputs found

    Robot manipulation in human environments

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 211-228). By Aaron Ladd Edsinger.
    Human environments present special challenges for robot manipulation. They are often dynamic, difficult to predict, and beyond the control of a robot engineer. Fortunately, many characteristics of these settings can be used to a robot's advantage. Human environments are typically populated by people, and a robot can rely on the guidance and assistance of a human collaborator. Everyday objects exhibit common, task-relevant features that reduce the cognitive load required for the object's use. Many tasks can be achieved through the detection and control of these sparse perceptual features. And finally, a robot is more than a passive observer of the world. It can use its body to reduce its perceptual uncertainty about the world. In this thesis we present advances in robot manipulation that address the unique challenges of human environments. We describe the design of a humanoid robot named Domo, develop methods that allow Domo to assist a person in everyday tasks, and discuss general strategies for building robots that work alongside people in their homes and workplaces.

    Visual Servoing

    The goal of this book is to introduce current vision-based applications developed by leading researchers around the world and to offer knowledge that can also be applied widely to other fields. The book collects the main current studies on machine vision and demonstrates, through convincing applications, how machine vision theory is realized in different domains. For the beginner, it is easy to follow the developments in visual servoing; engineers, professors, and researchers can study the chapters and adapt the methods to further applications.
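
    Most of the collected chapters build on the classical image-based visual servoing idea of closing a control loop directly on image features. As a rough illustration only, not code from the book, the sketch below applies the textbook control law v = -lambda * L^+ (s - s*) to a handful of point features; the gain, feature positions, and depth estimates are all assumed values.

        import numpy as np

        # A minimal image-based visual servoing step: drive the camera so that
        # observed point features s move toward desired features s*.
        def interaction_matrix(x, y, Z):
            # Standard interaction (image Jacobian) matrix of one normalized image
            # point (x, y) at estimated depth Z; rows map the 6-DOF camera velocity
            # (vx, vy, vz, wx, wy, wz) to the point's image velocity.
            return np.array([
                [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
                [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
            ])

        def ibvs_velocity(features, desired, depths, gain=0.5):
            # Classical control law: v = -lambda * L^+ * (s - s*).
            L = np.vstack([interaction_matrix(x, y, Z)
                           for (x, y), Z in zip(features, depths)])
            error = (np.asarray(features) - np.asarray(desired)).ravel()
            return -gain * np.linalg.pinv(L) @ error

        # Four point features, all assumed to lie at 1 m depth (hypothetical values).
        s = [(0.10, 0.05), (-0.08, 0.06), (0.09, -0.07), (-0.10, -0.05)]
        s_star = [(0.05, 0.05), (-0.05, 0.05), (0.05, -0.05), (-0.05, -0.05)]
        print(ibvs_velocity(s, s_star, depths=[1.0] * 4))  # 6-vector camera velocity command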

    The Sixth Annual Workshop on Space Operations Applications and Research (SOAR 1992)

    This document contains papers presented at the Space Operations, Applications, and Research Symposium (SOAR) hosted by the U.S. Air Force (USAF) on 4-6 Aug. 1992 and held at the JSC Gilruth Recreation Center. The symposium was cosponsored by the Air Force Materiel Command and by NASA/JSC. Key technical areas covered during the symposium were robotics and telepresence, automation and intelligent systems, human factors, life sciences, and space maintenance and servicing. SOAR differed from most other conferences in that it was concerned with Government-sponsored research and development relevant to aerospace operations. The symposium's proceedings include papers covering various disciplines presented by experts from NASA, the USAF, universities, and industry.

    Shuttle mission simulator baseline definition report, volume 1

    A baseline definition of the space shuttle mission simulator is presented. The subjects discussed are: (1) the physical arrangement of the complete simulator system in the appropriate facility, with a definition of the required facility modifications; (2) functional descriptions of all hardware units, including the operational features, data demands, and facility interfaces; (3) the hardware features necessary to integrate these units into a baseline simulator system, including the rationale for selecting the chosen implementation; and (4) the operating, maintenance, and configuration-updating characteristics of the simulator hardware.

    3D Multimodal Interaction with Physically-based Virtual Environments

    The virtual has become a huge field of exploration for researchers: it can assist the surgeon, help with the prototyping of industrial objects, simulate natural phenomena, act as a fantastic time machine, or entertain users through games and movies. Far beyond the mere visual rendering of a virtual environment, virtual reality aims at literally immersing the user in the virtual world. VR technologies simulate digital environments with which users can interact and, as a result, perceive through different modalities the effects of their actions in real time. The challenges are huge: the user's motions need to be captured and to have an immediate impact on the virtual world by modifying its objects in real time. In addition, the targeted immersion of the user is not only visual: auditory and haptic feedback need to be taken into account, merging all the user's sensory modalities into a multimodal response. The overall objective of my research activities is to improve 3D interaction with complex virtual environments by proposing novel approaches for physically-based and multimodal interaction. I have laid the foundations of my work on designing interactions with complex virtual worlds, where complexity refers to higher demands on the characteristics of the virtual environments. My research can be described along three main research axes inherent to the 3D interaction loop: (1) the physically-based modeling of the virtual world, taking into account the complexity of virtual object behavior, modifications of their topology, and their interactions with other objects; (2) multimodal feedback, combining the visual, haptic, and/or auditory modalities into a global response from the virtual world to the user; and (3) the design of body-based 3D interaction techniques and devices, covering movements of the head, both hands, the fingers, the legs, or even the whole body, to establish the interfaces between the user and the virtual world. All these contributions can be gathered into a general framework covering the whole 3D interaction loop. By improving all the components of this framework, I aim at proposing approaches that can be used in future virtual reality applications, but also more generally in other areas such as medical simulation, gesture training, robotics, virtual prototyping for industry, and web content.
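
    To make the physically-based, multimodal interaction loop described above concrete, the sketch below shows one simulation step in which a tracked hand drags a simulated object through a virtual spring-damper coupling and the reaction force is returned as haptic feedback. It is a minimal illustration under assumed parameters, not the author's simulation framework.

        import numpy as np

        # One iteration of a simple physically-based interaction loop: the user's
        # tracked hand drags a simulated point mass through a virtual coupling
        # (spring-damper), and the reaction force is what a haptic device would render.
        MASS = 0.2                       # kg, illustrative object mass
        DT = 1.0 / 1000.0                # s, a 1 kHz loop typical for haptic rendering
        K_COUPLE, B_COUPLE = 400.0, 2.0  # coupling stiffness (N/m) and damping (N.s/m)

        def interaction_step(obj_pos, obj_vel, hand_pos, hand_vel):
            # Spring-damper coupling between the hand proxy and the simulated object.
            coupling_force = K_COUPLE * (hand_pos - obj_pos) + B_COUPLE * (hand_vel - obj_vel)
            obj_acc = coupling_force / MASS      # Newton's second law
            obj_vel = obj_vel + obj_acc * DT     # explicit Euler integration
            obj_pos = obj_pos + obj_vel * DT
            haptic_force = -coupling_force       # reaction force fed back to the user
            return obj_pos, obj_vel, haptic_force

        # One step with the hand 1 cm ahead of the object along x (hypothetical values).
        pos, vel, force = interaction_step(np.zeros(3), np.zeros(3),
                                           np.array([0.01, 0.0, 0.0]), np.zeros(3))
        print(pos, force)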

    A layered control architecture for mobile robot navigation

    A thesis submitted to the University Research Degree Committee in fulfillment of the requirements for the degree of Doctor of Philosophy in Robotics.
    This thesis addresses the problem of how to control autonomous mobile robot navigation in indoor environments in the face of sensor noise, imprecise information, uncertainty, and limited response time. The thesis argues that effective control of autonomous mobile robots can be achieved by organising low-level and higher-level control activities into a layered architecture. The low-level reactive control allows the robot to respond to contingencies quickly, while the higher-level control allows the robot to make longer-term decisions and arrange appropriate sequences for task execution. The thesis describes the design and implementation of a two-layer control architecture: a task-template-based sequencing layer and a fuzzy-behaviour-based low-level control layer. The sequencing layer works at the higher level of abstraction; it interprets a task plan, and mediates and monitors the controlling activities. The low level performs fast computation in response to dynamic changes in the real world and carries out robust control under uncertainty. The organisation and fusion of fuzzy behaviours are described extensively for the construction of the low-level control system. A learning methodology is also developed to systematically learn the fuzzy behaviours and the behaviour-selection network, and therefore to resolve the difficulties in configuring the low-level control layer. The two-layer control system has been implemented and used to control a simulated mobile robot performing two tasks in simulated indoor environments. The effectiveness of the layered control and learning methodology is demonstrated through traces of the controlling activities at the two levels. The results also support a general design methodology: the high level should be used to guide the robot's actions, while the low level takes care of detailed control in the face of sensor noise and environment uncertainty in real time.
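
    The fusion of fuzzy behaviours in the low-level layer can be illustrated with a small sketch: each behaviour proposes a steering command, a fuzzy activation derived from the sensor data weights it, and the weighted proposals are averaged into one command. The membership function, the two behaviours, and all gains below are assumptions for illustration, not the thesis's rule base or its learned behaviour-selection network.

        import math

        # Weighted fusion of two low-level behaviours into one steering command.
        def mu_obstacle_near(front_range, near=0.3, far=1.5):
            # Fuzzy membership of "obstacle is near" from a front range reading (m):
            # 1.0 at or below near, 0.0 at or beyond far, linear in between.
            return min(1.0, max(0.0, (far - front_range) / (far - near)))

        def avoid_obstacle(left_range, right_range):
            # Crude reactive rule: turn away from the closer side (rad/s).
            return 1.0 if left_range < right_range else -1.0

        def seek_goal(heading_error):
            # Turn proportionally toward the goal heading (rad/s).
            return 0.8 * heading_error

        def fuse(front, left, right, heading_error):
            w_avoid = mu_obstacle_near(front)   # activation of the avoidance behaviour
            w_seek = 1.0 - w_avoid              # remaining weight goes to goal seeking
            commands = [(w_avoid, avoid_obstacle(left, right)),
                        (w_seek, seek_goal(heading_error))]
            total = sum(w for w, _ in commands) or 1.0
            # Weighted-average (centroid-style) defuzzification of the proposals.
            return sum(w * c for w, c in commands) / total

        # Hypothetical readings: obstacle 0.6 m ahead, goal 30 degrees to the left.
        print(fuse(front=0.6, left=0.4, right=1.2, heading_error=math.radians(30)))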

    Auswertung von 2D und 3D unstrukturierten Daten für die Objekt- und Lageerkennung (Evaluation of 2D and 3D Unstructured Data for Object and Pose Recognition)
