6 research outputs found

    Virtual and Mixed Reality in Telerobotics: A Survey

    Model Driven Robotic Assistance for Human-Robot Collaboration

    While robots routinely perform complex assembly tasks in highly structured factory environments, it is challenging to apply completely autonomous robotic systems in less structured manipulation tasks, such as surgery and machine assembly/repair, due to the limitations of machine intelligence, sensor data interpretation and environment modeling. A practical, yet effective approach to accomplish these tasks is through human-robot collaboration, in which the human operator and the robot form a partnership and complement each other in performing a complex task. We recognize that humans excel at determining task goals and recognizing constraints, if given sufficient feedback about the interaction between the tool (e.g., end-effector of the robot) and the environment. Robots are precise, unaffected by fatigue and able to work in environments not suitable for humans. We hypothesize that by providing the operator with adequate information about the task, through visual and force (haptic) feedback, the operator can: (1) define the task model, in terms of task goals and virtual fixture constraints, through an interactive or immersive augmented reality interface, and (2) have the robot actively assist the operator to enhance the execution time, quality and precision of the tasks. We validate our approaches through the implementation of both cooperative (i.e., hands-on) control and telerobotic systems, for image-guided robotic neurosurgery and telerobotic manipulation tasks for satellite servicing under significant time delay.
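The virtual fixture constraints mentioned in this abstract can be illustrated with a minimal sketch (not the authors' implementation): the operator's commanded velocity is projected onto a task-preferred direction, and any motion orthogonal to that direction is attenuated. The function name and the `compliance` parameter are illustrative assumptions.

```python
# Sketch of a "guidance" virtual fixture: attenuate operator motion
# that deviates from a preferred direction defined by the task model.
# Names and the compliance model here are illustrative, not from the paper.

def apply_virtual_fixture(v_cmd, d_pref, compliance=0.1):
    """Project the commanded velocity onto a preferred direction,
    letting only a fraction `compliance` of the off-direction
    component through (0 = hard fixture, 1 = no fixture)."""
    # Normalize the preferred direction.
    norm = sum(x * x for x in d_pref) ** 0.5
    d = [x / norm for x in d_pref]
    # Component of v_cmd along the preferred direction.
    dot = sum(v * u for v, u in zip(v_cmd, d))
    v_along = [dot * u for u in d]
    # Residual (orthogonal) component, scaled by the compliance.
    v_ortho = [v - a for v, a in zip(v_cmd, v_along)]
    return [a + compliance * o for a, o in zip(v_along, v_ortho)]
```

With `compliance=0` the fixture is rigid: a diagonal command along a fixture aligned with the x-axis is reduced to pure x-motion.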

    Evaluation of Haptic and Visual Cues for Repulsive or Attractive Guidance in Nonholonomic Steering Tasks.

    Remote control of vehicles is a difficult task for operators. Support systems that present additional task information may assist operators, but their usefulness is expected to depend on several factors, such as 1) the nature of the conveyed information, 2) the modality through which it is conveyed, and 3) the task difficulty. In an exploratory experiment, these three factors were manipulated to quantify their effects on operator behavior. Subjects (n = 15) used a haptic manipulator to steer a virtual nonholonomic vehicle through abstract environments in which obstacles needed to be avoided. Both a simple support conveying near-future predictions of the vehicle's trajectory and a more elaborate support that continuously suggests the path to be taken were designed (factor 1). These types of information were offered either with visual or haptic cues (factor 2). These four support systems were tested in four different abstracted environments with decreasing amounts of allowed variability in realized trajectories (factor 3). The results show improvements for the simple support only when this information was presented visually, but not when offered haptically. For the elaborate support, equally large improvements for both modalities were found. This suggests that the elaborate support is better: additional information is key to improving performance in nonholonomic steering tasks.
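The "simple support" described above, which conveys near-future predictions of the vehicle's trajectory, can be sketched by integrating unicycle kinematics forward from the current state. The model, function name, and parameters are illustrative assumptions, not the study's implementation.

```python
import math

# Sketch of a trajectory-prediction support for a nonholonomic
# (unicycle-model) vehicle: integrate the current speed v and turn
# rate omega forward to produce preview points that could be rendered
# as a visual overlay or a haptic cue. Names are illustrative.

def predict_trajectory(x, y, heading, v, omega, dt=0.1, steps=10):
    """Return `steps` predicted (x, y) points spaced `dt` apart."""
    points = []
    for _ in range(steps):
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        heading += omega * dt
        points.append((x, y))
    return points
```

A straight-ahead command (`omega = 0`) predicts a straight line; a nonzero turn rate bends the preview, which is what makes the nonholonomic constraint visible to the operator.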

    Progressively communicating rich telemetry from autonomous underwater vehicles via relays

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2012. As analysis of imagery and environmental data plays a greater role in mission construction and execution, there is an increasing need for autonomous marine vehicles to transmit these data to the surface. Without access to the data acquired by a vehicle, surface operators cannot fully understand the state of the mission. Communicating imagery and high-resolution sensor readings to surface observers remains a significant challenge – as a result, current telemetry from free-roaming autonomous marine vehicles remains limited to ‘heartbeat’ status messages, with minimal scientific data available until after recovery. Increasing the challenge, long-distance communication may require relaying data across multiple acoustic hops between vehicles, yet fixed infrastructure is not always appropriate or possible. In this thesis I present an analysis of the unique considerations facing telemetry systems for free-roaming Autonomous Underwater Vehicles (AUVs) used in exploration. These considerations include high-cost vehicle nodes with persistent storage and significant computation capabilities, combined with human surface operators monitoring each node. I then propose mechanisms for interactive, progressive communication of data across multiple acoustic hops. These mechanisms include wavelet-based embedded coding methods and a novel image compression scheme based on texture classification and synthesis. The specific characteristics of underwater communication channels, including high latency, intermittent communication, the lack of instantaneous end-to-end connectivity, and a broadcast medium, inform these proposals. Human feedback is incorporated by allowing operators to identify segments of data that warrant higher-quality refinement, ensuring efficient use of limited throughput.
    I then analyze the performance of these mechanisms relative to current practices. Finally, I present CAPTURE, a telemetry architecture that builds on this analysis. CAPTURE draws on advances in compression and delay-tolerant networking to enable progressive transmission of scientific data, including imagery, across multiple acoustic hops. In concert with a physical layer, CAPTURE provides an end-to-end networking solution for communicating science data from autonomous marine vehicles. Automatically selected imagery, sonar, and time-series sensor data are progressively transmitted across multiple hops to surface operators. Human operators can request arbitrarily high-quality refinement of any resource, up to an error-free reconstruction. The components of this system are then demonstrated through three field trials in diverse environments on SeaBED, OceanServer and Bluefin AUVs, each in a different software architecture. Thanks to the National Science Foundation and the National Oceanic and Atmospheric Administration for their funding of my education and this work.
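The progressive, refinable transmission this thesis describes can be shown in miniature with bit-plane coding: bit-planes of the data are sent most-significant first, so any prefix of the stream yields a coarse but usable reconstruction, and each additional plane refines it up to an error-free result. The thesis uses wavelet-based embedded coders; plain integer samples are used here only to demonstrate the progressive property, and all names are illustrative.

```python
# Toy "embedded" progressive coder: send bit-planes MSB-first so a
# partial transmission still decodes to an approximation. This stands
# in for the wavelet-based embedded coding discussed in the thesis.

def encode_bitplanes(samples, nbits=8):
    """Split non-negative integer samples into bit-planes, MSB first."""
    return [[(s >> b) & 1 for s in samples]
            for b in range(nbits - 1, -1, -1)]

def decode_bitplanes(planes, nbits=8):
    """Reconstruct from however many planes have arrived so far."""
    values = [0] * len(planes[0])
    for i, plane in enumerate(planes):
        shift = nbits - 1 - i  # earlier planes carry the high bits
        values = [v | (bit << shift) for v, bit in zip(values, plane)]
    return values
```

Decoding a prefix of the stream gives a quantized approximation; decoding all planes recovers the samples exactly, mirroring the "refine up to error-free reconstruction" behavior of CAPTURE.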

    Study of Mobile Robot Operations Related to Lunar Exploration

    Mobile robots extend the reach of exploration in environments unsuitable for, or unreachable by, humans. Far-reaching environments, such as the lunar south pole, exhibit lighting conditions that are challenging for the optical imagery required for mobile robot navigation. Terrain conditions also impact the operation of mobile robots; distinguishing terrain types prior to physical contact can improve hazard avoidance. This thesis presents the conclusions of a trade-off study that uses the results from two studies related to operating mobile robots at the lunar south pole. The lunar south pole presents engineering design challenges for both tele-operation and lidar-based autonomous navigation in the context of a near-term, low-cost, short-duration lunar prospecting mission. The conclusion is that direct-drive tele-operation may result in improved science data return. The first study demonstrates that lidar reflectance intensity and near-infrared spectroscopy can improve terrain classification over optical imagery alone. Two classification techniques, Naive Bayes and multi-class SVM, were compared for classification errors. Eight terrain types, including aggregate, loose sand and compacted sand, are classified using wavelet-transformed optical images and statistical values of lidar reflectance intensity. The addition of lidar reflectance intensity was shown to reduce classification errors for both classifiers. Four types of aggregate material are classified using statistical values of spectral reflectance. The addition of spectral reflectance was shown to reduce classification errors for both classifiers. The second study examines human performance in tele-operating a mobile robot under time delay and in lighting conditions analogous to the lunar south pole. Round-trip time delay between operator and mobile robot leads to an increase in the time to turn the mobile robot around obstacles or corners, as operators tend to adopt a 'wait and see' approach.
    A study of completion time for a cornering task through varying corridor widths shows that time-delayed performance fits a previously established cornering law, and that varying lighting conditions did not adversely affect human performance. The results of the cornering law are interpreted to quantify the additional time required to negotiate a corner under differing conditions, and this increase in time can be used to predict completion time when operating a mobile robot through a driving circuit.
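The benefit of appending a lidar reflectance-intensity feature to image-based features can be illustrated with a toy sketch. The thesis compares Naive Bayes and multi-class SVM classifiers; here a simple nearest-centroid classifier and invented feature values stand in, purely to show how two terrain classes that overlap in a texture feature separate once the lidar feature is added.

```python
# Toy illustration (not the thesis' pipeline): two terrain classes are
# ambiguous in an image-texture feature alone, but separable when a
# lidar reflectance-intensity feature is appended. Data are invented.

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(sample, centroids):
    """Assign the label of the nearest class centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Feature vectors: [texture statistic, lidar reflectance intensity].
# Both classes sit near 0.5 in texture; only the lidar feature separates them.
sand      = [[0.50, 0.90], [0.55, 0.85]]
aggregate = [[0.50, 0.20], [0.45, 0.25]]
cents = {"sand": centroid(sand), "aggregate": centroid(aggregate)}
```

A new sample with texture 0.52 would be ambiguous on its own; its lidar intensity of 0.88 places it firmly with the sand class.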

    Developing a Holonomic iROV as a Tool for Kelp Bed Mapping
