
    Mobile robotic teleguide based on video images

    DOI: 10.1109/MRA.2008.929927. Peer reviewed.

    High latency unmanned ground vehicle teleoperation enhancement by presentation of estimated future through video transformation

    Long-distance, high-latency teleoperation tasks are difficult, highly stressful for teleoperators, and prone to over-corrections, which can lead to loss of control. At higher latencies, or when teleoperating at higher vehicle speeds, the situation becomes progressively worse. To explore potential solutions, this work investigates two 2D visual-feedback-based assistive interfaces (sliding-only and sliding-and-zooming windows) that apply simple but effective video transformations to enhance teleoperation. A teleoperation simulator that can replicate teleoperation scenarios affected by high and adjustable latency has been developed to explore the effectiveness of the proposed assistive interfaces. Three image-comparison metrics have been used to fine-tune and optimise the proposed interfaces. An operator survey was conducted to evaluate and compare performance with and without the assistance. The survey showed that a 900 ms latency increases task completion time by up to 205% for an on-road and 147% for an off-road driving track, and that overcorrection-induced oscillations increase by up to 718% at this level of latency. The sliding-only video transformation reduces task completion time by up to 25.53%, and the sliding-and-zooming transformation reduces it by up to 21.82%. The sliding-only interface reduces the oscillation count by up to 66.28%, and the sliding-and-zooming interface reduces it by up to 75.58%. Qualitative feedback from the participants also shows that both types of assistive interfaces offer better visual situational awareness, comfort, and controllability, and significantly reduce the impact of latency and intermittency on the teleoperation task.
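
    A minimal sketch of how such a sliding-and-zooming video transformation could be applied to a delayed camera frame. The linear prediction model, the gain values (px_per_rad, zoom_per_m), and the function names are illustrative assumptions for this sketch, not the paper's exact formulation:

    # Warp a delayed frame toward the vehicle's estimated future viewpoint,
    # assuming the operator's commanded speed and yaw rate and the latency are known.
    import cv2
    import numpy as np

    def transform_frame(frame, speed_mps, yaw_rate_rps,
                        latency_s=0.9, px_per_rad=600.0, zoom_per_m=0.02):
        h, w = frame.shape[:2]
        pred_yaw = yaw_rate_rps * latency_s      # predicted heading change (rad)
        pred_dist = speed_mps * latency_s        # predicted forward travel (m)

        shift_px = -pred_yaw * px_per_rad        # sliding: pan against the turn
        zoom = 1.0 + zoom_per_m * pred_dist      # zooming: scale with forward motion

        # Affine warp about the image centre: scale by `zoom`, then translate.
        cx, cy = w / 2.0, h / 2.0
        M = np.float32([[zoom, 0.0, (1.0 - zoom) * cx + shift_px],
                        [0.0, zoom, (1.0 - zoom) * cy]])
        return cv2.warpAffine(frame, M, (w, h))

    # Example: warp a dummy frame as if the operator commanded 2 m/s and 0.1 rad/s.
    dummy = np.zeros((480, 640, 3), dtype=np.uint8)
    warped = transform_frame(dummy, speed_mps=2.0, yaw_rate_rps=0.1)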

    Hierarchical Salient Object Detection for Assisted Grasping

    Visual scene decomposition into semantic entities is one of the major challenges when creating a reliable object-grasping system. Recently, we introduced a bottom-up hierarchical clustering approach that is able to segment objects and parts in a scene. In this paper, we introduce a transform from such a segmentation into a corresponding hierarchical saliency function. In comprehensive experiments we demonstrate its ability to detect salient objects in a scene. Furthermore, this hierarchical saliency defines a most salient corresponding region (scale) for every point in an image. Based on this, an easy-to-use pick-and-place manipulation system was developed and tested on example tasks. Comment: Accepted for ICRA 201
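
    A minimal sketch of how a segmentation hierarchy might be turned into a per-pixel most-salient-scale map. The Region class and the colour-contrast score below are assumptions of this sketch, not the saliency transform introduced in the paper:

    # Score every region in a (hypothetical) segmentation hierarchy and record,
    # for each pixel, the saliency of its most salient enclosing region.
    import numpy as np

    class Region:
        def __init__(self, mask, children=()):
            self.mask = mask                # boolean HxW membership mask
            self.children = list(children)  # finer regions nested inside this one

    def colour_contrast(region, image):
        # Saliency as colour contrast between the region and the rest of the image.
        outside = ~region.mask
        if not outside.any():
            return 0.0
        return float(np.linalg.norm(image[region.mask].mean(axis=0)
                                    - image[outside].mean(axis=0)))

    def most_salient_scale(root, image):
        # Per-pixel saliency of the most salient region (scale) containing it.
        best = np.zeros(image.shape[:2], dtype=np.float32)
        stack = [root]
        while stack:
            node = stack.pop()
            s = colour_contrast(node, image)
            best[node.mask] = np.maximum(best[node.mask], s)
            stack.extend(node.children)
        return best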

    Stereo Viewing and Virtual Reality Technologies in Mobile Robot Teleguide

    DOI: 10.1109/TRO.2009.2028765. The use of 3-D stereoscopic visualization may provide a user with higher comprehension of remote environments in teleoperation when compared with 2-D viewing; in particular, better perception of environment depth characteristics, spatial localization, and remote ambient layout, as well as faster system learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improving the perception of some depth cues, often for abstract tasks, while it is hard to find works addressing stereoscopic visualization in mobile robot teleguide applications. This paper intends to contribute to this aspect by investigating stereoscopic robot teleguide under different conditions, including typical navigation scenarios and the use of synthetic and real images. The paper also investigates how user performance may vary when employing different display technologies. Results from a set of test trials run on seven virtual reality systems, from laptop to large panorama and from head-mounted display to Cave automatic virtual environment (CAVE), emphasized a few aspects that represent a basis for further investigation as well as a guide when designing specific systems for telepresence. Peer reviewed.
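
    A minimal sketch of how a stereo image pair could be set up for such a teleguide display, assuming a single robot-mounted reference camera pose given as a 4x4 transform; the 65 mm interocular distance and the pose representation are assumptions of this sketch, not details taken from the paper:

    # Derive left/right virtual camera poses by offsetting a reference camera
    # along its own x-axis; rendering from each pose yields the stereoscopic pair.
    import numpy as np

    def stereo_camera_poses(T_world_cam, interocular_m=0.065):
        half = interocular_m / 2.0
        left_offset = np.eye(4)
        left_offset[0, 3] = -half            # shift half the baseline to the left
        right_offset = np.eye(4)
        right_offset[0, 3] = +half           # and half to the right
        return T_world_cam @ left_offset, T_world_cam @ right_offset

    # Example: reference camera at the world origin.
    T_left, T_right = stereo_camera_poses(np.eye(4))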

    NASA space station automation: AI-based technology review. Executive summary

    Research and development projects in automation technology for the Space Station are described. Artificial Intelligence (AI)-based technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics.