    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, increase crew productivity through the reduction of routine operations, increase Space Station autonomy, and augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    An Immersive Telepresence System using RGB-D Sensors and Head Mounted Display

    We present a tele-immersive system that enables people to interact with each other in a virtual world using body gestures in addition to verbal communication. Beyond the obvious applications, including general online conversations and gaming, we hypothesize that our proposed system would be particularly beneficial to education by offering rich visual content and interactivity. One distinct feature is the integration of egocentric pose recognition, which allows participants to use their gestures to demonstrate and manipulate virtual objects simultaneously. This functionality enables the instructor to effectively and efficiently explain and illustrate complex concepts or sophisticated problems in an intuitive manner. The highly interactive and flexible environment can capture and sustain more student attention than the traditional classroom setting and thus delivers a compelling experience to the students. Our main focus here is to investigate possible solutions for the system design and implementation and to devise strategies for fast, efficient computation suitable for visual data processing and network transmission. We describe the technique and experiments in detail and provide quantitative performance results, demonstrating that our system runs comfortably and reliably in different application scenarios. Our preliminary results are promising and demonstrate the potential for more compelling directions in cyberlearning.
    Comment: IEEE International Symposium on Multimedia 201
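
    As a hedged illustration of the kind of fast visual-data processing and network transmission this abstract describes, the Python sketch below packs one RGB-D frame for sending over a network: the color image is compressed as JPEG and the 16-bit depth image as lossless PNG. The codec choices, frame sizes, and packet layout are assumptions for illustration, not the authors' actual pipeline.

        # Hypothetical sketch: compress one RGB-D frame for network transmission.
        # Codec choices (JPEG for color, lossless PNG for 16-bit depth) are
        # assumptions for illustration, not the paper's actual pipeline.
        import numpy as np
        import cv2

        def encode_rgbd(color: np.ndarray, depth: np.ndarray) -> bytes:
            """Pack one color (H,W,3 uint8) and depth (H,W uint16) frame."""
            ok_c, color_buf = cv2.imencode(".jpg", color, [cv2.IMWRITE_JPEG_QUALITY, 80])
            ok_d, depth_buf = cv2.imencode(".png", depth)  # PNG keeps 16-bit depth lossless
            assert ok_c and ok_d
            # Length-prefix each buffer so the receiver can split the stream.
            header = np.array([len(color_buf), len(depth_buf)], dtype=np.uint32).tobytes()
            return header + color_buf.tobytes() + depth_buf.tobytes()

        # Usage with synthetic data standing in for an RGB-D sensor frame:
        color = np.zeros((480, 640, 3), dtype=np.uint8)
        depth = np.full((480, 640), 1500, dtype=np.uint16)  # millimeters
        packet = encode_rgbd(color, depth)
        print(f"packet size: {len(packet)} bytes")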

    Design of a Head Movement Navigation System for Mobile Telepresence Robot Using Open-source Electronics Software and Hardware

    Head movement is frequently associated with human navigation and is an indispensable aspect of how humans interact with the surrounding environment. Nevertheless, the combination of head motion and navigation is used more often in virtual reality (VR) environments than in the physical environment. This study aims to develop a robot car capable of simple teleoperation, integrating telepresence and head-movement control for an on-robot, real-time head-motion-mimicking mechanism and directional control, in an attempt to give users an avatar-like, third-person point of view within the physical environment. The design consists of three processes running in parallel: Motion JPEG (MJPEG) live streaming to an HTML page via a local server; Bluetooth communication; and the corresponding movements of the head-motion-mimicking mechanism and drive motors, which act according to the head motion captured by the attitude sensor and the commands issued by the user, as sketched below. The design serves as a demonstration built from basic components and is not intended to investigate user experience.
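
    A hedged sketch of the directional-control idea follows: it maps attitude-sensor yaw and pitch angles to a drive command and a pan-servo angle for the mimicking mechanism. The angle thresholds, axis conventions, and function names are hypothetical choices for illustration, not taken from the paper.

        # Hypothetical mapping from attitude-sensor angles to robot commands.
        # Thresholds and axis conventions are assumptions for illustration.

        def head_to_commands(yaw_deg: float, pitch_deg: float,
                             dead_zone: float = 10.0) -> tuple[str, float]:
            """Return (drive_command, pan_servo_angle) for one sensor reading."""
            # Directional control: turn when yaw leaves the dead zone,
            # drive forward/backward on pitch, otherwise stop.
            if yaw_deg > dead_zone:
                drive = "RIGHT"
            elif yaw_deg < -dead_zone:
                drive = "LEFT"
            elif pitch_deg > dead_zone:
                drive = "FORWARD"
            elif pitch_deg < -dead_zone:
                drive = "BACKWARD"
            else:
                drive = "STOP"
            # Head mimicking: clamp yaw into the pan servo's 0-180 degree range.
            pan_angle = max(0.0, min(180.0, 90.0 + yaw_deg))
            return drive, pan_angle

        print(head_to_commands(25.0, 0.0))   # ('RIGHT', 115.0)
        print(head_to_commands(0.0, -15.0))  # ('BACKWARD', 90.0)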

    Robotics and augmented reality for elderly assistance

    This article presents a tele-assistance system for elderly people based on augmented reality and integrated in a mobile platform. We propose the use of augmented reality to simplify interaction with users. The first prototype has been designed to help with medication control for elderly people. In this paper, both the hardware and software architectures are described. The robotic platform is a slightly modified version of the Turtlebot platform. The software is based on ROS for platform control and on ArUco for the augmented reality interface. It also integrates other systems relevant to tele-assistance, such as VoIP and a user-friendly interface.
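
    As a hedged sketch of the ArUco side of such an interface, the Python snippet below detects markers in camera frames with OpenCV's aruco module (it assumes opencv-contrib-python 4.7 or later, where the ArucoDetector class replaced the older function-style API). The dictionary choice and the idea of anchoring medication reminders to marker IDs are illustrative assumptions, not the authors' design.

        # Hypothetical ArUco detection loop for a marker-based AR interface.
        # Assumes opencv-contrib-python >= 4.7 (ArucoDetector class API).
        import cv2

        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

        cap = cv2.VideoCapture(0)  # camera on the mobile platform
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            corners, ids, _ = detector.detectMarkers(frame)
            if ids is not None:
                # Overlay marker outlines; a real interface would anchor
                # AR content (e.g., medication reminders) to each marker ID.
                cv2.aruco.drawDetectedMarkers(frame, corners, ids)
            cv2.imshow("aruco", frame)
            if cv2.waitKey(1) == 27:  # Esc quits
                break
        cap.release()
        cv2.destroyAllWindows()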

    An Action-Based Approach to Presence: Foundations and Methods

    This chapter presents an action-based approach to presence. It starts by briefly describing the theoretical and empirical foundations of this approach, formalized into three key notions: place/space, action, and mediation. In the light of these notions, some common assumptions about presence are then questioned: assuming a neat distinction between virtual and real environments, taking for granted the contours of the mediated environment, and considering presence as a purely personal state. Some possible research topics opened up by adopting action as a unit of analysis are illustrated. Finally, a case study on driving as a form of mediated presence is discussed, to provocatively illustrate the flexibility of this approach as a unified framework for presence in digital and physical environments.

    Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration

    We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users' body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.
    National Science Foundation (U.S.), Graduate Research Fellowship Program (Grant No. 1122374).
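
    To make the shape-transmission concept concrete, here is a hedged sketch that converts a captured depth image into a coarse grid of quantized pin heights for a shape display; the grid resolution, working depth range, and normalization are illustrative assumptions rather than the system's actual pipeline.

        # Hypothetical depth-to-pin-height conversion for a shape display.
        # Grid resolution and depth range are assumptions for illustration.
        import numpy as np

        def depth_to_pins(depth_mm: np.ndarray, grid=(30, 30),
                          near=500.0, far=1500.0, levels=256) -> np.ndarray:
            """Downsample a depth map (H,W, millimeters) to quantized pin heights."""
            h, w = depth_mm.shape
            gh, gw = grid
            # Average depth within each pin's cell (block downsampling).
            blocks = depth_mm[: h - h % gh, : w - w % gw].astype(np.float64)
            blocks = blocks.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
            # Nearer surfaces raise pins higher; clamp to the working range.
            height = np.clip((far - blocks) / (far - near), 0.0, 1.0)
            return (height * (levels - 1)).astype(np.uint8)

        depth = np.full((480, 640), 1000, dtype=np.uint16)  # flat synthetic scene
        pins = depth_to_pins(depth)
        print(pins.shape, pins.min(), pins.max())  # (30, 30) 127 127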

    Automation and robotics for the Space Exploration Initiative: Results from Project Outreach

    A total of 52 submissions were received in the Automation and Robotics (A&R) area during Project Outreach. About half of the submissions (24) contained concepts judged to have high utility for the Space Exploration Initiative (SEI) and were analyzed further by the robotics panel; these 24 submissions are analyzed here. Three types of robots were proposed in the high-scoring submissions: structured task robots (STRs), teleoperated robots (TORs), and surface exploration robots. Several advanced TOR control interface technologies were proposed in the submissions. Many A&R concepts or potential standards were presented or alluded to by the submitters, but few specific technologies or systems were suggested.

    Efficient 3D Reconstruction, Streaming and Visualization of Static and Dynamic Scene Parts for Multi-client Live-telepresence in Large-scale Environments

    Despite the impressive progress of telepresence systems for room-scale scenes with static and dynamic scene entities, expanding their capabilities to larger dynamic environments beyond a fixed size of a few square meters remains challenging. In this paper, we aim to share 3D live-telepresence experiences in large-scale environments beyond room scale, with both static and dynamic scene entities, at practical bandwidth requirements, based only on lightweight scene capture with a single moving consumer-grade RGB-D camera. To this end, we present a system built upon a novel hybrid volumetric scene representation. Static content is stored in a voxel-based representation that holds the reconstructed surface geometry together with object semantics and their accumulated dynamic movement over time; dynamic scene parts are kept in a point-cloud-based representation, separated from the static parts using semantic and instance information extracted from the input frames. Static and dynamic content are streamed independently yet simultaneously: potentially moving but currently static scene entities are seamlessly integrated into the static model until they become dynamic again, and the two streams are fused at the remote client. With this design, our system achieves VR-based live-telepresence at close to real-time rates. Our evaluation demonstrates the potential of our novel approach in terms of visual quality and performance, together with ablation studies on the design choices involved.
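
    A hedged sketch of the static/dynamic separation idea follows: given per-point semantic labels for one frame, points from classes considered movable are routed to the dynamic point cloud, while the rest update a persistent static voxel model. The set of movable classes, voxel size, and label encoding are assumptions for illustration, not the paper's implementation.

        # Hypothetical static/dynamic split for a hybrid scene representation.
        # Movable classes, voxel size, and label encoding are assumptions.
        import numpy as np

        VOXEL_SIZE = 0.05         # meters; illustrative resolution
        MOVABLE_CLASSES = {1, 2}  # e.g., person, chair (hypothetical IDs)

        def split_frame(points: np.ndarray, classes: np.ndarray):
            """Route labeled points (N,3) to a dynamic cloud or static voxel keys."""
            dynamic_mask = np.isin(classes, list(MOVABLE_CLASSES))
            dynamic_points = points[dynamic_mask]
            # Static points are accumulated by integer voxel coordinate so the
            # remote client can update a persistent voxel model incrementally.
            static_voxels = np.unique(
                np.floor(points[~dynamic_mask] / VOXEL_SIZE).astype(np.int64), axis=0)
            return dynamic_points, static_voxels

        # Synthetic frame: 1000 points with random class labels in {0,1,2,3}.
        pts = np.random.rand(1000, 3) * 4.0
        cls = np.random.randint(0, 4, size=1000)
        dyn, stat = split_frame(pts, cls)
        print(dyn.shape, stat.shape)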