2,055 research outputs found

    Reflections on security options for the real-time transport protocol framework

    Get PDF
    The Real-time Transport Protocol (RTP) supports a range of video conferencing, telephony, and streaming video applications, but offers few native security features. We discuss the problem of securing RTP, considering its range of applications; we outline why this diversity makes RTP a difficult protocol to secure, and describe the approach we have recently proposed in the IETF to provide security for RTP applications. This approach treats RTP as a framework with a set of extensible security building blocks, and prescribes mandatory-to-implement security at the level of different application classes, rather than at the level of the media transport protocol.
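
    For orientation, the building blocks discussed above (SRTP among them) protect the RTP payload while the fixed header stays in the clear. A minimal Python sketch of that header layout follows, per RFC 3550; the function and field names are illustrative, not taken from the paper.

        import struct

        def parse_rtp_header(packet: bytes) -> dict:
            """Parse the 12-byte fixed RTP header (RFC 3550, section 5.1)."""
            if len(packet) < 12:
                raise ValueError("packet shorter than the fixed RTP header")
            b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
            return {
                "version":      b0 >> 6,        # always 2 for RTP
                "padding":      bool(b0 & 0x20),
                "extension":    bool(b0 & 0x10),
                "csrc_count":   b0 & 0x0F,
                "marker":       bool(b1 & 0x80),
                "payload_type": b1 & 0x7F,      # identifies the codec in use
                "sequence":     seq,            # for loss detection/reordering
                "timestamp":    ts,             # media sampling clock
                "ssrc":         ssrc,           # synchronization source id
            }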

    Securing the RTP framework: why RTP does not mandate a single media security solution

    Get PDF
    This memo discusses the problem of securing real-time multimedia sessions, and explains why the Real-time Transport Protocol (RTP), and the associated RTP control protocol (RTCP), do not mandate a single media security mechanism. Guidelines for designers and reviewers of future RTP extensions are provided, to ensure that appropriate security mechanisms are mandated, and that any such mechanisms are specified in a manner that conforms with the RTP architecture.

    A framework for realistic 3D tele-immersion

    Get PDF
    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, experiences that will feel much closer to face-to-face meetings than what conventional teleconferencing systems offer.

    Efficient 3D Reconstruction, Streaming and Visualization of Static and Dynamic Scene Parts for Multi-client Live-telepresence in Large-scale Environments

    Full text link
    Despite the impressive progress of telepresence systems for room-scale scenes with static and dynamic scene entities, expanding their capabilities to scenarios with larger dynamic environments beyond a fixed size of a few square meters remains challenging. In this paper, we aim at sharing 3D live-telepresence experiences in large-scale environments beyond room scale, with both static and dynamic scene entities, at practical bandwidth requirements, based only on light-weight scene capture with a single moving consumer-grade RGB-D camera. To this end, we present a system built upon a novel hybrid volumetric scene representation. It combines a voxel-based representation for the static contents, which stores not only the reconstructed surface geometry but also information about object semantics and their accumulated dynamic movement over time, with a point-cloud-based representation for dynamic scene parts, where the separation from static parts is achieved using semantic and instance information extracted from the input frames. Static and dynamic content are streamed independently yet simultaneously: potentially moving but currently static scene entities are seamlessly integrated into the static model until they become dynamic again, and static and dynamic data are fused at the remote client. As a result, our system achieves VR-based live-telepresence at close to real-time rates. Our evaluation demonstrates the potential of our novel approach in terms of visual quality, performance, and ablation studies regarding the involved design choices.
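
    As one concrete reading of the hybrid representation described above, here is a minimal Python sketch; the class layout, field names and the toy fusion step are our assumptions, not the paper's actual data structures.

        from dataclasses import dataclass, field

        import numpy as np

        @dataclass
        class StaticVoxel:
            """Static-model voxel: surface geometry plus semantics and a
            record of accumulated movement (hypothetical field layout)."""
            tsdf: float            # truncated signed distance to the surface
            weight: float          # integration weight
            semantic_label: int    # e.g. wall, chair, person
            instance_id: int       # object instance the voxel belongs to
            motion_count: int = 0  # how often this voxel was seen moving

        @dataclass
        class HybridScene:
            """Voxel grid for static content, point cloud for dynamic parts."""
            static_voxels: dict = field(default_factory=dict)  # (i,j,k) -> StaticVoxel
            dynamic_points: np.ndarray = None                  # (N, 6) xyz + rgb

            def integrate_frame(self, points, labels, instances, dynamic_mask):
                """Split one frame: dynamic points are kept as a point cloud,
                static points are (toy-)fused into the voxel grid."""
                self.dynamic_points = points[dynamic_mask]
                voxel_size = 0.05  # metres; arbitrary for this sketch
                for p, lab, inst in zip(points[~dynamic_mask],
                                        labels[~dynamic_mask],
                                        instances[~dynamic_mask]):
                    key = tuple((p[:3] // voxel_size).astype(int))
                    self.static_voxels[key] = StaticVoxel(
                        tsdf=0.0, weight=1.0,
                        semantic_label=int(lab), instance_id=int(inst))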

    Real-time WebRTC-based design for a telepresence wheelchair

    Full text link
    © 2017 IEEE. This paper presents a novel approach to a telepresence wheelchair system capable of real-time video communication and remote interaction. The investigation of this emerging technology aims at providing a low-cost and efficient assisted-living aid for people with disabilities. The proposed system has been designed and developed using JavaScript with Hyper Text Markup Language 5 (HTML5) and Web Real-time Communication (WebRTC), in which an adaptive rate control algorithm for video transmission is invoked. We conducted experiments in real-world environments, controlling the wheelchair from a distance through a web browser, and compared the results with existing methods. The results show that the adaptively encoded video streaming rate matches the available bandwidth: the video streaming is high quality, at approximately 30 frames per second (fps), with a round-trip time of less than 20 milliseconds (ms). These performance results confirm that WebRTC is a promising approach for developing a telepresence wheelchair system.
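
    The abstract does not spell out the adaptive rate control algorithm, so the toy additive-increase/multiplicative-decrease loop below only shows the general shape of such a controller; WebRTC's built-in congestion control is delay- and loss-based, and all names and thresholds here are illustrative.

        def adapt_bitrate(current_kbps, estimated_bw_kbps, loss_fraction,
                          min_kbps=100, max_kbps=2500):
            """Toy AIMD controller: back off on loss, otherwise probe upward,
            and never exceed the current bandwidth estimate."""
            if loss_fraction > 0.10:      # heavy loss: cut the rate hard
                target = current_kbps * 0.5
            elif loss_fraction > 0.02:    # mild loss: ease off gently
                target = current_kbps * 0.95
            else:                         # clean channel: probe upward
                target = current_kbps + 50
            target = min(target, estimated_bw_kbps)
            return max(min_kbps, min(max_kbps, target))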

    Wearable performance

    Get PDF
    This is the post-print version of the article; the official published version is © 2009 Taylor & Francis. Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology and the growth of wireless, ubiquitous computing emphasises not only mobility but integration into clothing or the human body. In artistic contexts one expects such integrated wearable devices to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or for various aesthetic and compositional expressions. 'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes display and performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab, which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from DAP-Lab's work with telepresence and distributed media, addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.

    Design of a Head Movement Navigation System for Mobile Telepresence Robot Using Open-source Electronics Software and Hardware

    Get PDF
    Head movement is frequently associated with human motion navigation and is an indispensable aspect of how humans interact with the surrounding environment. In spite of that, the incorporation of head motion into navigation is more often found in VR (Virtual Reality) environments than in the physical environment. This study aims to develop a robot car capable of simple teleoperation, incorporating telepresence and head-movement control for an on-robot real-time head-motion mimicking mechanism and directional control, in an attempt to give users the experience of an avatar-like third-person point of view in the physical environment. The design consists of three processes running in parallel: Motion JPEG (MJPEG) live streaming to an HTML site via a local server; Bluetooth communication; and the corresponding movements of the head-motion mimicking mechanism and motors, which act in accordance with the head motion captured by the attitude sensor and the commands issued by the user. The design serves as a demonstration built from basic components and does not aim to provide or investigate user experience.
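
    As an illustration of the head-motion mimicking step, a small Python sketch that maps attitude-sensor yaw/pitch readings onto pan/tilt servo angles; the ranges and the serial command format are assumptions, not taken from the study.

        def head_to_servo(yaw_deg, pitch_deg,
                          yaw_range=(-90, 90), pitch_range=(-45, 45)):
            """Clamp attitude-sensor yaw/pitch to the pan/tilt mechanism's
            travel and map them to standard hobby-servo angles (0-180)."""
            def clamp(v, lo, hi):
                return max(lo, min(hi, v))

            yaw = clamp(yaw_deg, *yaw_range)
            pitch = clamp(pitch_deg, *pitch_range)
            pan = (yaw - yaw_range[0]) * 180 / (yaw_range[1] - yaw_range[0])
            tilt = (pitch - pitch_range[0]) * 180 / (pitch_range[1] - pitch_range[0])
            return round(pan), round(tilt)

        # A command such as "P%dT%d" % head_to_servo(30, -10) could then be
        # written to the Bluetooth serial link (wire format assumed here).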

    Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots

    Get PDF
    This paper presents and evaluates a novel approach to integrating a non-invasive Brain-Computer Interface (BCI) with the Robot Operating System (ROS) to mentally drive a telepresence robot. Controlling a mobile device using human brain signals may improve the quality of life of people suffering from severe physical disabilities, or of elderly people who can no longer move. The BCI user is able to actively interact with relatives and friends located in different rooms thanks to a video streaming connection to the robot. To facilitate the control of the robot via BCI, we explore new ROS-based algorithms for navigation and obstacle avoidance, making the system safer and more reliable. In this regard, the robot can exploit two maps of the environment, one for localization and one for navigation, and both can also be used by the BCI user to watch the position of the robot while it is moving. As demonstrated by the experimental results, the user's cognitive workload is reduced, decreasing the number of commands necessary to complete the task and helping him/her to maintain attention for longer periods of time. Comment: Accepted in the Proceedings of the 2018 IEEE International Conference on Robotics and Automation.
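
    A minimal sketch of how such a BCI-to-ROS bridge might look in rospy; topic names, message types and velocity values are our assumptions, not the paper's actual interface.

        #!/usr/bin/env python
        # Hypothetical BCI-to-ROS bridge; all names below are assumed.
        import rospy
        from std_msgs.msg import String
        from geometry_msgs.msg import Twist

        cmd_pub = None

        def on_bci_command(msg):
            """Translate a discrete BCI command into a velocity setpoint;
            the navigation/obstacle-avoidance stack refines it downstream."""
            twist = Twist()
            if msg.data == "forward":
                twist.linear.x = 0.2       # m/s
            elif msg.data == "left":
                twist.angular.z = 0.5      # rad/s
            elif msg.data == "right":
                twist.angular.z = -0.5
            cmd_pub.publish(twist)

        if __name__ == "__main__":
            rospy.init_node("bci_teleop")
            cmd_pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
            rospy.Subscriber("bci/command", String, on_bci_command)
            rospy.spin()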
