RTP/I Payload Type Definition for Chat Tools
This document specifies an application-level protocol (i.e., payload type) for chat tools using the Real Time Protocol for Distributed Interactive Media (RTP/I). RTP/I defines a standardized framing for the transmission of application data and provides protocol mechanisms that are universally needed for the class of distributed interactive media. A chat tool provides an instant messaging service among an arbitrary number of users. This document specifies how to employ a chat tool with RTP/I and defines application data units (ADUs) for chat operations. This protocol definition allows standardized collaboration between different chat implementations.
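The abstract does not reproduce the ADU layout itself, so the following is only a minimal sketch of what a typed, framed chat ADU could look like; the operation code, field widths, and field names are assumptions for illustration, not the layout defined in the draft.

```python
import struct

# Assumed operation code for "post message"; the real codes are defined
# in the RTP/I chat payload specification, not here.
MSG = 0x01

def encode_chat_adu(op, sender_id, text):
    """Frame one hypothetical chat ADU: 1-byte op code, 4-byte sender id,
    2-byte payload length, then the UTF-8 message (network byte order)."""
    body = text.encode("utf-8")
    return struct.pack("!BIH", op, sender_id, len(body)) + body

def decode_chat_adu(data):
    """Inverse of encode_chat_adu: parse the fixed header, then the body."""
    op, sender_id, length = struct.unpack("!BIH", data[:7])
    return op, sender_id, data[7:7 + length].decode("utf-8")
```

A receiver would dispatch on the operation code to apply the chat operation locally, which is what makes interoperation between independent chat implementations possible.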
RTP/I Payload Type Definition for Hand-Raising Tools
This document specifies an application-level protocol (i.e., payload type) for hand-raising tools using the Real Time Protocol for Distributed Interactive Media (RTP/I). RTP/I defines a standardized framing for the transmission of application data and provides protocol mechanisms that are universally needed for the class of distributed interactive media. A hand-raising tool can support collaboration between spatially separated users. In a video conference, for example, a hand-raising tool can be used to coordinate different speakers. This document specifies how to employ a hand-raising tool with RTP/I and defines application data units (ADUs) for hand-raising tool operations. This protocol definition allows standardized collaboration between different hand-raising tool implementations.
RTP/I Payload Type Definition for Application Launch Tools
This document specifies an application-level protocol (i.e., payload type) for application launch tools using the Real-Time Protocol for Distributed Interactive Media (RTP/I). RTP/I defines a standardized framing for the transmission of application data and provides protocol mechanisms that are universally needed for the class of distributed interactive media. An application launch tool is used to synchronously start applications in collaborative environments, i.e., a participant can trigger the simultaneous execution of a program at all involved sites. This document specifies how to employ an application launch tool with RTP/I and defines application data units (ADUs) for application launch tool operations. This protocol definition allows standardized communication between different application launch tool implementations.
RTP/I Payload Type Definition for Feedback Tools
This document specifies an application-level protocol (i.e., payload type) for feedback tools using the Real Time Protocol for Distributed Interactive Media (RTP/I). RTP/I defines a standardized framing for the transmission of application data and provides protocol mechanisms that are universally needed for the class of distributed interactive media. A feedback tool is used in synchronous collaborative environments for permanent feedback about certain criteria (e.g., audio quality). This document specifies how to employ a feedback tool with RTP/I and defines application data units (ADUs) for feedback tool operations. This protocol definition allows standardized communication between different feedback tool implementations.
RTP/I Payload Type Definition for Telepointers
This document specifies an application-level protocol (i.e., payload type) for telepointers using the Real Time Protocol for Distributed Interactive Media (RTP/I). RTP/I defines a standardized framing for the transmission of application data and provides protocol mechanisms that are universally needed for the class of distributed interactive media. A telepointer creates a common point of reference in distributed (i.e., multi-user) applications by visualizing mouse movements of remote session participants. Telepointers are used in conjunction with other distributed interactive media such as shared whiteboards and distributed virtual environments. This document specifies how to employ two-dimensional telepointers with RTP/I and defines application data units (ADUs) for telepointer operations. This protocol definition allows standardized collaboration between different telepointer implementations.
Bounding inconsistency using a novel threshold metric for dead reckoning update packet generation
Human-to-human interaction across distributed applications requires that sufficient consistency be maintained among participants in the face of network characteristics such as latency and limited bandwidth. The level of inconsistency arising from the network is proportional to the network delay, and thus a function of bandwidth consumption. Distributed simulation has often used a bandwidth reduction technique known as dead reckoning that combines approximation and estimation in the communication of entity movement to reduce network traffic, and thus improve consistency. However, unless carefully tuned to application and network characteristics, such an approach can introduce more inconsistency than it avoids. The key tuning metric is the distance threshold. This paper questions the suitability of the standard distance threshold as a metric for use in the dead reckoning scheme. Using a model relating entity path curvature and inconsistency, a major performance-related limitation of the distance threshold technique is highlighted. We then propose an alternative time-space threshold criterion. The time-space threshold is demonstrated, through simulation, to perform better for low curvature movement. However, it too has a limitation. Based on this, we further propose a novel hybrid scheme. Through simulation and live trials, this scheme is shown to perform well across a range of curvature values, and places bounds on both the spatial and absolute inconsistency arising from dead reckoning.
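The two threshold criteria discussed above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the distance threshold fires on instantaneous spatial error between the true and extrapolated positions, while a time-space criterion (as sketched here under assumed semantics) fires when spatial error accumulated over time exceeds a budget.

```python
import math

def dead_reckon(last_pos, last_vel, dt):
    """Linear extrapolation of an entity's position from its last update,
    as performed by remote hosts between update packets."""
    return tuple(p + v * dt for p, v in zip(last_pos, last_vel))

def distance_threshold(true_pos, predicted_pos, threshold):
    """Standard criterion: send an update only when the remote prediction
    has drifted further than `threshold` from the true position."""
    return math.dist(true_pos, predicted_pos) > threshold

class TimeSpaceThreshold:
    """Sketch of a time-space criterion (semantics assumed): trigger an
    update when the integral of spatial error over time exceeds a budget,
    bounding accumulated rather than instantaneous inconsistency."""

    def __init__(self, budget):
        self.budget = budget
        self.accum = 0.0

    def step(self, true_pos, predicted_pos, dt):
        self.accum += math.dist(true_pos, predicted_pos) * dt
        if self.accum > self.budget:
            self.accum = 0.0  # reset after an update is sent
            return True
        return False
```

Under the distance criterion, a small but persistent error never triggers an update; the accumulated criterion eventually does, which is the intuition behind bounding absolute inconsistency.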
Autonomic Control for Quality Collaborative Video Viewing
We present an autonomic controller for quality collaborative video viewing, which allows groups of geographically dispersed users with different network and computer resources to view a video in synchrony while optimizing the video quality experienced. The autonomic controller is used within a tool for enhancing distance learning with synchronous group review of online multimedia material. The autonomic controller monitors video state at the clients' end, and adapts the quality of the video according to the resources of each client in (soft) real time. Experimental results show that the autonomic controller successfully synchronizes video for small groups of distributed clients and, at the same time, enhances the video quality experienced by users, in conditions of fluctuating bandwidth and variable frame rate.
Adaptive Synchronization of Semantically Compressed Instructional Videos for Collaborative Distance Learning
The increasing popularity of online courses has highlighted the need for collaborative learning tools for student groups. In addition, the introduction of lecture videos into the online curriculum has drawn attention to the disparity in the network resources available to students. We present an e-Learning architecture and adaptation model called AI2TV (Adaptive Interactive Internet Team Video), which allows groups of students to collaboratively view a video in synchrony. AI2TV upholds the invariant that each student will view semantically equivalent content at all times. A semantic compression model is developed to provide instructional videos at different levels of detail to accommodate dynamic network conditions and users' system requirements. We take advantage of the semantic compression algorithm's ability to provide different layers of semantically equivalent video by adapting the client to play at the appropriate layer that provides the client with the richest possible viewing experience. Video player actions, like play, pause and stop, can be initiated by any group member and the results of those actions are synchronized with all the other students. These features allow students to review a lecture video in tandem, facilitating the learning process. Experimental trials show that AI2TV successfully synchronizes instructional videos for distributed students while concurrently optimizing the video quality, even under conditions of fluctuating bandwidth, by adaptively adjusting the quality level for each student while still maintaining the invariant.
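The layer-selection idea described above can be sketched as follows. The layer names, bitrates, and headroom factor are assumptions for illustration, not values from the paper: each client plays the richest semantic layer its measured bandwidth can sustain, so all clients see semantically equivalent content while per-client quality varies.

```python
# Assumed bitrates (kbit/s) for three semantic layers of the same video;
# the actual layer structure comes from the semantic compression model.
LAYER_BITRATE = {0: 100, 1: 300, 2: 800}

def select_layer(measured_kbps, headroom=0.8):
    """Pick the richest layer sustainable at ~80% of measured bandwidth,
    leaving headroom so playback stays in synchrony with the group."""
    usable = measured_kbps * headroom
    best = 0
    for layer, rate in sorted(LAYER_BITRATE.items()):
        if rate <= usable:
            best = layer
    return best
```

Because every layer is semantically equivalent, switching layers changes only fidelity, which is what preserves the invariant that all students see equivalent content at all times.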
Optimizing Quality for Collaborative Video Viewing
The increasing popularity of distance learning and online courses has highlighted the lack of collaborative tools for student groups. In addition, the introduction of lecture videos into the online curriculum has drawn attention to the disparity in the network resources available to students. We present an architecture and adaptation model called AI2TV (Adaptive Internet Interactive Team Video), a system that allows geographically dispersed participants, possibly some or all disadvantaged in network resources, to collaboratively view a video in synchrony. AI2TV upholds the invariant that each participant will view semantically equivalent content at all times. Video player actions, like play, pause and stop, can be initiated by any of the participants and the results of those actions are seen by all the members. These features allow group members to review a lecture video in tandem to facilitate the learning process. We employ an autonomic (feedback loop) controller that monitors clients' video status and adjusts the quality of the video according to the resources of each client. We show in experimental trials that our system can successfully synchronize video for distributed clients while, at the same time, optimizing the video quality given actual (fluctuating) bandwidth by adaptively adjusting the quality level for each participant.
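One plausible shape for the feedback-loop controller mentioned above is sketched below; the function name, the drift/slack parameters, and the up/down policy are assumptions for illustration, not the paper's actual control law. Each cycle compares a client's playback position against the group reference and nudges the quality level down when the client lags, up when it keeps pace.

```python
def control_step(level, drift, max_level, slack=1.0):
    """One controller cycle for a single client (hypothetical policy).
    `drift` is the group reference time minus the client's playback time,
    in seconds; positive drift means the client is falling behind."""
    if drift > slack and level > 0:
        return level - 1   # client lags: lower quality to catch up
    if drift < slack / 2 and level < max_level:
        return level + 1   # client keeps up: try a richer layer
    return level           # within tolerance: hold the current level
```

Running such a step periodically for every client is a simple way to pursue both goals at once: synchrony (via the drift term) and the best sustainable quality (via the upgrade branch).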