Adaptive Synchronization of Semantically Compressed Instructional Videos for Collaborative Distance Learning
The increasing popularity of online courses has highlighted the need for collaborative learning tools for student groups. In addition, the introduction of lecture videos into the online curriculum has drawn attention to the disparity in the network resources available to students. We present an e-Learning architecture and adaptation model called AI2TV (Adaptive Interactive Internet Team Video), which allows groups of students to collaboratively view a video in synchrony. AI2TV upholds the invariant that each student will view semantically equivalent content at all times. A semantic compression model is developed to provide instructional videos at different levels of detail to accommodate dynamic network conditions and users' system requirements. We take advantage of the semantic compression algorithm's ability to provide different layers of semantically equivalent video by adapting the client to play at the appropriate layer that provides the client with the richest possible viewing experience. Video player actions, like play, pause and stop, can be initiated by any group member, and the results of those actions are synchronized with all the other students. These features allow students to review a lecture video in tandem, facilitating the learning process. Experimental trials show that AI2TV successfully synchronizes instructional videos for distributed students while concurrently optimizing the video quality, even under conditions of fluctuating bandwidth, by adaptively adjusting the quality level for each student while still maintaining the invariant.
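The layer-adaptation step this abstract describes, playing the richest semantically equivalent layer the client's bandwidth can sustain, can be sketched roughly as follows. The layer bitrates, function name, and headroom factor are illustrative assumptions, not details from AI2TV itself:

```python
# Hypothetical sketch: pick the highest semantic-compression layer a
# client's measured bandwidth can sustain. Layer 0 is the coarsest;
# higher layers add detail. Because every layer is semantically
# equivalent, any choice preserves the synchronization invariant.

LAYER_BITRATES_KBPS = [64, 128, 256, 512]  # illustrative values

def select_layer(measured_kbps, headroom=0.8):
    """Return the richest layer index whose bitrate fits within the
    measured bandwidth, leaving headroom for fluctuation."""
    budget = measured_kbps * headroom
    best = 0  # the coarsest layer is assumed always playable
    for i, rate in enumerate(LAYER_BITRATES_KBPS):
        if rate <= budget:
            best = i
    return best
```

A client measuring 400 kbps would select layer 2 (256 kbps) under these assumed figures, dropping to layer 0 when bandwidth collapses rather than stalling out of sync.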
Optimizing Quality for Collaborative Video Viewing
The increasing popularity of distance learning and online courses has highlighted the lack of collaborative tools for student groups. In addition, the introduction of lecture videos into the online curriculum has drawn attention to the disparity in the network resources used by the students. We present an architecture and adaptation model called AI2TV (Adaptive Internet Interactive Team Video), a system that allows geographically dispersed participants, possibly some or all disadvantaged in network resources, to collaboratively view a video in synchrony. AI2TV upholds the invariant that each participant will view semantically equivalent content at all times. Video player actions, like play, pause and stop, can be initiated by any of the participants, and the results of those actions are seen by all the members. These features allow group members to review a lecture video in tandem to facilitate the learning process. We employ an autonomic (feedback loop) controller that monitors clients' video status and adjusts the quality of the video according to the resources of each client. We show in experimental trials that our system can successfully synchronize video for distributed clients while, at the same time, optimizing the video quality given actual (fluctuating) bandwidth by adaptively adjusting the quality level for each participant.
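The autonomic controller mentioned above monitors each client's playback status and steers its quality level. A minimal feedback-loop sketch follows; the drift threshold and the one-step up/down policy are assumptions for illustration, not the paper's actual controller:

```python
# Hypothetical feedback-loop quality controller: compare a client's
# playback position against the group reference and step the quality
# level down when the client lags (starved for bandwidth) or up when
# it keeps pace comfortably.

def adjust_quality(level, drift_s, max_level, lag_threshold=2.0):
    """One control iteration.

    level     -- client's current quality level (0 = coarsest)
    drift_s   -- seconds the client lags behind the group reference
    max_level -- richest available quality level
    """
    if drift_s > lag_threshold and level > 0:
        return level - 1          # falling behind: lighten the stream
    if drift_s < lag_threshold / 2 and level < max_level:
        return level + 1          # keeping up: try a richer level
    return level                  # within tolerance: hold steady
```

Running this per client keeps each participant at an individually sustainable level while the synchronization layer holds everyone on semantically equivalent content.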
A cognitive approach to user perception of multimedia quality: An empirical investigation
Whilst multimedia technology has been one of the main contributing factors behind the Web's success, delivery of personalized multimedia content has been a desire seldom achieved in practice. Moreover, the perspective adopted is rarely viewed from a cognitive styles standpoint, notwithstanding the fact that they have significant effects on users' preferences with respect to the presentation of multimedia content. Indeed, research has thus far neglected to examine the effect of cognitive styles on users' subjective perceptions of multimedia quality. This paper aims to examine the relationships between users' cognitive styles, the multimedia quality of service delivered by the underlying network, and users' quality of perception (understood as both enjoyment and informational assimilation) associated with the viewed multimedia content. Results from the empirical study reported here show that all users, regardless of cognitive style, have higher levels of understanding of informational content in multimedia video clips (represented in our study by excerpts from television programmes) with weak dynamism, but that they enjoy moderately dynamic clips most. Additionally, multimedia content was found to significantly influence users' levels of understanding and enjoyment. Surprisingly, our study highlighted the fact that Bimodal users prefer to draw on visual sources for informational purposes, and that the presence of text in multimedia clips has a detrimental effect on the knowledge acquisition of all three cognitive style groups.
Immersive interconnected virtual and augmented reality : a 5G and IoT perspective
Despite remarkable advances, current augmented and virtual reality (AR/VR) applications are a largely individual and local experience. Interconnected AR/VR, where participants can virtually interact across vast distances, remains a distant dream. The great barrier that stands between current technology and such applications is the stringent end-to-end latency requirement, which should not exceed 20 ms in order to avoid motion sickness and other discomforts. Bringing AR/VR to the next level to enable immersive interconnected AR/VR will require significant advances towards 5G ultra-reliable low-latency communication (URLLC) and a Tactile Internet of Things (IoT). In this article, we articulate the technical challenges of a future AR/VR end-to-end architecture that combines 5G URLLC and Tactile IoT technology to support this next generation of interconnected AR/VR applications. Through the use of IoT sensors and actuators, AR/VR applications will be aware of the environmental and user context, supporting human-centric adaptations of the application logic, and lifelike interactions with the virtual environment. We present potential use cases and the required technological building blocks. For each of them, we delve into the current state of the art and challenges that need to be addressed before the dream of remote AR/VR interaction can become reality.
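The 20 ms bound cited above is an end-to-end budget shared by every stage of the motion-to-photon pipeline, which makes it useful to check as simple arithmetic. The per-stage figures below are illustrative assumptions, not measurements from the article:

```python
# Illustrative latency budget for interconnected AR/VR: the sum of
# all pipeline stages must stay under the ~20 ms end-to-end bound
# needed to avoid motion sickness.

BUDGET_MS = 20.0

stages_ms = {              # assumed per-stage figures, for illustration
    "sensing":        2.0,
    "encoding":       3.0,
    "uplink (URLLC)": 1.0,
    "edge rendering": 7.0,
    "downlink":       1.0,
    "decode+display": 4.0,
}

total = sum(stages_ms.values())
print(f"total {total:.1f} ms / budget {BUDGET_MS:.0f} ms "
      f"-> {'OK' if total <= BUDGET_MS else 'over budget'}")
```

Even these optimistic figures leave only 2 ms of slack, which is why the article argues that wide-area interconnection needs URLLC-class transport rather than conventional network paths.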
Quality of Service Controlled Multimedia Transport Protocol
This research looks at the design of an open transport protocol that supports a range of services including multimedia over low data-rate networks. Low data-rate multimedia applications require a system that provides quality of service (QoS) assurance and flexibility. One promising field is the area of content-based coding. Content-based systems use an array of protocols to select the optimum set of coding algorithms. A content-based transport protocol integrates a content-based application to a transmission network.
General transport protocols form a bottleneck in low data-rate multimedia communications by limiting throughput or by not maintaining timing requirements. This work presents an original model of a transport protocol that eliminates the bottleneck by introducing a flexible yet efficient algorithm that uses an open approach to flexibility and holistic architecture to promote QoS. The flexibility and transparency come in the form of a fixed syntax that provides a set of transport protocol semantics. The media QoS is maintained by defining a generic descriptor. Overall, the structure of the protocol is based on a single adaptable algorithm that supports application independence, network independence and quality of service.
The transport protocol was evaluated through a set of assessments: off-line; off-line for a specific application; and on-line for a specific application. Application contexts used MPEG-4 test material, where the on-line assessment used a modified MPEG-4 player. The performance of the QoS controlled transport protocol is often better than other schemes when appropriate QoS controlled management algorithms are selected. This is shown first for an off-line assessment where the performance is compared between the QoS controlled multiplexer, an emulated MPEG-4 FlexMux multiplexer scheme, and the target requirements. The performance is also shown to be better in a real environment when the QoS controlled multiplexer is compared with the real MPEG-4 FlexMux scheme.
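The descriptor-driven multiplexing the thesis evaluates can be pictured as a toy scheduler that picks the next unit to send from each stream's generic QoS descriptor. The descriptor fields (priority, deadline) and the function below are illustrative assumptions, not the thesis's actual fixed syntax:

```python
# Toy sketch of descriptor-driven multiplexing: each queued unit
# carries QoS descriptor fields (priority, deadline), and the
# multiplexer sends the most important, most urgent unit first.

import heapq

def multiplex(units):
    """units: list of (priority, deadline_ms, stream_id) tuples,
    where a lower priority number is more important. Returns the
    send order of stream ids."""
    heap = list(units)
    heapq.heapify(heap)           # orders by (priority, deadline_ms)
    order = []
    while heap:
        _, _, stream = heapq.heappop(heap)
        order.append(stream)
    return order
```

Scheduling by descriptor rather than by a hard-coded stream layout is what lets a single adaptable algorithm serve different applications and networks.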
A Framework for Controlling Quality of Sessions in Multimedia Systems
Collaborative multimedia systems demand overall session quality control beyond the level of quality of service (QoS) pertaining to individual connections in isolation of others. At every instant in time, the quality of the session depends on the actual QoS offered by the system to each of the application streams, as well as on the relative priorities of these streams according to the application semantics. We introduce a framework for achieving QoSess control and address the architectural issues involved in designing a QoSess control layer that realizes the proposed framework. In addition, we detail our contributions for two main components of the QoSess control layer. The first component is a scalable and robust feedback protocol, which allows for determining the worst case state among a group of receivers of a stream. This mechanism is used for controlling the transmission rates of multimedia sources in both cases of layered and single-rate multicast streams. The second component is a set of inter-stream adaptation algorithms that dynamically control the bandwidth shares of the streams belonging to a session. Additionally, in order to ensure stability and responsiveness in the inter-stream adaptation process, several measures are taken, including devising a domain rate control protocol. The performance of the proposed mechanisms is analyzed and their advantages are demonstrated by simulation and experimental results.
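The feedback protocol's job, determining the worst-case state among a stream's receivers and rate-controlling the source to it, can be sketched as below. The report fields, loss tolerance, and backoff factor are assumptions for illustration, not the paper's protocol:

```python
# Hypothetical worst-case feedback aggregation: collect receiver
# reports, take the lowest sustainable rate and the highest loss,
# and cap the source's transmission rate accordingly.

def worst_case(reports):
    """reports: list of dicts with 'rate_kbps' and 'loss' per receiver.
    Returns the worst-case state the source should adapt to."""
    return {
        "rate_kbps": min(r["rate_kbps"] for r in reports),
        "loss":      max(r["loss"] for r in reports),
    }

def source_rate(current_kbps, reports, backoff=0.9):
    """Cap the sending rate at the slowest receiver; back off further
    if any receiver reports noticeable loss."""
    wc = worst_case(reports)
    rate = min(current_kbps, wc["rate_kbps"])
    if wc["loss"] > 0.02:         # assumed loss tolerance
        rate *= backoff
    return rate
```

Driving the rate from the worst-case receiver is what makes the scheme usable for single-rate multicast, where every receiver must be able to keep up with the one transmitted stream.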
An adaptive framework for end-to-end quality of service management
An intelligent surveillance platform for large metropolitan areas with dense sensor deployment
This paper presents an intelligent surveillance platform based on the usage of
large numbers of inexpensive sensors designed and developed inside the European Eureka
Celtic project HuSIMS. With the aim of maximizing the number of deployable units while
keeping monetary and resource/bandwidth costs at a minimum, the surveillance platform is
based on the usage of inexpensive visual sensors which apply efficient motion detection
and tracking algorithms to transform the video signal into a set of motion parameters. In
order to automate the analysis of the myriad of data streams generated by the visual
sensors, the platform’s control center includes an alarm detection engine which comprises
three components applying three different Artificial Intelligence strategies in parallel.
These strategies are generic, domain-independent approaches which are able to operate in
several domains (traffic surveillance, vandalism prevention, perimeter security, etc.). The
architecture is completed with a versatile communication network which facilitates data
collection from the visual sensors and alarm and video stream distribution towards the
emergency teams. The resulting surveillance system is well suited for deployment in metropolitan areas, smart cities, and large facilities, mainly because cheap visual sensors and autonomous alarm detection facilitate dense sensor network deployments for wide and detailed coverage.
This work was supported by the Ministerio de Industria, Turismo y Comercio, the Fondo de Desarrollo Regional (FEDER), and the Israeli Chief Scientist Research Grant 43660, inside the European Eureka Celtic project HuSIMS (TSI-020400-2010-102).
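The sensors' core transformation, turning video frames into a compact set of motion parameters, can be sketched with simple frame differencing. The parameter set below (changed-pixel fraction plus motion centroid) is illustrative; the HuSIMS sensors apply their own detection and tracking algorithms:

```python
# Illustrative frame-differencing motion detector: rather than
# streaming video, a sensor emits compact motion parameters (the
# fraction of changed pixels and the centroid of the motion region),
# which is what keeps per-sensor bandwidth costs low.

def motion_parameters(prev, curr, threshold=30):
    """prev, curr: 2-D lists of grayscale pixel values (0-255).
    Returns (motion_fraction, centroid) or (0.0, None) if static."""
    changed = [(y, x)
               for y, row in enumerate(curr)
               for x, v in enumerate(row)
               if abs(v - prev[y][x]) > threshold]
    if not changed:
        return 0.0, None
    total = len(curr) * len(curr[0])
    cy = sum(y for y, _ in changed) / len(changed)
    cx = sum(x for _, x in changed) / len(changed)
    return len(changed) / total, (cy, cx)
```

A few numbers per frame instead of a video stream is what makes dense, city-scale sensor deployments affordable in bandwidth as well as in hardware.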