
    Dynamic adaptation of streamed real-time E-learning videos over the internet

    Even though e-learning is becoming increasingly popular in the academic environment, the quality of synchronous e-learning video is still substandard, and significant work needs to be done to improve it. The improvements have to take into consideration both the network requirements and the psycho-physical aspects of the human visual system. One of the problems of synchronous e-learning video is that mostly a head-and-shoulders video of the instructor is transmitted. This presentation can be made more interesting by transmitting shots from different angles and zooms. Unfortunately, the transmission of such multi-shot videos will increase packet delay, jitter and other artifacts caused by frequent scene changes. To some extent these problems may be reduced by a controlled reduction of video quality, so as to minimise uncontrolled corruption of the stream. Hence, there is a need for controlled streaming of a multi-shot e-learning video in response to the changing availability of bandwidth, while utilising the available bandwidth to the maximum. The quality of the transmitted video can be improved by removing redundant background data and utilising the available bandwidth to send high-resolution foreground information. While a number of schemes exist to identify and remove the background from the foreground, very few studies base this separation on an understanding of the human visual system. Research has been carried out to define foreground and background in the context of e-learning video on the basis of human psychology, and the results have been utilised to propose methods for improving the transmission of e-learning videos. In order to transmit the video sequence efficiently, this research proposes the use of Feed-Forward Controllers that dynamically characterise the ongoing scene and adjust the streaming of the video according to the available bandwidth. In order to satisfy a number of receivers connected by varied bandwidth links in a heterogeneous environment, the use of a Multi-Layer Feed-Forward Controller has been researched. This controller dynamically characterises the complexity (number of macroblocks per frame) of the ongoing video sequence and combines it with knowledge of the bandwidth available to the various receivers to divide the video sequence into layers in an optimal way before transmitting it into the network. The Single-Layer Feed-Forward Controller takes as input the complexity (Spatial Information and Temporal Information) of the ongoing video sequence along with the bandwidth available to a receiver, and adjusts the resolution and frame rate of individual scenes so that the transmitted sequence gives the most acceptable perceptual quality within the bandwidth constraints. The performance of the Feed-Forward Controllers has been evaluated under simulated conditions, and they have been found to effectively regulate the streaming of real-time e-learning videos, providing perceptually improved video quality within the constraints of the available bandwidth.
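    To make the single-layer controller concrete, here is a minimal hypothetical sketch of the idea it describes: pick the highest resolution and frame rate whose estimated bitrate fits the measured bandwidth, given the scene's Spatial/Temporal Information. The resolution ladder, complexity weights, and bitrate model are illustrative assumptions, not the thesis's actual parameters.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SceneStats:
        si: float  # Spatial Information (detail within a frame)
        ti: float  # Temporal Information (motion between frames)

    # Candidate operating points: (width, height, frames per second)
    LADDER = [(1280, 720, 30), (960, 540, 30), (640, 360, 30), (640, 360, 15)]

    def estimate_bitrate_kbps(w: int, h: int, fps: int, stats: SceneStats) -> float:
        """Crude bitrate model: cost grows with pixel rate and scene complexity."""
        pixel_rate = w * h * fps
        complexity = 1.0 + 0.01 * stats.si + 0.02 * stats.ti  # assumed weights
        return pixel_rate * complexity * 0.00007  # assumed bits-per-pixel factor

    def select_operating_point(stats: SceneStats, bandwidth_kbps: float):
        """Pick the highest-quality (resolution, fps) that fits the bandwidth."""
        for w, h, fps in LADDER:
            if estimate_bitrate_kbps(w, h, fps, stats) <= bandwidth_kbps:
                return (w, h, fps)
        return LADDER[-1]  # fall back to the lowest rung and accept degradation

    if __name__ == "__main__":
        talking_head = SceneStats(si=40.0, ti=10.0)
        print(select_operating_point(talking_head, bandwidth_kbps=1500))
    ```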

    Predictive Coding For Animation-Based Video Compression

    We address the problem of efficiently compressing video for conferencing-type applications. We build on recent approaches based on image animation, which can achieve good reconstruction quality at very low bitrates by representing face motion with a compact set of sparse keypoints. However, these methods encode video in a frame-by-frame fashion, i.e. each frame is reconstructed from a reference frame, which limits the reconstruction quality when more bandwidth is available. Instead, we propose a predictive coding scheme which uses image animation as a predictor and codes the residual with respect to the actual target frame. The residuals can in turn be coded predictively, thus efficiently removing temporal dependencies. Our experiments indicate a significant bitrate gain, in excess of 70% compared to the HEVC video standard and over 30% compared to VVC, on a dataset of talking-head videos.
    Comment: Accepted paper: ICIP 202
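    A minimal sketch of the predictive-coding loop described above. The animation predictor and residual codec are stubbed out as hypothetical placeholders (predict_from_keypoints, encode_residual, decode_residual are assumptions, not the paper's API); what the sketch does show is the closed loop in which each frame is predicted from the previously *reconstructed* frame and only the residual is transmitted.

    ```python
    import numpy as np

    def predict_from_keypoints(reference: np.ndarray, keypoints) -> np.ndarray:
        """Stand-in for the image-animation predictor (would warp the reference
        frame according to sparse keypoint motion). Identity here for brevity."""
        return reference

    def encode_residual(residual: np.ndarray) -> np.ndarray:
        """Stand-in for a lossy residual codec; coarse quantisation for brevity."""
        return np.round(residual / 8.0)

    def decode_residual(code: np.ndarray) -> np.ndarray:
        return code * 8.0

    def code_sequence(frames, keypoints_per_frame):
        """Predictive loop: animate the last reconstruction, send the residual."""
        recon_prev = frames[0]  # reference frame, assumed sent separately
        reconstructed = [recon_prev]
        for frame, kps in zip(frames[1:], keypoints_per_frame[1:]):
            prediction = predict_from_keypoints(recon_prev, kps)
            code = encode_residual(frame - prediction)   # transmitted bits
            recon = prediction + decode_residual(code)   # decoder-side result
            reconstructed.append(recon)
            recon_prev = recon  # closed loop: predict from what the decoder has
        return reconstructed
    ```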

    Harnessing AI for Speech Reconstruction using Multi-view Silent Video Feed

    Speechreading, or lipreading, is the technique of understanding speech and extracting phonetic features from a speaker's visual cues, such as the movement of the lips, face, teeth and tongue. It has a wide range of multimedia applications, such as surveillance, Internet telephony, and aids for people with hearing impairments. However, most of the work in speechreading has been limited to generating text from silent videos. Recently, research has started venturing into generating (audio) speech from silent video sequences, but there have been no developments thus far in dealing with divergent views and poses of a speaker. Thus, although multiple camera feeds of a speaker are often available, these multiple video feeds have not been used to deal with different poses. To this end, this paper presents the world's first multi-view speechreading and reconstruction system. This work pushes the boundaries of multimedia research by putting forth a model which leverages silent video feeds from multiple cameras recording the same subject to generate intelligible speech for a speaker. Initial results confirm the usefulness of exploiting multiple camera views in building an efficient speechreading and reconstruction system, and further show the optimal placement of cameras for maximum intelligibility of the reconstructed speech. Finally, the paper lays out various innovative applications for the proposed system, focusing on its potential prodigious impact not just in the security arena but in many other multimedia analytics problems.
    Comment: 2018 ACM Multimedia Conference (MM '18), October 22-26, 2018, Seoul, Republic of Korea
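    One plausible shape for a multi-view model like the one described above is a shared per-view encoder whose features are fused before decoding to an acoustic representation. The abstract does not specify the architecture, so the layer sizes, fusion-by-concatenation, and all names below are assumptions; this is only a toy sketch in PyTorch.

    ```python
    import torch
    import torch.nn as nn

    class MultiViewSpeechReconstructor(nn.Module):
        """Toy multi-view model: encode each camera view with a shared encoder,
        fuse the per-view features, and decode to an acoustic representation."""

        def __init__(self, num_views: int = 3, feat_dim: int = 128, audio_dim: int = 80):
            super().__init__()
            self.view_encoder = nn.Sequential(  # weights shared across views
                nn.Conv3d(3, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
                nn.Flatten(),
                nn.Linear(16, feat_dim),
            )
            self.decoder = nn.Linear(num_views * feat_dim, audio_dim)

        def forward(self, views: torch.Tensor) -> torch.Tensor:
            # views: (batch, num_views, channels, time, height, width)
            feats = [self.view_encoder(views[:, v]) for v in range(views.shape[1])]
            fused = torch.cat(feats, dim=-1)   # simple fusion by concatenation
            return self.decoder(fused)         # e.g. one mel-spectrogram frame

    model = MultiViewSpeechReconstructor()
    silent_clips = torch.randn(2, 3, 3, 16, 64, 64)  # two samples, three views
    print(model(silent_clips).shape)  # torch.Size([2, 80])
    ```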

    Full Reference Video Quality Assessment for Machine Learning-Based Video Codecs

    Machine learning-based video codecs have made significant progress in the past few years. A critical need in the development of ML-based video codecs is an accurate evaluation metric that does not require an expensive and slow subjective test. We show that existing evaluation metrics that were designed and trained on DSP-based video codecs are not highly correlated with subjective opinion when used with ML video codecs, because the artifacts produced by ML-based codecs are quite different from those of traditional codecs. We provide a new dataset of ML video codec videos that have been accurately labeled for quality. We also propose a new full-reference video quality assessment (FRVQA) model that achieves a Pearson Correlation Coefficient (PCC) of 0.99 and a Spearman's Rank Correlation Coefficient (SRCC) of 0.99 at the model level. We make the dataset and FRVQA model open source to help accelerate research in ML video codecs, and so that others can further improve the FRVQA model.
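    For reference, the two correlation figures quoted above measure how well a metric's predictions track human ratings, and can be computed with SciPy. A minimal sketch with made-up scores (the numbers below are illustrative, not from the paper's dataset):

    ```python
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical per-video scores: model predictions vs. mean opinion scores.
    predicted = [62.1, 70.4, 55.3, 81.0, 47.8]
    subjective = [60.0, 72.5, 54.0, 83.1, 50.2]

    pcc, _ = pearsonr(predicted, subjective)    # linear correlation
    srcc, _ = spearmanr(predicted, subjective)  # rank-order correlation
    print(f"PCC={pcc:.3f}  SRCC={srcc:.3f}")
    ```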

    Gemino: Practical and Robust Neural Compression for Video Conferencing

    Video conferencing systems suffer from poor user experience when network conditions deteriorate, because current video codecs simply cannot operate at extremely low bitrates. Recently, several neural alternatives have been proposed that reconstruct talking-head videos at very low bitrates using sparse representations of each frame, such as facial landmark information. However, these approaches produce poor reconstructions in scenarios with major movement or occlusions over the course of a call, and do not scale to higher resolutions. We design Gemino, a new neural compression system for video conferencing based on a novel high-frequency-conditional super-resolution pipeline. Gemino upsamples a very low-resolution version of each target frame while enhancing high-frequency details (e.g., skin texture, hair, etc.) based on information extracted from a single high-resolution reference image. We use a multi-scale architecture that runs different components of the model at different resolutions, allowing it to scale to resolutions comparable to 720p, and we personalize the model to learn specific details of each person, achieving much better fidelity at low bitrates. We implement Gemino atop aiortc, an open-source Python implementation of WebRTC, and show that it operates on 1024x1024 videos in real-time on an A100 GPU, and achieves 2.9x lower bitrate than traditional video codecs for the same perceptual quality.
    Comment: 12 pages, 6 appendices
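    A rough sketch of the high-level idea (not Gemino's actual architecture): the sender transmits a very low-resolution target frame, and the receiver upsamples it while borrowing high-frequency detail from a high-resolution reference image. The naive detail-transfer below is a placeholder for the learned, motion-aware model, and all names are assumptions.

    ```python
    import torch
    import torch.nn.functional as F

    def downsample(frame: torch.Tensor, factor: int = 8) -> torch.Tensor:
        """Sender side: only this low-resolution frame is (cheaply) transmitted."""
        return F.interpolate(frame, scale_factor=1 / factor, mode="bilinear",
                             align_corners=False)

    def reconstruct(low_res: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        """Receiver side: naive upsampling plus high-frequency detail taken
        from the reference image; a learned model replaces this in practice."""
        upsampled = F.interpolate(low_res, size=reference.shape[-2:],
                                  mode="bilinear", align_corners=False)
        # High-frequency residual of the reference = reference minus its blur.
        reference_blur = F.interpolate(downsample(reference), size=reference.shape[-2:],
                                       mode="bilinear", align_corners=False)
        high_freq = reference - reference_blur
        return upsampled + high_freq  # crude detail transfer

    target = torch.rand(1, 3, 1024, 1024)     # current frame at the sender
    reference = torch.rand(1, 3, 1024, 1024)  # one-off high-res reference
    print(reconstruct(downsample(target), reference).shape)
    ```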

    Bringing Telepresence to Every Desk

    In this paper, we work to bring telepresence to every desktop. Unlike commercial systems, personal 3D video conferencing systems must render high-quality videos while remaining financially and computationally viable for the average consumer. To this end, we introduce a capturing and rendering system that only requires 4 consumer-grade RGBD cameras and synthesizes high-quality free-viewpoint videos of users as well as their environments. Experimental results show that our system renders high-quality free-viewpoint videos without using object templates or heavy pre-processing. While not real-time, our system is fast and does not require per-video optimizations. Moreover, our system is robust to complex hand gestures and clothing, and it can generalize to new users. This work provides a strong basis for further optimization, and it will help bring telepresence to every desk in the near future. The code and dataset will be made available on our website https://mcmvmc.github.io/PersonalTelepresence/

    Instructional Message Design: Theory, Research, and Practice (Volume 2)

    Message design is all around us, from the presentations we see in meetings and classes, to the instructions that come with our latest tech gadgets, to multi-million-dollar training simulations. In short, instructional message design is the real-world application of instructional and learning theories to the design of the tools and technologies used to communicate and effectively convey information. This field of study draws on many applied sciences, including cognitive psychology, industrial design, graphic design, instructional design, information technology, and human performance technology, to name just a few. In this book we visit several foundational theories that guide our research, look at different real-world applications, and begin to discuss directions for future best practice. For instance, cognitive load and multimedia learning theories provide best practices, while virtual reality and simulations are only a few of the multitude of applications; special-needs learners and designing for online, e-learning, and web-conferencing settings are only some of the many applied areas where effective message design can improve outcomes. Studying effective instructional message design tools and techniques has been, and will continue to be, a critical aspect of the overall instructional design process. Hopefully, this book will serve as an introduction to these topics and inspire your curiosity to explore further.