
    Forward Error Correction for Fast Streaming with Open-Source Components

    The mechanisms that provide streaming functionality are complex and far from perfect. Reliability in transmission depends upon the underlying protocols chosen for implementation. There are two main networking protocols for data transmission: TCP and UDP. TCP guarantees the arrival of data at the receiver, whereas UDP does not. Forward Error Correction is based on a technique called “erasure coding” and can be used to mitigate the data loss experienced when using UDP. This paper describes in detail the development of a video streaming library making use of the UDP transport protocol in order to test and further explore network-based Forward Error Correction erasure codes.
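    The erasure-coding idea underlying FEC can be illustrated with the simplest possible scheme: one XOR parity packet protecting a group of data packets, so a single loss can be repaired without retransmission. This is a minimal sketch under assumed fixed-size packets; the function names are illustrative and not taken from the library described in the paper.

    ```python
    def make_parity(packets):
        """XOR all equal-length data packets into one parity packet."""
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, byte in enumerate(pkt):
                parity[i] ^= byte
        return bytes(parity)

    def recover(received, parity):
        """Rebuild the single missing packet (the None entry) from the parity."""
        missing = bytearray(parity)
        for pkt in received:
            if pkt is None:
                continue
            for i, byte in enumerate(pkt):
                missing[i] ^= byte
        return bytes(missing)

    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = make_parity(data)
    lossy = [data[0], None, data[2]]          # second packet lost in transit
    assert recover(lossy, parity) == b"BBBB"  # repaired without retransmission
    ```

    Real FEC schemes (e.g. Reed-Solomon codes) generalize this to recover multiple losses at the cost of more redundancy packets, which is the trade-off a UDP streaming library must tune.
    
    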

    PEA265: Perceptual Assessment of Video Compression Artifacts

    The most widely used video encoders share a common hybrid coding framework that includes block-based motion estimation/compensation and block-based transform coding. Despite their high coding efficiency, the encoded videos often exhibit visually annoying artifacts, denoted as Perceivable Encoding Artifacts (PEAs), which significantly degrade the visual Quality of Experience (QoE) of end users. To monitor and improve visual QoE, it is crucial to develop subjective and objective measures that can identify and quantify various types of PEAs. In this work, we make the first attempt to build a large-scale subject-labelled database composed of H.265/HEVC compressed videos containing various PEAs. The database, namely the PEA265 database, includes four types of spatial PEAs (i.e., blurring, blocking, ringing and color bleeding) and two types of temporal PEAs (i.e., flickering and floating), each with at least 60,000 image or video patches carrying positive and negative labels. To objectively identify these PEAs, we train Convolutional Neural Networks (CNNs) using the PEA265 database. The state-of-the-art ResNeXt proves capable of identifying each type of PEA with high accuracy. Furthermore, we define PEA pattern and PEA intensity measures to quantify the PEA levels of compressed video sequences. We believe that the PEA265 database and our findings will benefit the future development of video quality assessment methods and perceptually motivated video encoders.
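    For the blocking artifact, the intuition behind a PEA intensity measure can be sketched with a simple hand-crafted heuristic: luma jumps that line up with the encoder's block grid are a blocking symptom, while jumps elsewhere are ordinary image detail. This toy score (illustrative only; the paper uses trained CNNs, not this heuristic) compares gradients at block boundaries against gradients inside blocks.

    ```python
    def blockiness(img, block=8):
        """Mean horizontal luma jump at block boundaries minus the mean
        jump inside blocks; larger values suggest visible blocking."""
        edge, inner = [], []
        for row in img:
            for x in range(1, len(row)):
                diff = abs(row[x] - row[x - 1])
                (edge if x % block == 0 else inner).append(diff)
        return sum(edge) / len(edge) - sum(inner) / len(inner)

    # Synthetic patches: constant 8-pixel blocks vs. a smooth ramp.
    blocky = [[10] * 8 + [50] * 8 for _ in range(4)]
    smooth = [list(range(16)) for _ in range(4)]
    assert blockiness(blocky) > blockiness(smooth)  # grid-aligned jumps dominate
    ```

    A learned classifier replaces this fixed rule with features that also capture ringing, color bleeding and the temporal artifacts, which have no such simple grid signature.
    
    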

    JNCD-based perceptual compression of RGB 4:4:4 image data

    In contemporary lossy image coding applications, a desired aim is to decrease, as much as possible, the bits per pixel without inducing perceptually conspicuous distortions in RGB image data. In this paper, we propose a novel color-based perceptual compression technique, named RGB-PAQ. RGB-PAQ is based on the CIELAB Just Noticeable Color Difference (JNCD) and Human Visual System (HVS) spectral sensitivity. We utilize CIELAB JNCD and HVS spectral sensitivity modeling to separately adjust quantization levels at the Coding Block (CB) level. In essence, our method is designed to capitalize on the inability of the HVS to perceptually differentiate photons in very similar wavelength bands. In terms of application, the proposed technique can be used with RGB (4:4:4) image data of various bit depths and spatial resolutions including, for example, true color and deep color images in HD and Ultra HD resolutions. In the evaluations, we compare RGB-PAQ with a set of anchor methods; namely, HEVC, JPEG, JPEG 2000 and Google WebP. Compared with HEVC HM RExt, RGB-PAQ achieves bitrate reductions of up to 77.8%. The subjective evaluations confirm that the compression artifacts induced by RGB-PAQ proved to be either imperceptible (MOS = 5) or near-imperceptible (MOS = 4) in the vast majority of cases.
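    The CIELAB JNCD criterion can be made concrete with a small sketch: convert two sRGB colors to CIELAB and check whether their Euclidean distance (the CIE76 ΔE) falls under a just-noticeable threshold, in which case a coarser quantization of one into the other should be imperceptible. The threshold value 2.3 is a commonly cited JND figure, not a number taken from the paper, and the conversion below assumes 8-bit sRGB with a D65 white point.

    ```python
    import math

    def _linearize(c):
        """Undo the sRGB transfer function for one 8-bit channel."""
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def srgb_to_lab(rgb):
        """sRGB (8-bit) -> CIELAB, assuming a D65 white point."""
        r, g, b = (_linearize(c) for c in rgb)
        x = 0.4124 * r + 0.3576 * g + 0.1805 * b
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        z = 0.0193 * r + 0.1192 * g + 0.9505 * b
        def f(t):
            return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
        fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    def delta_e76(rgb1, rgb2):
        """CIE76 color difference between two sRGB colors."""
        return math.dist(srgb_to_lab(rgb1), srgb_to_lab(rgb2))

    JNCD = 2.3  # commonly cited just-noticeable difference in CIELAB (assumption)
    assert delta_e76((200, 30, 30), (201, 30, 30)) < JNCD  # quantizable unseen
    assert delta_e76((200, 30, 30), (90, 160, 30)) > JNCD  # clearly visible
    ```

    A perceptual quantizer built on this idea would raise the quantization step for a coding block only while the induced color shifts stay below the JNCD bound.
    
    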

    A multimedia streaming system for urban rail environments


    Can Video Conferencing Be as Easy as Telephoning?-A Home Healthcare Case Study

    Copyright © 2016 by authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). In comparison with the almost universal adoption of telephony and mobile technologies in modern-day healthcare, video conferencing has yet to become a ubiquitous clinical tool. Currently, telehealth services are faced with a bewildering range of video conferencing software and hardware choices. This paper provides a case study in the selection of video conferencing services by the Flinders University Telehealth in the Home trial (FTH Trial) to support healthcare in the home. Using pragmatic methods, video conferencing solutions available on the market were assessed for usability, reliability, cost, compatibility, interoperability, performance and privacy considerations. The process of elimination through which the eventual solution was chosen, the selection criteria used for each requirement and the corresponding results are described. The resulting product set, although functional, had restricted ability to directly connect with systems used by healthcare providers elsewhere in the system. This outcome illustrates the impact on one small telehealth provider of the broader struggles between competing video conferencing vendors. At stake is the ability to communicate between healthcare organizations and provide public access to healthcare. Comparison of the current state of the video conferencing marketplace with the evolution of the telephony system reveals that video conferencing still has a long way to go before it can be considered as easy to use as the telephone. Health organizations concerned with improving access and quality of care should seek to influence greater standardization and interoperability through cooperation with one another, the private sector and international organizations, and by encouraging governments to play a more active role in this sphere.

    Performance analysis of VP8 image and video compression based on subjective evaluations

    Today, several alternatives for the compression of digital pictures and video sequences exist to choose from. Besides internationally recognized standard solutions, open-access options like the VP8 image and video compression scheme have recently appeared and are gaining popularity. In this paper, we present the methodology and the results of a rate-distortion performance analysis of VP8. The analysis is based on the results of subjective quality assessment experiments, which were carried out to compare VP8 to a set of state-of-the-art image and video compression standards.

    Comparison of compression efficiency between HEVC/H.265 and VP9 based on subjective assessments

    The current increasing effort of broadcast providers to transmit UHD (Ultra High Definition) content is likely to increase demand for ultra-high-definition televisions (UHDTVs). To compress UHDTV content, several alternative encoding mechanisms exist. In addition to internationally recognized standards, open-access proprietary options, such as the VP9 video encoding scheme, have recently appeared and are gaining popularity. One of the main goals of these encoders is to efficiently compress video sequences beyond HDTV resolution for various scenarios, such as broadcasting or internet streaming. In this paper, a broadcast-scenario rate-distortion performance analysis and mutual comparison of one of the latest video coding standards, H.265/HEVC, with the recently released proprietary video coding scheme VP9 is presented. Currently one of the most popular and most widely deployed encoders, H.264/AVC, has also been included in the evaluation to serve as a comparison baseline. The comparison is performed by means of subjective evaluations showing the actual differences between the encoding algorithms in terms of perceived quality. The results indicate the dominance of the HEVC-based encoding algorithm over the other alternatives across a wide range of bit-rates, from very low bit-rates corresponding to low quality up to high bit-rates yielding quality transparent with respect to the original uncompressed video. In addition, VP9 shows competitive results for synthetic content at bit-rates corresponding to operating points of transparent or near-transparent quality.
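    Subjective comparisons like the ones above are typically summarized as a Mean Opinion Score (MOS) per encoded sequence, reported with a 95% confidence interval over the observer panel. This is a generic sketch of that computation (the helper name and the sample ratings are hypothetical, not data from the paper):

    ```python
    import statistics

    def mos_ci(ratings, t95):
        """Mean opinion score and 95% confidence half-interval.

        t95 is Student's t critical value for len(ratings) - 1 degrees of
        freedom (e.g. 2.262 for a panel of 10 observers).
        """
        mos = statistics.mean(ratings)
        half = t95 * statistics.stdev(ratings) / len(ratings) ** 0.5
        return mos, half

    # Ten observers rate one encoded sequence on the 1-5 ACR scale.
    ratings = [4, 5, 4, 3, 4, 5, 4, 4, 3, 5]
    mos, ci = mos_ci(ratings, t95=2.262)  # MOS 4.1 with half-interval ~0.53
    ```

    Plotting MOS against bit-rate for each codec produces the rate-distortion curves on which comparisons such as HEVC vs. VP9 vs. H.264/AVC are based; overlapping confidence intervals mean the perceived difference is not statistically significant at that operating point.
    
    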

    Multi-destination beaming: apparently being in three places at once through robotic and virtual embodiment

    It has been shown that an illusion of ownership over an artificial limb or even an entire body can be induced in people through multisensory stimulation, giving rise to the feeling that the surrogate body is the person’s actual body. Such body ownership illusions (BOIs) have been shown to occur with virtual bodies, mannequins, and humanoid robots. In this study, we show the possibility of eliciting a full-body ownership illusion over not one but multiple artificial bodies concurrently. We demonstrate this by describing a system that allowed a participant to inhabit and fully control two different humanoid robots located in two distinct places and a virtual body in immersive virtual reality, using real-time full-body tracking and two-way audio communication, thereby giving them the illusion of ownership over each of them. We implemented this by allowing the participant to be embodied in any one surrogate body at a given moment and letting them instantaneously switch between them. While the participant was embodied in one of the bodies, a proxy system tracked the locations currently unoccupied and controlled their remote representation so as to continue performing the tasks in those locations in a logical fashion. To test the efficacy of this system, an exploratory study was carried out with a fully functioning setup with three destinations and a simplified version of the proxy for use in a social interaction. The results indicate that the system was physically and psychologically comfortable and was rated highly by participants in terms of usability. Additionally, feelings of BOI and agency were reported, which were not influenced by the type of body representation. The results provide clues regarding BOI with humanoid robots of different dimensions, along with insight into self-localization and multilocation.