
    Advances in Quality Assessment of Video Streaming Systems: Algorithms, Methods, Tools

    Quality assessment of video has matured significantly in the last 10 years thanks to a flurry of developments in academia and industry, with initiatives in VQEG, AOMedia, MPEG, ITU-T (e.g., Recommendation P.910), and other standardization and advisory bodies. Most advanced video streaming systems are now clearly moving away from 'good old-fashioned' PSNR and structural-similarity-style assessment towards metrics that align better with mean opinion scores from viewers. Several of these algorithms, methods, and tools have been developed only in the last 3-5 years and, while they are of significant interest, their advantages and limitations are not widely known in the research community. This tutorial provides that overview, and also focuses on practical aspects and on how to design quality assessment tests that scale to large datasets.
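
    As a hedged illustration of the shift described above, the sketch below shows how an objective metric (here classic PSNR) is commonly checked for alignment with mean opinion scores via Pearson and Spearman correlation. The function names and the choice of PSNR/PLCC/SROCC are illustrative assumptions, not tools from the tutorial.

    ```python
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    def psnr(reference, distorted, max_val=255.0):
        """Full-reference PSNR in dB between two frames (arrays of equal shape)."""
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    def metric_vs_mos(metric_scores, mos_scores):
        """Alignment of an objective metric with subjective MOS:
        PLCC (linear correlation) and SROCC (rank-order correlation),
        as commonly reported in video quality assessment studies."""
        plcc, _ = pearsonr(metric_scores, mos_scores)
        srocc, _ = spearmanr(metric_scores, mos_scores)
        return plcc, srocc
    ```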

    Deep perceptual preprocessing for video coding

    We introduce the concept of rate-aware deep perceptual preprocessing (DPP) for video encoding. DPP makes a single pass over each input frame in order to enhance its visual quality when the video is to be compressed with any codec at any bitrate. The resulting bitstreams can be decoded and displayed at the client side without any post-processing component. DPP comprises a convolutional neural network trained via a composite set of loss functions that incorporates: (i) a perceptual loss based on a trained no-reference image quality assessment model, (ii) a reference-based fidelity loss expressing L1 and structural similarity aspects, and (iii) a motion-based rate loss via block-based transform, quantization and entropy estimates that converts the essential components of standard hybrid video encoder designs into a trainable framework. Extensive testing using multiple quality metrics and AVC, AV1 and VVC encoders shows that DPP+encoder reduces, on average, the bitrate of the corresponding encoder by 11%. This marks the first time a server-side neural processing component achieves such savings over the state of the art in video coding.
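
    A minimal sketch of how such a composite training objective could be wired together, assuming hypothetical nr_quality_model and rate_proxy modules and using a simplified global SSIM term; this illustrates the loss structure described above, not the authors' implementation:

    ```python
    import torch
    import torch.nn.functional as F

    def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
        # Simplified, non-windowed SSIM over whole frames (inputs scaled to [0, 1]).
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

    def dpp_composite_loss(preprocessed, original, nr_quality_model, rate_proxy,
                           weights=(1.0, 1.0, 0.1)):
        # (i) perceptual term: a (hypothetical) trained no-reference quality model;
        #     higher predicted quality should lower the loss.
        perceptual = -nr_quality_model(preprocessed).mean()
        # (ii) reference-based fidelity: L1 plus a structural-similarity-style term.
        fidelity = F.l1_loss(preprocessed, original) + (1.0 - ssim_global(preprocessed, original))
        # (iii) rate term: a (hypothetical) differentiable proxy for coded bits
        #      (block transform, quantization and entropy estimates).
        rate = rate_proxy(preprocessed).mean()
        w_p, w_f, w_r = weights
        return w_p * perceptual + w_f * fidelity + w_r * rate
    ```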

    Bitstream-based video quality modeling and analysis of HTTP-based adaptive streaming

    The pervasion of affordable capture technology and increased internet bandwidth allow high-quality videos (resolutions > 1080p, frame rates ≥ 60 fps) to be streamed online. HTTP-based adaptive streaming is the preferred method for streaming videos, adjusting video quality based on the available bandwidth. Although adaptive streaming reduces the occurrences of video playout being stopped (called "stalling") due to narrow network bandwidth, the automatic adaptation has an impact on the quality perceived by the user, which results in the need to systematically assess the perceived quality. Such an evaluation is usually done on a short-term (a few seconds) and an overall session basis (up to several minutes). In this thesis, both of these aspects are assessed using subjective and instrumental methods. The subjective assessment of short-term video quality consists of a series of lab-based video quality tests that have resulted in publicly available datasets. The overall integral quality was subjectively assessed in lab tests with human viewers mimicking a real-life viewing scenario.
In addition to the lab tests, an out-of-the-lab test method was investigated for both short-term video quality and overall session quality assessment, to explore alternative approaches for subjective quality assessment. The instrumental method of quality evaluation was addressed in terms of bitstream- and hybrid pixel-based video quality models developed as part of this thesis. For this, a family of models, namely AVQBits, has been conceived using the results of the lab tests as ground truth. Based on the available input information, four different instances of AVQBits are presented: a Mode 3, a Mode 1, a Mode 0, and a Hybrid Mode 0 model. The model instances have been evaluated and perform better than or on par with other state-of-the-art models. These models have further been applied to 360° and gaming videos, HFR content, and images. Also, a long-term integration (1-5 min) model based on the ITU-T P.1203.3 model is presented. In this work, the different instances of AVQBits, with their per-second output scores, are employed as the video quality component of the proposed long-term integration model. All AVQBits variants as well as the long-term integration module and the subjective test data are made publicly available for further research.
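
A hedged sketch of the general idea of long-term integration, i.e. pooling per-second quality scores into a session-level score. This is not the ITU-T P.1203.3 integration function, whose form and coefficients are specified in the Recommendation; the recency weighting here is purely illustrative.

```python
import numpy as np

def pool_session_quality(per_second_scores, recency_strength=0.05):
    """Illustrative temporal pooling of per-second quality scores (1..5 scale)
    into a single session score, giving recent seconds slightly more weight.

    NOT the ITU-T P.1203.3 integration; it only sketches the general idea."""
    scores = np.asarray(per_second_scores, dtype=float)
    t = np.arange(len(scores))
    weights = np.exp(recency_strength * (t - t[-1]))  # newest second gets weight 1
    return float(np.sum(weights * scores) / np.sum(weights))

# Example: quality drops mid-session and recovers towards the end.
session_score = pool_session_quality([4.5] * 60 + [2.0] * 30 + [4.2] * 60)
```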

    A Handheld Dual Particle Imager for Imaging and Characterizing Special Nuclear Material

    Localizing and monitoring sources of radiation is a difficult problem for national security and nuclear safeguards applications. Terrorists attempting to accumulate nuclear materials will most likely hide them in remote or entrenched locations where large or heavy detector systems cannot practically be deployed. Inspectors attempting to monitor or verify the location of a source may also be constrained in the size and weight of a detector they can deploy in a facility. Currently deployed detectors typically measure relative count or dose rates to localize or verify a source; this approach requires the user to go near a source to begin to determine its location. These detectors can be subject to fluctuations in background radiation due to natural variation, the surrounding environment, or potential shielding around a source, which may cause the measured rate to not follow a 1/r^2 trend. Imaging systems such as neutron scatter cameras (NSCs) or Compton cameras can localize a source much faster than current simplistic systems because they are not subject to background in the same manner. However, these imaging systems are prohibitively large or heavy due to cooling systems, battery systems, or multiple photomultiplier tubes (PMTs). Using silicon photomultipliers (SiPMs) in place of PMTs, however, can greatly reduce the size of a scintillator-based imaging system, since SiPMs are much more compact and are comparable photodetectors. A handheld dual-particle imager (H2DPI) composed of twelve stilbene pillars (6 x 6 x 50.5 mm^3) and eight CeBr3 cylinders coupled to J-Series SensL SiPMs has been designed and built. The H2DPI localizes neutron sources by reconstructing double-scatter neutron events between stilbene pillars and localizes gamma-ray sources by reconstructing gamma-ray double-scatter events between stilbene pillars and CeBr3 cylinders. This Compton imaging methodology has been shown to localize and identify special nuclear material sources such as highly enriched uranium and plutonium when both sources are in the same field of view. The neutron imaging capability also enables neutron spectroscopy: the H2DPI is capable of isolating and identifying the (alpha,n) spectrum emitted by a PuBe source from the Watt spectrum emitted by a Cf-252 source. The characterization used to accurately reconstruct sources was also applied to a simulated model of a fully realized system composed of 64 stilbene pillars coupled to J-Series SensL SiPMs. The simulations were validated, and the intrinsic neutron efficiency of a fully realized system was found to be 0.94-1.25%, depending on the orientation of the stilbene crystals.
    PhD thesis, Nuclear Engineering & Radiological Sciences, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169691/1/wmst_1.pd
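
    To illustrate the kinematics behind the gamma-ray double-scatter (Compton) imaging described above, the opening angle of the Compton cone follows from the two energy depositions, assuming the photon is fully absorbed in the second interaction. This is an illustrative sketch of the standard Compton scattering relation, not code from the thesis.

    ```python
    import math

    M_E_C2_KEV = 511.0  # electron rest energy in keV

    def compton_cone_angle(e1_kev, e2_kev):
        """Opening angle (rad) of the Compton cone for a two-site event.

        Assumes the photon deposits e1_kev in the first interaction and is
        fully absorbed in the second, depositing e2_kev, so E0 = e1 + e2.
        Uses cos(theta) = 1 - m_e*c^2 * (1/E_scattered - 1/E0)."""
        e0 = e1_kev + e2_kev
        cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e2_kev - 1.0 / e0)
        if not -1.0 <= cos_theta <= 1.0:
            return None  # kinematically inconsistent event (e.g., escape or pile-up)
        return math.acos(cos_theta)

    # Example: a 662 keV photon (Cs-137) scattering by about 60 degrees deposits
    # roughly 260 keV at the first site and 402 keV at the absorption site.
    angle = compton_cone_angle(260.0, 402.0)
    ```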