
    Visual Content Characterization Based on Encoding Rate-Distortion Analysis

    Visual content characterization is a fundamentally important but underexploited step in dataset construction, which is essential to solving many image processing and computer vision problems. In the era of machine learning, this has become ever more important: with the explosion of image and video content, scrutinizing all potential content is impossible, and source content selection has become increasingly difficult. In particular, in the area of image/video coding and quality assessment, it is highly desirable to characterize and select source content, and subsequently construct image/video datasets, that demonstrate strong representativeness and diversity of the visual world, such that the visual coding and quality assessment methods developed from and validated on such datasets generalize well. Encoding Rate-Distortion (RD) analysis is essential for many multimedia applications; examples of applications that explicitly use RD analysis include image encoder RD optimization, video quality assessment (VQA), and Quality of Experience (QoE) optimization of streaming videos. However, encoding RD analysis has not been well investigated in the context of visual content characterization. This thesis focuses on applying encoding RD analysis as a visual source content characterization method with image/video coding and quality assessment applications in mind. We first conduct a subjective video quality evaluation experiment for state-of-the-art video encoder performance analysis and comparison, where our observations reveal severe problems that motivate the need for better source content characterization and selection methods. Then the effectiveness of RD analysis in visual source content characterization is demonstrated through a proposed quality control mechanism for video coding, based on eigen analysis in the space of General Quality Parameter (GQP) functions.
Finally, by combining encoding RD analysis with submodular set function optimization, we propose a novel method for automating the selection of representative source content, which helps boost the RD performance of visual encoders trained with the selected content.
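The selection step described above can be illustrated with the textbook greedy algorithm for monotone submodular maximization. The sketch below is not the thesis's actual method: the max-coverage objective, the `coverage_gain` function, and the per-clip feature vectors are hypothetical stand-ins for RD-derived content descriptors.

```python
def coverage_gain(selected, candidate, features):
    """Marginal gain of adding `candidate` under a max-coverage objective:
    per feature dimension, the set's score is the max over selected items."""
    if not selected:
        return sum(features[candidate])
    current = [max(features[s][d] for s in selected)
               for d in range(len(features[candidate]))]
    return sum(max(features[candidate][d] - current[d], 0.0)
               for d in range(len(current)))

def greedy_select(features, k):
    """Classic greedy: repeatedly add the item with the largest marginal gain.
    For monotone submodular objectives this is (1 - 1/e)-optimal."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda c: coverage_gain(selected, c, features))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical per-clip "RD diversity" features (invented for illustration).
features = {
    "clip_a": [1.0, 0.1, 0.0],
    "clip_b": [0.0, 0.8, 0.1],
    "clip_c": [0.9, 0.1, 0.0],   # mostly redundant with clip_a
    "clip_d": [0.1, 0.0, 0.95],
}
print(greedy_select(features, 3))
```

Note how the greedy pass skips `clip_c`, whose features are already covered by `clip_a`, which is the behavior one wants when pruning redundant source content.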

    Image and Video Coding Techniques for Ultra-low Latency

    The next generation of wireless networks fosters the adoption of latency-critical applications such as XR, connected industry, or autonomous driving. This survey gathers implementation aspects of different image and video coding schemes and discusses their tradeoffs. Standardized video coding technologies such as HEVC or VVC provide a high compression ratio, but their enormous complexity sets the scene for alternative approaches like still image, mezzanine, or texture compression in scenarios with tight resource or latency constraints. Regardless of the coding scheme, we found inter-device memory transfers and the lack of sub-frame coding to be limitations of current full-system and software-programmable implementations.
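The benefit of sub-frame (slice-level) coding can be made concrete with a back-of-the-envelope latency model of a capture-encode-transmit-decode pipeline. All stage delays below are hypothetical, and the formula is a deliberate simplification (it assumes every stage keeps up with the slice rate), not a result from the survey:

```python
def pipeline_latency_ms(frame_interval_ms, stage_delays_ms, slices=1):
    """Rough glass-to-glass latency for a capture->encode->transmit->decode
    pipeline. With frame-level processing (slices=1) each stage waits for a
    whole frame, so stage delays add on top of one frame of acquisition time.
    With sub-frame (slice) processing the stages overlap, so only 1/slices of
    each stage's delay remains on the critical path."""
    acquisition = frame_interval_ms           # time to capture one full frame
    serialized = sum(stage_delays_ms) / slices
    return acquisition + serialized

# Hypothetical stage delays at 60 fps: encode 10 ms, network 5 ms, decode 8 ms.
frame_ms = 1000 / 60
whole_frame = pipeline_latency_ms(frame_ms, [10, 5, 8], slices=1)
sliced = pipeline_latency_ms(frame_ms, [10, 5, 8], slices=8)
print(round(whole_frame, 1), round(sliced, 1))
```

Under these invented numbers, slicing the frame into 8 parts roughly halves the end-to-end latency, which is why the lack of sub-frame coding in current implementations matters for latency-critical applications.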

    User generated HDR gaming video streaming: dataset, codec comparison and challenges

    Gaming video streaming services have grown tremendously in the past few years, with higher resolutions, higher frame rates, and HDR gaming videos getting increasingly adopted among the gaming community. Since gaming content as such is different from non-gaming content, it is imperative to evaluate the performance of existing encoders, both to understand the bandwidth requirements of such services and to further improve compression efficiency. Towards this end, we present in this paper GamingHDRVideoSET, a dataset consisting of eighteen 10-bit UHD-HDR gaming videos, sequences encoded with four different codecs, and the corresponding objective evaluation results. The dataset is available online at [to be added after paper acceptance]. Additionally, the paper discusses the compression efficiency of the most widely used practical encoders, i.e., x264 (H.264/AVC), x265 (H.265/HEVC), and libvpx (VP9), as well as the recently proposed libaom (AV1) encoder, on 10-bit UHD-HDR gaming content. Our results show that the latest compression standard, AV1, achieves the best compression efficiency, followed by HEVC, H.264, and VP9.
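Codec comparisons of this kind are commonly summarized with the Bjontegaard delta-rate (BD-rate), the average bitrate difference between two encoders at equal quality. The sketch below uses a simplified piecewise-linear variant rather than Bjontegaard's cubic fit, and the rate-distortion points are invented for illustration:

```python
import math

def interp(x, xs, ys):
    """Piecewise-linear interpolation (xs must be ascending)."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside interpolation range")

def bd_rate_percent(anchor, test, samples=100):
    """Average bitrate difference (%) of `test` vs `anchor` at equal quality.
    Each input is a list of (bitrate_kbps, quality) points, quality ascending.
    Integrates the log-rate gap over the overlapping quality range; this is a
    piecewise-linear simplification of Bjontegaard's cubic-fit BD-rate."""
    qa = [q for _, q in anchor]
    qt = [q for _, q in test]
    lo, hi = max(qa[0], qt[0]), min(qa[-1], qt[-1])
    la = [math.log(r) for r, _ in anchor]
    lt = [math.log(r) for r, _ in test]
    acc = 0.0
    for i in range(samples + 1):
        q = lo + (hi - lo) * i / samples
        acc += interp(q, qt, lt) - interp(q, qa, la)
    avg = acc / (samples + 1)
    return (math.exp(avg) - 1) * 100

# Hypothetical RD points (bitrate kbps, quality score): `test` needs ~20% less
# bitrate than `anchor` at every quality level.
anchor = [(1000, 70), (2000, 80), (4000, 88), (8000, 93)]
test = [(800, 70), (1600, 80), (3200, 88), (6400, 93)]
print(round(bd_rate_percent(anchor, test), 1))
```

A negative BD-rate means the test encoder needs less bitrate for the same quality, which is how rankings such as "AV1, then HEVC" are typically derived.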

    Big Data for Traffic Engineering in Software-Defined Networks

    Software-defined networking overcomes the limitations of traditional networks by splitting the control plane from the data plane. The logic of the network is moved to a component called the controller that manages devices in the data plane. To implement this architecture, it has become the norm to use the OpenFlow (OF) protocol, which defines several counters maintained by network devices. These counters are the starting point for Traffic Engineering (TE) activities. TE monitors several network parameters, including network bandwidth utilization. A great challenge for TE is to collect and generate statistics about bandwidth utilization for monitoring and traffic analysis activities. This becomes even more challenging if fine-grained monitoring is required. Network management tasks such as network provisioning, capacity planning, load balancing, and anomaly detection can benefit from such fine-grained monitoring. Because the counters are updated for every packet that crosses the switch, they must be retrieved in a streaming fashion. This scenario suggests the use of Big Data streaming techniques to collect and process counter values. Therefore, this paper proposes an approach based on a fine-grained Big Data monitoring method to collect and generate traffic statistics using counter values. This research can significantly benefit TE. The approach can provide a more detailed view of network resource utilization because it can deliver individual and aggregated statistical analyses of bandwidth consumption. Experimental results show the effectiveness of the proposed method.
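The core of such counter-based monitoring is turning two samples of a cumulative OpenFlow byte counter into a throughput figure. A minimal sketch, with hypothetical counter values and a wrap-around correction that assumes the counter width is known:

```python
def bandwidth_mbps(prev_bytes, curr_bytes, interval_s, counter_bits=64):
    """Throughput between two samples of a cumulative OpenFlow byte counter.
    Counters only ever grow, so throughput is the delta over the sampling
    interval; if the counter wrapped around its maximum, taking the delta
    modulo 2**counter_bits recovers the true number of bytes sent."""
    delta = (curr_bytes - prev_bytes) % (1 << counter_bits)
    return delta * 8 / interval_s / 1e6   # bytes -> bits -> Mbit/s

# Two samples of a hypothetical port's tx_bytes counter, taken 1 s apart.
print(bandwidth_mbps(1_000_000, 2_250_000, 1.0))
```

In a streaming setup, this computation runs continuously on each (switch, port) pair as counter samples arrive, with per-port results aggregated into the network-wide statistics the paper targets.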

    Datasheet for subjective and objective quality assessment datasets

    Over the years, many subjective and objective quality assessment datasets have been created and made available to the research community. However, there is no standard process for documenting the various aspects of a dataset, such as details about the source sequences, number of test subjects, test methodology, encoding settings, etc. Such information is often of great importance to the users of the dataset, as it can help them get a quick understanding of its motivation and scope. Without such a template, it is left to each reader to collate the information from the relevant publication or website, which is a tedious and time-consuming process. In some cases, the absence of a template to guide the documentation process can result in the unintentional omission of important information. This paper addresses this simple but significant gap by proposing a datasheet template for documenting various aspects of subjective and objective quality assessment datasets for multimedia data. The contributions presented in this work aim to simplify the documentation process for existing and new datasets and improve their reproducibility. The proposed datasheet template is available on GitHub, along with sample datasheets for a few open-source audiovisual subjective and objective datasets.
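To give a flavor of what such a datasheet captures, here is a heavily abridged, hypothetical structure covering the kinds of fields mentioned above. The field names and the example dataset are illustrative only, not the schema actually proposed in the paper (which lives in its GitHub repository):

```python
from dataclasses import dataclass, asdict

@dataclass
class QualityDatasetDatasheet:
    """Illustrative subset of datasheet fields for a quality assessment
    dataset; the real proposed template is far more comprehensive."""
    name: str
    source_sequences: int
    test_subjects: int        # 0 for purely objective datasets
    test_methodology: str     # e.g. "ACR", "DSIS", or "objective-only"
    encoding_settings: str

# A hypothetical dataset documented with the minimal structure above.
sheet = QualityDatasetDatasheet(
    name="ExampleVQDataset",
    source_sequences=20,
    test_subjects=28,
    test_methodology="ACR",
    encoding_settings="x264, 4 bitrates x 3 resolutions",
)
print(asdict(sheet)["test_methodology"])
```

Encoding the datasheet as a typed structure rather than free text is one way to make the documented fields machine-checkable, which supports the reproducibility goal stated above.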

    Generalized Rate-Distortion Functions of Videos

    Consumers watch enormous amounts of digital video every day via various video services delivered over terrestrial, cable, and satellite communication systems or over-the-top Internet connections. To offer the best possible services using the limited capacity of video distribution systems, these services need a precise understanding of the relationship between the perceptual quality of a video and its media attributes, which we term the generalized rate-distortion (GRD) function. In this thesis, we focus on accurately estimating the GRD function with a minimal number of measurement queries. We first explore the GRD behavior of compressed digital videos in a two-dimensional space of bitrate and resolution. Our analysis of real-world GRD data reveals that all GRD functions share similar regularities, but meanwhile exhibit considerable variations across different combinations of content and encoder types. Based on this analysis, we define the theoretical space of the GRD function, which not only establishes the form a GRD model should take, but also determines the constraints these functions must satisfy. We propose two computational GRD models. In the first model, we assume that the quality scores are precise, and develop a robust axial-monotonic Clough-Tocher (RAMCT) interpolation method to approximate the GRD function from a moderate number of measurements. In the second model, we show that the GRD function space is a convex set residing in a Hilbert space, and that a GRD function can be estimated by solving a projection problem onto the convex set. By analyzing GRD functions that arise in practice, we approximate the infinite-dimensional theoretical space by a low-dimensional one, based on which an empirical GRD model with few parameters is proposed. To further reduce the number of queries, we present a novel sampling scheme based on a probabilistic model and an information measure.
The proposed sampling method generates a sequence of queries by minimizing the overall informativeness of the remaining samples. To evaluate the performance of the GRD estimation methods, we collect a large-scale database of more than 4,000 real-world GRD functions, namely the Waterloo generalized rate-distortion (Waterloo GRD) database. Extensive comparison experiments are carried out on the database. The superiority of the two proposed GRD models over state-of-the-art approaches is attested both quantitatively and visually. Meanwhile, it is also validated that the proposed sampling algorithm consistently reduces the number of queries needed by various GRD estimation algorithms. Finally, we show the broad application scope of the proposed GRD models by exemplifying three applications: rate-distortion curve prediction, per-title encoding profile generation, and video encoder comparison.
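As a toy illustration of the kind of parametric modeling involved, the sketch below fits a one-dimensional logarithmic quality-rate curve to a handful of invented measurements. The thesis's actual GRD models operate on a two-dimensional bitrate-resolution surface and are considerably more sophisticated; this only shows the fit-then-predict workflow:

```python
import math

def fit_log_rd(points):
    """Least-squares fit of q = a + b*ln(rate) to (rate, quality) samples,
    a one-dimensional toy stand-in for a full 2-D GRD surface model."""
    xs = [math.log(r) for r, _ in points]
    ys = [q for _, q in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict(a, b, rate):
    """Predicted quality at a given bitrate under the fitted model."""
    return a + b * math.log(rate)

# Hypothetical measurements: quality score at four bitrates (kbps).
points = [(500, 60.0), (1000, 70.0), (2000, 80.0), (4000, 90.0)]
a, b = fit_log_rd(points)
print(round(predict(a, b, 8000), 1))
```

The same fit-then-predict pattern underlies the listed applications: once a (more expressive) GRD model is estimated from a few queries, rate-distortion curves can be predicted and per-title encoding profiles derived without exhaustively encoding every operating point.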

    Ingest platform to support the curation of Portuguese cinema in a school environment

    The evolution of the infrastructures and technologies that support the internet has made the online distribution of cinematographic content easier in recent years. However, although online video consumption keeps growing, there is still no national platform in Portugal for distributing content to audiences in a school environment that brings together producers, film archives, and the educational community. An integrated chain covering the restoration, preservation, digitization, ingest, curation, and distribution of cinematographic works in a school and educational environment is missing. This dissertation, carried out in a professional context, aimed to develop a platform for the ingest and processing of cinematographic content, preparing it appropriately for dissemination in an educational context. First, based on the architecture of the aforementioned chain, a study was made of the nature of the different inputs the system could receive (file structure, metadata parsing, and analysis of audio and video formats) and of the standard formats best suited for storing and distributing the content. A proof of concept was then developed, using external tools for audio and video encoding and subtitle conversion. Finally, the defined architecture was implemented in a professional ingest system at the company where the dissertation was carried out.

    Bitstream-based video quality modeling and analysis of HTTP-based adaptive streaming

    The pervasion of affordable capture technology and increased internet bandwidth allow high-quality videos (resolutions > 1080p, framerates ≥ 60fps) to be streamed online. HTTP-based adaptive streaming is the preferred method for streaming videos, adjusting video quality based on the available bandwidth. Although adaptive streaming reduces the occurrences of video playout being stopped (called "stalling") due to narrow network bandwidth, the automatic adaptation has an impact on the quality perceived by the user, which results in the need to systematically assess the perceived quality. Such an evaluation is usually done on a short-term (few seconds) and an overall session basis (up to several minutes). In this thesis, both these aspects are assessed using subjective and instrumental methods. The subjective assessment of short-term video quality consists of a series of lab-based video quality tests that have resulted in publicly available datasets. The overall integral quality was subjectively assessed in lab tests with human viewers mimicking a real-life viewing scenario. In addition to the lab tests, an out-of-the-lab test method was investigated for both short-term video quality and overall session quality assessment, to explore alternative approaches for subjective quality assessment. The instrumental method of quality evaluation was addressed in terms of bitstream-based and hybrid pixel-based video quality models developed as part of this thesis.
For this, a family of models, namely AVQBits, has been conceived using the results of the lab tests as ground truth. Based on the available input information, four different instances of AVQBits are presented: a Mode 3, a Mode 1, a Mode 0, and a Hybrid Mode 0 model. The model instances have been evaluated and perform better than or on par with other state-of-the-art models. These models have further been applied to 360° and gaming videos, HFR content, and images. Also, a long-term integration (1-5 minutes) model based on the ITU-T P.1203.3 model is presented, in which the different instances of AVQBits, with their per-1-second quality scores, are employed as the video quality component. All AVQBits variants, the long-term integration module, and the subjective test data are made publicly available for further research.
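To illustrate what temporal pooling of per-1-second scores means, here is a toy integration function. It is explicitly not the ITU-T P.1203.3 formula: the recency weighting, the blend with the worst score, and all parameter values are invented for illustration of the general idea that recent and worst moments dominate remembered session quality:

```python
def pool_session_quality(per_second_scores, recency_half_life_s=60.0,
                         worst_weight=0.3):
    """Toy long-term integration of per-1-second quality scores: an
    exponentially recency-weighted mean blended with the session's worst
    score. NOT the actual ITU-T P.1203.3 integration formula."""
    n = len(per_second_scores)
    # Weights grow toward the end of the session (recency effect).
    weights = [0.5 ** ((n - 1 - i) / recency_half_life_s) for i in range(n)]
    recency_mean = (sum(w * s for w, s in zip(weights, per_second_scores))
                    / sum(weights))
    return ((1 - worst_weight) * recency_mean
            + worst_weight * min(per_second_scores))

# 3 minutes at a steady quality of 4.0 with a mid-session dip to 2.0
# (hypothetical scores on a 5-point scale).
scores = [4.0] * 80 + [2.0] * 10 + [4.0] * 90
print(round(pool_session_quality(scores), 2))
```

Even this crude pooling shows the intended behavior: a short quality dip pulls the session score well below the steady-state level, rather than being averaged away.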

    Marine Viruses 2016

    The research effort, publication rate, and scientific community within the field of marine viruses have been growing rapidly over the past decade, and viruses are now known to play key roles in microbial population dynamics, diversity, and evolution, as well as in biogeochemical cycling. The compilation of papers included in the current Special Issue highlights the exploration of eukaryotic and prokaryotic viruses, from discovery to the complex interplay between virus and host, and virus-host interactions with ecologically relevant environmental variables. The discovery of novel viruses and of new mechanisms underlying virus distribution and diversity exemplifies the fascinating world of marine viruses. The oceans greatly shape Earth's climate, hold 1.37 billion km³ of seawater, produce half of the oxygen in the atmosphere, and are integral to all known life. At a time when life in the oceans is under increasing threat (global warming, pollution, economic use), it is pressing to understand how viruses affect host population dynamics, biodiversity, biogeochemical cycling, and ecosystem efficiency.