
    Description-driven Adaptation of Media Resources

    The current multimedia landscape is characterized by a significant diversity in terms of available media formats, network technologies, and device properties. This heterogeneity has resulted in a number of new challenges, such as providing universal access to multimedia content. A solution to this diversity is the use of scalable bit streams, together with a complementary system that is capable of adapting scalable bit streams to the constraints imposed by a particular usage environment (e.g., the limited screen resolution of a mobile device). This dissertation investigates the use of an XML-driven (Extensible Markup Language) framework for the format-independent adaptation of scalable bit streams. Using this approach, the structure of a bit stream is first translated into an XML description. In the next step, the resulting XML description is transformed to reflect a desired adaptation of the bit stream. Finally, the transformed XML description is used to create an adapted bit stream that is suited for playback in the targeted usage environment. The main contribution of this dissertation is BFlavor, a new tool for exposing the syntax of binary media resources as an XML description. Its development was inspired by two other technologies, i.e., MPEG-21 BSDL (Bitstream Syntax Description Language) and XFlavor (Formal Language for Audio-Visual Object Representation, extended with XML features). Although created from different points of view, both languages offer solutions for translating the syntax of a media resource into an XML representation for further processing. BFlavor (BSDL+XFlavor) harmonizes the two technologies by combining their strengths and eliminating their weaknesses. The expressive power and performance of a BFlavor-based content adaptation chain, compared to tool chains entirely based on either BSDL or XFlavor, were investigated in several experiments. One series of experiments targeted the exploitation of multi-layered temporal scalability in H.264/AVC, paying particular attention to the use of sub-sequences and hierarchical coding patterns, as well as to the use of metadata messages to communicate the bit stream structure to the adaptation logic. BFlavor was the only tool to offer an elegant and practical solution for XML-driven adaptation of H.264/AVC bit streams in the temporal domain.
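
    As a rough illustration of the three-step chain described above (bit stream to XML description, transformation of the description, regeneration of the adapted bit stream), the following Python sketch adapts a toy bit stream in the temporal domain. The element and attribute names are invented for this example and do not correspond to the actual BSDL or BFlavor schemas.

        import xml.etree.ElementTree as ET

        # Drop every syntax unit above the target temporal layer and rebuild the
        # adapted bit stream from the byte ranges kept in the XML description.
        def adapt_temporal(description_xml, source, max_layer):
            root = ET.fromstring(description_xml)
            adapted = bytearray()
            for unit in root.findall("unit"):
                if int(unit.get("temporal_layer")) <= max_layer:
                    start = int(unit.get("start"))
                    length = int(unit.get("length"))
                    adapted += source[start:start + length]
            return bytes(adapted)

        # Toy description with three units at temporal layers 0, 1 and 2.
        desc = ('<bitstream>'
                '<unit temporal_layer="0" start="0" length="4"/>'
                '<unit temporal_layer="1" start="4" length="4"/>'
                '<unit temporal_layer="2" start="8" length="4"/>'
                '</bitstream>')
        print(adapt_temporal(desc, bytes(range(12)), max_layer=1))  # keeps the first 8 bytes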

    Machine Learning for Multimedia Communications

    Machine learning is revolutionizing the way multimedia information is processed and transmitted to users. After intensive and powerful training, impressive efficiency and accuracy improvements have been achieved all over the transmission pipeline. For example, the high model capacity of learning-based architectures enables us to accurately model image and video behavior such that tremendous compression gains can be achieved. Similarly, error concealment, streaming strategies, and even user perception modeling have widely benefited from recent learning-oriented developments. However, learning-based algorithms often imply drastic changes to the way data are represented or consumed, meaning that the overall pipeline can be affected even though only a subpart of it is optimized. In this paper, we review the recent major advances that have been proposed all across the transmission chain, and we discuss their potential impact and the research challenges that they raise.

    Content-Aware Multimedia Communications

    The demands for fast, economical, and reliable dissemination of multimedia information are steadily growing within our society. While people and the economy increasingly rely on communication technologies, engineers still struggle with their growing complexity. Complexity in multimedia communication originates from several sources. The most prominent is the unreliability of packet networks such as the Internet. Recent advances in scheduling and error control mechanisms for streaming protocols have shown that the quality and robustness of multimedia delivery can be improved significantly when protocols are aware of the content they deliver. However, the proposed mechanisms require close cooperation between transport systems and application layers, which increases the overall system complexity. Current approaches also require expensive metrics and focus only on specific encoding formats. A general and efficient model has been missing so far. This thesis presents efficient and format-independent solutions to support cross-layer coordination in system architectures. In particular, the first contribution of this work is a generic dependency model that enables transport layers to access content-specific properties of media streams, such as dependencies between data units and their importance. The second contribution is the design of a programming model for streaming communication and its implementation as a middleware architecture. The programming model hides the complexity of protocol stacks behind simple programming abstractions, but exposes cross-layer control and monitoring options to application programmers. For example, our interfaces allow programmers to choose appropriate failure semantics at design time while refining error protection and the visibility of low-level errors at run time. Based on several examples, we show how our middleware simplifies the integration of stream-based communication into large-scale application architectures. An important result of this work is that, despite cross-layer cooperation, neither application nor transport protocol designers experience an increase in complexity. Application programmers can even reuse existing streaming protocols, which effectively increases system robustness.

    Our society's demand for affordable and reliable communication is growing steadily. While we make ourselves ever more dependent on modern communication technologies, the engineers behind these technologies must both satisfy the demand for rapidly introducing new products and master the growing complexity of the systems. The transmission of multimedia content such as video and audio data, in particular, is not trivial. One of the most prominent reasons is the unreliability of today's networks, such as the Internet: packet losses and fluctuating delays can massively degrade presentation quality. As recent developments in streaming protocols show, however, the quality and robustness of a transmission can be controlled efficiently when streaming protocols exploit information about the content of the data they transport. Existing approaches that describe the content of multimedia data streams are usually specialized to individual compression schemes and use computationally expensive metrics, which considerably reduces their practical value. Moreover, this exchange of information requires close cooperation between applications and transport layers. Since the interfaces of current system architectures are not prepared for this, either the interfaces must be extended or alternative architectural concepts must be devised; the danger of both variants, however, is that the complexity of a system may increase even further. The central goal of this dissertation is therefore to achieve cross-layer coordination while simultaneously reducing complexity. The work makes two contributions to the current state of research. First, it defines a universal model for describing content attributes, such as importance values and dependency relations within a data stream; transport layers can use this knowledge for efficient error control. Second, the work describes the Noja programming model for multimedia middleware. Noja defines abstractions for transmitting and controlling multimedia streams that enable the coordination of streaming protocols with applications. For example, programmers can select appropriate failure semantics and communication topologies and then refine and control the concrete error protection at run time.
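
    A minimal sketch of what such a format-independent dependency model could look like is given below; the field names and the drop policy are illustrative assumptions and do not reproduce the thesis's actual model or the Noja interfaces.

        from dataclasses import dataclass, field

        @dataclass
        class DataUnit:
            uid: int
            importance: float                               # application-assigned importance
            depends_on: list = field(default_factory=list)  # uids this unit requires

        def drop_candidates(units):
            """Units that no other unit depends on, least important first."""
            referenced = {dep for u in units for dep in u.depends_on}
            leaves = [u for u in units if u.uid not in referenced]
            return sorted(leaves, key=lambda u: u.importance)

        # Example: a unit depending on two reference units is the first drop candidate.
        stream = [DataUnit(0, 1.0), DataUnit(1, 0.9, [0]), DataUnit(2, 0.2, [0, 1])]
        print([u.uid for u in drop_candidates(stream)])     # -> [2]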

    Light field coding with field of view scalability and exemplar-based inter-layer prediction

    Light field imaging based on microlens arrays (a.k.a. holoscopic, plenoptic, and integral imaging) has recently emerged as a feasible and promising technology for future image and video applications. However, deploying actual light field applications will require identifying more powerful representations and coding solutions that support the arising new manipulation and interaction functionalities. In this context, this paper proposes a novel scalable coding solution that supports a new type of scalability, referred to as field-of-view scalability. The proposed scalable coding solution comprises a base layer compliant with the High Efficiency Video Coding (HEVC) standard, complemented by one or more enhancement layers that progressively allow richer versions of the same light field content in terms of content manipulation and interaction possibilities. In addition, to achieve high compression performance in the enhancement layers, novel exemplar-based inter-layer coding tools are also proposed, namely: 1) a direct prediction based on exemplar texture samples from lower layers and 2) an inter-layer compensated prediction using a reference picture that is built relying on an exemplar-based algorithm for texture synthesis. Experimental results demonstrate the advantages of the proposed scalable coding solution to cater to users with different preferences/requirements in terms of interaction functionalities, while providing better rate-distortion performance (independently of the optical setup used for acquisition) compared to HEVC and other scalable light field coding solutions in the literature.
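
    The general idea of field-of-view scalability can be illustrated with the sketch below, which assigns the views of a light field grid to a base layer and to progressively wider enhancement layers. The grid size and layer geometry are assumptions for illustration and not the paper's exact configuration.

        import math

        # Assign each view of a grid to a layer: layer 0 is the central block of
        # views, each further layer adds a ring of views and thus a wider field of view.
        def assign_layers(grid_w, grid_h, base_radius, ring):
            cx, cy = (grid_w - 1) / 2, (grid_h - 1) / 2
            layers = {}
            for y in range(grid_h):
                for x in range(grid_w):
                    dist = max(abs(x - cx), abs(y - cy))    # Chebyshev distance to centre
                    extra = dist - base_radius
                    layers[(x, y)] = 0 if extra <= 0 else math.ceil(extra / ring)
            return layers

        layers = assign_layers(9, 9, base_radius=1, ring=2)  # 9x9 grid of views
        print(layers[(4, 4)], layers[(0, 0)])                # -> 0 (centre), 2 (corner)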

    Coded Wireless Video Broadcast/Multicast

    Advancements in video coding, compact media displays, and communication devices, particularly in emerging broadband wireless access networks, have created many foreseeable and exciting applications of video broadcast/multicast over the wireless medium. For efficient and robust wireless video broadcast/multicast under fading, this thesis presents and examines a novel cross-layer framework that exploits the interplay between applying protection to a successively refinable video source and transmitting through a layered broadcast/multicast channel. The framework is realistically achieved and evaluated by using multiple description coding (MDC) on a scalable video source and superposition coding (SPC) for layered broadcast/multicast transmissions. An analytical model based on the total received/recovered video bitstreams from each coded wireless broadcast/multicast signal is developed, which serves as a metric of video quality for system analysis and optimization. An efficient methodology demonstrates that optimal power allocations and modulation selections can be practically determined to improve broadcast/multicast video quality. From the information-theoretic perspective, a general closed-form formula is derived for the end-to-end distortion analysis of the proposed framework, which is applicable to any (n, k) protection code applied to a successively refinable source with a Gaussian distribution over layered Gaussian broadcast channels. The results reveal the scenarios in which the proposed framework leads to lower distortion than a legacy system without any protection. By analyzing the characteristics of the closed-form formula, an efficient O(n log n) algorithm is developed to determine the optimal k values in the (n, k) protection codes that minimize the distortion under the framework. Finally, a cross-layer design of logical SPC modulation is introduced to achieve layered broadcast/multicast for scalable video. It serves as a practical alternative for implementing the proposed framework of coded wireless video broadcast/multicast when a hardware-based SPC component is not available in a wireless system. In summary, the thesis presents comprehensive analyses, simulations, and experiments to understand, investigate, and justify the effectiveness of the proposed cross-layer framework of coded wireless video broadcast/multicast. More importantly, this thesis contributes to the advancement of the related fields of communication engineering and information theory by introducing a new design dimension in terms of protection. This is unique compared to previously reported layered approaches, which often manipulate only conventional parameters such as power and modulation scheme. The impact of this dimension was unapparent in the past, but it is now proven to be an effective means to enable high-quality, efficient, and robust wireless video broadcast/multicast for promising media applications.
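
    The trade-off behind choosing k in an (n, k) protection code can be illustrated with the toy model below: a larger k carries more source packets per layer but is less likely to be recovered under packet loss, and a refinement layer only helps if all lower layers were recovered. The distortion model and the exhaustive search are assumptions for illustration only; they are not the thesis's closed-form analysis or its O(n log n) algorithm.

        from math import comb
        from itertools import product

        def recover_prob(n, k, p_loss):
            """P(at least k of n packets arrive) with i.i.d. loss probability p_loss."""
            return sum(comb(n, r) * (1 - p_loss) ** r * p_loss ** (n - r)
                       for r in range(k, n + 1))

        def expected_distortion(ks, n, p_loss, d0=1.0, gain_per_packet=0.05):
            """Each recovered source packet lowers the distortion, but a layer only
            counts if it and every lower layer were recovered (successive refinement)."""
            d, p_lower = d0, 1.0
            for k in ks:
                p_lower *= recover_prob(n, k, p_loss)
                d -= p_lower * gain_per_packet * k
            return d

        n, p_loss = 10, 0.2
        best = min(product(range(1, n + 1), repeat=2),
                   key=lambda ks: expected_distortion(ks, n, p_loss))
        print(best, round(expected_distortion(best, n, p_loss), 3))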

    Analysis and Comparison of Modern Video Compression Standards for Random-access Light-field Compression

    Light-field (LF) 3D displays are anticipated to be the next-generation 3D displays, providing smooth motion parallax, a wide field of view (FOV), and a higher depth range than current autostereoscopic displays. Projection-based multi-view LF 3D displays bring the desired new functionalities through a set of projection engines that create the light sources forming the continuous light field. Such displays require a high number of perspective views as input to fully exploit the visualization capabilities and viewing angle provided by LF technology. Delivering, processing, and de/compressing this number of views poses significant technical challenges. However, when processing light fields in a distributed system, access patterns in ray space are quite regular: some processing nodes do not need all views, and the necessary views are often used only partially. This trait could be exploited by partially decoding pictures to enable less complex and thus real-time operation. However, none of the recent video coding standards (e.g., the Advanced Video Coding (AVC)/H.264 and High Efficiency Video Coding (HEVC)/H.265 standards) provides partial decoding of video pictures. Such a feature can be achieved by partitioning video pictures into partitions that can be processed independently, at the cost of lowering the compression efficiency. Examples of such partitioning features introduced by modern video coding standards include slices and tiles, which enable random access into video bitstreams with a specific granularity. In addition, some extra requirements have to be imposed on the standard partitioning tools in order for them to be applicable in the context of partial decoding. This leads to so-called self-contained partitions, i.e., isolated or independently decodable regions in the video pictures. This work studies the problem of creating self-contained partitions in the conventional AVC/H.264 and HEVC/H.265 standards, as well as in the multi-view (MV-HEVC) and 3D (3D-HEVC) extensions of HEVC, using slices and tiles, respectively. The requirements that need to be fulfilled in order to build self-contained partitions are described, and an encoder-side solution is proposed. Further, the work examines how slicing/tiling can be used to facilitate random access into video bitstreams, how the number of slices/tiles affects the compression ratio for different prediction structures, and how much effect partial decoding has on decoding time. Overall, the experimental results indicate that the finer the partitioning, the higher the compression loss. The use of self-contained partitions makes the decoding operation very efficient and less complex.
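
    The benefit of self-contained partitions for partial decoding can be sketched conceptually: given a tile grid and a requested picture region, only the overlapping tiles have to be decoded. The helper below is an illustrative assumption, not an actual AVC/HEVC decoder interface.

        # Given a tile grid and a requested region (x, y, w, h) in pixels, return the
        # (column, row) indices of the tiles that actually have to be decoded.
        def tiles_for_region(pic_w, pic_h, tile_cols, tile_rows, region):
            x, y, w, h = region
            tile_w, tile_h = pic_w / tile_cols, pic_h / tile_rows
            cols = range(int(x // tile_w), min(tile_cols, int((x + w - 1) // tile_w) + 1))
            rows = range(int(y // tile_h), min(tile_rows, int((y + h - 1) // tile_h) + 1))
            return [(c, r) for r in rows for c in cols]

        # A 4K picture split into a 4x2 tile grid: a 960x540 window needs 1 of 8 tiles.
        needed = tiles_for_region(3840, 2160, 4, 2, region=(0, 0, 960, 540))
        print(needed, f"{len(needed)}/8 tiles decoded")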