
    MIJ2K: Enhanced video transmission based on conditional replenishment of JPEG2000 tiles with motion compensation

    A video compressed as a sequence of JPEG2000 images can achieve the scalability, flexibility, and accessibility that is lacking in current predictive motion-compensated video coding standards. However, streaming JPEG2000-based sequences would consume considerably more bandwidth. With the aim of solving this problem, this paper describes a new patent-pending method, called MIJ2K. MIJ2K reduces the inter-frame redundancy present in common JPEG2000 sequences (also called MJP2). We apply a real-time motion detection system to perform conditional tile replenishment, which significantly reduces the bit rate necessary to transmit JPEG2000 video sequences while also improving their quality. The MIJ2K technique can be used both to improve JPEG2000-based real-time video streaming services and as a new codec for video storage. MIJ2K relies on a fast motion compensation technique, especially designed for real-time video streaming purposes. In particular, we propose transmitting only the tiles that change in each JPEG2000 frame. This paper describes and evaluates the method proposed for real-time tile change detection, as well as the overall MIJ2K architecture. We compare MIJ2K against other intra-frame codecs, such as standard Motion JPEG2000, Motion JPEG, and the latest H.264-Intra, comparing performance in terms of compression ratio and video quality, measured by standard peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and visual quality metrics. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
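
As a rough illustration of the conditional tile replenishment idea described above, the sketch below splits each frame into fixed-size tiles and marks for re-transmission only those whose content changed noticeably since the previous frame. It is a minimal sketch, not the patented MIJ2K detector: the tile size, the mean-absolute-difference test, and the threshold value are all assumptions made for illustration.

```python
import numpy as np

def changed_tiles(prev_frame, curr_frame, tile_size=64, threshold=2.0):
    """Return (row, col) indices of tiles whose content changed enough to
    warrant re-encoding and re-transmission (illustrative criterion only)."""
    h, w = curr_frame.shape[:2]
    changed = []
    for ty in range(0, h, tile_size):
        for tx in range(0, w, tile_size):
            prev_tile = prev_frame[ty:ty + tile_size, tx:tx + tile_size].astype(np.float32)
            curr_tile = curr_frame[ty:ty + tile_size, tx:tx + tile_size].astype(np.float32)
            if np.mean(np.abs(curr_tile - prev_tile)) > threshold:
                changed.append((ty // tile_size, tx // tile_size))
    return changed

# Only the changed tiles would be JPEG2000-encoded and sent; the receiver
# reuses its previously decoded tiles for every other position.
```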

    A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks

    Visual sensor networks (VSNs), composed of battery-operated electronic devices endowed with low-resolution cameras, have expanded the applicability of a series of monitoring applications. These sensors are interconnected by ad hoc, error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay, and packet error rates. In such a context, multimedia coding is required for data compression and error resilience, while also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSN applications by disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation, and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks.
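
To make the cross-layer idea concrete, here is a minimal sketch of one pattern such surveys cover: an application-layer video encoder adjusting its target bit rate from link-layer feedback instead of treating the layers as opaque. The feedback fields, thresholds, and the AIMD-style policy are hypothetical and only illustrate the general mechanism, not any specific scheme from the survey.

```python
def adjust_encoder_rate(current_kbps, link_feedback, min_kbps=64, max_kbps=512):
    """Cross-layer rate adjustment sketch: the encoder backs off when the
    MAC/link layer reports loss or queue build-up, and probes upward otherwise."""
    loss = link_feedback["packet_loss"]        # fraction of packets lost (hypothetical field)
    queue = link_feedback["queue_occupancy"]   # 0.0 (empty) .. 1.0 (full) (hypothetical field)

    if loss > 0.05 or queue > 0.8:
        target = current_kbps * 0.75           # multiplicative decrease under congestion
    else:
        target = current_kbps + 16             # cautious additive increase
    return int(max(min_kbps, min(max_kbps, target)))

# Example: the encoder is asked to drop from 320 kbps after a lossy report.
print(adjust_encoder_rate(320, {"packet_loss": 0.08, "queue_occupancy": 0.4}))  # -> 240
```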

    Video traffic modeling and delivery

    Video is becoming a major component of network traffic, and thus there has been great interest in modeling video traffic. It is known that video traffic possesses short-range dependence (SRD) and long-range dependence (LRD) properties, which can drastically affect network performance. By decomposing a video sequence into three parts according to its motion activity, a Markov-modulated self-similar process model is first proposed to capture the autocorrelation function (ACF) characteristics of MPEG video traffic. Furthermore, a generalized Beta distribution is proposed to model the probability density functions (PDFs) of MPEG video traffic. It is observed that the ACF of MPEG video traffic fluctuates around three envelopes, reflecting the fact that different coding methods reduce the data dependency by different amounts. This observation has led to a more accurate model, the structurally modulated self-similar process model, which captures the ACF of the traffic, both SRD and LRD, by exploiting the MPEG structure. This model is subsequently simplified by simply modulating three self-similar processes, resulting in a much simpler model with the same accuracy as the structurally modulated self-similar process model. To justify the validity of the proposed models for video transmission, the cell loss ratios (CLRs) of a server with a limited buffer size driven by the empirical trace are compared to those driven by the proposed models. The differences are within one order of magnitude, which is hardly achievable by other models, even for the case of JPEG video traffic. In the second part of this dissertation, two dynamic bandwidth allocation algorithms are proposed, for pre-recorded and real-time video delivery, respectively. One is based on scene change identification, and the other is based on frame differences. The proposed algorithms can increase bandwidth utilization by a factor of two to five, as compared to the constant bit rate (CBR) service using peak rate assignment.
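
For background, the two standard building blocks referred to above can be written as follows; the exact parameterization used in the dissertation may differ. The first is the autocorrelation of a second-order self-similar process with Hurst parameter H, whose slow decay for H > 1/2 captures LRD; the second is the four-parameter generalized Beta density on a bounded support [a, b].

```latex
% Autocorrelation of an exactly second-order self-similar process with
% Hurst parameter 1/2 < H < 1 (H > 1/2 yields the slowly decaying ACF of LRD):
r(k) = \tfrac{1}{2}\left[(k+1)^{2H} - 2k^{2H} + (k-1)^{2H}\right], \qquad k \ge 1

% Four-parameter generalized Beta density on the support [a, b]:
f(x) = \frac{(x-a)^{p-1}(b-x)^{q-1}}{B(p,q)\,(b-a)^{p+q-1}}, \qquad a \le x \le b
```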

    Resource-Constrained Low-Complexity Video Coding for Wireless Transmission


    MIJ2K Optimization using evolutionary multiobjective optimization algorithms

    This paper deals with the multiobjective definition of video compression and its optimization. The optimization is performed using NSGA-II, a well-tested and highly accurate algorithm with a high convergence speed, developed for solving multiobjective problems. Video compression is defined as a problem with two competing objectives: quality maximization and compression ratio maximization. Instead of a single optimal solution, we try to find a set of optimal, so-called Pareto-optimal, solutions. The optimization is applied to a new patent-pending codec, called MIJ2K, also outlined in this paper. Video is compressed with the MIJ2K codec on several classic test sequences commonly used for performance measurement, selected from the Xiph.org Foundation repository. The result of the optimization is a set of near-optimal encoder parameters. We also present the convergence of NSGA-II with different encoder parameters and discuss the suitability of MOEAs, as opposed to classical search-based techniques, in this field. This work was supported in part by Projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM MADRINET S-0505/TIC/0255 and DPS2008-07029-C02-02.
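
The notion of Pareto optimality that NSGA-II searches for can be illustrated in a few lines: a candidate encoder setting is kept only if no other setting is at least as good in both quality and compression ratio and strictly better in one. The sketch below is not NSGA-II itself (which adds non-dominated sorting, crowding distance, and genetic operators), and the example settings and numbers are hypothetical.

```python
def pareto_front(candidates):
    """Return the non-dominated (Pareto-optimal) encoder settings.
    Each candidate is (params, quality, compression_ratio); both
    objectives are to be maximized, as in the MIJ2K formulation."""
    front = []
    for i, (_, q_i, c_i) in enumerate(candidates):
        dominated = any(
            (q_j >= q_i and c_j >= c_i) and (q_j > q_i or c_j > c_i)
            for j, (_, q_j, c_j) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append(candidates[i])
    return front

# Hypothetical (tile_size, quality_layers) settings with measured
# PSNR (dB) and compression ratio.
settings = [
    (("64x64", 5), 38.2, 40.0),
    (("128x128", 3), 36.5, 55.0),
    (("32x32", 8), 39.0, 22.0),
    (("64x64", 3), 36.0, 35.0),   # dominated by the second entry
]
print(pareto_front(settings))
```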

    Algorithms for compression of high dynamic range images and video

    Recent advances in sensor and display technologies have brought about High Dynamic Range (HDR) imaging capability. Modern multiple-exposure HDR sensors can achieve a dynamic range of 100-120 dB, and LED and OLED display devices have contrast ratios of 10^5:1 to 10^6:1. Despite these advances, image/video compression algorithms and the associated hardware are still based on Standard Dynamic Range (SDR) technology, i.e. they operate within an effective dynamic range of up to 70 dB for 8-bit gamma-corrected images. Furthermore, the existing infrastructure for content distribution is also designed for SDR, which creates interoperability problems with true HDR capture and display equipment. Current solutions to this problem include tone mapping the HDR content to fit SDR. However, this approach leads to image quality problems when strong dynamic range compression is applied. Even though some HDR-only solutions have been proposed in the literature, they are not interoperable with the current SDR infrastructure and are thus typically used in closed systems. Given these observations, a research gap was identified: the need for efficient algorithms for the compression of still images and video that are capable of storing the full dynamic range and colour gamut of HDR images while remaining backward compatible with existing SDR infrastructure. To improve the usability of the SDR content, it is vital that any such algorithms accommodate different tone mapping operators, including those that are spatially non-uniform. In the course of the research presented in this thesis, a novel two-layer codec architecture is introduced for both HDR image and video coding. Furthermore, a universal and computationally efficient approximation of the tone mapping operator is developed and presented. It is shown that the use of perceptually uniform colourspaces for the internal representation of pixel data enables improved compression efficiency. The proposed novel approaches to the compression of metadata for the tone mapping operator are further shown to improve compression performance for low-bitrate video content. Multiple compression algorithms are designed, implemented and compared, and quality-complexity trade-offs are identified. Finally, practical aspects of implementing the developed algorithms are explored by automating the design-space exploration flow and integrating the high-level systems design framework with domain-specific tools for synthesis and simulation of multiprocessor systems. Directions for further work are also presented.
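
A minimal sketch of the backward-compatible two-layer idea: the base layer is an 8-bit tone-mapped image that a legacy SDR decoder can use directly, and the enhancement layer is the residual needed to recover the HDR signal from an inverse tone mapping of that base. The global gamma-style tone mapping used here is an assumed stand-in; the thesis develops a universal approximation that also handles spatially non-uniform operators.

```python
import numpy as np

def two_layer_encode(hdr, gamma=2.2, peak=1000.0):
    """Simplified backward-compatible HDR coding sketch.
    Base layer: an 8-bit tone-mapped (normalize + gamma) SDR image any legacy
    decoder can display. Enhancement layer: the residual needed to rebuild the
    HDR signal from an inverse tone mapping of the base."""
    sdr = np.clip(hdr / peak, 0.0, 1.0) ** (1.0 / gamma)        # forward tone mapping (assumed)
    base = np.round(sdr * 255.0).astype(np.uint8)               # SDR base layer

    recon = (base.astype(np.float32) / 255.0) ** gamma * peak   # inverse tone mapping at decoder
    residual = hdr - recon                                      # enhancement layer
    return base, residual

def two_layer_decode(base, residual, gamma=2.2, peak=1000.0):
    recon = (base.astype(np.float32) / 255.0) ** gamma * peak
    return recon + residual                                     # full HDR reconstruction
```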

    Video quality for video analysis

    Ph.D. thesis.

    Receiver-Driven Video Adaptation

    In the span of a single generation, video technology has made an incredible impact on daily life. Modern use cases for video are wildly diverse, including teleconferencing, live streaming, virtual reality, home entertainment, social networking, surveillance, body cameras, cloud gaming, and autonomous driving. As these applications continue to grow more sophisticated and heterogeneous, a single representation of video data can no longer satisfy all receivers. Instead, the initial encoding must be adapted to each receiver's unique needs. Existing adaptation strategies are fundamentally flawed, however, because they discard the video's initial representation and force the content to be re-encoded from scratch. This process is computationally expensive, does not scale well with the number of videos produced, and throws away important information embedded in the initial encoding. Therefore, a compelling need exists for the development of new strategies that can adapt video content without fully re-encoding it. To better support the unique needs of smart receivers, diverse displays, and advanced applications, general-use video systems should produce and offer receivers a more flexible compressed representation that supports top-down adaptation strategies from an original, compressed-domain ground truth. This dissertation proposes an alternate model for video adaptation that addresses these challenges. The key idea is to treat the initial compressed representation of a video as the ground truth, and allow receivers to drive adaptation by dynamically selecting which subsets of the captured data to receive. In support of this model, three strategies for top-down, receiver-driven adaptation are proposed. First, a novel, content-agnostic entropy coding technique is implemented in which symbols are selectively dropped from an input abstract symbol stream based on their estimated probability distributions to hit a target bit rate. Receivers are able to guide the symbol dropping process by supplying the encoder with a rate controller algorithm that fits their application needs and available bandwidths. Next, a domain-specific adaptation strategy is implemented for H.265/HEVC coded video in which the prediction data from the original source is reused directly in the adapted stream, but the residual data is recomputed as directed by the receiver. By tracking the changes made to the residual, the encoder can compensate for decoder drift to achieve near-optimal rate-distortion performance. Finally, a fully receiver-driven strategy is proposed in which the syntax elements of a pre-coded video are cataloged and exposed directly to clients through an HTTP API. Instead of requesting the entire stream at once, clients identify the exact syntax elements they wish to receive using a carefully designed query language. Although an implementation of this concept is not provided, an initial analysis shows that such a system could save bandwidth and computation when used by certain targeted applications.
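
The first strategy (probability-driven symbol dropping) can be sketched in a few lines: estimate each symbol's entropy-coded cost as -log2(p) and drop symbols until the stream fits a target bit budget. In the dissertation the dropping policy is supplied by the receiver's rate controller; the greedy highest-cost-first rule, alphabet, and probabilities below are assumptions made for illustration.

```python
import math

def drop_to_budget(symbols, probs, target_bits):
    """Sketch of probability-driven symbol dropping: estimate each symbol's
    entropy-coded cost as -log2(p) and greedily drop the most expensive
    (least probable) symbols until the stream fits the target bit budget."""
    costs = [-math.log2(probs[s]) for s in symbols]
    total = sum(costs)
    keep = [True] * len(symbols)
    # Drop symbols in order of decreasing estimated cost until under budget.
    for i in sorted(range(len(symbols)), key=lambda i: costs[i], reverse=True):
        if total <= target_bits:
            break
        keep[i] = False
        total -= costs[i]
    return [s for s, k in zip(symbols, keep) if k]

# Example with a hypothetical symbol alphabet and probability model.
probs = {"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}
stream = ["A", "B", "A", "C", "D", "A", "B", "D"]
print(drop_to_budget(stream, probs, target_bits=10.0))  # the costly "D" symbols are dropped
```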

    Network streaming and compression for mixed reality tele-immersion

    Bulterman, D.C.A. [Promotor]; Cesar, P.S. [Copromotor]