    Mesh-based video coding for low bit-rate communications

    In this paper, a new method for low bit-rate content-adaptive mesh-based video coding is proposed. Intra-frame coding employs feature-map extraction at specific threshold levels to place initial nodes densely in regions containing high-frequency features and, conversely, sparsely in smooth regions. Insignificant nodes are then largely removed by a node elimination scheme. The Hilbert scan is applied before quantization and entropy coding to reduce the amount of transmitted information. For moving images, only a subset of nodes changes its position and color parameters from frame to frame, so it is sufficient to transmit these changed parameters alone. The proposed method is well suited to video coding at very low bit rates: processing results demonstrate that it provides good subjective and objective image quality with fewer required bits.
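
    The abstract names the Hilbert scan but not its mechanics. As a minimal sketch (standard bitwise Hilbert indexing; the 16x16 grid order and node list are hypothetical), mesh nodes could be ordered along the curve so that spatially adjacent nodes stay adjacent in the coded stream, keeping the differences fed to the quantizer and entropy coder small:

```python
def hilbert_index(order, x, y):
    """Map (x, y) on a 2**order x 2**order grid to the point's 1-D
    position along the Hilbert curve (classic bitwise formulation)."""
    n = 1 << order
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:               # rotate/reflect the sub-square
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

# Hypothetical mesh nodes; after sorting, consecutive nodes are spatial
# neighbours, so delta-coding their parameters yields small residuals.
nodes = [(3, 7), (12, 2), (5, 5), (9, 14)]
nodes.sort(key=lambda p: hilbert_index(4, *p))
```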

    Management of multimedia resources: from a generic information model to its application to an MPEG2 video codec

    New open service architectures provide a management framework for telecommunications services, telecommunications networks, and computing resources. However, the introduction of multimedia applications into these architectures will require the management of the underlying multimedia resources (e.g., codecs and converters), the basic components that support multimedia communications. In this paper, we tackle this issue by proposing a generic management information model for multimedia resources and then instantiating it for the management of an MPEG2 video codec. This information model provides a data representation of the multimedia resources so that they can be managed efficiently.
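
    The abstract does not reproduce the model itself. As a loose illustration only (every class and attribute name below is hypothetical, not taken from the paper), a generic managed-resource type specialized for an MPEG2 codec could be rendered as:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class MultimediaResource:
    """Generic managed multimedia resource (illustrative attributes)."""
    resource_id: str
    resource_type: str                       # e.g. "codec", "converter"
    operational_state: str = "disabled"      # "enabled" / "disabled"
    parameters: Dict[str, Any] = field(default_factory=dict)

    def set_parameter(self, name: str, value: Any) -> None:
        self.parameters[name] = value

@dataclass
class Mpeg2VideoCodec(MultimediaResource):
    """Instantiation of the generic model for an MPEG2 video codec."""
    resource_type: str = "codec"
    profile: str = "Main"
    level: str = "Main"

codec = Mpeg2VideoCodec(resource_id="codec-1")
codec.set_parameter("bit_rate_bps", 4_000_000)   # managed parameters
codec.set_parameter("gop_length", 12)
```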

    A sequential algorithm for training the SOM prototypes based on higher-order recursive equations

    A novel training algorithm is proposed for the formation of Self-Organizing Maps (SOMs). In the proposed model, the weights are updated incrementally using a higher-order difference equation that implements a low-pass digital filter. By suitably designing this filter, selected features of the self-organization process can be improved with respect to the basic SOM. Moreover, new visualization tools can be derived from this model for cluster visualization and for monitoring the quality of the map.
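
    The paper's exact recursion is not given in the abstract. The sketch below uses a momentum-style second-order difference equation as one concrete instance of a higher-order low-pass update (all hyperparameters are assumptions); with mu = 0 it reduces to the classic first-order SOM update:

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=30,
              alpha0=0.5, sigma0=3.0, mu=0.5, seed=0):
    """SOM training with a second-order (momentum-style) weight recursion:
    d[t] = mu*d[t-1] + alpha*h*(x - w);  w[t] = w[t-1] + d[t]."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    units = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    w = rng.uniform(data.min(0), data.max(0), (rows * cols, data.shape[1]))
    d = np.zeros_like(w)
    t, n_steps = 0, epochs * len(data)
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / n_steps
            alpha = alpha0 * (1.0 - frac)             # decaying learning rate
            sigma = max(sigma0 * (1.0 - frac), 0.5)   # shrinking neighbourhood
            bmu = np.argmin(((w - x) ** 2).sum(1))    # best-matching unit
            g2 = ((units - units[bmu]) ** 2).sum(1)   # squared grid distances
            h = np.exp(-g2 / (2 * sigma ** 2))[:, None]
            d = mu * d + alpha * h * (x - w)          # higher-order recursion
            w += d
            t += 1
    return w.reshape(rows, cols, -1)

# Example: organize 2-D points drawn from three clusters (illustrative).
pts = np.random.default_rng(1).normal([[0, 0], [4, 4], [0, 4]], 0.3,
                                      (300, 3, 2)).reshape(-1, 2)
prototypes = train_som(pts)
```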

    Multimedia resources: An information model and its application to an MPEG2 video codec

    Still today, diagnosing a problem with multimedia resources, such as video and sound cards, is insufficiently automated, so these resources cannot be accurately managed. One reason for this is the lack of thorough modeling of such resources. In this paper, we fill this need by proposing a generic information model, which we then apply to an MPEG2 video codec. We highlight the main characteristics of this kind of codec, identify the parameters that influence these characteristics, and reveal some of the trade-offs that the application developer can consider in order to design efficient software for MPEG2 codecs. Beyond its benefits for the user and the application developer, we also show how useful the model can be for providers of distribution services, such as live video transmission, who can use it to achieve resource management on an end-to-end basis.
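
    As a purely illustrative rendering of the parameter/characteristic relationships mentioned above (the names and trade-off descriptions below are hypothetical, not taken from the paper):

```python
# Hypothetical parameter/characteristic map for an MPEG2 codec, in the
# spirit of the information model above.
MPEG2_TRADEOFFS = {
    "gop_length":          {"affects": ["compression_ratio", "random_access_delay"],
                            "increasing_it": "better compression, slower seeking"},
    "motion_search_range": {"affects": ["picture_quality", "cpu_load"],
                            "increasing_it": "higher quality, higher CPU load"},
    "bit_rate":            {"affects": ["picture_quality", "network_bandwidth"],
                            "increasing_it": "higher quality, more bandwidth"},
}

def characteristics_affected_by(parameter):
    """Which codec characteristics does a given parameter influence?"""
    return MPEG2_TRADEOFFS[parameter]["affects"]

print(characteristics_affected_by("gop_length"))
```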

    Output Filter Aware Optimization of the Noise Shaping Properties of ΔΣ Modulators via Semi-Definite Programming

    The Noise Transfer Function (NTF) of ΔΣ modulators is typically designed after the features of the input signal. We suggest that in many applications, notably those involving D/D and D/A conversion or actuation, the NTF should instead be shaped after the properties of the output/reconstruction filter. To this aim, we propose a framework for optimal design based on the Kalman-Yakubovich-Popov (KYP) lemma and semi-definite programming. Some examples illustrate how, in practical cases, the proposed strategy can outperform more standard approaches.
    Comment: 14 pages, 18 figures, journal. Code accompanying the paper is available at http://pydsm.googlecode.co
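
    The paper's KYP-lemma formulation yields an exact SDP over all frequencies; the linked pydsm package implements that method. As a rough stand-in only, the sketch below solves a frequency-sampled convex surrogate with cvxpy (the FIR NTF order, output filter, and gain bound are all assumptions):

```python
import numpy as np
import cvxpy as cp
from scipy.signal import butter, freqz

# Hypothetical output/reconstruction filter: 2nd-order Butterworth low-pass.
bf, af = butter(2, 0.1)
w = np.linspace(1e-3, np.pi, 512)          # frequency grid (rad/sample)
_, F = freqz(bf, af, worN=w)               # filter response on the grid

n = 12                                     # FIR NTF order (assumption)
h = cp.Variable(n)                         # free taps; leading tap fixed to 1
E = np.exp(-1j * np.outer(w, np.arange(1, n + 1)))
H = 1 + E @ h                              # NTF response, affine in h

prob = cp.Problem(
    cp.Minimize(cp.norm(cp.multiply(F, H), 2)),   # noise power after the filter
    [cp.abs(H) <= 1.5],                           # Lee-criterion-style bound
)
prob.solve()
print("filtered-noise norm:", prob.value)
```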

    Adaptive design of delta sigma modulators

    In this thesis, a genetic algorithm based on differential evolution (DE) is used to generate delta-sigma modulator (DSM) noise transfer functions (NTFs). These NTFs outperform those generated by an iterative approach described by Schreier and implemented in the delsig Matlab toolbox. Several lowpass and bandpass DSMs, as well as DSMs designed specifically for very low intermediate frequency (VLIF) receivers, are designed using the algorithm developed in this thesis and compared to designs made using the delsig toolbox. The NTFs designed using the DE algorithm always have a higher dynamic range and signal-to-noise ratio than those designed using the delsig toolbox.
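
    This is not the thesis's algorithm, but the idea can be illustrated with scipy's stock differential_evolution: optimize the angles of two conjugate NTF zero pairs and a shared pole radius to minimize in-band quantization noise under a gain bound (OSR, order, and penalty weight are assumptions):

```python
import numpy as np
from scipy.optimize import differential_evolution

OSR = 32                                   # oversampling ratio (assumption)
wb = np.pi / OSR                           # signal band edge, lowpass DSM
w_in = np.linspace(1e-4, wb, 256)          # in-band frequency grid
w_all = np.linspace(0.0, np.pi, 1024)      # full band, for the gain bound

def ntf_mag(params, w):
    """|NTF| for conjugate zero pairs at angles th1, th2 on the unit
    circle, with matching poles shrunk to radius r (keeps the NTF stable)."""
    th1, th2, r = params
    z = np.exp(1j * w)
    H = np.ones_like(z)
    for th in (th1, th2):
        H *= (1 - np.exp(1j * th) / z) * (1 - np.exp(-1j * th) / z)
        H /= (1 - r * np.exp(1j * th) / z) * (1 - r * np.exp(-1j * th) / z)
    return np.abs(H)

def cost(params):
    inband = (ntf_mag(params, w_in) ** 2).mean() * wb    # in-band noise power
    peak = ntf_mag(params, w_all).max()
    return inband + 1e3 * max(0.0, peak - 1.5) ** 2      # Lee-style |NTF|<=1.5

res = differential_evolution(cost, bounds=[(0, wb), (0, wb), (0.3, 0.95)],
                             seed=1, tol=1e-9)
print("zero angles and pole radius:", res.x, "cost:", res.fun)
```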

    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research demonstrating that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops), in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
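
    The two-stage framework described above (randomized range finding, then deterministic factorization of the compressed matrix) fits in a few lines of NumPy; the oversampling p and power-iteration count q below are conventional choices, not values from the paper:

```python
import numpy as np

def randomized_svd(A, k, p=10, q=2, seed=0):
    """Two-stage randomized SVD in the style of the paper's framework.
    Stage A samples the range of A with a Gaussian test matrix (k + p
    columns; q power iterations help when the spectrum decays slowly).
    Stage B factors the small compressed matrix deterministically."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                     # orthonormal range basis
    for _ in range(q):                                 # optional power iterations
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                                        # compress to the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)  # deterministic stage
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# Example: recover a rank-5 matrix buried in noise (sizes are illustrative).
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 5)) @ rng.standard_normal((5, 500))
A += 0.01 * rng.standard_normal(A.shape)
U, s, Vt = randomized_svd(A, k=5)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```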

    Layer Selection in Progressive Transmission of Motion-Compensated JPEG2000 Video

    MCJ2K (Motion-Compensated JPEG2000) is a video codec based on MCTF (Motion-Compensated Temporal Filtering) and J2K (JPEG2000). MCTF analyzes a sequence of images, generating a collection of temporal sub-bands, which are compressed with J2K. The R/D (Rate-Distortion) performance of MCJ2K is better than that of the MJ2K (Motion JPEG2000) extension, especially when there is a high level of temporal redundancy. MCJ2K codestreams can be served by standard JPIP (J2K Interactive Protocol) servers, thanks to the use of only standard J2K file formats. In bandwidth-constrained scenarios, an important issue in MCJ2K is determining the amount of data of each temporal sub-band that must be transmitted to maximize the quality of the reconstructions at the client side. To solve this problem, we have proposed two rate-allocation algorithms which provide reconstructions that are progressive in quality. The first, OSLA (Optimized Sub-band Layers Allocation), determines the best progression of quality layers but is computationally expensive. The second, ESLA (Estimated-Slope sub-band Layers Allocation), is sub-optimal in most cases but much faster and more convenient for real-time streaming scenarios. An experimental comparison shows that, even when a straightforward motion compensation scheme is used, the R/D performance of MCJ2K is competitive not only with MJ2K but also with other standard scalable video codecs.
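
    The estimated-slope idea can be illustrated with a greedy allocator: send quality layers in decreasing order of estimated distortion reduction per byte until the bandwidth budget is spent. The data layout and numbers below are hypothetical, and the single greedy pass assumes slopes decrease within each sub-band (convex R/D curves) so layer prerequisites are always met:

```python
def allocate_layers(subbands, budget_bytes):
    """Greedy slope-based layer selection. subbands maps a sub-band name to
    its quality layers, each a (size_bytes, distortion_reduction) pair in
    progressive order."""
    candidates = []
    for name, layers in subbands.items():
        for idx, (size, d_red) in enumerate(layers):
            candidates.append((d_red / size, name, idx, size))
    candidates.sort(reverse=True)            # best distortion-per-byte first
    sent = {name: 0 for name in subbands}    # next layer due per sub-band
    plan, used = [], 0
    for slope, name, idx, size in candidates:
        # A layer may only be sent after the earlier layers of its sub-band.
        if idx == sent[name] and used + size <= budget_bytes:
            plan.append((name, idx))
            sent[name] += 1
            used += size
    return plan, used

# Hypothetical sub-band layers (bytes, distortion reduction) and budget.
subbands = {"L2": [(900, 40.0), (700, 12.0)],
            "H1": [(600, 18.0), (500, 6.0)],
            "H0": [(400, 9.0)]}
print(allocate_layers(subbands, budget_bytes=2000))
```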