2,155 research outputs found

    On the architecture of H.264 to H.264 homogeneous transcoding platform

    Get PDF
    2007-2008 > Academic research: refereed > Invited conference paper > Version of Record > Published


    Full text link

    Coding of synthetic aperture radar data

    Get PDF

    Digital audio watermarking for broadcast monitoring and content identification

    Get PDF
    Copyright legislation was prompted exactly 300 years ago by a desire to protect authors against exploitation of their work by others. With regard to modern content owners, Digital Rights Management (DRM) issues have become very important since the advent of the Internet. Piracy, or illegal copying, costs content owners billions of dollars every year. DRM is just one tool that can assist content owners in exercising their rights. Two categories of DRM technologies have evolved in digital signal processing recently, namely digital fingerprinting and digital watermarking. One area of Copyright that is consistently overlooked in DRM developments is 'Public Performance'. The research described in this thesis analysed the administration of public performance rights within the music industry in general, with specific focus on the collective rights and broadcasting sectors in Ireland. Limitations in the administration of artists' rights were identified. The impact of these limitations on the careers of developing artists was evaluated. A digital audio watermarking scheme is proposed that would meet the requirements of both the broadcast and collective rights sectors. The goal of the scheme is to embed a standard identifier within an audio signal via modification of its spectral properties in such a way that it would be robust and perceptually transparent. Modification of the audio signal spectrum was attempted in a variety of ways. A method based on a super-resolution frequency identification technique was found to be most effective. The watermarking scheme was evaluated for robustness and found to be extremely effective in recovering embedded watermarks in music signals using a semi-blind decoding process. The final digital audio watermarking algorithm proposed facilitates the development of applications in broadcast monitoring for equitable royalty distribution, along with extension to other domains.
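A minimal sketch of spectral-domain watermark embedding of the kind this abstract describes, assuming a simple one-bit-per-frame scheme that scales a single FFT bin; the thesis's actual super-resolution method is more sophisticated, and all parameter choices here are illustrative:

```python
import numpy as np

def embed_watermark(signal, bits, frame_len=4096, strength=0.02):
    """Embed one bit per frame by scaling the magnitude of a single
    FFT bin: a 1 boosts it slightly, a 0 attenuates it. The bin
    index, frame length and strength are illustrative assumptions,
    not values from the thesis."""
    out = np.asarray(signal, dtype=float).copy()
    bin_idx = frame_len // 8  # arbitrary mid-band bin (assumption)
    for i, bit in enumerate(bits):
        start = i * frame_len
        frame = out[start:start + frame_len]
        if len(frame) < frame_len:
            break
        spec = np.fft.rfft(frame)
        spec[bin_idx] *= (1.0 + strength) if bit else (1.0 - strength)
        out[start:start + frame_len] = np.fft.irfft(spec, n=frame_len)
    return out

def detect_watermark(original, marked, n_bits, frame_len=4096):
    """Semi-blind detection: compare the watermarked bin's magnitude
    in each frame against the same bin in the reference signal."""
    bin_idx = frame_len // 8
    bits = []
    for i in range(n_bits):
        start = i * frame_len
        ref = np.fft.rfft(original[start:start + frame_len])
        got = np.fft.rfft(marked[start:start + frame_len])
        bits.append(1 if abs(got[bin_idx]) > abs(ref[bin_idx]) else 0)
    return bits
```

A perceptually transparent scheme would pick bins and strengths using a psychoacoustic model rather than a fixed bin; the sketch only shows the embed/detect symmetry.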

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Get PDF
    Bibliography: p. 208-225. Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
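The block-matching step at the heart of fractal coding can be sketched as a least-squares fit of an affine intensity map s·D + o for each range block over a pool of domain blocks. This is a generic illustration of the technique, not the dissertation's encoder; spatial symmetries, domain downsampling and block partitioning are omitted:

```python
import numpy as np

def encode_block(range_block, domain_pool):
    """For one range block R, search the domain pool for the block D
    and affine intensity map s*D + o minimising ||s*D + o - R||^2 --
    the core matching step of fractal block coding."""
    r = np.ravel(range_block).astype(float)
    best = None
    for idx, D in enumerate(domain_pool):
        d = np.ravel(D).astype(float)
        var = d.var()
        # closed-form least-squares fit of r ~ s*d + o
        s = np.cov(d, r, bias=True)[0, 1] / var if var > 0 else 0.0
        o = r.mean() - s * d.mean()
        err = np.sum((s * d + o - r) ** 2)
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best[1:]  # (domain index, scale, offset)

def decode_block(domain_pool, code):
    """Reconstruct a range block from its (index, scale, offset) code."""
    idx, s, o = code
    return s * np.asarray(domain_pool[idx], dtype=float) + o
```

The vector-quantisation analogy is visible here: the domain pool plays the role of a codebook extracted from the image itself, which is exactly why "self-affinity" governs how efficient the representation can be.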

    Fast imaging in non-standard X-ray computed tomography geometries

    Get PDF

    Design, Modeling, and Measurement of a Metamaterial Electromagnetic Field Concentrator

    Get PDF
    This document addresses the need to improve the design process for creating an optimized metamaterial. In particular, two challenges are addressed: creating an electromagnetic concentrator and optimizing the design of the metamaterial used to create the electromagnetic concentrator. The first challenge is addressed by developing an electromagnetic field concentrator from a design of concentric geometric shapes. The material forming the concentrator is derived from the application of transformation optics. The resulting anisotropic, spatially variant constitutive parameter tensors are then approximated with metamaterial inclusions using the combination of an AFIT rapid metamaterial design process and a design process created for rapid metamaterial production. The second challenge of optimizing the design of the metamaterial is addressed by considering factors such as circuit board selection, various sets of metamaterial cell geometry combinations, and optimization of the ratio of the widths for the concentric geometric shapes. The resulting optimized design is simulated and shown to compress and concentrate the vertical electric field component of incident plane waves. A physical device is constructed based on the simulations and tested to confirm the entire design process. Experimental data do not definitively show concentration; however, the optimized design process has been demonstrated.
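The transformation-optics step that yields the anisotropic constitutive tensors follows the standard rule eps' = A eps Aᵀ / det(A), where A is the Jacobian of the coordinate transformation; a minimal sketch (the concentrator's specific coordinate map is not reproduced here):

```python
import numpy as np

def transformed_tensor(jacobian, eps=None):
    """Standard transformation-optics rule: the permittivity of the
    transformed medium is eps' = A @ eps @ A.T / det(A), where A is
    the Jacobian of the coordinate map. The same rule applies to the
    permeability tensor."""
    A = np.asarray(jacobian, dtype=float)
    if eps is None:
        eps = np.eye(3)  # start from vacuum (relative units)
    return A @ eps @ A.T / np.linalg.det(A)
```

A map that compresses fields along one axis (e.g. a diagonal Jacobian) immediately shows why the resulting medium is anisotropic: the transformed tensor picks up unequal diagonal entries, which the metamaterial inclusions must then approximate.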

    Scanline calculation of radial influence for image processing

    Full text link
    Efficient methods for the calculation of radial influence are described and applied to two image processing problems, digital halftoning and mixed content image compression. The methods operate recursively on scanlines of image values, spreading intensity from scanline to scanline in proportions approximating a Cauchy distribution. For error diffusion halftoning, experiments show that this recursive scanline spreading provides an ideal pattern of distribution of error. Error diffusion using masks generated to provide this distribution of error alleviates error diffusion "worm" artifacts. The recursive scanline-by-scanline application of a spreading filter and a complementary filter can be used to reconstruct an image from its horizontal and vertical pixel difference values. When combined with the use of a downsampled image, the reconstruction is robust to incomplete and quantized pixel difference data. Such gradient field integration methods are described in detail, proceeding from representation of images by gradient values along contours through to a variety of efficient algorithms. Comparisons show that this form of gradient field integration by convolution provides reduced distortion compared to other high speed gradient integration methods. The reduced distortion can be attributed to success in approximating a radial pattern of influence. An approach to edge-based image compression is proposed using integration of gradient data along edge contours and regularly sampled low resolution image data. This edge-based image compression model is similar to previous sketch based image coding methods but allows a simple and efficient calculation of an edge-based approximation image. A low complexity implementation of this approach to compression is described. The implementation extracts and represents gradient data along edge contours as pixel differences and calculates an approximate image by performing integration of pixel difference data by scanline convolution. The implementation was developed as a prototype for compression of mixed content image data in printing systems. Compression results are reported, and strengths and weaknesses of the implementation are identified.
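A toy illustration of the recursive scanline spreading idea, assuming a 3-tap smoothing kernel; the paper derives filters whose spread approximates a Cauchy distribution, whereas the kernel here is purely illustrative:

```python
import numpy as np

def scanline_spread(image, kernel=(0.25, 0.5, 0.25)):
    """Spread intensity downward row by row: the running carry is
    smoothed with a small 1-D kernel before the next row is added,
    so the footprint of a point widens with distance from its source,
    approximating a radial pattern of influence with only one
    1-D convolution per scanline."""
    k = np.asarray(kernel, dtype=float)
    img = np.asarray(image, dtype=float)
    out = np.zeros_like(img)
    carry = np.zeros(img.shape[1])
    for y in range(img.shape[0]):
        carry = np.convolve(carry, k, mode="same") + img[y]
        out[y] = carry
    return out
```

Because each scanline costs only a short 1-D convolution, the total work is linear in the number of pixels, which is what makes this attractive for halftoning and for integrating pixel-difference fields at speed.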