630 research outputs found

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Get PDF
    Bibliography: p. 208-225.
    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press and, more recently, in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarity between this form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to be an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation.
The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
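The block-transform representation described above can be sketched in miniature. The following 1-D collage-coding toy encodes a signal as affine maps from decimated domain blocks to range blocks and decodes by iterating those maps; the block sizes, contractivity clip, and least-squares search are illustrative choices, not the dissertation's actual scheme.

```python
import numpy as np

# Toy 1-D fractal (collage) coder. Each "range" block of the signal is
# approximated as an affine map of a decimated "domain" block drawn from the
# same signal; decoding iterates those maps from an arbitrary start.

def encode(signal, rb=4):
    """For each range block, pick the domain block and affine (scale, offset)
    minimising the collage error. Domain blocks are twice the range size."""
    n = len(signal)
    params = []
    for r0 in range(0, n, rb):
        r = signal[r0:r0 + rb]
        best = None
        for d0 in range(0, n - 2 * rb + 1, rb):
            d = signal[d0:d0 + 2 * rb].reshape(rb, 2).mean(axis=1)  # decimate
            A = np.vstack([d, np.ones(rb)]).T        # fit r ~ s*d + o
            (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
            s = np.clip(s, -0.9, 0.9)                # keep the map contractive
            err = np.sum((s * d + o - r) ** 2)
            if best is None or err < best[0]:
                best = (err, d0, s, o)
        params.append(best[1:])
    return params

def decode(params, n, rb=4, iters=20):
    """Iterate the block maps; contractivity drives convergence towards an
    approximation of the encoded signal, whatever the starting point."""
    x = np.zeros(n)
    for _ in range(iters):
        y = np.empty(n)
        for i, (d0, s, o) in enumerate(params):
            d = x[d0:d0 + 2 * rb].reshape(rb, 2).mean(axis=1)
            y[i * rb:(i + 1) * rb] = s * d + o
        x = y
    return x

sig = np.sin(np.linspace(0, 3, 32)) + 0.1 * np.linspace(0, 1, 32)
approx = decode(encode(sig), len(sig))
print(np.max(np.abs(approx - sig)))  # reconstruction error
```

The collage theorem bounds the reconstruction error by the collage error scaled by 1/(1 - s_max), which is why the scale factors are clipped below 1 in magnitude.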

    Dynamical Systems and Numerical Analysis: the Study of Measures generated by Uncountable I.F.S

    Full text link
    Measures generated by Iterated Function Systems composed of uncountably many one-dimensional affine maps are studied. We present numerical techniques as well as rigorous results that establish whether these measures are absolutely continuous or singular continuous.
    Comment: to appear in Numerical Algorithm
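As an illustration of the kind of object studied, the following "chaos game" sketch samples the invariant measure of a 1-D IFS whose affine maps x -> a*x + b are drawn from continuous distributions, so the family of maps is uncountable; the distributions below are arbitrary illustrative choices.

```python
import numpy as np

# Chaos game for an IFS with uncountably many one-dimensional affine maps:
# at each step a contraction a and translation b are drawn from continuous
# distributions, so the family {x -> a*x + b} is uncountable. The empirical
# distribution of the orbit approximates the invariant measure.

rng = np.random.default_rng(0)
n, burn = 50_000, 1_000
x = 0.0
samples = np.empty(n)
for i in range(burn + n):
    a = rng.uniform(0.2, 0.5)   # random contraction ratio
    b = rng.uniform(0.0, 0.5)   # random translation
    x = a * x + b
    if i >= burn:
        samples[i - burn] = x

# The orbit stays in a bounded attractor: x <= 0.5*x + 0.5 forces sup <= 1.
hist, edges = np.histogram(samples, bins=50, range=(0.0, 1.0), density=True)
print(samples.min(), samples.max())
```

Whether the resulting measure is absolutely continuous or singular continuous is exactly the kind of question the paper's techniques address; the histogram here is only a crude empirical proxy.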

    COLLAGE-BASED INVERSE PROBLEMS FOR IFSM WITH ENTROPY MAXIMIZATION AND SPARSITY CONSTRAINTS

    Get PDF
    We consider the inverse problem associated with IFSM: given a target function f, find an IFSM whose invariant fixed point f̄ is sufficiently close to f in the Lp distance. In this paper, we extend the collage-based method developed by Forte and Vrscay (1995) along two different directions. We first search for a set of mappings that not only minimizes the collage error but also maximizes the entropy of the dynamical system. We then include an extra term in the minimization process which takes into account the sparsity of the set of mappings. In this new formulation, the minimization of collage error is treated as a multi-criteria problem: we consider three different and conflicting criteria, i.e., collage error, entropy and sparsity. To solve this multi-criteria program we proceed by scalarization, reducing the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented. Numerical studies indicate that a maximum entropy principle exists for this approximation problem, i.e., that the suboptimal solutions produced by collage coding can be improved at least slightly by adding a maximum entropy criterion.
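The scalarization step can be illustrated with a toy objective in which the three criteria are combined with trade-off weights; the probability-vector parametrisation and the weights below are invented for illustration, not the paper's actual formulation.

```python
import numpy as np

# Toy scalarization of the three conflicting criteria named in the abstract:
# collage error, (negative) entropy, and sparsity, combined with trade-off
# weights into a single objective over a vector p of map weights.

def objective(p, collage_err, w=(1.0, 0.1, 0.05)):
    p = p / p.sum()
    collage = np.dot(p, collage_err)             # weighted collage error
    entropy = -np.sum(p * np.log(p + 1e-12))     # Shannon entropy of weights
    sparsity = np.sum(np.abs(p) > 1e-3)          # number of active maps
    w1, w2, w3 = w
    return w1 * collage - w2 * entropy + w3 * sparsity  # minimise this

errs = np.array([0.2, 0.25, 0.9, 1.0])   # per-map collage errors (made up)
uniform = np.ones(4) / 4                 # maximum-entropy weighting
greedy = np.array([1.0, 0.0, 0.0, 0.0])  # sparsest weighting
print(objective(uniform, errs), objective(greedy, errs))
```

Sweeping the trade-off weights traces out the Pareto front between the three criteria, which is the essence of reducing a multi-criteria program to a single-criterion one.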

    Self-similarity and wavelet forms for the compression of still image and video data

    Get PDF
    This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and have stimulated widespread research: the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle: that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two, normally disparate, strategies. The wavelet transform is an orientation-selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity, while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently, and this work adds another. The tree-structured vector quantizer presented here is on-line and self-structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best published to date. An extension of the two-dimensional wavelet transform to encompass the time dimension is straightforward, and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three-dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
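The limited orientation selectivity of the conventional transform can be seen in a single level of the separable 2-D Haar wavelet transform, whose three detail subbands distinguish only horizontal, vertical and diagonal structure. This is a minimal sketch, not the thesis's more selective transform.

```python
import numpy as np

# One level of the separable 2-D Haar wavelet transform. Its three detail
# subbands give only coarse orientation selectivity (horizontal / vertical
# / diagonal), the limitation the thesis addresses.

def haar2d(img):
    a = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # row-wise lowpass
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # row-wise highpass
    LL = (a[0::2] + a[1::2]) / np.sqrt(2)            # coarse approximation
    LH = (a[0::2] - a[1::2]) / np.sqrt(2)            # horizontal edges
    HL = (d[0::2] + d[1::2]) / np.sqrt(2)            # vertical edges
    HH = (d[0::2] - d[1::2]) / np.sqrt(2)            # diagonal detail
    return LL, LH, HL, HH

img = np.outer(np.arange(8.0), np.ones(8))  # image varying only vertically
LL, LH, HL, HH = haar2d(img)
# Orthogonality preserves energy across the four subbands:
print(np.allclose(np.sum(img**2),
                  sum(np.sum(s**2) for s in (LL, LH, HL, HH))))  # True
```

For this purely vertical gradient only the LH subband responds; HL and HH are zero, showing how orientation content separates into subbands and why only three orientations are distinguished.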

    The Measure of a Measurement

    Full text link
    While finite non-commutative operator systems lie at the foundation of quantum measurement, they are also tools for understanding geometric iterations as used in the theory of iterated function systems (IFSs) and in wavelet analysis. Key is a certain splitting of the total Hilbert space and its recursive iterations to further iterated subdivisions. This paper explores some implications for associated probability measures (in the classical sense of measure theory), specifically their fractal components. We identify a fractal scale s in a family of Borel probability measures μ on the unit interval which arises independently in quantum information theory and in wavelet analysis. The scales s we find satisfy s ∈ ℝ₊ and s ≠ 1, some s < 1 and some s > 1. We identify these scales s by considering the asymptotic properties of μ(J)/|J|^s where J are dyadic subintervals and |J| → 0.
    Comment: 18 pages, 3 figures, and references
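The scaling exponent in μ(J)/|J|^s can be illustrated with the simplest non-trivial example, a Bernoulli measure on dyadic subintervals; the parameters below are illustrative only.

```python
import math, random

# Estimating the scaling exponent s with mu(J) ~ |J|^s along a nest of
# dyadic intervals, for the Bernoulli measure that puts mass p on the left
# half at every dyadic subdivision. At mu-typical points the exponent tends
# to the entropy dimension -(p*log p + q*log q)/log 2, which differs from 1
# whenever p != 1/2.

p, q = 0.3, 0.7
random.seed(1)

def exponent(levels=2000):
    log_mu = 0.0
    for _ in range(levels):
        # descend into a mu-random child: left with prob. p, right with q
        log_mu += math.log(p) if random.random() < p else math.log(q)
    log_len = levels * math.log(0.5)   # |J| halves at every level
    return log_mu / log_len            # estimate of log mu(J) / log |J|

s_hat = exponent()
s_theory = -(p * math.log(p) + q * math.log(q)) / math.log(2)
print(s_hat, s_theory)   # s_theory is about 0.881 here
```

With p ≠ 1/2 the typical exponent lies strictly below 1, a concrete instance of the s ≠ 1 scales discussed in the abstract.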

    MINKOWSKI-ADDITIVE MULTIMEASURES, MONOTONICITY AND SELF-SIMILARITY

    Get PDF
    We discuss the main properties of positive multimeasures and we show how to define a notion of self-similarity based on a generalized Markov operator

    Function-valued Mappings and SSIM-based Optimization in Imaging

    Get PDF
    In a few words, this thesis is concerned with two alternative approaches to imaging, namely, Function-valued Mappings (FVMs) and Structural Similarity Index Measure (SSIM)-based optimization. Briefly, a FVM is a mathematical object that assigns to each element in its domain a function that belongs to a given function space. The advantage of this representation is that the infinite dimensionality of the range of FVMs allows us to give a more accurate description of complex datasets such as hyperspectral images and diffusion magnetic resonance images, something that cannot be done with the classical representation of such data sets as vector-valued functions. For instance, a hyperspectral image can be described as a FVM that assigns to each point in a spatial domain a spectral function that belongs to the function space L2(R), that is, the space of functions whose energy is finite. Moreover, we present a Fourier transform and a new class of fractal transforms for FVMs to analyze and process hyperspectral images. Regarding SSIM-based optimization, we introduce a general framework for solving optimization problems that involve the SSIM as a fidelity measure. This framework offers the option of carrying out SSIM-based imaging tasks which are usually addressed using classical Euclidean-based methods. In the literature, SSIM-based approaches have been proposed to address the limitations of Euclidean-based metrics as measures of visual quality. These methods show better performance than their Euclidean counterparts since the SSIM is a better model of the human visual system; however, they tend to be developed for particular applications. With the general framework presented in this thesis, rather than focusing on particular imaging tasks, we introduce a set of novel algorithms capable of carrying out a wide range of SSIM-based imaging applications. Moreover, such a framework allows us to include the SSIM as a fidelity term in optimization problems in which it had not been included before.
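As a point of reference, the SSIM fidelity measure itself can be computed in a few lines. This single-window version uses the common constants C1 = (0.01*L)^2 and C2 = (0.03*L)^2; a full implementation averages SSIM over local sliding windows rather than one global window.

```python
import numpy as np

# Single-window SSIM between two images, the fidelity measure the thesis
# optimises in place of Euclidean metrics. Minimal sketch.

def ssim(x, y, L=1.0):
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(0)
img = rng.random((16, 16))
print(ssim(img, img))                           # identical images: SSIM = 1
print(ssim(img, np.clip(img + 0.1, 0.0, 1.0)))  # distorted copy: SSIM < 1
```

Because SSIM is a ratio of luminance, contrast and structure terms rather than a norm, optimization problems with it as a fidelity term need different machinery than Euclidean least squares, which is what the framework in the thesis provides.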

    Target detection in clutter for sonar imagery

    Get PDF
    This thesis is concerned with the analysis of side-looking sonar images, and specifically with the identification of the types of seabed that are present in such images, and with the detection of man-made objects in such images. Side-looking sonar images are, broadly speaking, the result of the physical interaction between acoustic waves and the bottom of the sea. Because of this interaction, the types of seabed appear as textured areas in side-looking sonar images. The texture descriptors commonly used in the field of sonar imagery fail at accurately identifying the types of seabed because the types of seabed, hence the textures, are extremely variable. In this thesis, we did not use the traditional texture descriptors to identify the types of seabed. We rather used scattering operators, which recently appeared in the field of signal and image processing. We assessed how well the types of seabed are identified through two inference algorithms, one based on affine spaces, and the other based on the concept of similarity by composition. This thesis is also concerned with the detection of man-made objects in side-looking sonar images. An object detector may be described as a method which, when applied to a certain number of sonar images, produces a set of detections. Some of these are true positives, and correspond to real objects. Others are false positives, and do not correspond to real objects. The present object detectors suffer from a high false positive rate in complex environments, that is to say, complex types of seabed. The hypothesis we will follow is that it is possible to reduce the number of false positives through a characterisation of the similarity between the detections and the seabed, the false positives being by nature part of the seabed. We will use scattering operators to represent the detections and the same two inference algorithms to quantify how similar the detections are to the seabed.
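The filtering idea (discard detections that resemble the seabed) can be caricatured with generic feature vectors and cosine similarity; the features, similarity measure and threshold below are invented for illustration, since the thesis uses scattering coefficients and two different inference algorithms.

```python
import numpy as np

# Caricature of false-positive filtering: score each detection by its
# similarity to background (seabed) features and discard detections that
# look like seabed, since false positives are by nature part of the seabed.

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(42)
seabed = rng.normal(size=(50, 8))   # feature vectors from seabed patches
bg_mean = seabed.mean(axis=0)

def keep(detection_feat, thresh=0.9):
    """Keep a detection only if it is dissimilar from typical seabed."""
    return cosine(detection_feat, bg_mean) < thresh

object_like = -bg_mean + rng.normal(scale=0.1, size=8)   # unlike seabed
seabed_like = bg_mean + rng.normal(scale=0.01, size=8)   # resembles seabed
print(keep(object_like), keep(seabed_like))  # expect: True False
```

The threshold trades missed objects against residual false positives, which is the operating-point choice any such detector ultimately faces.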