3,858 research outputs found

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
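
    The block-matching construction this abstract alludes to can be made concrete. Below is a minimal sketch of the encoding search for a single range block, assuming a grayscale numpy image; the function names, block sizes, and search step are illustrative and not taken from the dissertation.

    import numpy as np

    def downsample2(block):
        """Average 2x2 pixel groups, halving each dimension."""
        h, w = block.shape
        return block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def encode_range_block(image, r0, c0, rsize=8, step=8):
        """Find the domain block and affine intensity map (s, o) whose
        contracted copy best matches the range block at (r0, c0)."""
        rvec = image[r0:r0 + rsize, c0:c0 + rsize].ravel().astype(float)
        dsize = 2 * rsize                  # domain blocks are twice the range size
        best = None
        for dr in range(0, image.shape[0] - dsize + 1, step):
            for dc in range(0, image.shape[1] - dsize + 1, step):
                dvec = downsample2(image[dr:dr + dsize, dc:dc + dsize]).ravel()
                A = np.stack([dvec, np.ones_like(dvec)], axis=1)
                (s, o), *_ = np.linalg.lstsq(A, rvec, rcond=None)  # fit r ~ s*d + o
                s = float(np.clip(s, -0.9, 0.9))  # keep the intensity map contractive
                err = float(np.sum((s * dvec + o - rvec) ** 2))
                if best is None or err < best[0]:
                    best = (err, dr, dc, s, o)
        return best  # (error, domain row, domain col, scale, offset)

    At decode time, iterating the stored block maps from an arbitrary starting image converges to a fixed point approximating the original, which is what the contractivity conditions mentioned above guarantee.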

    An investigation into the requirements for an efficient image transmission system over an ATM network

    This thesis looks into the problems arising in an image transmission system when transmitting over an ATM network. Two main areas were investigated: (i) an alternative coding technique to reduce the bit rate required; and (ii) concealment of errors due to cell loss, with emphasis on processing in the transform domain of DCT-based images. [Continues.]
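
    As a concrete illustration of transform-domain concealment (one common approach, not necessarily the scheme developed in this thesis), the sketch below estimates the low-frequency DCT coefficients of a block lost to cell loss from its correctly received neighbours; conceal_block and n_low are illustrative names.

    import numpy as np

    def conceal_block(neighbours, n_low=3):
        """Estimate a lost 8x8 DCT block from a list of 8x8 coefficient
        arrays taken from correctly received adjacent blocks."""
        est = np.zeros((8, 8))
        avg = np.mean(neighbours, axis=0)
        # Copy only the top-left (low-frequency) coefficients; the high
        # frequencies of a lost block are hard to infer and are left at zero.
        est[:n_low, :n_low] = avg[:n_low, :n_low]
        return est

    # Usage: est = conceal_block([top, bottom, left, right]), where each
    # argument is the DCT coefficient block of an adjacent, received block.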

    A Monte Carlo method for critical systems in infinite volume: the planar Ising model

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated with generating critical distributions on finite lattices. It exploits scale invariance, combined with ideas from the renormalization group, to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane predictions. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
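
    For context, the sketch below implements the standard Wolff cluster algorithm for the planar Ising model on a finite periodic lattice, i.e. exactly the kind of finite-volume simulation whose boundary effects the proposed "holographic" boundary condition is designed to correct. This is a textbook baseline, not the paper's method.

    import numpy as np

    def wolff_step(spins, beta, rng):
        """One Wolff cluster flip on an L x L lattice with periodic boundaries."""
        L = spins.shape[0]
        p_add = 1.0 - np.exp(-2.0 * beta)      # bond-activation probability
        seed = (int(rng.integers(L)), int(rng.integers(L)))
        cluster_spin = spins[seed]
        stack, in_cluster = [seed], {seed}
        while stack:
            r, c = stack.pop()
            for nr, nc in (((r + 1) % L, c), ((r - 1) % L, c),
                           (r, (c + 1) % L), (r, (c - 1) % L)):
                if (nr, nc) not in in_cluster and spins[nr, nc] == cluster_spin \
                        and rng.random() < p_add:
                    in_cluster.add((nr, nc))
                    stack.append((nr, nc))
        for r, c in in_cluster:
            spins[r, c] *= -1                  # flip the whole cluster

    # Usage at the square-lattice critical point, beta_c = log(1 + sqrt(2)) / 2:
    L, beta_c = 64, 0.5 * np.log(1.0 + np.sqrt(2.0))
    rng = np.random.default_rng(0)
    spins = rng.choice(np.array([-1, 1]), size=(L, L))
    for _ in range(1000):
        wolff_step(spins, beta_c, rng)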

    Apparent sharpness of 3D video when one eye's view is more blurry.

    When the images presented to each eye differ in sharpness, the fused percept remains relatively sharp. Here, we measure this effect by showing stereoscopic videos that have been blurred for one eye, or both eyes, and psychophysically determining when they appear equally sharp. For a range of blur magnitudes, the fused percept always appeared significantly sharper than the blurrier view. From these data, we investigate to what extent discarding high spatial frequencies from just one eye's view reduces the bandwidth necessary to transmit perceptually sharp 3D content. We conclude that relatively high-resolution video transmission stands to benefit most from this method.
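
    The stimulus manipulation is straightforward to sketch: one eye's view is low-pass filtered so that high spatial frequencies are discarded before transmission. Below is a minimal illustration for per-frame grayscale arrays, assuming scipy is available; asymmetric_blur and its parameters are illustrative names, not from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def asymmetric_blur(left, right, sigma, blur_eye="right"):
        """Blur one view of a stereo pair; sigma controls how many high
        spatial frequencies are discarded from that eye's image."""
        if blur_eye == "right":
            return left, gaussian_filter(right, sigma=sigma)
        return gaussian_filter(left, sigma=sigma), right

    Increasing sigma discards more high spatial frequencies from one view, which is the bandwidth saving the abstract weighs against the apparent sharpness of the fused percept.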

    An investigation into Quadtree fractal image and video compression

    Digital imaging is the representation of drawings, photographs and pictures in a format that can be displayed and manipulated using a conventional computer. Digital imaging has enjoyed increasing popularity over recent years, with the explosion of digital photography, the Internet and graphics-intensive applications and games. Digitised images, like other digital media, require a relatively large amount of storage space. These storage requirements can become problematic as demand for higher-resolution images increases and the resolution capabilities of digital cameras improve. It is not uncommon for a personal computer user to have a collection of thousands of digital images, mainly photographs, whilst the Internet's Web pages present a practically infinite source. These two factors, image size and abundance, inevitably lead to a storage problem. As with other large files, data compression can help reduce these storage requirements. Data compression aims to reduce the overall storage requirements for a file by minimising redundancy. The most popular image compression method, JPEG, can reduce the storage requirements for a photographic image by a factor of ten whilst maintaining the appearance of the original image, or can deliver much greater levels of compression with a slight loss of quality as a trade-off. Whilst JPEG's efficiency has made it the definitive image compression algorithm, there is always demand for even greater levels of compression, and as a result new image compression techniques are constantly being explored. One such technique utilises the unique properties of fractals. Fractals are relatively small mathematical formulae that can be used to generate abstract and often colourful images with infinite levels of detail. This property is of interest in the area of image compression because a detailed, high-resolution image can be represented by a few thousand bytes of formulae and coefficients rather than the more typical multi-megabyte file sizes. The real challenge associated with fractal image compression is to determine the correct set of formulae and coefficients to represent the image a user is trying to compress; it is trivial to produce an image from a given formula, but it is much, much harder to produce a formula from a given image. In theory, fractal compression can outperform JPEG for a given image and quality level, if the appropriate formulae can be determined. Fractal image compression can also be applied to digital video sequences, which are typically represented by a long series of digital images, or 'frames'.
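
    The quadtree partitioning named in the title is a recursive scheme: any block that cannot be approximated well enough is split into four quadrants, so flat regions receive large blocks and detailed regions small ones. A minimal sketch follows, using a simple variance threshold as a stand-in for the full fractal domain-block search; all names and thresholds are illustrative.

    import numpy as np

    def quadtree_partition(image, r, c, size, min_size, threshold, out):
        """Recursively collect (row, col, size) leaf blocks into out."""
        block = image[r:r + size, c:c + size]
        if size <= min_size or block.var() < threshold:
            out.append((r, c, size))           # accept this block as a leaf
            return
        half = size // 2
        for dr, dc in ((0, 0), (0, half), (half, 0), (half, half)):
            quadtree_partition(image, r + dr, c + dc, half, min_size, threshold, out)

    # Usage on a 256x256 image `img`:
    # leaves = []
    # quadtree_partition(img, 0, 0, 256, 4, 50.0, leaves)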

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.

    Herding as a Learning System with Edge-of-Chaos Dynamics

    Herding defines a deterministic dynamical system at the edge of chaos. It generates a sequence of model states and parameters by alternating parameter perturbations with state maximizations, where the sequence of states can be interpreted as "samples" from an associated MRF model. Herding differs from maximum likelihood estimation in that the sequence of parameters does not converge to a fixed point, and differs from an MCMC posterior sampling approach in that the sequence of states is generated deterministically. Herding may be interpreted as a "perturb and map" method in which the parameter perturbations are generated using a deterministic nonlinear dynamical system rather than randomly from a Gumbel distribution. This chapter studies the distinct statistical characteristics of the herding algorithm and shows that the fast convergence rate of the controlled moments may be attributed to edge-of-chaos dynamics. The herding algorithm can also be generalized to models with latent variables and to a discriminative learning setting. The perceptron cycling theorem ensures that the fast moment matching property is preserved in the more general framework.
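
    The alternation of parameter perturbation and state maximization described above is compact enough to sketch directly. Below is a minimal version of the basic herding update on a small discrete state space where the maximization can be done by enumeration; the feature map, states, and target moments are illustrative placeholders.

    import numpy as np

    def herd(states, phi, target_moments, n_steps):
        """Generate a deterministic herding sequence whose running feature
        averages track target_moments."""
        features = np.array([phi(s) for s in states])  # precompute phi(s)
        w = target_moments.copy()                      # arbitrary initial parameters
        samples = []
        for _ in range(n_steps):
            s_idx = int(np.argmax(features @ w))       # state maximization
            samples.append(states[s_idx])
            w = w + target_moments - features[s_idx]   # parameter perturbation
        return samples

    # Usage: herd binary pairs to match given first and second moments.
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    phi = lambda s: np.array([s[0], s[1], s[0] * s[1]], dtype=float)
    mu = np.array([0.5, 0.5, 0.4])                     # targets for x, y, xy
    samples = herd(states, phi, mu, 1000)

    The running averages of phi over the generated samples match mu at the fast O(1/T) rate the abstract refers to, rather than the O(1/sqrt(T)) rate of independent sampling.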