101 research outputs found

    Fractal Compression with the Adaptive Quadtree Partitioning Method

    Fractal compression is a relatively new technique in the history of image compression. The method was introduced by exploiting the similarity of shape between an image and its smaller parts. In fractal compression, a small part that resembles a larger part of the image is repeatedly transformed and scanned until it reproduces that larger part; this process is also known as an iterated function system. This study discusses the compression of black-and-white images using the adaptive quadtree partitioning method, that is, fractal compression whose partitioning uses a quadtree scheme with a varying threshold (adaptive threshold). The study designs a quadtree partitioning scheme with an adaptive threshold to better optimize the compression result in terms of quality, compression ratio, and compression time. In the standard partitioning scheme, an image is partitioned into square blocks of equal size. The main weakness of this type of partition is that regions of the image that actually have low complexity are still subdivided. This is very inefficient, because bits are wasted recording the coefficients of those blocks and the compressed file becomes large. The quadtree partitioning scheme makes it possible to partition the image into blocks of different sizes according to the complexity of each region, based on a previously specified threshold.
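    For a rough picture of the scheme described above, the sketch below (a hypothetical illustration, not the paper's implementation) recursively splits a grayscale block into four quadrants whenever its pixel variance exceeds a threshold; the size-dependent threshold rule merely stands in for the paper's adaptive threshold, whose exact form is not stated in the abstract.

```python
import numpy as np

def quadtree_partition(img, x, y, size, base_threshold=100.0, min_size=4, blocks=None):
    """Recursively split a square region of `img` while its pixel variance
    exceeds an adaptive threshold; collect the final (x, y, size) blocks."""
    if blocks is None:
        blocks = []
    block = img[y:y + size, x:x + size]
    # Hypothetical adaptive rule: larger blocks face a stricter threshold,
    # so smooth (low-complexity) regions stay as large, cheap-to-code blocks.
    threshold = base_threshold * (min_size / size)
    if size > min_size and block.var() > threshold:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                quadtree_partition(img, x + dx, y + dy, half,
                                   base_threshold, min_size, blocks)
    else:
        blocks.append((x, y, size))
    return blocks

# Usage: partition a 256x256 test image into variance-homogeneous range blocks.
image = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(float)
ranges = quadtree_partition(image, 0, 0, 256)
```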

    New collage functions for IFS-based image compression schemes

    In this article, we present an extension of the classical IFS-based compression scheme for greyscale images. We propose an improvement of the collage functions traditionally used, in order to increase their performance on regions with high-frequency content. To this end, we introduce harmonic functions as well as a new method for solving the inverse problem.
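    For context, the classical collage step that this article improves upon fits each spatially contracted domain block D onto a range block R with a grey-level affine map; the standard baseline (not the harmonic functions proposed here) is the least-squares fit

$$ R \approx s\,D + o\,\mathbf{1}, \qquad (s,o) = \arg\min_{s,o}\ \lVert R - sD - o\mathbf{1}\rVert_2^2, $$
$$ s = \frac{n\sum_i d_i r_i - \sum_i d_i \sum_i r_i}{n\sum_i d_i^2 - \bigl(\sum_i d_i\bigr)^2}, \qquad o = \frac{1}{n}\Bigl(\sum_i r_i - s\sum_i d_i\Bigr), $$

    where n is the number of pixels per block and r_i, d_i are the range-block and (decimated) domain-block pixel values.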

    Semi-Supervised Learning of Cartesian Factors

    The existence of place cells (PCs), grid cells (GCs), border cells (BCs), and head direction cells (HCs), as well as the dependencies between them, has been enigmatic. We make an effort to explain their nature by introducing the concept of Cartesian Factors. These factors have specific properties: (i) they assume and complement each other, like direction and position, and (ii) they have localized discrete representations with predictive attractors enabling implicit metric-like computations. In our model, HCs form the distributed and local representation of direction. Predictive attractor dynamics on that network forms the Cartesian Factor "direction." We embed these HCs and idiothetic visual information into a semi-supervised sparse autoencoding comparator structure that compresses its inputs and learns PCs, the distributed, local, and direction-independent (allothetic) representation of the Cartesian Factor of global space. We use a supervised, information-compressing predictive algorithm and form direction-sensitive (oriented) GCs from the learned PCs by means of an attractor-like algorithm. Since the algorithm can continue the grid structure beyond the region of the PCs, i.e., beyond its learning domain, the GCs and the PCs together form our metric-like Cartesian Factors of space. We also stipulate that the same algorithm can produce BCs. Our algorithm applies (a) a bag representation that models the "what system" and (b) magnitude-ordered place cell activities that model either the integrate-and-fire mechanism, or theta phase precession, or both. We relate the components of the algorithm to the entorhinal-hippocampal complex and to its operation. The algorithm requires both spatial and lifetime sparsification, which may gain support from the two-stage memory formation of this complex.
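    The sparse autoencoding step can be pictured, very roughly, with a generic k-sparse linear autoencoder; this is a hypothetical sketch, not the authors' comparator architecture, and it omits the supervised signal and the lifetime-sparsity constraint mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def k_sparse_autoencoder(X, n_hidden=64, k=8, lr=0.005, epochs=20):
    """Tiny k-sparse linear autoencoder with tied weights, trained by SGD.
    Only the k largest-magnitude hidden activations per sample are kept
    (spatial sparsification via magnitude ordering)."""
    n_samples, n_features = X.shape
    W = rng.normal(0.0, 0.1, (n_features, n_hidden))
    for _ in range(epochs):
        for x in X:
            h = x @ W                              # encode
            h[np.argsort(np.abs(h))[:-k]] = 0.0    # keep the k largest activations
            x_hat = h @ W.T                        # decode with tied weights
            err = x_hat - x
            # Gradient of 0.5 * ||x_hat - x||^2 w.r.t. W, sparsity mask held fixed.
            grad = np.outer(err, h) + np.outer(x, (err @ W) * (h != 0))
            W -= lr * grad
    return W

# Usage on toy data: 200 samples of 32-dimensional input.
X = rng.normal(size=(200, 32))
W = k_sparse_autoencoder(X)
```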

    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Bibliography: p. 208-225. Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
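    As a reminder of the construction that the dissertation analyses, the standard formulation (not specific to this work) codes an image x as the approximate fixed point of a contractive block-wise affine operator T built from the image's own domain blocks:

$$ T(x) = \sum_i P_i\bigl(s_i\, D_i x + o_i \mathbf{1}\bigr), \qquad \lVert x - T(x)\rVert \le \epsilon \;\Longrightarrow\; \lVert x - x_T\rVert \le \frac{\epsilon}{1 - s}, $$

    where D_i extracts and decimates the domain block paired with range block i, P_i places a block at range position i, s = max_i |s_i| < 1 is the contractivity factor, and x_T is the attractor obtained by iterating T from an arbitrary initial image. The "self-affinity" question examined in the dissertation is, roughly, whether natural images admit small collage errors under such self-referential transforms.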

    Improved Fractal Image Compression: Centered BFT with Quadtrees

    Computer Science

    Attractor image coding with low blocking effects.

    by Ho, Hau Lai. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 97-103). Contents:
    Chapter 1, Introduction: Overview of Attractor Image Coding; Scope of Thesis.
    Chapter 2, Fundamentals of Attractor Coding: Notations; Mathematical Preliminaries; Partitioned Iterated Function Systems (Mathematical Formulation of the PIFS); Attractor Coding using the PIFS (Quadtree Partitioning; Inclusion of an Orthogonalization Operator); Coding Examples (Evaluation Criterion; Experimental Settings; Results and Discussions); Summary.
    Chapter 3, Attractor Coding with Adjacent Block Parameter Estimations: δ-Minimum Edge Difference (Definition; Theoretical Analysis); Adjacent Block Parameter Estimation Scheme (Joint Optimization; Predictive Coding); Algorithmic Descriptions of the Proposed Scheme; Experimental Results; Summary.
    Chapter 4, Attractor Coding using Lapped Partitioned Iterated Function Systems: Lapped Partitioned Iterated Function Systems (Weighting Operator; Mathematical Formulation of the LPIFS); Attractor Coding using the LPIFS (Choice of Weighting Operator; Range Block Preprocessing; Decoder Convergence Analysis); Local Domain Block Searching (Theoretical Foundation; Local Block Searching Algorithm); Experimental Results; Summary.
    Chapter 5, Conclusion: Original Contributions; Subjects for Future Research.
    Appendix A, Fundamental Definitions; Appendix B; Bibliography.

    Parallel implementation of fractal image compression

    Thesis (M.Sc.Eng.)-University of Natal, Durban, 2000. Fractal image compression exploits the piecewise self-similarity present in real images as a form of information redundancy that can be eliminated to achieve compression. The underlying theory, based on Partitioned Iterated Function Systems, is presented. As an alternative to the established JPEG, it provides a similar compression-ratio to fidelity trade-off. Fractal techniques promise faster decoding and potentially higher fidelity, but the computationally intensive compression process has prevented commercial acceptance. This thesis presents an algorithm mapping the problem onto a parallel processor architecture, with the goal of reducing the encoding time. The experimental work involved implementation of this approach on the Texas Instruments TMS320C80 parallel processor system. Results indicate that the fractal compression process is unusually well suited to parallelism, with speed gains approximately linear in the number of processors used. Parallel processing issues such as coherency, management and interfacing are discussed. The code designed incorporates pipelining and parallelism on all conceptual and practical levels, ensuring that all resources are fully utilised and achieving close to optimal efficiency. The computational intensity was reduced by several means, including conventional classification of image sub-blocks by content, with comparisons across class boundaries prohibited. A faster approach was to perform approximate comparisons between blocks based on pixel-value variance, identifying candidates for the more time-consuming, accurate RMS inter-block comparisons. These techniques, combined with the parallelism, allow compression of 512x512-pixel, 8-bit images in under 20 seconds while maintaining a 30 dB PSNR. This is up to an order of magnitude faster than reported for conventional sequential processor implementations. Fractal-based compression of colour images and video sequences is also considered. The work confirms the potential of fractal compression techniques, and demonstrates that a parallel implementation is appropriate for addressing the compression time problem. The processor system used in these investigations is faster than currently available PC platforms, but the relevance lies in the anticipation that future generations of affordable processors will exceed its performance. The advantages of fractal image compression may then be accessible to the average computer user, leading to commercial acceptance.
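    The variance prescreening mentioned above can be illustrated roughly as follows (a hypothetical sketch, not the TMS320C80 implementation): cheap variance comparisons filter the domain-block candidates before the expensive RMS matching is applied.

```python
import numpy as np

def rms_error(range_block, domain_block):
    """Accurate (expensive) comparison: RMS difference after a least-squares
    brightness/contrast fit of the decimated domain block onto the range block."""
    d, r = domain_block.ravel(), range_block.ravel()
    A = np.column_stack([d, np.ones_like(d)])
    (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
    return np.sqrt(np.mean((s * d + o - r) ** 2))

def best_match(range_block, domain_blocks, var_tolerance=0.2):
    """Cheap prescreen by pixel-value variance, then an accurate RMS search
    restricted to the surviving candidates. Domain blocks are assumed to be
    already decimated to the range-block size."""
    rv = range_block.var()
    candidates = [d for d in domain_blocks
                  if abs(d.var() - rv) <= var_tolerance * max(rv, 1e-9)]
    if not candidates:                 # fall back to the full search
        candidates = list(domain_blocks)
    errors = [rms_error(range_block, d) for d in candidates]
    return candidates[int(np.argmin(errors))], float(min(errors))
```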

    Learning as a Nonlinear Line of Attraction for Pattern Association, Classification and Recognition

    Development of a mathematical model for learning a nonlinear line of attraction is presented in this dissertation, in contrast to the conventional recurrent neural network model in which the memory is stored in an attractive fixed point at a discrete location in state space. A nonlinear line of attraction is the encapsulation of attractive fixed points scattered in state space as an attractive nonlinear line, describing patterns with similar characteristics as a family of patterns. It is usually imperative to guarantee the convergence of the dynamics of the recurrent network for associative learning and recall. We propose to alter this picture. That is, if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image. Designing the dynamics of the nonlinear line attractor network to operate between stable and unstable states is the second contribution of this dissertation research. These criteria can be used to circumvent the plasticity-stability dilemma by using the unstable state as an indicator to create a new line for an unfamiliar pattern. This novel learning strategy utilizes the stability (convergence) and instability (divergence) criteria of the designed dynamics to induce self-organizing behavior. The self-organizing behavior of the nonlinear line attractor model can manifest complex dynamics in an unsupervised manner. The third contribution of this dissertation is the introduction of the concept of a manifold of color perception. The fourth contribution is the development of a nonlinear dimensionality reduction technique that embeds a set of related observations into a low-dimensional space utilizing the memory matrices learned by the nonlinear line attractor network. Development of a system for affective state computation is also presented in this dissertation. This system is capable of extracting the user's mental state in real time using a low-cost computer. It is successfully interfaced with an advanced learning environment for human-computer interaction.
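    For intuition only, the sketch below shows a minimal linear analogue of a line attractor, far simpler than the nonlinear model developed in the dissertation: every point on a chosen line through the origin is a fixed point of the recurrent update, so any initial state settles onto that line rather than onto an isolated fixed point.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=8)
u /= np.linalg.norm(u)          # unit direction of the attractive line
W = np.outer(u, u)              # recurrent weights: orthogonal projector onto the line

x = rng.normal(size=8)          # arbitrary initial state (input pattern)
for _ in range(10):
    x = W @ x                   # recurrent update; the state settles on the line

residual = np.linalg.norm(x - (x @ u) * u)
print(residual)                 # ~0: the final state lies on the attractive line
```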

    ICASE/LaRC Workshop on Adaptive Grid Methods

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field