
    A fractal image compression algorithm based on improved imperialist competitive algorithm

    Fractal image compression (FIC) is a lossy compression method that has the potential to improve the performance of image transmission and image storage and to provide security against illicit monitoring. The important features of FIC are a high compression ratio and high resolution of the decompressed images, but the main drawback of FIC is the computational complexity of the algorithm. In addition, FIC suffers from a high number of Mean Square Error (MSE) computations in the best-matching search between range blocks and domain blocks, which limits the algorithm. In this thesis, two approaches are proposed. Firstly, a new algorithm based on the Imperialist Competitive Algorithm (ICA) is introduced. This is followed by a two-tier algorithm as the second approach, to further improve performance and reduce the MSE computation of FIC. In the first tier, all the range and domain blocks are classified by edge property using the Discrete Cosine Transform. In the second tier, ICA is applied according to the classified blocks. In the ICA, the solutions are divided into two groups, known as developed and undeveloped countries, to maintain the quality of the retrieved image and to accelerate the algorithm. The MSE value is calculated only for the developed countries. Experimental results show that the proposed algorithm outperforms genetic algorithms (GAs) and the full-search algorithm in terms of MSE computation. Moreover, in terms of Peak Signal-to-Noise Ratio, the approaches produce high-quality decompressed images, better than those of the GAs.
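
    The cost this thesis attacks is the exhaustive range/domain matching step. As a minimal sketch (not the thesis's ICA implementation; all names are illustrative), the snippet below shows the per-pair MSE with the usual least-squares contrast/brightness fit, and the full search whose MSE count the ICA-based approach reduces by scoring only "developed country" candidates.

```python
import numpy as np

def block_mse(range_block, domain_block):
    """MSE between a range block and an (already contracted) domain block
    after a least-squares fit of contrast s and brightness o: r ~ s*d + o."""
    d = domain_block.astype(np.float64).ravel()
    r = range_block.astype(np.float64).ravel()
    n = d.size
    denom = n * np.dot(d, d) - d.sum() ** 2
    s = (n * np.dot(d, r) - d.sum() * r.sum()) / denom if denom != 0 else 0.0
    o = (r.sum() - s * d.sum()) / n
    return np.mean((s * d + o - r) ** 2)

def full_search(range_block, domain_pool):
    """Exhaustive best-match search: one MSE evaluation per pool entry.
    This O(|pool|) cost per range block is what the ICA-based search avoids."""
    errors = [block_mse(range_block, d) for d in domain_pool]
    best = int(np.argmin(errors))
    return best, errors[best]
```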

    Fractal Analysis

    Fractal analysis is becoming more and more common in all walks of life, including biomedical engineering, steganography and art. Writing one book covering all these topics would be a very difficult task, so this book covers only selected topics. Interested readers will find here image compression, groundwater quality, downscaling and spatio-temporal scale-conversion models of NDVI, modelling and optimization of 3T fractional nonlinear generalized magneto-thermoelastic multi-material, algebraic fractals in steganography, strain-induced microstructures in metals and much more. The book will be of interest to scientists dealing with fractal analysis, as well as to biomedical and IT engineers. I encourage you to view the individual chapters.

    Genetic algorithm and tabu search approaches to quantization for DCT-based image compression

    Today there are several formal and experimental methods for image compression, some of which have grown to be incorporated into the Joint Photographic Experts Group (JPEG) standard. Of course, many compression algorithms are still used only for experimentation, mainly due to various performance issues: lack of speed while compressing or expanding an image, poor compression rate, and poor image quality after expansion are a few of the most common reasons for skepticism about a particular compression algorithm. This paper discusses current methods used for image compression. It also gives a detailed explanation of the discrete cosine transform (DCT) used by JPEG, and the efforts that have recently been made to optimize related algorithms. Some interesting articles regarding possible compression enhancements will be noted, and in association with these methods a new implementation of a JPEG-like image coding algorithm will be outlined. This new technique adapts between one and sixteen quantization tables to a specific image using either a genetic algorithm (GA) or a tabu search (TS) approach. First, a few schemes, including pixel neighborhood and Kohonen self-organizing map (SOM) algorithms, will be examined to find their effectiveness at classifying blocks of edge-detected image data. Next, the GA and TS algorithms will be tested to determine their effectiveness at finding the optimum quantization table(s) for a whole image. A comparison of the techniques utilized will be thoroughly explored.
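
    Both the GA and TS variants search over quantization tables, so each candidate table needs a fitness score. Below is a minimal sketch of such an evaluation for a single 8x8 block, assuming a plain MSE objective; the paper's actual fitness function, block classification and any rate term are not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def table_fitness(block, qtable):
    """Score one candidate 8x8 quantization table on one 8x8 pixel block:
    quantize the block's DCT coefficients, reconstruct, and return the MSE.
    A GA individual (or TS move) with lower total MSE over its assigned
    blocks is fitter."""
    coeffs = dctn(block - 128.0, norm='ortho')        # level-shifted 2-D DCT
    quantized = np.round(coeffs / qtable)             # the lossy step JPEG uses
    recon = idctn(quantized * qtable, norm='ortho') + 128.0
    return float(np.mean((block - recon) ** 2))

# Illustrative use: score a flat table of 16s on a random block.
block = np.random.randint(0, 256, (8, 8)).astype(np.float64)
print(table_fitness(block, np.full((8, 8), 16.0)))
```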

    Interactive visualisation of oligomer frequency in DNA

    Since 1990, bioinformaticians have been exploring applications of the Chaos Game Representation (CGR) for visualisation, statistical characterisation and comparison of DNA sequences. We focus on the development of a new computational algorithm, and describe a new software tool, that enables CGR visualisation of K-mer (oligomer) frequencies in a flexible way, such that it is possible to visualise the whole genome or any of its parts (such as genes), and to compare several sequences in parallel, all in real time. The user can interactively specify the size and position of the visualised region of the DNA sequence, zoom in or out, and change the parameters of the visualisation. The tool is written in the Java™ language and is freely available to the public.
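
    The counting behind such a visualisation is the frequency CGR (FCGR) matrix. The paper's tool is written in Java; purely as an illustration of the counting step (not the tool's actual code), here is a minimal Python sketch assuming a clean A/C/G/T sequence.

```python
import numpy as np

# Corner assignment follows a common CGR convention; other layouts exist.
CORNERS = {'A': (0.0, 0.0), 'C': (0.0, 1.0), 'G': (1.0, 1.0), 'T': (1.0, 0.0)}

def fcgr(sequence, k):
    """Frequency CGR: a 2^k x 2^k grid counting K-mer occurrences.
    After each chaos-game step, the current point encodes the last bases
    read; at resolution 2^k, each k-mer lands in a unique grid cell."""
    n = 2 ** k
    grid = np.zeros((n, n), dtype=np.int64)
    x, y = 0.5, 0.5
    for i, base in enumerate(sequence.upper()):
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0   # the chaos game step
        if i >= k - 1:                          # at least k bases seen so far
            grid[int(y * n), int(x * n)] += 1
    return grid

# Zooming into a sub-square of the grid restricts the view to K-mers
# sharing a fixed suffix, which is what makes interactive zooming natural.
```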

    Human Face Recognition Based on Fractal Image Coding

    Human face recognition is an important area in the field of biometrics. It has been an active area of research for several decades, but it remains a challenging problem because of the complexity of the human face. In this thesis we describe fully automatic solutions that can locate faces and then perform identification and verification. We present a solution for face localisation using eye locations. We derive an efficient representation for the decision hyperplane of linear and nonlinear Support Vector Machines (SVMs). For this we introduce the novel concept of ρ and η prototypes. The standard formulation for the decision hyperplane is reformulated and expressed in terms of the two prototypes. Different kernels are treated separately to achieve further classification efficiency and to facilitate adaptation to operate with the fast Fourier transform for fast eye detection. Using the eye locations, we extract and normalise the face for size and in-plane rotations. Our method produces a more efficient representation of the SVM decision hyperplane than the well-known reduced set methods. As a result, our eye detection subsystem is faster and more accurate.

    The use of fractals and fractal image coding for object recognition has been proposed and used by others. Fractal codes have been used as features for recognition, but this requires taking into account the distance between codes and ensuring the continuity of the code parameters. We use a method based on fractal image coding for recognition, which we call the Fractal Neighbour Distance (FND). The FND relies on the Euclidean metric and the uniqueness of the attractor of a fractal code. An advantage of using the FND over fractal codes as features is that we do not have to worry about the uniqueness of, and distance between, codes. We only require the uniqueness of the attractor, which is already an implied property of a properly generated fractal code. Similar methods to the FND have been proposed by others, but what distinguishes our work is that we investigate the FND in greater detail and use our findings to improve the recognition rate.

    Our investigations reveal that the FND has some inherent invariance to translation, scale, rotation and changes in illumination. These invariances are image dependent and are affected by the fractal encoding parameters. The parameters that have the greatest effect on recognition accuracy are the contrast scaling factor, the luminance shift factor and the type of range block partitioning. The contrast scaling factor affects the convergence, and the eventual convergence rate, of a fractal decoding process. We propose a novel method of controlling the convergence rate by altering the contrast scaling factor in a controlled manner, which has not been possible before. This helped us improve the recognition rate, because under certain conditions better results are achievable with a slower rate of convergence. We also investigate the effects of varying the luminance shift factor, and examine three different types of range block partitioning schemes: quadtree, HV and uniform partitioning. We performed experiments using various face datasets, and the results show that our method indeed performs better than many accepted methods such as eigenfaces. The experiments also show that the FND-based classifier increases the separation between classes.

    The standard FND is further improved by incorporating localised weights. A local search algorithm is introduced to find the best matching local feature using this locally weighted FND. The scores from a set of these locally weighted FND operations are then combined to obtain a global score, which is used as a measure of the similarity between two face images. Each local FND operation possesses the distortion-invariant properties described above. Combined with the search procedure, the method has the potential to be invariant to a larger class of non-linear distortions. We also present a set of locally weighted FNDs that concentrate on the upper part of the face, encompassing the eyes and nose. This design was motivated by the fact that the region around the eyes carries more discriminative information. Better performance is achieved by using different sets of weights for identification and verification. For facial verification, performance is further improved by using normalised scores and client-specific thresholding. In this case, our results are competitive with current state-of-the-art methods, and in some cases outperform all those to which they were compared. For facial identification, under some conditions the weighted FND performs better than the standard FND. However, the weighted FND still has its shortcomings on some datasets, where its performance is not much better than that of the standard FND. To alleviate this problem we introduce a voting scheme that operates with normalised versions of the weighted FND. Although there are no improvements at lower matching ranks using this method, there are significant improvements at larger matching ranks.

    Our methods offer advantages over some well-accepted approaches such as eigenfaces, neural networks and those that use statistical learning theory. Some of the advantages are: new faces can be enrolled without re-training the whole database; faces can be removed from the database without re-training; there are inherent invariances to face distortions; the method is relatively simple to implement; and it is not model-based, so there are no model parameters to tweak.
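
    To make the FND idea concrete, here is a toy sketch of the core mechanism: decode a gallery image's fractal code starting from the probe image, and measure how far the probe moves. An image near the code's attractor moves little, so a small distance suggests a match. This is a simplified illustration under toy assumptions (uniform partitioning, a hand-rolled code format); the thesis's actual encoder, weighting and search are not reproduced.

```python
import numpy as np

def apply_code(code, img, rsize=4):
    """One iteration of a toy fractal code. Each entry (ri, rj, di, dj, s, o)
    maps a 2r x 2r domain block of `img`, averaged down to r x r, through the
    affine map s*d + o into its r x r range position. The code is assumed to
    cover every range block; iterating converges to the code's attractor."""
    out = np.empty_like(img, dtype=np.float64)
    for (ri, rj, di, dj, s, o) in code:
        d = img[di:di + 2 * rsize, dj:dj + 2 * rsize].astype(np.float64)
        d = d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3))  # 2x2 averaging
        out[ri:ri + rsize, rj:rj + rsize] = s * d + o
    return out

def fnd(code, probe, iterations=2, rsize=4):
    """Fractal Neighbour Distance (sketch): Euclidean distance between the
    probe and the result of a few decoding iterations seeded with the probe."""
    x = probe.astype(np.float64)
    for _ in range(iterations):
        x = apply_code(code, x, rsize)
    return np.linalg.norm(x - probe)

def identify(probe, gallery_codes):
    """Nearest-neighbour identification: the gallery identity whose fractal
    code moves the probe least."""
    return min(gallery_codes, key=lambda name: fnd(gallery_codes[name], probe))
```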

    A Novel DNA Sequence Compression Method Based on Chaos Game Representation

    Unique signature images derived from the Chaos Game Representation (CGR) of bio-sequences have so far been confined to pattern recognition applications. In this paper we pose and answer an interesting question: can we reproduce a bio-sequence losslessly given the coordinates of the final point in its CGR image? We show that it is possible in principle, but it would require enormous resolution for the representation of the coordinates, roughly corresponding to the information content of direct binary coding of the sequence. We go on to show that we can code nucleotide codon triplets using this method, in which 16 codons can be coded using 4 bits and the remaining 48 using 6 bits. Theoretically, up to 11% compression is possible with this method. However, algorithm overheads reduce this to a very nominal compression percentage of less than 4% for the human genome and 9% for bacterial genomes. We report results on a subset of standard test sequences and also on an independent wider data set.
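
    The theoretical ceiling follows from simple expected-length arithmetic. As a back-of-envelope check (our reading, not the paper's derivation): against the flat 6-bits-per-codon baseline of direct binary coding, 4-bit codes for 16 codons save 2 bits on a fraction p of codons, and p = 1/3 reproduces the quoted ~11% figure.

```python
def avg_bits_per_codon(p_short):
    """Expected code length when a fraction p_short of codons come from the
    16-codon set coded in 4 bits and the rest from the 48 codons coded in
    6 bits. Baseline: 2 bits/nucleotide = 6 bits/codon."""
    return 4.0 * p_short + 6.0 * (1.0 - p_short)

for p in (0.25, 1.0 / 3.0, 0.5):
    avg = avg_bits_per_codon(p)
    print(f"p_short={p:.2f}: {avg:.2f} bits/codon, "
          f"saving {100 * (1 - avg / 6):.1f}% vs direct coding")
# p_short = 1/3 gives 5.33 bits/codon, i.e. ~11% -- consistent with the
# theoretical ceiling quoted in the abstract, before algorithm overheads
# erode it to <4% (human genome) and 9% (bacterial genomes).
```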

    Swarm Intelligence in Wavelet Based Video Coding
