
    Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature

    Fractal Image Compression (FIC) was conceptualized in 1989, and numerous models have been developed since. Fractals were initially observed and depicted through the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires far less storage space than the actual image itself, which led to the idea of representing images in IFS form and shaped the development of image compression systems. Reducing the time consumed by encoding is essential for achieving optimal compression, and the solutions reviewed here indicate that, despite the developments that have taken place, there remains considerable scope for improvement. From the exhaustive range of models reviewed, it is evident that the FIC model has advanced considerably over time and has been adapted to image compression at various levels. This study focuses on the existing literature on FIC and presents insights into its various models.
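The storage saving behind IFS coding can be seen in a minimal sketch: an IFS encodes an image as a handful of contractive affine maps, and iterating them (the "chaos game") regenerates arbitrarily fine detail. This classic Sierpinski-triangle example only illustrates the IFS idea; it is not any specific coder from the reviewed literature:

```python
import random

# The Sierpinski triangle as an IFS: three contractive affine maps,
# each (x, y) -> (x/2 + dx, y/2 + dy). A few numbers per map encode an
# arbitrarily detailed image -- the storage saving behind IFS coding.
MAPS = [(0.0, 0.0), (0.5, 0.0), (0.25, 0.5)]

def chaos_game(n=10000, seed=42):
    """Iterate randomly chosen IFS maps; the points converge onto the
    attractor (here, the Sierpinski triangle)."""
    random.seed(seed)
    x, y = 0.0, 0.0
    pts = []
    for _ in range(n):
        dx, dy = random.choice(MAPS)
        x, y = x / 2 + dx, y / 2 + dy
        pts.append((x, y))
    return pts

pts = chaos_game()
# every generated point stays inside the attractor's bounding box
print(all(0 <= x <= 1 and 0 <= y <= 1 for x, y in pts))  # → True
```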

    A fractal image compression algorithm based on improved imperialist competitive algorithm

    Fractal image compression (FIC) is a lossy compression method that has the potential to improve the performance of image transmission and image storage and to provide security against illicit monitoring. The important features of FIC are a high compression ratio and high resolution of decompressed images, but the main problem of FIC is the computational complexity of the algorithm. Besides that, FIC also suffers from a high number of Mean Square Error (MSE) computations in the best-matching search between range blocks and domain blocks, which limits the algorithm. In this thesis, two approaches are proposed. Firstly, a new algorithm based on the Imperialist Competitive Algorithm (ICA) is introduced. This is followed by a two-tier algorithm as the second approach, to further improve the performance of the algorithm and reduce the MSE computation of FIC. In the first tier, all the range and domain blocks are classified by edge property using the Discrete Cosine Transform. In the second tier, ICA is applied according to the classified blocks. In the ICA, the solution is divided into two groups, known as developed and undeveloped countries, to maintain the quality of the retrieved image and accelerate the algorithm. The MSE value is only calculated for the developed countries. Experimental results show that the proposed algorithm performs better than genetic algorithms (GAs) and the full-search algorithm in terms of MSE computation. Moreover, in terms of Peak Signal-to-Noise Ratio, the approaches produce high-quality decompressed images, better than those of the GAs.
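The range-domain best-match search whose MSE cost such algorithms try to reduce can be sketched as a plain full search. The block sizes, the least-squares grey-level fit, and the toy data below are generic illustrations, not the thesis's implementation:

```python
import numpy as np

def best_match(range_block, domain_blocks):
    """Full search: find the domain block minimizing MSE after an
    affine grey-level fit r ≈ s*d + o (least squares per pair)."""
    best_idx, best_mse = None, np.inf
    r = np.asarray(range_block, dtype=float).ravel()
    for idx, dom in enumerate(domain_blocks):
        d = np.asarray(dom, dtype=float).ravel()
        # least-squares contrast s and brightness o for this pairing
        A = np.vstack([d, np.ones_like(d)]).T
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        mse = np.mean((s * d + o - r) ** 2)
        if mse < best_mse:
            best_idx, best_mse = idx, mse
    return best_idx, best_mse

# toy example: a 4x4 range block against 16 candidate domain blocks
# (assumed already down-sampled to the range-block size)
rng = np.random.default_rng(0)
range_block = rng.integers(0, 256, (4, 4))
domains = [rng.integers(0, 256, (4, 4)) for _ in range(16)]
idx, mse = best_match(range_block, domains)
print(idx, round(mse, 2))
```

The MSE counting that the thesis targets corresponds to the body of this loop: a full search evaluates it once per range-domain pairing, which is exactly what classification and ICA-style search aim to avoid.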

    Data comparison schemes for Pattern Recognition in Digital Images using Fractals

    Pattern recognition in digital images is a common problem with applications in remote sensing, electron microscopy, medical imaging, seismic imaging and astrophysics, for example. Although this subject has been researched for over twenty years, there is still no general solution which can be compared with the human cognitive system, in which a pattern can be recognised subject to arbitrary orientation and scale. The application of Artificial Neural Networks (ANNs) can in principle provide a very general solution, provided suitable training schemes are implemented. However, this approach raises some major issues in practice. First, the CPU time required to train an ANN for a grey-level or colour image can be very large, especially if the object has a complex structure with no clear geometrical features, such as those that arise in remote sensing applications. Secondly, both the core and file-space memory required to represent large images and their associated data leads to a number of problems in which the use of virtual memory is paramount. The primary goal of this research has been to assess methods of image data compression for pattern recognition using a range of different compression methods. In particular, this research has resulted in the design and implementation of a new algorithm for general pattern recognition based on the use of fractal image compression. This approach has for the first time allowed the pattern recognition problem to be solved in a way that is invariant to rotation and scale. It allows both ANNs and correlation to be used, subject to appropriate pre- and post-processing techniques for digital image processing, an aspect for which a dedicated programmer's workbench has been developed using X-Designer.

    PROCESS-PROPERTY-FABRIC ARCHITECTURE RELATIONSHIPS IN FIBRE-REINFORCED COMPOSITES

    The use of fibre-reinforced polymer matrix composite materials is growing at a faster rate than GDP in many countries. An improved understanding of their processing and mechanical behaviour would extend the potential applications of these materials. For unidirectional composites, it is predicted that localised absence of fibres is related to longitudinal compression failure. The use of woven reinforcements permits more effective manufacture than for unidirectional fibres. It has been demonstrated experimentally that the compression strengths of woven composites are reduced when fibres are clustered. Summerscales predicted that clustering of fibres would increase the permeability of the reinforcement and hence expedite the processing of these materials. Commercial fabrics are available which employ this concept using flow-enhancing bound tows. The net effect of clustering fibres is to enhance processability whilst reducing the mechanical properties. The effects reported above were qualitative correlations. Gross differences in the appearance of laminate sections are apparent for different weave styles. For the quantification of subtle changes in fabric architecture, the use of automated image analysis is essential. Griffin used Voronoi tessellation to measure the microstructures of composites made using flow-enhancing tows. The data were presented as histograms, with no single parameter to quantify the microstructure. This thesis describes the use of automated image analysis for the measurement of the microstructures of woven fibre-reinforced composites, and pioneers the use of fractal dimensions as a single parameter for their quantification. It further considers the process-property-structure relationships for commercial and experimental fabric reinforcements in an attempt to resolve the processing-versus-properties dilemma.
A new flow-enhancement concept has been developed which has a reduced impact on laminate mechanical properties. (University of Bristol and Carr Reinforcements Limited)
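A fractal dimension used as a single microstructure parameter is commonly estimated by box counting. The sketch below shows the standard estimator on a binary micrograph mask; the thesis's own estimator and box sizes may differ:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting fractal dimension of a binary image:
    count occupied boxes N(s) at each box size s, then fit
    log N(s) = -D * log s + c by least squares and return D."""
    counts = []
    for s in sizes:
        # trim so the image tiles exactly into s x s boxes
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiled = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiled.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# a completely filled region behaves as a 2-D set
print(round(box_counting_dimension(np.ones((64, 64), bool)), 2))  # → 2.0
```

A single number like this is what allows subtle differences in fibre clustering between weave styles to be ranked, where a histogram of Voronoi cell areas cannot be.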

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification, and other miscellaneous systems covering management policies for biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems, and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Inter-Frame Video Compression based on Adaptive Fuzzy Inference System Compression of Multiple Frame Characteristics

    Video compression is used for storage or bandwidth efficiency of clip video information. Video compression involves encoders and decoders, and uses intra-frame, inter-frame, and block-based methods. With inter-frame compression, neighbouring frame pairs are compressed into one compressed frame. This study defines odd and even neighbouring frame pairings. Motion estimation, compensation, and frame difference underpin video compression methods. In this study, an adaptive FIS (Fuzzy Inference System) compresses and decompresses each odd-even frame pair. First, the adaptive FIS is trained on all feature pairings of each odd-even frame pair. Video compression-decompression then uses the trained adaptive FIS as a codec. The features utilized are "mean", "std (standard deviation)", "mad (mean absolute deviation)", and "mean (std)". This study uses the average DCT (Discrete Cosine Transform) components of all video frames as a quality parameter. The adaptive FIS training feature and the number of odd-even frame pairs affect the compression ratio variation. The proposed approach achieves CR=25.39% and P=80.13%. "Mean" performs best overall (P=87.15%). "Mean (mad)" has the best compression ratio (CR=24.68%) for storage efficiency. The "std" feature can compress the video without decompression since it has the lowest quality change (Q_dct=10.39%).
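The per-frame statistics named in the abstract ("mean", "std", "mad") can be computed as below; the function names and toy frames are illustrative, not the study's code, and the exact pairing scheme fed to the FIS is assumed:

```python
import numpy as np

def frame_features(frame):
    """Per-frame statistics of the kind named in the abstract:
    mean, std (standard deviation), and mad (mean absolute deviation)."""
    f = np.asarray(frame, dtype=float)
    mean = f.mean()
    std = f.std()
    mad = np.abs(f - mean).mean()
    return mean, std, mad

def pair_features(odd_frame, even_frame):
    """Concatenated features for an odd-even neighbouring frame pair."""
    return frame_features(odd_frame) + frame_features(even_frame)

# toy 8x8 grey-level frames standing in for a real odd-even pair
rng = np.random.default_rng(1)
odd, even = rng.integers(0, 256, (2, 8, 8))
print([round(v, 2) for v in pair_features(odd, even)])
```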

    An Optical Machine Vision System for Applications in Cytopathology

    This paper discusses a new approach to the processes of object detection, recognition and classification in a digital image, focusing on problems in cytopathology. A unique self-learning procedure is presented in order to incorporate expert knowledge. The classification method is based on the application of a set of features which includes fractal parameters such as the lacunarity and the Fourier dimension. Thus, the approach includes the characterisation of an object in terms of its fractal properties and texture characteristics. The principal issues associated with object recognition are presented, including the basic model and segmentation algorithms. The self-learning procedure for designing a decision-making engine using fuzzy logic and membership function theory is also presented, and a novel technique for the creation and extraction of information from a membership function is considered. The methods discussed and the algorithms developed have a range of applications; in this work, we focus on the engineering of a system for automating the Papanicolaou screening test.
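Lacunarity, one of the fractal features mentioned, is commonly computed with a gliding-box method. The sketch below uses the second-moment ratio formulation, which may differ from the paper's exact definition:

```python
import numpy as np

def lacunarity(mask, box=2):
    """Gliding-box lacunarity: slide a box x box window over a binary
    mask, record the mass (count of set pixels) in each window, and
    return the second-moment ratio  Λ = <M^2> / <M>^2 ."""
    h, w = mask.shape
    masses = np.array([
        mask[i:i + box, j:j + box].sum()
        for i in range(h - box + 1)
        for j in range(w - box + 1)
    ], dtype=float)
    m = masses.mean()
    return float((masses ** 2).mean() / m ** 2) if m > 0 else np.inf

# a uniform, gap-free texture attains the minimum lacunarity of 1
print(lacunarity(np.ones((8, 8), dtype=int)))  # → 1.0
```

Higher values indicate larger, more heterogeneous gaps at the chosen box size, which is why lacunarity complements a fractal dimension when characterising cell texture.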