265 research outputs found

    Combining Fractal Coding and Orthogonal Linear Transforms


    Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature

    Fractal Image Compression (FIC) was first conceptualized as a model in 1989, and numerous models have since been developed from it. The existence of fractals was initially observed and described through the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires far less storage than the image itself, which motivated representing images in IFS form and shaped how image compression systems developed. Reducing the time consumed by encoding is essential for achieving optimal compression conditions, and the solutions reviewed in this study show that, despite the progress made, there remains considerable scope for improvement. From the exhaustive range of models reviewed, it is evident that numerous advancements in the FIC model have taken place over time and that it has been adapted to image compression at various levels. This study focuses on the existing literature on FIC and presents insights into the various models.
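The core IFS encoding step this review surveys — matching each image block to a contractively transformed copy of another block — can be sketched roughly as follows. This is a minimal illustration, not any surveyed method: the function name `encode_block`, the contractivity bound 0.9, and the use of equally sized blocks are assumptions, and the usual downsampling of larger domain blocks is omitted for brevity.

```python
import numpy as np

def encode_block(range_block, domain_blocks):
    """Illustrative sketch: find the domain block whose intensity-adjusted
    copy s*d + o best approximates the range block (least squares)."""
    best, best_err = None, np.inf
    r = range_block.astype(float).ravel()
    for idx, d in enumerate(domain_blocks):
        d = d.astype(float).ravel()
        # Least-squares fit of contrast s and brightness o: r ~ s*d + o
        A = np.vstack([d, np.ones_like(d)]).T
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        s = float(np.clip(s, -0.9, 0.9))  # enforce contractivity of the map
        err = float(np.sum((s * d + o - r) ** 2))
        if err < best_err:
            best_err, best = err, (idx, s, o)
    return best  # (domain index, contrast, brightness)
```

The exhaustive search over all domain blocks is exactly the encoding-time bottleneck that the fast-search approaches reviewed above try to avoid.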

    Quadtree partitioning scheme of color image based

    Image segmentation is an essential complementary process in digital image processing and computer vision, but it mostly relies on simple segmentation techniques, such as fixed partitioning schemes and global thresholding, chosen for their simplicity and popularity in spite of their inefficiency. This paper introduces a new split-merge segmentation process for a quadtree scheme on colour images, based on exploiting the spatial information embedded within bands and the spectral information between bands, respectively. The results show that this technique is efficient in terms of segmentation quality and time, and that it can be used in standard techniques as an alternative to a fixed partitioning scheme.
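The split half of such a split-merge quadtree scheme can be sketched as below. This is a hedged simplification: it uses a single-band variance threshold as the homogeneity test (the paper itself exploits spatial and spectral information across colour bands), the merge pass is omitted, and `quadtree_split` and its parameters are illustrative names.

```python
import numpy as np

def quadtree_split(img, thresh, min_size=2):
    """Illustrative sketch: recursively split a square region into four
    quadrants while its intensity variance exceeds thresh."""
    h, w = img.shape
    if h <= min_size or img.var() <= thresh:
        return [(0, 0, h, w)]  # homogeneous leaf: (row, col, height, width)
    hh, hw = h // 2, w // 2
    leaves = []
    for dr, dc in [(0, 0), (0, hw), (hh, 0), (hh, hw)]:
        sub = img[dr:dr + hh, dc:dc + hw]
        leaves += [(r + dr, c + dc, sh, sw)
                   for r, c, sh, sw in quadtree_split(sub, thresh, min_size)]
    return leaves
```

A subsequent merge pass would then rejoin adjacent leaves whose combined region still passes the homogeneity test.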

    Giving eyes to ICT!, or How does a computer recognize a cow?

    The system developed by Schouten and other researchers at CWI is based on describing images using fractal geometry. Human perception turns out to be so efficient partly because it relies heavily on similarities, so it is natural to look for mathematical methods that do the same. Schouten therefore investigated image coding using 'fractals'. Fractals are self-similar geometric figures, built up by repeated transformation (iteration) of a simple base pattern, which thereby branches out at ever smaller scales. At every level of detail, a fractal resembles itself (the Droste effect). With fractals, one can fairly easily create deceptively realistic depictions of nature. Fractal image coding assumes that the reverse also holds: an image can be stored effectively in the form of the base patterns of a small number of fractals, together with the prescription for reconstructing the original image from them. The system developed at CWI in collaboration with researchers from Leuven is partly based on this method. ISBN 906196502

    Facial Image Retrieval on Semantic Features Using Adaptive Genetic Algorithm

    The emergence of larger databases has made image retrieval techniques an essential component and has driven the development of more efficient image retrieval systems. Retrieval can be either content-based or text-based. In this paper, the focus is on content-based image retrieval from the FGNET database. Input query images are subjected to several processing techniques before the squared Euclidean distance (SED) between them and the database images is computed; the images with the shortest distance are considered matches and are retrieved. The processing techniques involve applying the median modified Wiener filter (MMWF) and extracting low-level features using histograms of oriented gradients (HOG), the discrete wavelet transform (DWT), GIST, and the local tetra pattern (LTrP). Finally, features are selected using the Viola-Jones algorithm. In this study, the average PSNR value obtained after applying the Wiener filter was 45.29. The performance of the adaptive genetic algorithm (AGA) was evaluated based on its precision, F-measure, and recall; the average values obtained were 0.75, 0.692, and 0.66, respectively. The performance of the AGA was compared to that of the particle swarm optimization (PSO) and genetic algorithm (GA) approaches and found to be better, proving its efficiency.
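The SED matching step described in this abstract can be sketched in a few lines. The feature extraction pipeline (MMWF, HOG, DWT, GIST, LTrP) is assumed to have run already; `retrieve` and its parameters are illustrative names, not the paper's API.

```python
import numpy as np

def retrieve(query_feat, db_feats, k=3):
    """Illustrative sketch: rank database images by squared Euclidean
    distance (SED) to the query feature vector and return the k closest."""
    d = np.sum((db_feats - query_feat) ** 2, axis=1)  # SED per database image
    return np.argsort(d)[:k]  # indices of the k nearest images
```

In the paper's setting, the feature vectors fed to this step would first pass through the AGA-driven selection stage rather than being used raw.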

    Fast Edge Preserving Fractal System

    Electrical Engineering

    Image Area Reduction for Efficient Medical Image Retrieval

    Content-based image retrieval (CBIR) has been one of the most active areas in medical image analysis over the last two decades because of the steady increase in the number of digital images in use. Developing retrieval systems that support efficient diagnosis and treatment planning can help provide high-quality healthcare. Extensive research has attempted to improve image retrieval efficiency. The critical factors when searching large databases are time and storage requirements. In general, although many methods have been suggested to increase accuracy, fast retrieval has been investigated only sporadically. In this thesis, two different approaches are proposed to reduce both the time and space requirements of medical image retrieval. The IRMA dataset is used to validate the proposed methods. Both methods utilize Local Binary Pattern (LBP) histogram features extracted from the 14,410 X-ray images of the IRMA dataset. The first method is image folding, which operates on salient regions in an image; saliency is determined by a context-aware saliency algorithm that guides the folding. After the folding process, the reduced image area is used to extract multi-block and multi-scale LBP features, which are classified by a multi-class support vector machine (SVM). The second method combines classification with distance-based feature similarity. Images are first classified into general classes using LBP features; the retrieval is then performed within the class to locate the most similar images. Between the classification and retrieval steps, LBP features are pruned by using the error histogram of a shallow (n/p/n) autoencoder to quantify the retrieval relevance of image blocks: if a region is relevant, the autoencoder yields a large error when decoding it, so by examining the autoencoder error of image blocks, irrelevant regions can be detected and eliminated. To calculate similarity within general classes, the distance between the LBP features of the relevant regions is computed. The results show that retrieval time can be reduced and storage requirements lowered without a significant decrease in accuracy.
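The LBP histogram descriptor that both of the thesis's methods build on can be sketched as follows. This is a basic 8-neighbour variant with no interpolation and border pixels skipped, assumed for illustration; the thesis's multi-block, multi-scale features would apply such a histogram per block and scale.

```python
import numpy as np

def lbp_histogram(img):
    """Illustrative sketch: 256-bin histogram of basic 8-neighbour LBP
    codes, computed over the interior pixels of a grayscale image."""
    h, w = img.shape
    hist = np.zeros(256, dtype=int)
    # Offsets of the 8 neighbours, in a fixed clockwise order
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di, j + dj] >= c:  # neighbour at least as bright
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

Comparing two images then reduces to a distance between such histograms, which is the per-class similarity computation the second method performs.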