    Analysing and processing medical images with increased performance using fractal geometry

    The research applied a series of steps to analyse medical images. To achieve this goal, techniques from both fractal geometry and texture analysis were combined: the studied image was first enhanced, its texture was then analysed in terms of the fractal dimension, and a hybrid method was proposed for segmenting images of complex conditions and structures based on repeated geometric patterns, represented by the fractal (Hurst) filter, one of the modern techniques used in digital image processing. Fractal methods were applied to real fractal structures in the medical images, measuring their fractal dimensions and capturing scale-dependent features in fractional dimensions; the accuracy reached 98% in diagnosing pathological conditions, with an error rate close to zero. The multifractal coefficients (α) were also calculated with a threshold factor of 4.5, and texture was classified using the fractal algorithm together with Gray-Level Co-Occurrence Matrices (GLCM); according to the experimental results on the medical images, the classification method achieves a classification rate of 95%. To increase accuracy, lacunarity was calculated for healthy medical images by applying fractal-theorem filters, where the gap ratio was close to 1 in the lacunarity measure. The results also showed that continued smoothing, or a reduction in the image's intensity levels, causes a significant decrease in image contrast, especially in edge regions.
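    The abstract combines several standard fractal measurements. As a rough illustration of two of them, the Python sketch below estimates a box-counting fractal dimension and a gliding-box lacunarity for a binarised image; this is a generic textbook construction with illustrative box sizes, not the authors' actual pipeline, and `binary` is assumed to be a 2-D 0/1 NumPy array.

        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
            # Count boxes of side s containing any foreground pixel, then fit
            # log N(s) against log(1/s); the slope estimates the fractal dimension.
            counts = []
            for s in sizes:
                h, w = binary.shape
                t = binary[:h - h % s, :w - w % s]
                boxes = t.reshape(t.shape[0] // s, s, t.shape[1] // s, s).sum(axis=(1, 3))
                counts.append(max((boxes > 0).sum(), 1))
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        def lacunarity(binary, box=8):
            # Gliding-box lacunarity: second moment of box mass over squared mean.
            masses = sliding_window_view(binary, (box, box)).sum(axis=(2, 3)).ravel()
            m = masses.mean()
            return masses.var() / (m * m) + 1.0

        # Usage: dim = box_counting_dimension(img > 128); gap = lacunarity(img > 128)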

    Computer Vision for Timber Harvesting

    Distributed video through telecommunication networks using fractal image compression techniques

    The research presented in this thesis investigates the use of fractal compression techniques for a real-time video distribution system. The motivation for this work was that the method has some useful properties which satisfy many requirements for video compression. In addition, as a novel technique, the fractal compression method has great potential. In this thesis, we initially develop an understanding of the state of the art in image and video compression and describe the mathematical concepts and basic terminology of the fractal compression algorithm. Several schemes which aim to improve the algorithm for still images are then examined. Amongst these, two novel contributions are described. The first is the partitioning of the image into sections, which resulted in a significant reduction of the compression time. In the second, the use of the median metric as an alternative to the RMS was considered but was not finally adopted, since the RMS proved to be a more efficient measure. The extension of the fractal compression algorithm from still images to image sequences is then examined, and three different schemes to reduce the temporal redundancy of the video compression algorithm are described. The reduction in the execution time of the compression algorithm that can be obtained by the techniques described is significant, although real-time execution has not yet been achieved. Finally, the basic concepts of distributed programming and networks, as basic elements of a video distribution system, are presented, and the hardware and software components of a fractal video distribution system are described. The implementation of the fractal compression algorithm on a TMS320C40 is also considered for speed benefits, and it is found that a relatively large number of processors is needed for real-time execution.
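    As background to the block partitioning and RMS-versus-median experiments described above, the following Python sketch shows the core range/domain search of a basic fractal encoder, with a least-squares contrast/brightness fit and an RMS error criterion. The block sizes and the non-overlapping domain pool are illustrative textbook choices, not the thesis implementation, and the image dimensions are assumed to be multiples of the block size.

        import numpy as np

        def downsample(block):
            # Average 2x2 pixels so a domain block matches the range-block size.
            return block.reshape(block.shape[0] // 2, 2, block.shape[1] // 2, 2).mean(axis=(1, 3))

        def encode(image, rsize=8):
            h, w = image.shape
            dsize = 2 * rsize
            domains = [downsample(image[y:y + dsize, x:x + dsize].astype(float))
                       for y in range(0, h - dsize + 1, dsize)
                       for x in range(0, w - dsize + 1, dsize)]
            codes = []
            for y in range(0, h, rsize):
                for x in range(0, w, rsize):
                    r = image[y:y + rsize, x:x + rsize].astype(float).ravel()
                    best = None
                    for idx, dom in enumerate(domains):
                        d = dom.ravel()
                        var = d.var()
                        # Least-squares contrast s and brightness o in r ~ s*d + o.
                        s = ((d - d.mean()) * (r - r.mean())).mean() / var if var > 0 else 0.0
                        o = r.mean() - s * d.mean()
                        rms = np.sqrt(np.mean((s * d + o - r) ** 2))
                        if best is None or rms < best[0]:
                            best = (rms, idx, s, o)
                    codes.append((y, x) + best)  # position, error, domain index, s, o
            return codes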

    Super-Resolution Reconstruction of Remote Sensing Images Using Multifractal Analysis

    Satellite remote sensing (RS) is an important contributor to Earth observation, providing various kinds of imagery every day, but low spatial resolution remains a critical bottleneck in many applications, restricting higher-resolution analysis (e.g., intra-urban studies). In this study, a multifractal-based super-resolution reconstruction method is proposed to alleviate this problem. Multifractal characteristics are common in nature, and the self-similarity or self-affinity present in an image can be used to estimate detail at scales larger and smaller than the original. We first test for the presence of multifractal characteristics in the images. We then estimate the parameters of the information transfer function and the noise of the low-resolution image. Finally, a noise-free, resolution-enhanced image is generated by a fractal-coding-based denoising and downscaling method. The empirical case shows that the reconstructed super-resolution image performs well in detail enhancement. This method is useful not only for remote sensing of the Earth, but also for other images with multifractal characteristics.
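    The "test for multifractal characteristics" step can be illustrated with a standard moment-scaling check: treat the image as a measure, compute box-mass moments at several scales, and see whether the mass exponents tau(q) are nonlinear in q. The Python sketch below is that generic check only, assuming a non-negative image; it does not reproduce the paper's information-transfer-function estimation or denoising steps.

        import numpy as np

        def mass_exponents(image, qs=(0.5, 1.0, 2.0, 3.0), sizes=(2, 4, 8, 16, 32)):
            # Normalise the image to a probability measure, then fit the scaling
            # of the q-th moment of box masses over box size s.
            mu = image.astype(float)
            mu = mu / mu.sum()
            log_s = np.log(sizes)
            taus = {}
            for q in qs:
                log_z = []
                for s in sizes:
                    h, w = mu.shape
                    t = mu[:h - h % s, :w - w % s]
                    boxes = t.reshape(t.shape[0] // s, s, t.shape[1] // s, s).sum(axis=(1, 3))
                    log_z.append(np.log((boxes[boxes > 0] ** q).sum()))
                taus[q] = np.polyfit(log_s, log_z, 1)[0]  # slope is tau(q)
            return taus

        # A monofractal image gives tau(q) close to a straight line in q;
        # curvature in tau(q) is the multifractal signature the method relies on.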

    Analysis of Image Compression Methods Based On Transform and Fractal Coding

    Image compression is the process of removing redundant information from an image so that only the essential information is stored, reducing storage size, transmission bandwidth and transmission time. The essential information is extracted by various transform techniques such that the image can be reconstructed without losing its quality and information. In this thesis work, a comparative analysis of image compression is carried out using four methods: the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), a hybrid (DCT+DWT) transform, and fractal coding. MATLAB programs were written for each of the above methods, and it was concluded from the results obtained that the hybrid DWT-DCT algorithm performs much better than the standalone JPEG-based DCT and DWT algorithms in terms of peak signal-to-noise ratio (PSNR), as well as visual perception at higher compression ratios. The popular JPEG standard is widely used in digital cameras and web-based image delivery. The wavelet transform, which is part of the newer JPEG 2000 standard, claims to minimize some of the visually distracting artifacts that can appear in JPEG images. For one thing, it uses much larger blocks for compression (selectable, but typically 1024 x 1024 pixels) rather than the 8 x 8 pixel blocks used in the original JPEG method, which often produced visible block boundaries. Fractal compression has also shown promise and claims to be able to enlarge images by inserting "realistic" detail beyond the resolution limit of the original. Each method is discussed in the thesis.
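    To make the hybrid idea and the PSNR figure of merit concrete, here is a minimal Python sketch, assuming the pywt and scipy packages are available: one level of Haar DWT, a DCT on the approximation band, and coefficient thresholding as a crude stand-in for quantisation. The thesis's MATLAB pipeline is not reproduced; the wavelet choice, the kept-coefficient fraction, and discarding the detail bands are all illustrative assumptions.

        import numpy as np
        import pywt
        from scipy.fft import dctn, idctn

        def hybrid_compress(image, keep=0.1):
            # One-level DWT, then DCT on the approximation band; keep only the
            # largest `keep` fraction of DCT coefficients (crude quantisation).
            cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
            coeffs = dctn(cA, norm='ortho')
            thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
            coeffs[np.abs(coeffs) < thresh] = 0.0
            cA_rec = idctn(coeffs, norm='ortho')
            zeros = (np.zeros_like(cH), np.zeros_like(cV), np.zeros_like(cD))
            return pywt.idwt2((cA_rec, zeros), 'haar')  # detail bands dropped here

        def psnr(original, reconstructed, peak=255.0):
            # Peak signal-to-noise ratio in dB, the quality measure cited above.
            mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
            return 10.0 * np.log10(peak * peak / mse)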

    Digital image compression

    Biometric Systems

    Biometric authentication has been widely used for access control and security systems over the past few years. The purpose of this book is to provide readers with the life cycle of different biometric authentication systems, from their design and development to qualification and final application. The major systems discussed in this book include fingerprint identification, face recognition, iris segmentation and classification, signature verification and other miscellaneous systems covering management policies of biometrics, reliability measures, pressure-based typing and signature verification, bio-chemical systems and behavioral characteristics. In summary, this book provides students and researchers with different approaches to developing biometric authentication systems, and at the same time includes state-of-the-art approaches to their design and development. The approaches have been thoroughly tested on standard databases and in real-world applications.

    Fractal Analysis

    Fractal analysis is becoming more and more common in all walks of life, including biomedical engineering, steganography and art. Writing one book on all these topics is a very difficult task; for this reason, this book covers only selected topics. Interested readers will find in this book the topics of image compression, groundwater quality, establishing the downscaling and spatio-temporal scale conversion models of NDVI, modelling and optimization of 3T fractional nonlinear generalized magneto-thermoelastic multi-material, algebraic fractals in steganography, strain-induced microstructures in metals and much more. The book will be of interest to scientists dealing with fractal analysis, as well as to biomedical and IT engineers. I encourage you to view the individual chapters.

    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Bibliography: p. 208-225.

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
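    One simple operational reading of "self-affinity", consistent with the vector quantisation analogy above, is the fraction of each range block's variance that the best affine fit from the image's own decimated domain blocks can explain. The Python sketch below computes such a score; it is an illustration of the concept only, with hypothetical block sizes, and is not the dissertation's stochastic-model analysis.

        import numpy as np

        def downsample(b):
            # Average 2x2 pixels so a domain block matches the range-block size.
            return b.reshape(b.shape[0] // 2, 2, b.shape[1] // 2, 2).mean(axis=(1, 3))

        def self_affinity(image, rsize=8):
            # Score in [0, 1]: mean fraction of range-block variance explained by
            # the best affine (contrast + brightness) fit from any domain block.
            h, w = image.shape
            d = 2 * rsize
            doms = [downsample(image[y:y + d, x:x + d].astype(float)).ravel()
                    for y in range(0, h - d + 1, d) for x in range(0, w - d + 1, d)]
            scores = []
            for y in range(0, h - rsize + 1, rsize):
                for x in range(0, w - rsize + 1, rsize):
                    r = image[y:y + rsize, x:x + rsize].astype(float).ravel()
                    rv = r.var()
                    if rv == 0:
                        continue  # flat blocks are trivially representable
                    best = rv  # residual of the mean-only (brightness-only) baseline
                    for dv in doms:
                        dvar = dv.var()
                        if dvar > 0:
                            cov = ((dv - dv.mean()) * (r - r.mean())).mean()
                            best = min(best, rv - cov * cov / dvar)
                    scores.append(1.0 - best / rv)
            return float(np.mean(scores))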

    The Space and Earth Science Data Compression Workshop

    This document is the proceedings from the Space and Earth Science Data Compression Workshop, which was held on March 27, 1992, at the Snowbird Conference Center in Snowbird, Utah. The workshop was held in conjunction with the 1992 Data Compression Conference (DCC '92), which took place at the same location, March 24-26, 1992. The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. It consisted of eleven papers presented in four sessions. These papers describe research that is integrated into, or has the potential of being integrated into, a particular space and/or Earth science data information system. Presenters were encouraged to take into account the scientists' data requirements and the constraints imposed by the data collection, transmission, distribution, and archival system.