
    Automatic epilepsy detection using fractal dimensions segmentation and GP-SVM classification

    Objective: The most important part of signal processing for classification is feature extraction, a mapping from the original electroencephalographic (EEG) data space to a new feature space with the greatest class separability. Feature extraction is not only the most important but also the most difficult part of the classification process, as it defines the input data and the classification quality; an ideal set of features would make the classification problem trivial. This article presents novel methods of feature extraction and automatic epileptic seizure classification that combine machine learning methods with genetic evolutionary algorithms. Methods: Classification is performed on EEG data representing electrical brain activity. First, the signal is preprocessed with digital filtration and adaptive segmentation using fractal dimension as the only segmentation measure. Next, a novel method using genetic programming (GP) combined with a support vector machine (SVM) confusion matrix as the fitness-function weight is used to extract feature vectors compressed into a lower-dimensional space and to classify the final result into ictal or interictal epochs. Results: The GP-SVM method improves the discriminatory performance of the classifier while reducing the feature dimensionality at the same time. Members of the GP tree structure represent the features themselves, and their number is decided automatically by the compression function introduced in this paper. This novel method improves the overall performance of the SVM classification by dramatically reducing the size of the input feature vector. Conclusion: According to the results, the accuracy of this algorithm is very high and comparable to, or even better than, other automatic detection algorithms. Combined with its high efficiency, the algorithm can be used in real-time epilepsy detection applications. The classification results show high sensitivity and specificity, except for generalized tonic-clonic seizures (GTCS). As a next step, the compression stage and the final SVM evaluation stage will be optimized, and more GTCS data need to be obtained to improve the overall classification score for that seizure type.
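
    The GP-SVM fitness function is not spelled out in the abstract; as a hedged illustration only, the sketch below scores a candidate feature set (such as one produced by a GP individual) by training an SVM and deriving a fitness value from its confusion matrix. The use of scikit-learn, the mean per-class recall criterion, and the toy data are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: scoring a GP individual's candidate features with an
# SVM confusion matrix, standing in for the fitness weighting described above.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

def confusion_fitness(features: np.ndarray, labels: np.ndarray) -> float:
    """Train an SVM on the candidate features and derive a fitness value
    from its confusion matrix (mean per-class recall)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, stratify=labels, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    cm = confusion_matrix(y_te, clf.predict(X_te))
    per_class_recall = cm.diagonal() / cm.sum(axis=1)
    return float(per_class_recall.mean())

# Toy usage: random "epochs" with 4 GP-produced features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 0 = interictal, 1 = ictal (synthetic labels)
print(confusion_fitness(X, y))
```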

    A survey of parallel algorithms for fractal image compression

    This paper presents a short survey of the key research work that has been undertaken in the application of parallel algorithms to fractal image compression. The interest in fractal image compression techniques stems from their ability to achieve high compression ratios whilst maintaining a very high quality in the reconstructed image. The main drawback of this compression method is the very high computational cost associated with the encoding phase. Consequently, there has been significant interest in exploiting parallel computing architectures in order to speed up this phase, whilst still maintaining the advantageous features of the approach. This paper presents a brief introduction to fractal image compression, including the iterated function system theory upon which it is based, and then reviews the different techniques that have been, and can be, applied in order to parallelize the compression algorithm.
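
    As a concrete illustration of the encoding step that the surveyed parallel approaches target, the following sketch distributes the best-matching-domain search across worker processes; the plain MSE criterion, the fixed 8x8 blocks, and the multiprocessing pool are simplifying assumptions rather than any specific technique from the survey.

```python
# Simplified sketch of the parallelizable core of fractal encoding:
# each range block independently searches a pool of domain blocks.
import numpy as np
from multiprocessing import Pool
from functools import partial

def best_domain(range_block: np.ndarray, domains: np.ndarray) -> int:
    """Return the index of the domain block with the lowest MSE against
    this range block (no contrast/brightness fitting in this sketch)."""
    errors = ((domains - range_block) ** 2).mean(axis=(1, 2))
    return int(errors.argmin())

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ranges = rng.random((64, 8, 8))    # 64 range blocks of 8x8 pixels
    domains = rng.random((256, 8, 8))  # 256 downsampled domain blocks

    # Each range block's search is independent, so it maps cleanly onto a
    # pool of workers (or, in the surveyed work, onto many cores or GPUs).
    with Pool() as pool:
        matches = pool.map(partial(best_domain, domains=domains), list(ranges))
    print(matches[:8])
```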

    Fast Search Approaches for Fractal Image Coding: Review of Contemporary Literature

    Fractal Image Compression (FIC) was conceptualized as a model in 1989, and numerous models have since been developed from it. Fractals were initially observed and described through the Iterated Function System (IFS), and IFS solutions were used for encoding images. The IFS representation of an image requires far less storage space than the image itself, which led to the development of image compression systems based on the IFS form. Reducing the time consumed by the encoding stage is essential for achieving optimal compression, and the solutions reviewed in this study show that, despite the developments that have taken place, there is still considerable scope for improvement. The exhaustive range of models reviewed makes it evident that numerous advancements in the FIC model have occurred over time and that it has been adapted to image compression at varied levels. This study focuses on the existing literature on FIC and presents insights into the various models.

    A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Fractal compression is a lossy compression technique used for gray/color image and video compression. It gives a high compression ratio and good image quality with fast decoding, but reducing the encoding time remains a challenge. This review analyzes the most significant existing approaches in fractal-based gray/color image and video compression; the block matching motion estimation approaches used to find motion vectors through inter-frame coding and intra-frame coding (i.e., coding of individual frames); and automata theory based coding approaches for representing an image or a sequence of images. Although other review papers on fractal coding exist, this paper differs in several respects. One can develop new shape patterns for motion estimation and combine existing block matching motion estimation with automata-based coding to explore fractal compression, with a specific focus on reducing the encoding time and achieving better image/video reconstruction quality. This paper is useful for beginners in the domain of video compression.
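
    As a hedged illustration of the block matching motion estimation discussed above, the sketch below implements a basic full search with a sum-of-absolute-differences (SAD) criterion; the block size, search radius, and synthetic test frames are assumptions and do not reproduce any specific shape pattern or fast search from the reviewed literature.

```python
# Minimal full-search block matching: find the motion vector that minimizes
# the sum of absolute differences (SAD) between a block in the current frame
# and candidate blocks in the reference frame.
import numpy as np

def full_search(cur, ref, top, left, block=8, radius=4):
    """Return (dy, dx) minimizing SAD for the block at (top, left)."""
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))  # shift content down 2, left 1
print(full_search(cur, ref, top=24, left=24))   # expect (-2, 1)
```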

    A Fast Fractal Image Compression Algorithm Combined with Graphic Processor Unit

    To address the computational intensity of fractal image compression encoding, a serial-to-parallel transfer mechanism is built for the encoding procedure. By exploiting the single-instruction, multi-thread execution model of the compute unified device architecture (CUDA), a parallel computational model of fractal encoding is built on the graphics processing unit (GPU) in order to parallelize the highly time-consuming serial search for the best-matching block. The experimental results indicate that the algorithm shortens the encoding time to the millisecond scale and significantly boosts the execution efficiency of the fractal image encoding algorithm while keeping the decoded image in good quality.
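
    The abstract's central point is that the best-match search for each range block is independent work that maps naturally onto CUDA threads. The sketch below only illustrates that data-parallel structure on the CPU, fitting contrast s and brightness o against all domain blocks at once with vectorized NumPy; it is not the paper's CUDA implementation, and the least-squares affine fit is a standard simplification.

```python
# Vectorized sketch of the best-match search that the paper offloads to the GPU:
# for one range block, fit contrast s and brightness o against every domain
# block at once, then pick the domain with the smallest residual error.
import numpy as np

def best_match(range_block, domains):
    """Return (domain_index, s, o) for the least-squares affine match
    r ~ s * d + o, computed against all domain blocks simultaneously."""
    r = range_block.ravel().astype(np.float64)                    # (n,)
    d = domains.reshape(domains.shape[0], -1).astype(np.float64)  # (m, n)
    d_mean = d.mean(axis=1)
    r_mean = r.mean()
    cov = (d * r).mean(axis=1) - d_mean * r_mean
    var = (d * d).mean(axis=1) - d_mean ** 2
    s = np.where(var > 1e-12, cov / np.maximum(var, 1e-12), 0.0)
    o = r_mean - s * d_mean
    err = ((s[:, None] * d + o[:, None] - r) ** 2).mean(axis=1)
    k = int(err.argmin())
    return k, float(s[k]), float(o[k])

rng = np.random.default_rng(3)
domains = rng.random((512, 8, 8))
range_block = 0.6 * domains[137] + 0.2   # an affine copy of domain block 137
print(best_match(range_block, domains))  # expect index 137, s ~ 0.6, o ~ 0.2
```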

    Towards the text compression based feature extraction in high impedance fault detection

    High impedance faults of medium-voltage overhead lines with covered conductors can be identified by the presence of partial discharges. Although it has been a subject of research for more than 60 years, online partial discharge detection remains a challenge, especially in environments with heavy background noise. In this paper, a new approach for partial discharge pattern recognition is presented. All results were obtained on data acquired from a real 22 kV medium-voltage overhead power line with covered conductors. The proposed method is based on a text compression algorithm that serves as a signal similarity estimate, applied for the first time to partial discharge patterns. Its relevance is examined with three different variations of the classification model. The improvement gained over an already deployed model demonstrates its quality.
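
    The abstract does not name the compression algorithm; one common way to turn a general-purpose compressor into a similarity estimate is the normalized compression distance (NCD), sketched below with zlib as an assumed compressor and an illustrative byte quantization of the signals.

```python
# Hedged sketch: normalized compression distance (NCD) between two signals,
# one way a general-purpose compressor can serve as a similarity estimate.
import zlib
import numpy as np

def to_bytes(signal: np.ndarray) -> bytes:
    """Quantize a signal to 8-bit bytes so a byte-oriented compressor can use it."""
    s = np.asarray(signal, dtype=np.float64)
    span = s.max() - s.min()
    s = (s - s.min()) / (span + 1e-12)
    return (s * 255).astype(np.uint8).tobytes()

def ncd(a: bytes, b: bytes) -> float:
    """NCD(a, b) = (C(ab) - min(C(a), C(b))) / max(C(a), C(b))."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 4000)
pd_like = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=t.size)
noise = rng.normal(size=t.size)
print(ncd(to_bytes(pd_like), to_bytes(pd_like)))  # small: highly similar
print(ncd(to_bytes(pd_like), to_bytes(noise)))    # closer to 1: dissimilar
```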

    Analysing and processing medical images with increased performance using fractal geometry

    The research relied on a series of steps for analyzing medical images. To achieve this goal, techniques from both fractal geometry and texture analysis were applied: the studied image is first enhanced, its texture is then analyzed in terms of the fractal dimension, and a hybrid method is proposed for segmenting images of complex situations and structures based on repeated geometric patterns represented by the fractal (Hurst) filter, one of the modern techniques used in digital image processing. Using fractal methods, i.e., applying real fractal structures to medical images, measuring their fractal dimensions, and capturing exact scale-based features in fractional dimensions, the accuracy rate reached 98% in diagnosing pathological conditions, with an error rate close to zero. The multifractal coefficients (α) were also calculated with a threshold factor of 4.5, and the texture was classified based on the fractal algorithm and Gray-Level Co-occurrence Matrices (GLCM); according to the experimental results on the medical images, this classification method achieves a classification rate of 95%. To further increase the accuracy, the lacunarity of the healthy medical images was calculated by applying fractal filters, with the gap ratio close to 1 in the lacunarity measure. The results also showed that continued smoothing or a decrease in the image's intensity levels causes a significant decrease in image contrast, especially in edge regions.
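
    Since the method leans on fractal dimension and lacunarity as texture measures, the sketch below shows generic box-counting and gliding-box computations on a binary image; the thresholded toy image, box sizes, and exact formulations are assumptions for illustration and not the authors' pipeline.

```python
# Generic sketches of two measures used above: box-counting fractal dimension
# and gliding-box lacunarity, both computed on a binary image.
import numpy as np

def box_counting_dimension(binary: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    counts = []
    for s in sizes:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()  # boxes containing any pixel
        counts.append(max(occupied, 1))
    # Slope of log(count) versus log(1/size) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)

def lacunarity(binary: np.ndarray, box: int = 8) -> float:
    """Gliding-box lacunarity: variance/mean^2 + 1 of the box occupancy counts."""
    h, w = binary.shape
    masses = [
        binary[i:i + box, j:j + box].sum()
        for i in range(h - box + 1)
        for j in range(w - box + 1)
    ]
    masses = np.array(masses, dtype=np.float64)
    m = masses.mean()
    return float(masses.var() / (m * m) + 1.0) if m > 0 else float("nan")

rng = np.random.default_rng(5)
img = (rng.random((128, 128)) > 0.7).astype(np.uint8)  # toy binary "texture"
print(box_counting_dimension(img), lacunarity(img))
```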

    The Design and Implementation of an Image Segmentation System for Forest Image Analysis

    The United States Forest Service (USFS) is developing software systems to evaluate forest resources with respect to qualities such as scenic beauty and vegetation structure. Such evaluations usually involve a large amount of human labor. In this thesis, I will discuss the design and implementation of a digital image segmentation system, and how to apply it to analyze forest images so that automated forest resource evaluation can be achieved. The first major contribution of the thesis is the evaluation of various feature design schemes for segmenting forest images. The other major contribution of this thesis is the development of a pattern recognition-based image segmentation algorithm. The best system performance was a 61.4% block classification error rate, achieved by combining color histograms with entropy. This performance is better than that obtained by an "intelligent" guess based on prior knowledge about the categories under study, which is 68.0%
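
    The best-performing feature combination reported above is a per-block color histogram plus entropy; the sketch below computes those two features for fixed-size blocks of an RGB image, with the block size, bin count, and grayscale entropy formulation as illustrative assumptions rather than the thesis's exact design.

```python
# Sketch of the per-block features combined above: a color histogram and an
# intensity-entropy value for each fixed-size, non-overlapping image block.
import numpy as np

def block_features(rgb: np.ndarray, block: int = 32, bins: int = 8):
    """Yield (histogram, entropy) for each non-overlapping block of an RGB image."""
    h, w, _ = rgb.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = rgb[i:i + block, j:j + block]
            # Joint-channel color histogram, normalized to sum to 1.
            hist, _ = np.histogramdd(
                patch.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3)
            hist = hist.ravel() / hist.sum()
            # Shannon entropy of the grayscale intensity distribution.
            gray = patch.mean(axis=2).astype(np.uint8)
            p = np.bincount(gray.ravel(), minlength=256) / gray.size
            p = p[p > 0]
            entropy = float(-(p * np.log2(p)).sum())
            yield hist, entropy

rng = np.random.default_rng(6)
image = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
feats = list(block_features(image))
print(len(feats), feats[0][0].shape, feats[0][1])  # 16 blocks, 512-bin histograms
```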