    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The study of feature extraction, and of edge detection in particular, has drawn growing attention not only from computer science but from a variety of scientific fields, driven by the ever-increasing use of imagery. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., of failing to mark true edges), accuracy, and a single, consistent response to each edge. Moreover, most work on feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In image processing, where requirements constantly change, researchers who are serious about addressing these limitations must be willing to think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes an image into its two-dimensional (2D) intrinsic mode functions, known as bidimensional IMFs (BIMFs). The method is further extended to detect corners and curves, and is therefore dubbed BEMDEC, reflecting its ability to detect edges, corners, and curves. In addition to BEMD, a combination of a flexible envelope estimation algorithm, stopping criteria, and boundary adjustment makes this multi-feature detector possible. A further application of two morphological operators, binarization and thinning, improves the quality of the detected features.
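
    A minimal sketch of the idea, assuming standard scientific-Python tooling (NumPy, SciPy, scikit-image) rather than the author's own implementation: one 2D sifting pass estimates upper and lower envelope surfaces from the local extrema, subtracts their mean to obtain a first BIMF, and morphological binarization and thinning reduce that BIMF to thin feature contours. The envelope estimator, the fixed-count stopping rule, and the boundary handling below are simplified placeholders, not the dissertation's algorithm.

```python
# Sketch: one BEMD sifting pass followed by binarization and thinning.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter
from scipy.interpolate import griddata
from skimage.filters import threshold_otsu
from skimage.morphology import thin


def envelope(img, extrema_mask):
    """Interpolate a smooth surface through the marked extrema."""
    ys, xs = np.nonzero(extrema_mask)
    pts = np.column_stack([ys, xs])
    grid_y, grid_x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return griddata(pts, img[ys, xs], (grid_y, grid_x),
                    method="cubic", fill_value=float(img.mean()))


def first_bimf(img, n_sift=3):
    """Crude first BIMF: repeatedly subtract the mean envelope surface."""
    h = img.astype(float)
    for _ in range(n_sift):                      # fixed-count stopping rule
        maxima = (h == maximum_filter(h, size=5))
        minima = (h == minimum_filter(h, size=5))
        mean_env = 0.5 * (envelope(h, maxima) + envelope(h, minima))
        h = h - np.nan_to_num(mean_env, nan=float(h.mean()))
    return h


def bemd_features(img):
    """Binarize and thin the first BIMF to obtain thin feature contours."""
    bimf = first_bimf(img)
    mag = np.abs(bimf)
    binary = mag > threshold_otsu(mag)           # morphological binarization
    return thin(binary)                          # morphological thinning
```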

    Facial Emotion Recognition Based on Empirical Mode Decomposition and Discrete Wavelet Transform Analysis

    This paper presents a new framework combining empirical mode decomposition (EMD) and the discrete wavelet transform (DWT), with an application to facial emotion recognition. EMD is a multi-resolution technique that decomposes a complicated signal into a small set of intrinsic mode functions (IMFs) through a sifting process. In this framework, EMD is applied to facial images to extract informative features by decomposing each image into a set of IMFs and a residue. The selected IMFs are then subjected to the DWT, which decomposes the instantaneous frequency of the IMFs into four sub-bands. The approximation coefficients (cA1) at the first decomposition level are extracted and used as features for recognizing facial emotion. Because the number of coefficients is large, principal component analysis (PCA) is applied to the extracted features. The k-nearest neighbor classifier is adopted to classify seven facial emotions (anger, disgust, fear, happiness, neutral, sadness, and surprise). The effectiveness of the proposed method is evaluated on the JAFFE database, where it achieves a recognition rate of 80.28%.
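
    A minimal sketch of the classification stage, assuming PyWavelets and scikit-learn and omitting the EMD step (the inputs here are any preprocessed face images, e.g. selected IMFs): the first-level approximation coefficients cA1 are flattened, reduced with PCA, and classified with k-NN. The wavelet choice, the number of PCA components, and the stand-in data are illustrative assumptions, not the paper's settings.

```python
# Sketch of the DWT -> PCA -> k-NN stage; the EMD decomposition of the face
# images is assumed to have been done already (inputs may be selected IMFs).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline


def ca1_features(image, wavelet="haar"):
    """First-level 2D DWT approximation coefficients, flattened."""
    cA1, _details = pywt.dwt2(image, wavelet)
    return cA1.ravel()


def train_emotion_classifier(images, labels, n_components=50, k=3):
    """images: iterable of 2D arrays (IMFs or faces); labels: emotion ids."""
    X = np.vstack([ca1_features(img) for img in images])
    model = make_pipeline(PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=k))
    model.fit(X, labels)
    return model


# Usage sketch with random stand-in data (replace with JAFFE images):
rng = np.random.default_rng(0)
faces = [rng.random((64, 64)) for _ in range(70)]
labels = np.repeat(np.arange(7), 10)             # seven emotion classes
clf = train_emotion_classifier(faces, labels)
print(clf.predict(ca1_features(faces[0]).reshape(1, -1)))
```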

    Data-driven time-frequency analysis of multivariate data

    Empirical mode decomposition (EMD) is a data-driven method for the decomposition and time-frequency analysis of real-world nonstationary signals. Its main advantages over other time-frequency methods are its locality, data-driven nature, multiresolution-based decomposition, higher time-frequency resolution, and ability to capture oscillations of any type (nonharmonic signals). These properties have made EMD a viable tool for real-world nonstationary data analysis. Recent advances in sensor and data acquisition technologies have brought to light new classes of signals containing several data channels. Currently, such signals are almost invariably processed channel-wise, which is suboptimal. It is therefore imperative to design multivariate extensions of existing nonlinear and nonstationary analysis algorithms, as these are expected to give more insight into the dynamics of, and the interdependence between, multiple channels of such signals. To this end, this thesis presents multivariate extensions of the empirical mode decomposition algorithm and illustrates their advantages for multivariate nonstationary data analysis. Some important properties of these extensions are also explored, including their ability to exhibit wavelet-like dyadic filter bank structures for white Gaussian noise (WGN) and their capacity to align similar oscillatory modes across multiple data channels. Owing to the generality of the proposed methods, an improved multivariate EMD-based algorithm is introduced which solves some inherent problems in the original EMD algorithm. Finally, to demonstrate the potential of the proposed methods, simulations on the fusion of multiple real-world signals (wind, images, and inertial body motion data) support the analysis.
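
    A sketch of the core idea behind a multivariate extension, assuming NumPy/SciPy and not reproducing the thesis's algorithm: the n-channel signal is projected onto several direction vectors, the extrema of each projection define common interpolation instants, and channel-wise cubic-spline envelopes are averaged over the directions to form a single multivariate mean envelope that is subtracted from the signal. Direction sampling, stopping criteria, and end effects are simplified here.

```python
# Sketch of one multivariate-EMD sifting iteration. Random unit direction
# vectors stand in for a low-discrepancy (Hammersley) sequence, and boundary
# effects / stopping criteria are ignored.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema


def mean_envelope(x, n_dirs=16, seed=0):
    """x: (T, C) multichannel signal. Returns its multivariate mean envelope."""
    T, C = x.shape
    t = np.arange(T)
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_dirs, C))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # points on a sphere
    env, count = np.zeros_like(x, dtype=float), 0
    for d in dirs:
        proj = x @ d                                      # 1-D projection
        for comparator in (np.greater, np.less):
            idx = argrelextrema(proj, comparator)[0]
            if len(idx) < 4:                              # too few extrema
                continue
            # spline every channel through the projection's extrema instants
            env += CubicSpline(t[idx], x[idx])(t)
            count += 1
    return env / max(count, 1)


def sift_once(x):
    """One sifting pass: a candidate first multivariate IMF."""
    return x - mean_envelope(x)
```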

    Detection of pathologies in retina digital images: an empirical mode decomposition approach

    Accurate automatic detection of pathologies in retina digital images offers a promising approach in clinical applications. This thesis employs the discrete wavelet transform (DWT) and empirical mode decomposition (EMD) to extract six statistical textural features from retina digital images: the mean, standard deviation, smoothness, third moment, uniformity, and entropy. The purpose is to classify images as normal or abnormal. Five different pathologies are considered: artery sheathing (Coats' disease), blot hemorrhage, retinal degeneration (circinates), age-related macular degeneration (drusen), and diabetic retinopathy (microaneurysms and exudates). Four classifiers are employed: support vector machines (SVM), quadratic discriminant analysis (QDA), the k-nearest neighbor algorithm (k-NN), and probabilistic neural networks (PNN). For each experiment, ten random folds are generated to perform cross-validation tests. To assess the performance of the classifiers, the average and standard deviation of the correct recognition rate, sensitivity, and specificity are computed for each simulation. The experimental results highlight two main conclusions: first, the outstanding performance of EMD over DWT with all classifiers; second, the superiority of the SVM classifier over QDA, k-NN, and PNN. Finally, principal component analysis (PCA) was employed to reduce the number of features in the hope of improving classifier accuracy; however, we found no general or significant improvement in performance. In sum, the EMD-SVM system provides a promising approach for the detection of pathologies in retina digital images.
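
    A minimal sketch of the six first-order statistical texture features, assuming NumPy and the common histogram-based definitions (e.g. Gonzalez & Woods); the input would be one decomposition band (a DWT sub-band or an EMD IMF) of a retina image, and the exact normalization used in the thesis is an assumption here. The resulting six-feature vectors per band would then feed the SVM/QDA/k-NN/PNN classifiers under ten-fold cross-validation.

```python
# Sketch: six histogram-based texture features from one decomposition band.
# Definitions follow common first-order texture statistics; the thesis's own
# normalizations may differ.
import numpy as np


def texture_features(band, n_bins=256):
    """band: 2D array (DWT sub-band or EMD IMF). Returns a 6-feature vector."""
    z = band.ravel().astype(float)
    hist, edges = np.histogram(z, bins=n_bins)
    p = hist / hist.sum()                          # gray-level probabilities
    levels = 0.5 * (edges[:-1] + edges[1:])        # bin centers

    mean = np.sum(levels * p)
    var = np.sum((levels - mean) ** 2 * p)
    std = np.sqrt(var)
    smoothness = 1.0 - 1.0 / (1.0 + var)           # R = 1 - 1/(1 + sigma^2)
    third_moment = np.sum((levels - mean) ** 3 * p)
    uniformity = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([mean, std, smoothness, third_moment, uniformity, entropy])
```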

    Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System

    The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques. This research focuses on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors – including laser range scanners, video cameras, and pose estimation hardware – on a mobile platform for the quick acquisition of 3D models of real-world environments. The data acquired by such systems are extremely noisy, often with significant details on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry while removing the effects of noise on the overall model. The developed algorithm can be useful for a variety of digitized 3D models, not just those produced by mobile scanning systems. The challenges faced in this study were the need for fully automatic processing in the enhancement algorithm and the need to fill a gap in 3D model analysis so as to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new techniques featured in this document are intuitive extensions of existing methods to new dimensionality and new applications. The research has been applied to detail-enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.

    Region-Based Image-Fusion Framework for Compressive Imaging

    A novel region-based image-fusion framework for compressive imaging (CI) and its implementation scheme are proposed. Unlike previous work on conventional image fusion, we consider both the compression capability on the sensor side and intelligent understanding of the image content in the fusion process. First, compressed sensing theory and normalized cut theory are introduced. Then, the region-based image-fusion framework for compressive imaging is proposed and its corresponding fusion scheme is constructed. Experimental results demonstrate that the proposed scheme delivers superior performance over traditional compressive image-fusion schemes in terms of both objective metrics and visual quality.
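
    A rough sketch of the region-based fusion step only, assuming scikit-learn's spectral clustering as a stand-in for the normalized-cut segmentation and assuming fully reconstructed source images (the compressive measurement and reconstruction stages of the paper are omitted): the scene is partitioned into regions, and for each region the source image with higher local activity (variance) supplies the fused pixels. Region count and the activity measure are illustrative choices.

```python
# Sketch of region-based fusion; spectral clustering on the image graph is
# used here as a stand-in for normalized cuts (practical only for small
# images), and compressive-sensing acquisition/reconstruction is omitted.
import numpy as np
from sklearn.feature_extraction.image import img_to_graph
from sklearn.cluster import spectral_clustering


def fuse_region_based(img_a, img_b, n_regions=8):
    """img_a, img_b: registered 2D source images. Returns the fused image."""
    reference = 0.5 * (img_a + img_b)            # segment a shared reference
    graph = img_to_graph(reference)
    graph.data = np.exp(-graph.data / (graph.data.std() + 1e-8))
    labels = spectral_clustering(graph, n_clusters=n_regions,
                                 assign_labels="discretize", random_state=0)
    labels = labels.reshape(reference.shape)

    fused = np.empty_like(reference)
    for r in range(n_regions):
        mask = labels == r
        # pick the source with higher activity (variance) within the region
        src = img_a if img_a[mask].var() >= img_b[mask].var() else img_b
        fused[mask] = src[mask]
    return fused
```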

    Profile monitoring via sensor fusion: The use of PCA methods for multi-channel data

    Continuous advances in sensor technology and real-time computational capability are leading to data-rich environments that can improve industrial automation and machine intelligence. When multiple signals are acquired from different sources (i.e., multi-channel signal data), two main issues must be faced: (i) reducing the data dimensionality so that the overall signal analysis system is efficient and actually applicable in industrial environments, and (ii) fusing all the sensor outputs to achieve a better understanding of the process. In this context, multi-way principal component analysis (PCA) is a multivariate technique that can perform both tasks. The paper investigates two main multi-way extensions of traditional PCA for multi-channel signals: one based on unfolding the original dataset, and one based on multi-linear analysis of the data in tensorial form. The approaches proposed for data modelling are combined with appropriate control charting to achieve multi-channel profile data monitoring. The developed methodologies are demonstrated with both simulated and real data. The real data come from an industrial sensor fusion application in waterjet cutting, where different signals are monitored to detect faults affecting the most critical machine components.
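
    A minimal sketch of the unfolding-based variant, assuming NumPy/scikit-learn: each multi-channel profile (channels x time) is unfolded into one long row vector, PCA is fitted on in-control profiles, and a Hotelling-type T² statistic on the retained scores is charted against a simple empirical control limit. The tensor-based multi-linear analysis and the paper's specific control limits are not reproduced here; the synthetic data stand in for the waterjet-cutting signals.

```python
# Sketch of unfolding-based multi-way PCA with a Hotelling T^2 control chart.
# The control-limit choice (empirical quantile) is a simplification.
import numpy as np
from sklearn.decomposition import PCA


def unfold(profiles):
    """profiles: (n_samples, n_channels, n_time) -> (n_samples, channels*time)."""
    return profiles.reshape(profiles.shape[0], -1)


def fit_monitor(in_control, n_components=5, alpha=0.99):
    """Fit PCA on in-control profiles and return (pca, T^2 control limit)."""
    X = unfold(in_control)
    pca = PCA(n_components=n_components).fit(X)
    scores = pca.transform(X)
    t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)
    return pca, np.quantile(t2, alpha)


def t2_statistic(pca, new_profiles):
    """Hotelling-type T^2 for new multi-channel profiles."""
    scores = pca.transform(unfold(new_profiles))
    return np.sum(scores ** 2 / pca.explained_variance_, axis=1)


# Usage sketch with synthetic data (replace with waterjet-cutting signals):
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 4, 200))           # 100 profiles, 4 channels
pca, limit = fit_monitor(train)
alarms = t2_statistic(pca, train[:10] + 0.5) > limit
print(alarms)
```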