
    Automated Glaucoma Detection Using Hybrid Feature Extraction in Retinal Fundus Images

    Glaucoma is one of the most common causes of blindness. Robust mass screening may help to extend the symptom-free life of affected patients. Realizing mass screening requires a cost-effective glaucoma detection method that integrates well with digital medical and administrative processes. To address these requirements, we propose a novel low-cost automated glaucoma diagnosis system based on hybrid feature extraction from digital fundus images. The paper discusses a system for the automated identification of normal and glaucoma classes using higher order spectra (HOS), trace transform (TT), and discrete wavelet transform (DWT) features. The extracted features are fed to a support vector machine (SVM) classifier with linear, polynomial (orders 1, 2, and 3), and radial basis function (RBF) kernels in order to select the best kernel for automated decision making. In this work, the SVM classifier with a polynomial order 2 kernel was able to identify glaucoma and normal images with an accuracy of 91.67%, and sensitivity and specificity of 90% and 93.33%, respectively. Furthermore, we propose a novel integrated index called the Glaucoma Risk Index (GRI), composed of HOS, TT, and DWT features, which diagnoses the unknown class using a single feature. We hope that this GRI will aid clinicians in making a faster glaucoma diagnosis during the mass screening of normal/glaucoma images.
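The kernel-selection step the abstract describes can be sketched as a cross-validated comparison. This is a hypothetical illustration, not the authors' pipeline: the synthetic 2-D blobs stand in for the real HOS/TT/DWT features, and the candidate kernels mirror those listed in the abstract.

```python
# Hypothetical sketch: picking the best SVM kernel by cross-validation.
# Synthetic 2-D features stand in for the paper's HOS/TT/DWT features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs: class 0 ("normal") vs class 1 ("glaucoma").
X = np.vstack([rng.normal(0.0, 1.0, (60, 2)),
               rng.normal(4.0, 1.0, (60, 2))])
y = np.array([0] * 60 + [1] * 60)

candidates = {
    "linear": SVC(kernel="linear"),
    "poly-1": SVC(kernel="poly", degree=1),
    "poly-2": SVC(kernel="poly", degree=2),
    "poly-3": SVC(kernel="poly", degree=3),
    "rbf":    SVC(kernel="rbf"),
}
# Mean 5-fold cross-validated accuracy for each kernel.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

On well-separated data all kernels score highly; on the real features the paper reports the polynomial order 2 kernel winning.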

    Surface representations for 3D face recognition


    Model-Based Clustering and Classification of Functional Data

    The problem of complex data analysis is a central topic of modern statistical science and learning systems, and is becoming of broader interest with the increasing prevalence of high-dimensional data. The challenge is to develop statistical models and autonomous algorithms that can acquire knowledge from raw data, either for exploratory analysis, which can be achieved through clustering techniques, or to make predictions about future data via classification (i.e., discriminant analysis) techniques. Latent data models, including mixture-model-based approaches, are among the most popular and successful approaches in both the unsupervised context (i.e., clustering) and the supervised one (i.e., classification or discrimination). Although traditionally tools of multivariate analysis, they are growing in popularity when considered in the framework of functional data analysis (FDA). FDA is the data analysis paradigm in which the individual data units are functions (e.g., curves, surfaces) rather than simple vectors. In many areas of application, the analyzed data are indeed often available as discretized values of functions or curves (e.g., time series, waveforms) and surfaces (e.g., 2d-images, spatio-temporal data). This functional aspect of the data adds difficulties compared to classical multivariate (non-functional) data analysis. We review and present approaches for model-based clustering and classification of functional data. We derive well-established statistical models along with efficient algorithmic tools to address the clustering and classification of these high-dimensional data, including their heterogeneity, missing information, and dynamical hidden structure. The presented models and algorithms are illustrated on real-world functional data analysis problems from several application areas.
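The core machinery behind the mixture-model-based clustering the abstract refers to is the EM algorithm. A minimal sketch, not the authors' method: EM for a two-component 1-D Gaussian mixture. In the functional setting the same iteration typically runs on basis or spline coefficients rather than raw scalars.

```python
# Minimal EM sketch for a two-component 1-D Gaussian mixture,
# the basic building block of model-based clustering.
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Fit a 2-component Gaussian mixture to 1-D data by EM."""
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = (pi / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 0.5, 200), rng.normal(3, 0.5, 200)])
pi, mu, sigma = em_gmm_1d(x)
print(np.sort(mu))  # recovered component means, near -3 and 3
```

Classification (discriminant analysis) follows the same pattern with one mixture fitted per labeled class.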

    A Wavelet-Based Approach to Pattern Discovery in Melodies


    Multi Criteria Mapping Based on SVM and Clustering Methods

    There are other ways to automate the application process, such as commercial software used in large organizations to scan bills and forms, but such tools handle only static frames or formats. In our application, we aim to automate non-static frames, since the study certificates we receive come from different countries and different universities. Each university has its own certificate format, so we develop a new application that works across all frames and formats. Since many applicants come from the same university, and therefore share a common certificate format, such a tool lets us analyze these certificates simply and in much less time. To make the process more accurate, we apply SVM and clustering methods, which allow us to accurately map the courses in a certificate either to the ASE study path or to an exclude list. A grade calculation is then performed for the courses mapped to the ASE list, separating the data for labs and courses. Finally, points are awarded for ASE-related courses, work experience, specialization certificates, and German language skills, and these points are provided to the chair to select applicants for the ASE master's course.
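The course-to-list mapping step can be illustrated with a toy matcher. This is a simplified stand-in, not the paper's method: cosine similarity over bag-of-words title vectors replaces the SVM and clustering stages, and all course names and lists below are invented.

```python
# Hypothetical illustration (all course names invented): mapping certificate
# course titles onto an "ASE list" vs. an "exclude list". Cosine similarity
# over bag-of-words vectors stands in for the paper's SVM/clustering pipeline.
from collections import Counter
from math import sqrt

def bow(title):
    """Bag-of-words vector of a course title."""
    return Counter(title.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    return dot / (sqrt(sum(v * v for v in a.values()))
                  * sqrt(sum(v * v for v in b.values())))

ase_list = ["software engineering", "embedded systems", "automotive software"]
exclude_list = ["art history", "music theory"]

def map_course(title, threshold=0.5):
    """Return ('ase' | 'exclude' | 'unmapped', best matching reference)."""
    best_label, best_ref, best_sim = "unmapped", None, threshold
    for label, refs in (("ase", ase_list), ("exclude", exclude_list)):
        for ref in refs:
            sim = cosine(bow(title), bow(ref))
            if sim > best_sim:
                best_label, best_ref, best_sim = label, ref, sim
    return best_label, best_ref

print(map_course("Embedded Systems Lab"))
```

A real system would add grade extraction and the point-scoring stage on top of the mapped courses.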

    A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity

    The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. This observation has not prevented the design of image representations that trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and, sometimes, invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding. (Comment: 65 pages, 33 figures, 303 references)
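As a minimal point of reference for the dictionaries surveyed above, here is one level of the orthogonal (non-redundant) Haar wavelet transform, the classical baseline that multiscale, multi-orientation dictionaries generalize. Perfect reconstruction shows the orthogonal transform loses nothing; redundant dictionaries give up this 1:1 sizing in exchange for sparser, more invariant representations.

```python
# Minimal sketch: one level of the orthogonal Haar wavelet transform,
# the baseline that redundant multiscale dictionaries generalize.
import numpy as np

def haar_level(x):
    """Split an even-length signal into approximation and detail bands."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)  # low-pass: scaled local averages
    detail = (even - odd) / np.sqrt(2)  # high-pass: scaled local differences
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar level; exact because the transform is orthogonal."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 2.0, 0.0])
a, d = haar_level(x)
assert np.allclose(haar_inverse(a, d), x)  # perfect reconstruction
print(a, d)
```

Smooth stretches of the signal concentrate energy in the approximation band, leaving the detail band near zero; that sparsity is what the transformed-domain representations in the survey aim to maximize.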