
    Multi-View Face Recognition From Single RGBD Models of the Faces

    This work takes important steps towards solving the following problem of current interest: Assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
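    The weighted voting step described in point (3) can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the identity labels and per-view weights below are hypothetical, and the paper does not specify here how weights are derived (e.g. from subspace-matching confidence).

    ```python
    from collections import defaultdict

    def weighted_vote(predictions):
        """Integrate per-viewpoint recognition results into one decision.

        predictions: list of (identity_label, weight) pairs, one pair per
        2D image of the same face; weight is a hypothetical per-view
        confidence. Returns the label with the highest accumulated weight.
        """
        scores = defaultdict(float)
        for label, weight in predictions:
            scores[label] += weight
        return max(scores, key=scores.get)

    # Four views of the same face, each classified independently.
    views = [("alice", 0.9), ("bob", 0.4), ("alice", 0.7), ("carol", 0.5)]
    print(weighted_vote(views))  # alice (accumulated weight 1.6)
    ```

    The point of accumulating weights rather than counting votes is that a single high-confidence frontal view can outweigh several uncertain profile views.
    
    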

    Three-Dimensional Spectral-Domain Optical Coherence Tomography Data Analysis for Glaucoma Detection

    Purpose: To develop a new three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) data analysis method using a machine learning technique based on variable-size superpixel segmentation that efficiently utilizes the full 3D dataset to improve the discrimination between early glaucomatous and healthy eyes. Methods: 192 eyes of 96 subjects (44 healthy, 59 glaucoma-suspect, and 89 glaucomatous eyes) were scanned with SD-OCT. Each SD-OCT cube dataset was first converted into a 2D feature map based on retinal nerve fiber layer (RNFL) segmentation and then divided into a variable number of superpixels. Unlike a conventional superpixel, which has a fixed number of points, the newly developed variable-size superpixel is defined as a cluster of homogeneous adjacent pixels of variable size, shape, and number. Features of the superpixel map were extracted and used as inputs to a machine classifier (LogitBoost adaptive boosting) to automatically identify diseased eyes. To assess discriminating performance, the area under the receiver operating characteristic curve (AUC) of the machine classifier outputs was compared with that of conventional circumpapillary RNFL (cpRNFL) thickness measurements. Results: The superpixel analysis showed a statistically significantly higher AUC than the cpRNFL (0.855 vs. 0.707, p = 0.031, Jackknife test) when glaucoma suspects were discriminated from healthy eyes, while no significant difference was found when confirmed glaucomatous eyes were discriminated from healthy eyes. Conclusions: The novel 3D OCT analysis technique performed at least as well as the cpRNFL in glaucoma discrimination and even better in glaucoma-suspect discrimination. This new method has the potential to improve early detection of glaucomatous damage. © 2013 Xu et al.
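    The idea of a variable-size superpixel as "a cluster of homogeneous adjacent pixels" can be illustrated with a simple greedy region-growing sketch over a 2D feature map. This is an assumption-laden toy (4-connectivity, a fixed homogeneity tolerance against each region's seed value), not the authors' segmentation algorithm, which is not detailed in the abstract.

    ```python
    def grow_superpixels(feature_map, tol):
        """Greedily group 4-connected pixels whose values stay within
        `tol` of their region's seed value, yielding regions of variable
        size and shape. Returns a label map and the region count."""
        rows, cols = len(feature_map), len(feature_map[0])
        labels = [[-1] * cols for _ in range(rows)]
        next_label = 0
        for r in range(rows):
            for c in range(cols):
                if labels[r][c] != -1:
                    continue  # pixel already belongs to a superpixel
                seed = feature_map[r][c]
                labels[r][c] = next_label
                stack = [(r, c)]
                while stack:  # flood fill from the seed pixel
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and labels[ny][nx] == -1
                                and abs(feature_map[ny][nx] - seed) <= tol):
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                next_label += 1
        return labels, next_label

    # A toy RNFL-thickness feature map with two homogeneous regions.
    fmap = [[1, 1, 5, 5],
            [1, 1, 5, 5]]
    labels, n = grow_superpixels(fmap, tol=1)
    print(n)  # 2
    ```

    Features (e.g. mean thickness) computed per resulting region would then feed the boosting classifier described above.
    
    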

    Skin lesion classification from dermoscopic images using deep learning techniques

    The recent emergence of deep learning methods for medical image analysis has enabled the development of intelligent medical imaging-based diagnosis systems that can assist the human expert in making better decisions about a patient's health. In this paper we focus on the problem of skin lesion classification, particularly early melanoma detection, and present a deep-learning-based approach to the problem of classifying a dermoscopic image containing a skin lesion as malignant or benign. The proposed solution is built around the VGGNet convolutional neural network architecture and uses the transfer learning paradigm. Experimental results are encouraging: on the ISIC Archive dataset, the proposed method achieves a sensitivity value of 78.66%, which is significantly higher than the current state of the art on that dataset.
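    The headline metric here, sensitivity, is the fraction of truly malignant lesions that the classifier flags as malignant (true-positive rate, TP / (TP + FN)). A minimal sketch of the computation, with hypothetical labels (1 = malignant, 0 = benign):

    ```python
    def sensitivity(y_true, y_pred, positive=1):
        """True-positive rate: of all truly positive (malignant) cases,
        the fraction the classifier correctly predicted as positive."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
        return tp / (tp + fn)

    # Four malignant and two benign lesions; one malignant case is missed.
    y_true = [1, 1, 1, 1, 0, 0]
    y_pred = [1, 1, 1, 0, 0, 1]
    print(sensitivity(y_true, y_pred))  # 0.75
    ```

    Sensitivity is the natural target for melanoma screening because a missed malignant lesion (false negative) is far costlier than a benign lesion sent for biopsy (false positive), which sensitivity ignores.
    
    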