Pigment Melanin: Pattern for Iris Recognition
Recognition of iris based on Visible Light (VL) imaging is a difficult
problem because of the light reflection from the cornea. Nonetheless, pigment
melanin provides a rich feature source in VL, unavailable in Near-Infrared
(NIR) imaging. This is due to the spectral properties of eumelanin, a
chromophore that is not excited in NIR. A plausible way to observe such
patterns is an adaptive procedure that applies a variational technique
to the image histogram. To describe the patterns, a shape analysis method is
used to derive feature-code for each subject. An important question is how much
the melanin patterns, extracted from VL, are independent of iris texture in
NIR. With this question in mind, the present investigation proposes fusion of
features extracted from NIR and VL to boost the recognition performance. We
have collected our own database (UTIRIS) consisting of both NIR and VL images
of 158 eyes of 79 individuals. This investigation demonstrates that the
proposed algorithm is highly sensitive to the patterns of chromophores and
improves the iris recognition rate.
Comment: To be published in the Special Issue on Biometrics, IEEE Transactions
on Instrumentation and Measurement, Volume 59, Issue 4, April 201
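The abstract proposes fusing features from NIR and VL to boost recognition, but gives no implementation details. Below is a minimal sketch of one common approach, score-level fusion of binary feature codes compared by Hamming distance; the code length, noise model, and fusion weight are illustrative assumptions, not the authors' method.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary feature codes."""
    return float(np.mean(code_a != code_b))

def fused_score(nir_a, nir_b, vl_a, vl_b, w_nir=0.5):
    """Weighted-sum fusion of NIR and VL matching scores (lower = better match)."""
    d_nir = hamming_distance(nir_a, nir_b)
    d_vl = hamming_distance(vl_a, vl_b)
    return w_nir * d_nir + (1.0 - w_nir) * d_vl

rng = np.random.default_rng(0)
enrolled_nir = rng.integers(0, 2, 256)    # hypothetical 256-bit feature codes
enrolled_vl = rng.integers(0, 2, 256)

# A genuine probe: same codes with a small fraction of flipped bits.
noise = rng.random(256) < 0.05
probe_nir = np.where(noise, 1 - enrolled_nir, enrolled_nir)
probe_vl = enrolled_vl.copy()

score = fused_score(enrolled_nir, probe_nir, enrolled_vl, probe_vl)
```

A genuine comparison yields a fused score near zero, while an impostor comparison against unrelated codes drifts toward 0.5, which is what makes a simple distance threshold workable.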
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics
and Vision
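The core idea above, representing data as a linear combination of a few dictionary elements, can be sketched with Orthogonal Matching Pursuit, one standard sparse-coding algorithm. The monograph covers a range of methods; this particular greedy scheme and the toy dimensions below are illustrative choices, not a summary of its contents.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: approximate x as a sparse linear
    combination of the columns (atoms) of dictionary D."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Greedily pick the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Re-fit coefficients on the chosen support by least squares.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - D @ coeffs
    return coeffs

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
true = np.zeros(50)
true[[3, 17, 40]] = [2.0, -1.5, 1.0]      # a 3-sparse ground truth
x = D @ true
alpha = omp(D, x, n_nonzero=3)
```

Dictionary learning, the monograph's focus, goes one step further: instead of fixing D, it alternates between sparse coding steps like this one and updates of D itself to fit the data.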
A Novel Convolutional Neural Network Based on Combined Features from Different Transformations for Brain Tumor Diagnosis
Brain tumors are a leading cause of death worldwide. With advances in medicine and deep learning technologies, reliance on manual classification-based diagnosis is declining owing to its inaccurate diagnosis and prognosis. Accordingly, the proposed model provides an accurate multi-class classification model for brain tumors using a convolutional neural network (CNN) as a backbone. Our novel model concatenates the features extracted by three proposed CNN branches, where each branch is fed the output of a different transform domain of the original magnetic resonance image (MRI). These transformations include the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), and the time domain of the original image. The CNN branches are followed by a concatenation layer, a flatten layer, and a dense layer, before the SoftMax layer. The proposed model was applied to the Figshare brain tumor dataset, which consists of three classes: pituitary, glioma, and meningioma. The results demonstrate the advantage of the proposed system, which achieved a high mean performance over 5-fold cross-validation with 98.89% accuracy, 98.78% F1-score, 98.74% precision, 98.82% recall, and 99.44% specificity. A comparative study with well-known models, as well as pre-trained CNN models, established the potential of the proposed model. This novel approach has the potential to significantly improve brain tumor classification accuracy. It enables a more comprehensive and objective analysis of brain tumors, leading to improved treatment decisions and better patient care.
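As a rough illustration of preparing the three branch inputs (time domain, DCT, DWT), the sketch below computes a 2-D DCT and a one-level Haar DWT in NumPy and mimics the concatenation step on flattened features. The hand-rolled transforms, image size, and random stand-in "MRI" are assumptions for the sketch; in the paper each domain feeds its own CNN branch before concatenation.

```python
import numpy as np

def dct2(img):
    """2-D DCT-II of a square image via an orthonormal DCT basis matrix."""
    n = img.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1 / np.sqrt(2)
    basis *= np.sqrt(2 / n)
    return basis @ img @ basis.T

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform: LL, LH, HL, HH subbands."""
    a = (img[0::2] + img[1::2]) / 2       # row-wise averages
    d = (img[0::2] - img[1::2]) / 2       # row-wise differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

rng = np.random.default_rng(2)
mri = rng.random((64, 64))                # stand-in for an MRI slice

branch_time = mri                         # time-domain branch input
branch_dct = dct2(mri)                    # DCT-domain branch input
ll, lh, hl, hh = haar_dwt2(mri)
branch_dwt = np.block([[ll, lh], [hl, hh]])  # DWT-domain branch input

# Mimic the concatenation layer on flattened per-branch features.
features = np.concatenate(
    [branch_time.ravel(), branch_dct.ravel(), branch_dwt.ravel()]
)
```

In practice one would use library transforms (e.g. `scipy.fft` for the DCT and PyWavelets for the DWT) and pass each branch through convolutional layers before concatenating; the point here is only how the three domain representations line up into one fused feature vector.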