8 research outputs found
Review on the methods of automatic liver segmentation from abdominal images
Automatic liver segmentation from abdominal images is challenging in terms of segmentation accuracy, automation and robustness. There exist many methods of liver segmentation and ways of categorising them. In this paper, we present a new way of summarising the latest achievements in automatic liver segmentation. We categorise a segmentation method according to the image feature it works on, thereby better summarising the performance of each category and helping to find an optimal solution for a particular segmentation task. All methods of liver segmentation are categorised into three main classes: gray-level based methods, structure based methods and texture based methods. In each class, the latest advances are reviewed, with summary comments on the advantages and drawbacks of each discussed approach. Performance comparisons among the classes are given, along with remarks on existing problems and possible solutions. In conclusion, we point out that liver segmentation is still an open issue, and the tendency is that multiple methods will be employed together to achieve better segmentation performance.
Optimización en GPU de algoritmos para la mejora del realce y segmentación en imágenes hepáticas
This doctoral thesis investigates GPU acceleration for liver image enhancement and segmentation. With this motivation, detailed research is carried out in a compendium of articles. The work is structured in three scientific contributions: the first is based on enhancement and tumor segmentation, the second explores vessel segmentation, and the last addresses liver segmentation. These works are implemented on GPU with significant speedups and notable scientific relevance. The first work proposes cross-modality based contrast enhancement for tumor segmentation on GPU. It takes target and guidance images as input and enhances the low-quality target image by applying a two-dimensional histogram approach. It has further been observed that the enhanced image yields more accurate tumor segmentation using GPU-based dynamic seeded region growing. The second contribution concerns fast parallel gradient-based seeded region growing, where a static approach is proposed and implemented on GPU for accurate vessel segmentation. The third contribution describes GPU acceleration of the Chan-Vese model and cross-modality based contrast enhancement for liver segmentation.
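The seeded region growing primitive accelerated on GPU in the first two contributions can be sketched sequentially as follows. This is a minimal CPU version with an assumed 4-neighbourhood, a fixed intensity tolerance, and a toy image, not the thesis implementation:

```python
from collections import deque
import numpy as np

def seeded_region_growing(image, seed, tol=10):
    """Grow a region from `seed`, accepting 4-neighbours whose
    intensity differs from the seed value by at most `tol`."""
    h, w = image.shape
    seed_val = int(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(int(image[nr, nc]) - seed_val) <= tol:
                    mask[nr, nc] = True
                    frontier.append((nr, nc))
    return mask

# Toy image: a dark connected region (top-left), a bright one, and noise.
img = np.array([[10, 12, 50],
                [11, 13, 52],
                [90, 14, 55]], dtype=np.uint8)
region = seeded_region_growing(img, (0, 0), tol=5)
```

The GPU versions in the thesis parallelize the frontier expansion; the sequential queue above only conveys the acceptance rule.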
High performance computing for 3D image segmentation
Digital image processing is a very popular and still very promising field of science, which has been successfully applied to numerous areas and problems, reaching fields like forensic analysis, security systems, multimedia processing, aerospace, automotive, and many more.
A very important part of the image processing area is image segmentation. This refers to the task of partitioning a given image into multiple regions and is typically used to locate and mark objects and boundaries in input scenes. After segmentation the image represents a set of data far more suitable for further algorithmic processing and decision making. Image segmentation algorithms are a very broad field and they have received a significant amount of research interest.

A good example of an area in which image processing plays a constantly growing role is the field of medical solutions. The expectations and demands presented in this branch of science are very high and difficult to meet for the applied technology. The problems are challenging and the potential benefits are significant and clearly visible. For over thirty years image processing has been applied to different problems and questions in medicine, and practitioners have exploited the rich possibilities that it offers. As a result, the field of medicine has seen significant improvements in the interpretation of examined medical data. Clearly, medical knowledge has also evolved significantly over these years, as has the medical equipment that serves doctors and researchers. The common computer hardware present in homes, offices and laboratories is also constantly evolving and changing. All of these factors have sculpted the shape of modern image processing techniques and established the ways in which they are currently used and developed. Modern medical image processing is centered around 3D images with high spatial and temporal resolution, which can bring a tremendous amount of data to medical practitioners. Processing such large sets of data is not an easy task, requiring high computational power.
Furthermore, computational power no longer grows as easily as in past years: the potential of a single processing unit is very limited, and a trend towards multi-unit processing and parallelization of the workload is clearly visible. Therefore, in order to continue the development of more complex and more advanced image processing techniques, a new direction is necessary.
A very interesting family of image segmentation algorithms, which has been gaining a lot of attention in the last three decades, is called Deformable Models. They are based on the concept of placing a geometrical object in the scene of interest and deforming it until it assumes the shape of the objects of interest. This process is usually guided by several forces, which originate in mathematical functions, features of the input images, and other constraints of the deformation process, like object curvature or continuity. Highly desirable features of Deformable Models include their great capacity for customization and specialization to different tasks, and their extensibility with various approaches for incorporating prior knowledge. This set of characteristics makes Deformable Models a very efficient approach, capable of delivering results in competitive times and with very good segmentation quality, robust to noisy and incomplete data.
However, despite the large amount of work carried out in this area, Deformable Models still suffer from a number of drawbacks. Those that have received the most attention include sensitivity to the initial position and shape of the model, sensitivity to noise and flawed input data, and the need for user supervision over the process.
The work described in this thesis aims at addressing the problems of modern image segmentation, which have arisen from the combination of the above-mentioned factors: the significant growth of image volume sizes and the growing complexity of image processing algorithms, coupled with the change in processor development and the turn towards multi-processing units instead of growing bus speeds and per-unit operations per second. We present our innovative model for 3D image segmentation, called the Whole Mesh Deformation model, which holds a set of highly desirable features that successfully address the above-mentioned requirements. Our model has been designed specifically for execution on parallel architectures and with the purpose of working well with the very large 3D images created by modern medical acquisition devices.
Our solution is based on Deformable Models and is characterized by a very effective and precise segmentation capability. The proposed Whole Mesh Deformation (WMD) model uses a 3D mesh, instead of a contour or a surface, to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times. The model offers a very good ability to handle topology changes and allows effective parallelization of the workflow, which makes it a very good choice for large data-sets. In this thesis we present a precise model description, followed by experiments on artificial images and real medical data.
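The force-driven deformation at the heart of such models can be sketched generically. The smoothing internal force and the toy external force below are illustrative assumptions for a 2D contour, not the WMD model or its 3D mesh:

```python
import numpy as np

def deform(points, external_force, alpha=0.2, beta=1.0, iters=100):
    """Generic deformable-model update: each vertex moves under an
    internal smoothing force (pull toward the average of its two
    neighbours, enforcing continuity and low curvature) plus an
    external force supplied by the caller (in a real model, derived
    from image features such as edges)."""
    pts = points.astype(float).copy()
    for _ in range(iters):
        internal = 0.5 * (np.roll(pts, 1, axis=0)
                          + np.roll(pts, -1, axis=0)) - pts
        pts += alpha * internal + beta * external_force(pts)
    return pts

# Toy external force: attract every vertex toward the origin — a
# stand-in for the image-derived force of a real model.
force = lambda pts: -0.05 * pts
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
result = deform(circle, force, iters=200)
```

Under these forces the initial circle contracts toward the attractor; in a real segmentation the external force would instead halt the contour on object boundaries.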
Segmentation and Characterization of Small Retinal Vessels in Fundus Images Using the Tensor Voting Approach
RÉSUMÉ

The retina provides an easily accessible view of part of the human vascular network. It thus offers direct insight into the development and outcome of certain diseases affecting the vascular system as a whole. Every complication visible on the retina can affect the patient's visual capacity. The smallest blood vessels are among the first anatomical structures affected by the progression of a disease, so being able to analyze them is crucial. Changes in the state, appearance, morphology, functionality, or even growth of the small vessels indicate the severity of diseases.

Diabetes is a metabolic disease that affects millions of people around the world. It disturbs blood glucose levels and causes pathological changes in different organs of the human body. Diabetic retinopathy describes the set of conditions and consequences of diabetes at the level of the retina. Small vessels play a role in the onset, development, and consequences of the retinopathy. In the later stages of this disease, the growth of new small vessels, called neovascularization, carries a significant risk of causing blindness. It is therefore crucial to detect all the changes taking place in the small retinal vessels in order to characterize vessels as healthy or abnormal. This characterization can itself facilitate the local detection of a specific retinopathy.

Automatic segmentation of anatomical structures such as the vascular network is a crucial step. This information can be provided to a physician for consideration during diagnosis. In automatic diagnostic aid systems, the role of the small vessels is significant: failing to detect them automatically can lead to a higher false positive rate for red lesions in later stages. Research efforts have so far concentrated on the precise localization of medium-sized vessels. Existing models have much more difficulty extracting the small blood vessels: they are not robust to the large variance in vessel appearance or to interference from the background. The models in the existing literature assume a general shape that cannot adapt to the narrow width and the curvature that characterize small blood vessels. Moreover, the contrast with the background in small-vessel regions is very low, and segmentation or tracking methods produce fragmented or discontinuous results. Furthermore, small-vessel segmentation is generally achieved at the expense of noise amplification. Deformable models are inadequate for segmenting small vessels: the forces used are not flexible enough to compensate for the low contrast, the narrow width, and the variance of the vessels. Finally, machine learning approaches require training on a labelled database, and such databases are very difficult to obtain in the case of small vessels.

This thesis extends previous research by providing a new segmentation method for small retinal vessels. Multi-scale line detection (MSLD) is a recent method that demonstrates good segmentation performance on retinal images, while tensor voting is a method proposed for reconnecting pixels. An approach combining a line detection algorithm with tensor voting is proposed. The application of line detectors has proven effective for segmenting medium-sized vessels. Moreover, perceptual organization approaches such as tensor voting have demonstrated better robustness by combining neighbouring information in a hierarchical manner. Tensor voting is closer to human perception than other standard models. As demonstrated in this manuscript, it is a more powerful tool for segmenting small vessels than the existing methods. This specific combination allows us to overcome the fragmentation challenges experienced by deformable-model methods at the level of the small vessels. We also propose applying an adaptive threshold to the line detection response to be more robust to non-uniform images. We further illustrate how a combination of the two individual methods, at several scales, is able to reconnect vessels across variable distances. A vessel reconstruction algorithm is also proposed. This last step is necessary because complete geometric information is required in order to use the segmentation in a diagnostic aid system.

The segmentation was validated on a database of high-resolution fundus images exhibiting diabetic retinopathy, using standard discrepancy measures as well as perception-based measures. Considering only the small vessels in the database images, the improvement in sensitivity that our method brings over the standard multi-scale line detection method is 6.47%. Using the perception-based measures, the improvement is 7.8%.

In the second part of the manuscript, we also propose a method for characterizing retinas as healthy or abnormal. Some images contain neovascularization. Characterizing vessels as healthy or abnormal is an essential step in the development of a diagnostic aid system. In addition to the challenges posed by healthy small vessels, neovessels exhibit an even higher degree of complexity: they form vessel networks with complex and unusual morphology, often thin and highly curved. Existing work is limited to the use of first-order features extracted from the segmented small vessels. Our contribution is to use tensor voting to isolate vascular junctions and to use these junctions as points of interest. We then use second-order spatial statistics computed on the junctions to characterize the vessels as healthy or pathological. Our method improves the characterization sensitivity by 9.09% over a state-of-the-art method.

The developed method proved effective for retinal vessel segmentation. Higher-order tensors, as well as the implementation of tensor voting via steerable filtering, could be studied to further reduce the execution time and resolve the remaining challenges at vascular junctions. In addition, the characterization could be improved for the detection of proliferative retinopathy by using supervised learning that includes cases of non-proliferative diabetic retinopathy or other pathologies. Finally, incorporating the proposed methods into diagnostic aid systems could promote regular screening for early detection of retinopathies and other ocular pathologies, with the aim of reducing blindness in the population.

ABSTRACT
As an easily accessible site for the direct observation of the circulatory system, the human retina can offer a unique insight into disease development and outcome. Retinal vessels are representative of the general condition of the systemic circulation, and thus can act as a "window" onto the status of the vascular network in the whole body. Each complication on the retina can have an adverse impact on the patient's sight. In this respect, small vessels are highly relevant, as they are among the first anatomical structures affected as diseases progress. Moreover, changes in the small vessels' state, appearance, morphology, functionality, or even growth indicate the severity of the diseases.
This thesis focuses on the retinal lesions due to diabetes, a serious metabolic disease affecting millions of people around the world. This disorder disturbs natural blood glucose levels, causing various pathophysiological changes in different systems across the human body. Diabetic retinopathy is the medical term describing the condition in which the fundus and the retinal vessels are affected by diabetes. As in other diseases, small vessels play a crucial role in the onset, development, and outcome of the retinopathy. More importantly, at the latest stage, the growth of new small vessels, or neovascularizations, constitutes a significant risk factor for blindness. There is therefore a need to detect all the changes that occur in the small retinal vessels, with the aim of characterizing the vessels as healthy or abnormal. The characterization, in turn, can facilitate the local detection of a specific retinopathy, like the sight-threatening proliferative diabetic retinopathy.
Segmentation techniques can automatically isolate important anatomical structures like the vessels and provide this information to the physician to assist in the final decision. In comprehensive systems for the automation of DR detection, the small vessels' role is significant, as missing them early in a CAD pipeline might increase the false positive rate for red lesions in subsequent steps. So far, efforts have concentrated mostly on the accurate localization of medium-range vessels. In contrast, the existing models are weak in the case of small vessels. The generalization required to adapt an existing model does not leave the approaches flexible enough, yet robust enough, to compensate for the increased variability in appearance and the interference with the background. The current template models (matched filtering, line detection, and morphological processing) assume a general shape for the vessels that is not sufficient to approximate the narrow, curved characteristics of the small vessels. Additionally, due to the weak contrast in small-vessel regions, current segmentation and tracking methods produce fragmented or discontinuous results. Alternatively, small-vessel segmentation can be accomplished at the expense of
background noise magnification, as with thresholding or image-derivative methods. Furthermore, the proposed deformable models are not able to propagate a contour to the full extent of the vasculature in order to enclose all the small vessels. The deformable models' external forces are ineffective at compensating for the low contrast, the low width, the high variability in small-vessel appearance, and the discontinuities. Internal forces, likewise, cannot impose a global shape constraint on the contour that would approximate the variability in the appearance of the vasculature across different categories of vessels. Finally, machine learning approaches require training a classifier on a labelled set. Such sets are difficult to obtain, especially in the case of the smallest vessels. In the case of unsupervised methods, the user has to predefine the number of clusters and perform an effective initialization of the cluster centers in order to converge to the global minimum.
This dissertation expands on previous research work and provides a new segmentation method for the smallest retinal vessels. Multi-scale line detection (MSLD) is a recent method that demonstrates good segmentation performance on retinal images, while tensor voting is a method first proposed for reconnecting pixels. For the first time, we combined line detection with the tensor voting framework. The application of line detectors has proved an effective way to segment medium-sized vessels. Additionally, perceptual organization approaches like tensor voting demonstrate increased robustness by combining information from the neighborhood in a hierarchical way. Tensor voting is closer than standard models to the way human perception functions. As we show, it is a more powerful tool for segmenting small vessels than the existing methods. This specific combination allows us to overcome the apparent fragmentation challenge of the template methods at the smallest vessels. Moreover, we thresholded the line detection response adaptively to compensate for non-uniform images. We also combined the two individual methods in a multi-scale scheme in order to reconnect vessels at variable distances. Finally, we reconstructed the vessels from their extracted centerlines based on pixel painting, as complete geometric information is required to be able to utilize the segmentation in a CAD system.
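The line-detector response underlying MSLD can be sketched as follows. This is a single-scale toy version; the window size, number of orientations, and synthetic ridge image are illustrative assumptions, not the dissertation's configuration:

```python
import numpy as np
from math import pi, sin, cos

def line_detector(img, L=5, n_angles=8):
    """Basic line-detector response: at each pixel, compare the
    strongest mean intensity along a length-L line segment (over
    several orientations) with the local LxL window mean. Bright
    ridges such as vessels score high."""
    h, w = img.shape
    half = L // 2
    # Pre-compute pixel offsets for each candidate orientation.
    lines = []
    for k in range(n_angles):
        t = pi * k / n_angles
        lines.append([(round(i * sin(t)), round(i * cos(t)))
                      for i in range(-half, half + 1)])
    resp = np.zeros((h, w))
    for r in range(half, h - half):
        for c in range(half, w - half):
            window_mean = img[r - half:r + half + 1,
                              c - half:c + half + 1].mean()
            best = max(sum(img[r + dr, c + dc] for dr, dc in offs) / L
                       for offs in lines)
            resp[r, c] = best - window_mean
    return resp

# Synthetic bright ridge on a dark background.
img = np.zeros((11, 11))
img[5, :] = 1.0
R = line_detector(img)
```

MSLD runs this detector at several segment lengths and averages the responses; the adaptive thresholding described above would then be applied to the combined response.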
The segmentation was validated on a high-resolution fundus image database that includes diabetic retinopathy images of varying stages, using standard discrepancy as well as perceptual-based measures. When only the smallest vessels are considered, the improvement in the sensitivity rate on this database over the standard multi-scale line detection method is 6.47%. For the perceptual-based measure, the improvement is 7.8% over the basic method.
The second objective of the thesis was to implement a method for characterizing isolated retinal areas as healthy or abnormal. Some of the original images, from which these patches are extracted, contain neovascularizations. Investigating image features for the characterization of vessels as healthy or abnormal constitutes an essential step towards developing a CAD system for the automation of DR screening. Given that the amount of data will significantly increase under CAD systems, focusing on this category of vessels can facilitate the referral of sight-threatening cases to early treatment. In addition to the challenges that small healthy vessels pose, neovessels demonstrate an even higher degree of complexity, as they form networks of convoluted, twisted, looped thin vessels. The existing work is limited to the use of first-order characteristics extracted from the small segmented vessels, which limits the study of patterns. Our contribution is in using the tensor voting framework to isolate the retinal vascular junctions and, in turn, using those junctions as points of interest. Second, we exploited second-order statistics computed on the junctions' spatial distribution to characterize the vessels as healthy or neovascularizations. The second-order spatial statistics extracted from the junction distribution are combined with widely used features to improve the characterization sensitivity by 9.09% over the state of the art.
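The abstract does not name the exact second-order spatial statistic used on the junction distribution. As an illustration only, Ripley's K function, a standard second-order statistic of point patterns, shows how junction clustering (typical of tangled neovascular networks) can be separated from a more uniform junction layout:

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive estimate of Ripley's K function: the expected number of
    further points within distance r of a typical point, normalised
    by the point intensity. For complete spatial randomness,
    K(r) is approximately pi * r**2."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    close = (d <= r).sum() - n        # ordered pairs within r, minus self-pairs
    lam = n / area                    # point intensity (points per unit area)
    return close / (lam * n)

rng = np.random.default_rng(0)
scattered = rng.uniform(0, 10, size=(200, 2))   # uniform layout (CSR-like)
clustered = rng.normal(5, 0.5, size=(200, 2))   # clumped layout
k_scatter = ripley_k(scattered, 1.0, area=100.0)
k_cluster = ripley_k(clustered, 1.0, area=100.0)
```

At small radii the clustered set yields a much larger K than the scattered one; a feature of this kind, computed on detected junctions, is the sort of second-order descriptor the characterization step relies on.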
The developed method proved effective for the segmentation of the retinal vessels. Higher-order tensors, along with the implementation of tensor voting via steerable filtering, could be employed to further reduce the execution time and resolve the challenges at vascular junctions. Moreover, the characterization could be advanced to the detection of proliferative retinopathy by extending the supervised learning to include non-proliferative diabetic retinopathy cases or other pathologies. Ultimately, the incorporation of the methods into CAD systems could facilitate screening for the effective reduction of vision-threatening diabetic retinopathy rates, or the early detection of pathologies other than ocular ones.
Recent Advances in Signal Processing
Signal processing is a critical issue in the majority of new technological inventions and challenges across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and they have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand; these five categories address image processing, speech processing, communication systems, time-series analysis, and educational packages, respectively. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.
Automated Extraction of Road Information from Mobile Laser Scanning Data
Effective planning and management of transportation infrastructure requires adequate geospatial data. Existing geospatial data acquisition techniques based on conventional route surveys are very time consuming, labor intensive, and costly. Mobile laser scanning (MLS) technology enables the rapid collection of enormous volumes of highly dense, irregularly distributed, accurate geo-referenced point cloud data in the form of three-dimensional (3D) point clouds. Today, more and more commercial MLS systems are available for transportation applications. However, many transportation engineers neither have interest in the 3D point cloud data nor know how to transform such data into the computer-aided design (CAD) formatted geometric road information they work with. Therefore, automated methods and software tools for rapid and accurate extraction of 2D/3D road information from MLS data are urgently needed.
This doctoral dissertation deals with the development and implementation aspects of a novel strategy for the automated extraction of road information from the MLS data. The main features of this strategy include: (1) the extraction of road surfaces from large volumes of MLS point clouds, (2) the generation of 2D geo-referenced feature (GRF) images from the road-surface data, (3) the exploration of point density and intensity of MLS data for road-marking extraction, and (4) the extension of tensor voting (TV) for curvilinear pavement crack extraction. In accordance with this strategy, a RoadModeler prototype with three computerized algorithms was developed. They are: (1) road-surface extraction, (2) road-marking extraction, and (3) pavement-crack extraction. Four main contributions of this development can be summarized as follows.
Firstly, a curb-based approach to road surface extraction with assistance of the vehicle’s trajectory is proposed and implemented. The vehicle’s trajectory and the function of curbs that separate road surfaces from sidewalks are used to efficiently separate road-surface points from large volume of MLS data. The accuracy of extracted road surfaces is validated with manually selected reference points.
Secondly, the extracted road enables accurate detection of road markings and cracks for transportation-related applications in road traffic safety. To further improve computational efficiency, the extracted 3D road data are converted into 2D image data, termed a GRF image. The GRF image of the extracted road feeds both an automated road-marking extraction algorithm and an automated crack detection algorithm.
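The 3D-to-2D conversion can be sketched as a simple rasterization: geo-referenced points are binned into grid cells and their intensity averaged per cell. The cell size, the averaging rule, and the function name below are illustrative assumptions, not the dissertation's GRF generation procedure:

```python
import numpy as np

def points_to_grf(points, intensity, cell=0.5):
    """Rasterise geo-referenced 3D points onto a 2D grid by averaging
    the per-point intensity falling in each cell — a simplified
    stand-in for generating a geo-referenced feature (GRF) image."""
    xy = points[:, :2]                              # drop elevation
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / cell).astype(int)
    shape = tuple(idx.max(axis=0) + 1)
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    np.add.at(acc, (idx[:, 0], idx[:, 1]), intensity)   # sum per cell
    np.add.at(cnt, (idx[:, 0], idx[:, 1]), 1)           # count per cell
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

# Three points: two share a cell, one lands two cells away.
pts = np.array([[0.0, 0.0, 1.2], [0.1, 0.1, 1.1], [1.0, 0.0, 0.9]])
grf = points_to_grf(pts, np.array([2.0, 4.0, 6.0]))
```

Empty cells are left at zero here; a production pipeline would typically interpolate them before running image-based extraction.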
Thirdly, the automated road-marking extraction algorithm applies a point-density-dependent, multi-thresholding segmentation to the GRF image to overcome unevenly distributed intensity caused by the scanning range, the incidence angle, and the surface characteristics of an illuminated object. The morphological operation is then implemented to deal with the presence of noise and incompleteness of the extracted road markings.
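The band-wise idea behind the point-density-dependent thresholding can be sketched as follows. The band count, the mean-plus-fraction-of-std threshold rule, and the synthetic decay model are illustrative assumptions, not the algorithm's actual parameters:

```python
import numpy as np

def multi_threshold(gray, n_bands=4, factor=0.5):
    """Segment bright markings with a separate threshold per horizontal
    band, compensating for intensity that decays with scanning range —
    a simplified stand-in for point-density-dependent
    multi-thresholding."""
    h, _ = gray.shape
    mask = np.zeros(gray.shape, dtype=bool)
    for b in range(n_bands):
        rows = slice(b * h // n_bands, (b + 1) * h // n_bands)
        band = gray[rows]
        t = band.mean() + factor * band.std()   # band-local threshold
        mask[rows] = band > t
    return mask

# Background at 20, markings at 60 in columns 4-5; both decay with the
# row index to mimic the range-dependent intensity falloff.
decay = 1.0 - 0.08 * np.arange(8)[:, None]
img = 20.0 * np.ones((8, 10)) * decay
img[:, 4:6] = 60.0 * decay[:, 0][:, None]
seg = multi_threshold(img)
```

Because each band chooses its own threshold, the dimmer far-range markings are still separated from their local background; the morphological cleanup described above would then run on this binary mask.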
Fourthly, the automated crack extraction algorithm applies an iterative tensor voting (ITV) algorithm to the GRF image for crack enhancement. Tensor voting, a perceptual organization method capable of extracting curvilinear structures from a noisy and corrupted background, is explored and extended into the field of crack detection.
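The grouping principle behind tensor voting can be conveyed with a deliberately crude sketch: each token casts a rank-1 vote along the inter-token direction, and the stick saliency (difference of the accumulated tensor's eigenvalues) is high where tokens line up along a curve. This simplification omits the full stick voting fields and the iterative refinement used in ITV:

```python
import numpy as np

def stick_saliency(points, sigma=2.0):
    """Crude tensor-voting sketch: each token votes a tensor t t^T at
    every other token, where t is the unit inter-token direction and
    the vote decays with a Gaussian of scale sigma. The saliency
    lambda1 - lambda2 is high for tokens aligned along a curve."""
    sal = []
    for i, p in enumerate(points):
        T = np.zeros((2, 2))
        for j, q in enumerate(points):
            if i == j:
                continue
            v = q - p
            d = np.linalg.norm(v)
            t = v / d
            T += np.exp(-(d / sigma) ** 2) * np.outer(t, t)
        lam = np.linalg.eigvalsh(T)   # eigenvalues in ascending order
        sal.append(lam[1] - lam[0])
    return np.array(sal)

pts = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0],   # a curve
                [2, 5]], dtype=float)                      # an outlier
s = stick_saliency(pts)
```

The collinear tokens accumulate strongly oriented tensors and thus high saliency, while the isolated outlier receives almost none, which is exactly why the method suppresses noise around thin curvilinear structures such as cracks.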
The successful development of the three algorithms suggests that the RoadModeler strategy offers a solution to the automated extraction of road information from MLS data. Recommendations are given for future research and development to ensure that this progress goes beyond the prototype stage and towards everyday use.
A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques.
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to deal with a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on the combination of the face with the left and right irises in a unified hybrid multimodal biometric identification system, using different fusion approaches at the score and rank levels.
Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which is based on merging the advantages of the Curvelet transform with the fractal dimension. Secondly, a novel framework, the Multimodal Deep Face Recognition (MDFR) framework, is proposed to address the face recognition problem in unconstrained conditions by merging the advantages of local handcrafted feature descriptors with deep learning approaches. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed; its architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from an iris image.
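The fractal-dimension half of the Curvelet-Fractal approach is commonly estimated by box counting. The sketch below is a standard box-counting estimator on binary images, not the thesis's exact feature pipeline:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary
    image: count occupied boxes N(s) at several box sizes s, then
    fit the slope of log N(s) against log(1/s)."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # Tile the image into s x s boxes and mark occupied boxes.
        trimmed = mask[:n - n % s, :n - n % s]
        boxes = trimmed.reshape(n // s, s, n // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(counts), 1)
    return slope

# Sanity checks: a filled square is 2-dimensional, a line 1-dimensional.
filled = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
dim_filled = box_counting_dimension(filled)
dim_line = box_counting_dimension(line)
```

Texture-like facial patterns fall between these two extremes, which is what makes the estimated dimension a useful discriminative feature.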
Finally, the performance of the unimodal and multimodal systems has been evaluated by conducting a number of extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1, and IITD) and on the SDUMLA-HMT multimodal dataset. The results obtained demonstrate the superiority of the proposed systems over previous work, achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize a person's identity.
Higher Committee for Education Development in Iraq