207 research outputs found

    UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios

    Recently, ocular biometrics in unconstrained environments using images obtained at visible wavelengths have gained researchers' attention, especially with images captured by mobile devices. Periocular recognition has been demonstrated to be an alternative when the iris trait is not available due to occlusions or low image resolution. However, the periocular trait does not offer the high uniqueness of the iris trait. Thus, the use of datasets containing many subjects is essential to assess biometric systems' capacity to extract discriminating information from the periocular region. Also, to address the within-class variability caused by lighting and attributes in the periocular region, it is of paramount importance to use datasets with images of the same subject captured in distinct sessions. As the datasets available in the literature do not present all these factors, in this work, we present a new periocular dataset containing samples from 1,122 subjects, acquired in 3 sessions by 196 different mobile devices. The images were captured in unconstrained environments with just a single instruction to the participants: to place their eyes in a region of interest. We also performed an extensive benchmark with several Convolutional Neural Network (CNN) architectures and models that have been employed in state-of-the-art approaches based on Multi-class Classification, Multitask Learning, Pairwise Filters Networks, and Siamese Networks. The results achieved in the closed- and open-world protocols, considering the identification and verification tasks, show that this area still needs research and development.
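    The benchmark above includes Siamese networks for the verification task. As a rough illustration of that family of models only (not the authors' actual architecture), the sketch below pairs a shared-weight CNN encoder with a contrastive loss; the ResNet-18 backbone, embedding size, and margin are assumed values.

```python
# Minimal Siamese verification sketch; backbone, embedding size, and margin are
# illustrative assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class SiameseNet(nn.Module):
    """Shared-weight encoder producing L2-normalised periocular embeddings."""

    def __init__(self, embedding_dim: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CNN backbone could be used
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.encoder = backbone

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor):
        # Both branches share the same weights (the defining Siamese property).
        e_a = F.normalize(self.encoder(x_a), dim=1)
        e_b = F.normalize(self.encoder(x_b), dim=1)
        return e_a, e_b


def contrastive_loss(e_a, e_b, same_subject, margin: float = 1.0):
    """Pull genuine pairs together, push impostor pairs beyond the margin."""
    d = F.pairwise_distance(e_a, e_b)
    loss = same_subject * d.pow(2) + (1 - same_subject) * F.relu(margin - d).pow(2)
    return loss.mean()


# Toy verification step on random tensors standing in for periocular crops.
model = SiameseNet()
a, b = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # 1 = same subject, 0 = different
emb_a, emb_b = model(a, b)
print(contrastive_loss(emb_a, emb_b, labels).item())
```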

    A Review on Machine Learning Methods in Diabetic Retinopathy Detection

    Ocular disorders span a broad spectrum. Some of them, such as Diabetic Retinopathy, are more common in low-income or low-resource countries. Diabetic Retinopathy is a leading cause of vision loss and ocular impairment worldwide. By identifying the symptoms in the early stages, it is possible to halt the progression of the disease before it reaches blindness. Considering the prevalence of different branches of Artificial Intelligence in many fields, including medicine, and the significant progress achieved in the use of big data to investigate ocular impairments, Artificial Intelligence algorithms have been used to process and analyze fundus images and identify symptoms associated with Diabetic Retinopathy. Among the reviewed studies, the proposed transformer-based models provide better interpretability for doctors and scientists. Artificial Intelligence algorithms are also helpful in anticipating future health issues by assessing early cases of the disease. Especially in ophthalmology, a trustworthy prediction of visual outcomes helps physicians in patient counselling and clinical decision-making while reducing health management costs.
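    The review does not prescribe a single model, so the following is only a generic illustration of the transfer-learning recipe commonly used for grading Diabetic Retinopathy from fundus images; the five-grade scheme, backbone choice, and preprocessing values are assumptions.

```python
# Generic transfer-learning sketch for DR grading; all choices are assumed, not
# taken from the review.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_GRADES = 5  # e.g. no DR, mild, moderate, severe, proliferative (assumed grading)

# Typical fundus preprocessing: resize and normalise with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# CNN backbone with its classification head replaced for the DR grades.
# In practice the backbone would be initialised with pretrained weights.
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)

# One forward pass on a dummy batch standing in for preprocessed fundus photos.
dummy_batch = torch.randn(8, 3, 224, 224)
logits = model(dummy_batch)
print(logits.argmax(dim=1))  # predicted grade per image
```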

    Intelligent System for Glaucoma Diagnostics

    Glaucoma is a chronic eye disease that can lead to permanent vision loss. However, glaucoma is a difficult disease to diagnose because there is no regular pattern in the distribution of nerve fibers in the ocular fundus. Spectral analysis of ocular fundus images was performed using the Eidos intelligent system. From the ACRIMA eye image database, 90.7% of healthy eye images were recognized with an average similarity score of 0.588, and 74.42% of glaucoma eye images with an average similarity score of 0.558. The reliability of eye image recognition can be improved by increasing the number of digitized parameters of the eye images, obtained, for example, by optical coherence tomography. The research contribution is the digital processing of fundus images by the intelligent system “Eidos”. The scientific contribution lies in the automation of the glaucoma diagnosis process using digitized data. The results of the study can be used at the medical faculties of universities to carry out automated diagnostics of glaucoma.
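    The abstract does not describe how Eidos computes its spectral features, so the snippet below is only a generic stand-in for spectral analysis of a fundus image: a radially averaged 2D Fourier power spectrum that turns an image into a fixed-length feature vector. All parameter choices are assumptions.

```python
# Generic spectral-feature illustration; not the Eidos system's actual method.
import numpy as np


def radial_power_spectrum(image: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Average the 2D power spectrum over rings of equal spatial frequency."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)                  # distance from the DC term
    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    power = np.bincount(which, weights=spectrum.ravel(), minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    return power / np.maximum(counts, 1)                # mean power per ring


# A random grayscale array stands in for a fundus photograph.
fundus = np.random.rand(256, 256)
print(radial_power_spectrum(fundus).shape)              # (32,)
```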

    Human Age and Gender Classification using Convolutional Neural Networks

    In a world relying ever more on human classification, this paper aims to improve age and gender image classification through the use of Convolutional Neural Networks (CNNs). Age and gender classification has become a popular area of study in recent years; however, there are still improvements to be made, particularly in age classification. This research also tests the widely accepted view that CNN models are the superior model type for image classification by comparing CNN performance against Support Vector Machine performance on the same dataset. Using the Adience image classification dataset, this research further focuses on the implementation of data augmentation techniques, some more novel than others, as a means of improving CNN performance. In terms of standard, popular augmentation methods, image mirroring and image rotation were applied. In addition, a more novel augmentation approach was applied to age classification. This technique was implemented using Faceapp, an AI image editor in the form of a mobile application, which allows "filters" to be placed on images of human beings in order to alter their appearance. The results of the data-augmented models were superior to those of the standard CNN models, with gender classification improving by 2.6% and age classification by 7.1%. The results of this research establish the potential for further improvements through the inclusion of more augmentation techniques or through the use of more of the filter types provided in the Faceapp application.
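    For the two standard augmentations named above, mirroring and rotation, a minimal torchvision pipeline might look like the following; the rotation range, flip probability, and input size are assumptions, and the Faceapp filter step is a manual, app-driven augmentation that cannot be reproduced in code.

```python
# Standard mirroring/rotation augmentation sketch; parameter values are assumed.
from torchvision import transforms

train_augmentation = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # image mirroring
    transforms.RandomRotation(degrees=15),    # small in-plane rotations
    transforms.Resize((227, 227)),            # input size common in age/gender CNNs
    transforms.ToTensor(),
])

# Applied lazily per sample when building the training set, e.g.:
# dataset = torchvision.datasets.ImageFolder("adience/train", transform=train_augmentation)
```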

    Machine Learning Approaches for Automated Glaucoma Detection using Clinical Data and Optical Coherence Tomography Images

    Glaucoma is a multi-factorial, progressive blinding optic neuropathy. A variety of factors, including genetics, vasculature, anatomy, and immune factors, are involved. Worldwide, more than 80 million people are affected by glaucoma, including around 300,000 in Australia, where 50% remain undiagnosed. Untreated glaucoma can lead to blindness. Early detection by Artificial Intelligence (AI) is crucial to accelerate the diagnosis process and can prevent further vision loss. Many proposed AI systems have shown promising performance for automated glaucoma detection using two-dimensional (2D) data. However, only a few studies have reported optimistic outcomes for glaucoma detection and staging. Moreover, automated AI systems still face challenges in diagnosing at the clinicians’ level due to the lack of interpretability of the ML algorithms and the lack of integration of multiple clinical data. AI technology would be welcomed by doctors and patients if the "black box" notion were overcome by developing an explainable, transparent AI system that uses the same pathological markers clinicians rely on as signs of early detection and progression of glaucomatous damage. Therefore, this thesis aimed to develop a comprehensive AI model to detect and stage glaucoma by incorporating a variety of clinical data and utilising advanced data analysis and machine learning (ML) techniques. The research first focuses on optimising glaucoma diagnostic features by combining structural, functional, demographic, risk factor, and optical coherence tomography (OCT) features. The significant features were evaluated using statistical analysis and trained in ML algorithms to observe the detection performance. Three crucial structural optic nerve head (ONH) OCT features, namely cross-sectional 2D radial B-scans, 3D vascular angiography, and temporal-superior-nasal-inferior-temporal (TSNIT) B-scans, were analysed and trained in explainable deep learning (DL) models for automated glaucoma prediction. The explanations behind the decision-making of the DL models were successfully demonstrated using feature visualisation. The structural features, or distinguished affected regions of the TSNIT OCT scans, were precisely localised for glaucoma patients. This is consistent with the concept of explainable DL, which refers to making the decision-making processes of DL models transparent and interpretable to humans. However, artifacts and speckle noise often result in misinterpretation of the TSNIT OCT scans. This research therefore also developed an automated DL model to remove the artifacts and noise from the OCT scans, facilitating error-free retinal layer segmentation, accurate tissue thickness estimation, and image interpretation. Moreover, to monitor and grade glaucoma severity, clinicians commonly rely on the visual field (VF) test for treatment and management. Therefore, this research uses the functional features extracted from VF images to train ML algorithms for staging glaucoma from early to advanced/severe stages. Finally, the selected significant features were used to design and develop a comprehensive AI model to detect and grade glaucoma stages based on data quantity and availability. In the first stage, a DL model was trained with TSNIT OCT scans, and its output was combined with the significant structural and functional features and trained in ML models. The best-performing ML model achieved an area under the curve (AUC) of 0.98, an accuracy of 97.2%, a sensitivity of 97.9%, and a specificity of 96.4% for detecting glaucoma. The model achieved an overall accuracy of 90.7% and an F1 score of 84.0% for classifying normal, early, moderate, and advanced-stage glaucoma. In conclusion, this thesis developed and proposed a comprehensive, evidence-based AI model that will address the screening problem for large populations and relieve experts from manually analysing a slew of patient data and the associated misinterpretation problems. Moreover, this thesis demonstrated three structural OCT features that could serve as excellent diagnostic markers for precise glaucoma diagnosis.
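    As a rough sketch of the fusion stage described above (the TSNIT DL model's output combined with structural and functional features and trained in a conventional ML classifier), the snippet below concatenates a stand-in DL score with synthetic clinical features, fits a random forest, and reports the same metrics the thesis uses: AUC, accuracy, sensitivity, and specificity. The data, feature counts, and classifier choice are illustrative assumptions, not the thesis's actual pipeline.

```python
# Illustrative feature-fusion and evaluation sketch with synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
dl_score = rng.random((n, 1))                  # stand-in for the TSNIT DL model output
clinical = rng.normal(size=(n, 10))            # stand-in structural/functional features
X = np.hstack([dl_score, clinical])
y = (dl_score[:, 0] + 0.3 * clinical[:, 0] > 0.7).astype(int)  # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("AUC        :", round(roc_auc_score(y_te, prob), 3))
print("Accuracy   :", round(accuracy_score(y_te, pred), 3))
print("Sensitivity:", round(tp / (tp + fn), 3))
print("Specificity:", round(tn / (tn + fp), 3))
```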