
    On Detecting Faces And Classifying Facial Races With Partial Occlusions And Pose Variations

    In this dissertation, we present our contributions to face detection and facial race classification. Face detection in unconstrained images is a long-standing problem in the computer vision community, and challenges remain; in particular, the detection of partially occluded faces with pose variations has not been well addressed. In the first part of this dissertation, our contributions are three-fold. First, we introduce four image datasets intended for face detection training: a large-scale labeled face dataset, a noisy large-scale labeled non-face dataset, a CrowdFaces dataset, and a CrowdNonFaces dataset. Second, we improve on Viola-Jones (VJ) face detection results by training a Convolutional Neural Network (CNN) model on our noisy datasets, and we show the improvement over the VJ face detector on the AFW face detection benchmark dataset. However, existing methods for detecting partially occluded faces require training several models, computing hand-crafted features, or both. Third, we therefore propose Large-Scale Deep Learning (LSDL), a method that requires neither multiple CNN models nor hand-crafted feature computation to detect faces. Our LSDL face detector is a single CNN model trained to detect unconstrained, multi-view, partially occluded and unoccluded faces, using a large number of training examples that cover most occluded and unoccluded facial appearances. Detection is performed by selecting detection windows whose confidence scores exceed a threshold. Our evaluation shows that LSDL achieves the best performance on the AFW dataset and comparable performance on the FDDB dataset among state-of-the-art face detection methods, without manually extending or adjusting the square detection bounding boxes.

    Many biometric and security systems use facial information for individual identification and recognition, and classifying race from a face image can provide a strong cue for identity search and criminal identification. Current facial race classification methods are confined to constrained, non-occluded frontal faces; challenges remain in unconstrained environments with partial occlusions, pose variations, low illumination, and small scales. In the second part of the dissertation, we propose a CNN model that classifies facial race under partial occlusions and pose variations. The model is trained on a broad, racially balanced face image dataset covering four major human races: Caucasian, Indian, Mongolian, and Negroid. We evaluate the model against state-of-the-art methods on a constrained face test dataset, and we also compare the model with human performance on our new unconstrained facial race benchmark (CIMN) dataset. Our results show that the model achieves 95.1% race classification accuracy in the constrained environment and accuracy comparable to human performance under the challenges of the unconstrained environment.
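    The first part of the abstract describes the LSDL detector's final step as selecting detection windows whose CNN confidence scores exceed a threshold. The following is a minimal sketch of that selection step only; the window format, the score source, and the threshold value are illustrative assumptions, not the dissertation's actual code.

        import numpy as np

        def select_detections(windows, scores, threshold=0.9):
            """Keep candidate windows whose face-confidence score exceeds a threshold.

            windows:   (N, 4) array of [x, y, w, h] candidate boxes
            scores:    (N,) array of confidence scores from a single CNN model
            threshold: confidence cutoff (value chosen here for illustration)
            """
            keep = scores >= threshold
            return windows[keep], scores[keep]

        # Toy usage with random candidates.
        rng = np.random.default_rng(0)
        windows = rng.integers(0, 200, size=(100, 4))
        scores = rng.random(100)
        faces, face_scores = select_detections(windows, scores, threshold=0.9)
        print(f"{len(faces)} of {len(windows)} windows kept as face detections")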

    ViT-DeiT: An Ensemble Model for Breast Cancer Histopathological Images Classification

    Breast cancer is the most common cancer in the world and the second most common cause of cancer death in women. Timely and accurate diagnosis of breast cancer from histopathological images is crucial for patient care and treatment. Pathologists can make more accurate diagnoses with the help of a novel image-processing approach: an ensemble of two pre-trained vision transformer models, the Vision Transformer (ViT) and the Data-Efficient Image Transformer (DeiT). The proposed ensemble model classifies breast cancer histopathology images into eight classes, four categorized as benign and four as malignant. A public dataset was used to evaluate the proposed model. The experimental results showed 98.17% accuracy, 98.18% precision, 98.08% recall, and a 98.12% F1 score.
    Comment: 7 pages, 10 figures, 7 tables
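    The abstract describes an ensemble of a pre-trained Vision Transformer and a Data-Efficient Image Transformer over eight histopathology classes. Below is a minimal sketch of such an ensemble, assuming the timm model names vit_base_patch16_224 and deit_base_patch16_224 and simple logit averaging; the paper's actual backbones and fusion strategy may differ.

        import timm
        import torch
        import torch.nn as nn

        class ViTDeiTEnsemble(nn.Module):
            """Average the class logits of a ViT and a DeiT backbone (illustrative sketch)."""

            def __init__(self, num_classes: int = 8):
                super().__init__()
                # Pre-trained backbones with their classification heads resized to eight classes.
                self.vit = timm.create_model("vit_base_patch16_224",
                                             pretrained=True, num_classes=num_classes)
                self.deit = timm.create_model("deit_base_patch16_224",
                                              pretrained=True, num_classes=num_classes)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # Simple averaging of the two backbones' logits.
                return (self.vit(x) + self.deit(x)) / 2

        model = ViTDeiTEnsemble(num_classes=8)
        dummy = torch.randn(1, 3, 224, 224)  # one RGB patch at the 224x224 ViT/DeiT input size
        print(model(dummy).shape)            # torch.Size([1, 8])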

    Machine and Deep Learning towards COVID-19 Diagnosis and Treatment: Survey, Challenges, and Future Directions

    Machine learning (ML) and deep learning (DL) have many success stories and are widely used in everyday life. They have also been instrumental in tackling the outbreak of Coronavirus disease (COVID-19). The epidemic caused by the SARS-CoV-2 virus has spread rapidly across the world, and the fight to curb the disease involves most states, companies, and scientific research institutions. In this research, we examine Artificial Intelligence (AI)-based ML and DL methods for COVID-19 diagnosis and treatment, and we summarize these methods together with the available datasets, tools, and reported performance. This survey offers ML and DL researchers and the wider health community a detailed overview of the existing state-of-the-art methodologies, describing how ML, DL, and data can improve the response to COVID-19 and help prevent further outbreaks. Challenges and future directions are also discussed.

    Prostate cancer malignancy detection and localization from mpMRI using auto-deep learning as one step closer to clinical utilization

    Automatic diagnosis of malignant prostate cancer from mpMRI has been studied heavily in recent years; model interpretation and domain drift have been the main roadblocks to clinical utilization. As an extension of our previous work, we trained on a public cohort of 201 patients, using cropped 2.5D slices of the prostate gland as input, and searched the model space for the optimal model using AutoKeras. As an innovative step, the peripheral zone (PZ) and central gland (CG) were trained and tested separately; the PZ and CG detectors were shown to be effective in highlighting the most suspicious slices in a sequence, which should greatly ease the workload of physicians.
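    The abstract mentions searching the model space with AutoKeras over cropped 2.5D prostate slices, with the peripheral zone and central gland handled separately. A minimal sketch of that kind of search using AutoKeras' ImageClassifier follows; the array shapes, trial count, epoch count, and the load_zone_data helper are illustrative placeholders, not the authors' pipeline.

        import autokeras as ak
        import numpy as np

        def load_zone_data(zone):
            # Placeholder for cropped 2.5D slices (e.g. three adjacent slices stacked
            # as channels) and binary malignancy labels; shapes are illustrative.
            x = np.random.rand(201, 128, 128, 3).astype("float32")
            y = np.random.randint(0, 2, size=(201,))
            return x, y

        # Train a separate classifier per zone, mirroring the PZ / CG split above.
        for zone in ["PZ", "CG"]:
            x_train, y_train = load_zone_data(zone)
            clf = ak.ImageClassifier(max_trials=10, overwrite=True,
                                     project_name=f"prostate_{zone}")
            clf.fit(x_train, y_train, epochs=20)
            best_model = clf.export_model()  # best Keras model found by the search
            best_model.summary()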