2 research outputs found

    Modified fuzzy rough set technique with stacked autoencoder model for magnetic resonance imaging based breast cancer detection

    Breast cancer is the most common cancer in women, and early detection reduces the mortality rate. Magnetic resonance imaging (MRI) is effective for analyzing breast cancer, but abnormalities are hard to identify in MRI images. Manual detection of breast cancer in MRI images is inefficient; therefore, a deep learning-based system is implemented in this manuscript. Initially, visual quality is improved using region growing and adaptive histogram equalization (AHE), and the breast lesion is then segmented by Otsu thresholding with a morphological transform. Next, features are extracted from the segmented lesion, and a modified fuzzy rough set technique is proposed to reduce the dimensionality of the extracted features, which decreases system complexity and computational time. The active features are fed to a stacked autoencoder to classify benign and malignant cases. The results demonstrate that the proposed model attained 99% and 99.22% classification accuracy on the benchmark datasets, higher than the comparative classifiers: decision tree, naïve Bayes, random forest and k-nearest neighbor (KNN). These results indicate that the proposed model screens and detects breast lesions more effectively, assisting clinicians in effective therapeutic intervention and timely treatment.
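    The sketch below is a minimal, illustrative rendering of the stages named in the abstract: AHE-based enhancement, Otsu thresholding with a morphological transform, and a stacked-autoencoder classifier. It is not the authors' implementation; the parameter values (e.g. clipLimit, latent_dim) and layer sizes are assumptions, and the modified fuzzy rough set reduction step is omitted.

```python
# Hypothetical pipeline sketch; parameter values and layer sizes are assumptions,
# and the modified fuzzy rough set feature-reduction step is not reproduced here.
import cv2
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def preprocess(mri_slice: np.ndarray) -> np.ndarray:
    """Contrast enhancement with adaptive histogram equalization (CLAHE variant)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(mri_slice.astype(np.uint8))

def segment_lesion(enhanced: np.ndarray) -> np.ndarray:
    """Otsu thresholding followed by a morphological opening to clean the mask."""
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

def build_stacked_autoencoder(n_features: int, latent_dim: int = 32) -> keras.Model:
    """Stacked encoder whose latent output feeds a benign/malignant classifier head."""
    inp = layers.Input(shape=(n_features,))
    h = layers.Dense(128, activation="relu")(inp)
    h = layers.Dense(64, activation="relu")(h)
    latent = layers.Dense(latent_dim, activation="relu")(h)
    out = layers.Dense(1, activation="sigmoid")(latent)  # benign vs. malignant
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```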

    A face recognition system using convolutional feature extraction with linear collaborative discriminant regression classification

    Face recognition is an important biometric authentication research area for security purposes in fields such as pattern recognition and image processing. However, face recognition remains a major challenge for machine learning and deep learning techniques, because input images vary with pose, lighting, expression, age and illumination conditions, which degrades recognition accuracy. In the present research, the resolution of the image patches is reduced by the max pooling layer of a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique, local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Owing to the CNN-based feature optimization in LCDRC, the between-class distance ratio is maximized while the within-class distance of the features is reduced. The results show that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy, whereas traditional LCDRC achieved 83.35% and 77.70%, on the ORL and YALE databases respectively for training number 8 (i.e. 80% training and 20% testing data).
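    A small sketch of the described recognition stages follows, assuming a toy CNN feature extractor with max pooling and class-wise linear regression classification as a simplified stand-in for LCDRC (the collaborative discriminant refinement is not reproduced). The layer sizes, the 112x92 ORL-style input shape and the function names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: CNN feature extraction with max pooling, then class-wise
# linear regression classification as a simplified stand-in for LCDRC.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_feature_extractor(input_shape=(112, 92, 1)) -> keras.Model:
    """Convolution + max pooling reduce patch resolution before the feature output."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)              # halves spatial resolution
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    feats = layers.Dense(128, activation="relu")(x)
    return keras.Model(inp, feats)

def regression_classify(train_feats, train_labels, test_feat):
    """Assign the class whose training subspace reconstructs the test feature best."""
    best_class, best_err = None, np.inf
    for c in np.unique(train_labels):
        Xc = train_feats[train_labels == c].T           # features x samples of class c
        coef, *_ = np.linalg.lstsq(Xc, test_feat, rcond=None)
        err = np.linalg.norm(test_feat - Xc @ coef)     # reconstruction residual
        if err < best_err:
            best_class, best_err = c, err
    return best_class
```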