4 research outputs found

    Improving performance of CNN to predict likelihood of COVID-19 using chest X-ray images with preprocessing algorithms

    With the rapid worldwide spread of coronavirus disease (COVID-19), chest X-ray radiography has also been used in hospitals to detect COVID-19 infected pneumonia and to assess its severity or monitor its prognosis, owing to its low cost, low radiation dose, and wide accessibility. However, how to more accurately and efficiently detect COVID-19 infected pneumonia and distinguish it from other community-acquired pneumonia remains a challenge. To address this challenge, we develop and test a new computer-aided diagnosis (CAD) scheme. It includes several image pre-processing algorithms to remove diaphragms, normalize the image contrast-to-noise ratio, and generate three input images, and then links to a transfer-learning-based convolutional neural network (a VGG16-based CNN model) to classify chest X-ray images into three classes: COVID-19 infected pneumonia, other community-acquired pneumonia, and normal (non-pneumonia) cases. For this purpose, a publicly available dataset of 8,474 chest X-ray images is used, which includes 415 confirmed COVID-19 infected pneumonia, 5,179 community-acquired pneumonia, and 2,880 non-pneumonia cases. The dataset is divided into two subsets, with 90% and 10% of the images, to train and test the CNN-based CAD scheme. The testing results achieve 94.0% overall accuracy in classifying the three classes and 98.6% accuracy in detecting COVID-19 infected cases. Thus, the study demonstrates the feasibility of developing a CAD scheme for chest X-ray images that provides radiologists with useful decision-support tools for detecting and diagnosing COVID-19 infected pneumonia. Comment: 11 pages, 5 figures, 2 tables
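
    A minimal sketch of the transfer-learning classifier stage, assuming a Keras/TensorFlow setup: a frozen ImageNet-pretrained VGG16 base topped with a small three-way softmax head. The input size, head layers, and optimizer settings are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch: VGG16-based transfer learning for three classes
# (COVID-19 pneumonia, other pneumonia, normal). Head layers and
# hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cad_model(input_shape=(224, 224, 3), num_classes=3):
    # Pretrained VGG16 convolutional base without its classifier head.
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=input_shape
    )
    base.trainable = False  # freeze pretrained features for transfer learning

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # 3-way prediction
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_cad_model()
model.summary()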

    An approach to human iris recognition using quantitative analysis of image features and machine learning

    The iris pattern is a unique biological feature of each individual, making it a valuable and powerful tool for human identification. In this paper, an efficient framework for iris recognition is proposed in four steps: (1) iris segmentation (using relative total variation combined with coarse iris localization), (2) feature extraction (using Shape&density, FFT, GLCM, GLDM, and Wavelet features), (3) feature reduction (employing Kernel-PCA), and (4) classification (applying a multi-layer neural network) to classify 2,000 iris images of the CASIA-Iris-Interval dataset obtained from 200 volunteers. The results confirm that the proposed scheme can provide a reliable prediction with an accuracy of up to 99.64%.
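
    A minimal sketch of the last two stages of the pipeline, assuming scikit-learn: Kernel-PCA for feature reduction followed by a multi-layer perceptron classifier. The random feature matrix stands in for the handcrafted features (Shape&density, FFT, GLCM, GLDM, Wavelet); component counts and layer sizes are illustrative assumptions.

# Minimal sketch: Kernel-PCA reduction + multi-layer neural network classifier.
# The random data below is a placeholder for the extracted iris features.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 300))      # placeholder: 2000 iris images x 300 features
y = rng.integers(0, 200, size=2000)   # placeholder: 200 subject labels

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("kpca", KernelPCA(n_components=100, kernel="rbf")),             # feature reduction
    ("mlp", MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)),  # classifier
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pipeline.fit(X_tr, y_tr)
print("test accuracy:", pipeline.score(X_te, y_te))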

    Deep learning denoising for EOG artifacts removal from EEG signals

    There are many sources of interference in electroencephalogram (EEG) recordings, most notably ocular, muscular, and cardiac artifacts. Rejecting these artifacts is an essential step, since they cause many problems in EEG signal analysis. One of the most challenging issues in EEG denoising is removing ocular artifacts, because electrooculographic (EOG) and EEG signals overlap in both the frequency and time domains. In this paper, we build and train a deep learning model to address this challenge and remove ocular artifacts effectively. In the proposed scheme, we convert each EEG signal to an image that is fed to a U-NET model, a deep learning architecture commonly used for image segmentation tasks. We propose three different schemes and train our U-NET-based models to purify contaminated EEG signals in a manner analogous to image segmentation. The results confirm that one of the schemes achieves reliable and promising accuracy, reducing the mean squared error between the target signal (pure EEG) and the predicted signal (purified EEG).
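
    A minimal sketch of a small U-NET-style denoiser trained with an MSE loss to map contaminated "EEG images" to purified ones, assuming Keras/TensorFlow. The 64x64 single-channel input size and filter counts are illustrative assumptions; the paper's exact signal-to-image conversion and the three proposed schemes are not reproduced here.

# Minimal sketch: a small U-NET-style network for EEG-image denoising,
# trained by minimizing MSE between pure and purified EEG images.
import tensorflow as tf
from tensorflow.keras import layers, Model

def small_unet(input_shape=(64, 64, 1)):
    inp = layers.Input(shape=input_shape)

    # Encoder
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder with skip connections (the defining U-NET feature)
    u2 = layers.UpSampling2D()(b)
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(
        layers.Concatenate()([u2, c2]))
    u1 = layers.UpSampling2D()(c3)
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(
        layers.Concatenate()([u1, c1]))

    # Regression output: the purified EEG image
    out = layers.Conv2D(1, 1, activation="linear")(c4)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="mse")  # MSE between pure and purified EEG
    return model

model = small_unet()
model.summary()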

    Image quality enhancement in wireless capsule endoscopy with adaptive fraction gamma transformation and unsharp masking filter

    Wireless Capsule Endoscopy (WCE), introduced in 2001, is one of the key approaches for observing the entire gastrointestinal (GI) tract, particularly the small bowel, and it has been used to detect diseases of the GI tract. Endoscopic image analysis remains an active field with many open problems. Due to the nature of this imaging system, the quality of many of the images it produces is unacceptable, which complicates assessment by physicians and computer-aided diagnosis. In this paper, a novel technique is proposed to improve the quality of images captured by the WCE. More specifically, it enhances brightness and contrast and preserves color information while keeping computational complexity low. Furthermore, the experimental PSNR and SSIM results confirm that the error introduced by this method is negligible. Moreover, the proposed method improves intensity restricted average local entropy (IRMLE) by 22% and the color enhancement factor (CEF) by 10%, while effectively preserving image lightness. Our method achieves better visual quality and objective assessment scores compared with state-of-the-art methods. Comment: 7 pages, 7 figures
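
    A minimal sketch of generic brightness/contrast enhancement in the same spirit, assuming OpenCV and scikit-image: a plain gamma transformation on the value channel followed by unsharp masking, with PSNR and SSIM computed against the input. The gamma value, sharpening strength, and file path are illustrative assumptions; this is not the paper's adaptive fraction gamma transformation.

# Minimal sketch: gamma correction on the V channel plus unsharp masking,
# evaluated with PSNR/SSIM. Parameters and file path are placeholders.
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def enhance_wce(img_bgr, gamma=0.8, sharpen_amount=1.0, sigma=2.0):
    # Gamma-correct the brightness (V) channel in HSV space to preserve color.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32) / 255.0
    hsv[..., 2] = np.power(hsv[..., 2], gamma)
    corrected = cv2.cvtColor((hsv * 255).astype(np.uint8), cv2.COLOR_HSV2BGR)

    # Unsharp masking: add a scaled difference between the image and its blur.
    blurred = cv2.GaussianBlur(corrected, (0, 0), sigma)
    return cv2.addWeighted(corrected, 1 + sharpen_amount, blurred, -sharpen_amount, 0)

# Usage (the image path is a placeholder, not from the paper):
img = cv2.imread("wce_frame.png")
if img is not None:
    out = enhance_wce(img)
    print("PSNR:", peak_signal_noise_ratio(img, out))
    # channel_axis requires scikit-image >= 0.19
    print("SSIM:", structural_similarity(img, out, channel_axis=-1))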