4,003 research outputs found

    Multi-modal image classification of COVID-19 cases using computed tomography and X-rays scans

    The COVID-19 pandemic and the emergence of new variants have intensified the need to identify COVID-19 cases quickly and efficiently. In this paper, a novel dual-mode multi-modal approach is presented to detect COVID-19 patients, combining the chest X-ray or CT scan image with the clinical notes provided alongside the scan. Data augmentation techniques are used to enlarge the dataset. Five different types of image and text models are employed, including transfer learning. All of these models are compiled with the binary cross-entropy loss function and the Adam optimizer. The multi-modal model is also evaluated with existing pre-trained networks: VGG16, ResNet50, InceptionResNetV2 and MobileNetV2. The final multi-modal model achieves an accuracy of 97.8% on the test data. The study provides a different approach to identifying COVID-19 cases using just the scan images and the corresponding notes.
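    The binary cross-entropy loss mentioned in the abstract can be sketched in a few lines of NumPy (a minimal illustration of the loss itself, not the authors' implementation; the labels and predictions below are hypothetical):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy between labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Hypothetical predictions for 4 scans (1 = COVID-positive, 0 = negative)
y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.2])
loss = binary_cross_entropy(y_true, y_pred)  # lower is better
```

    A deep-learning framework would minimize this quantity with an optimizer such as Adam during training.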

    Convolutional Neural Network based Malignancy Detection of Pulmonary Nodule on Computer Tomography

    Computerized tomography (CT) is widely used to diagnose lung cancer, owing to its high sensitivity for pulmonary nodule detection, without the biopsy that could cause physical damage to nerves and vessels. However, distinguishing malignant from benign pulmonary nodules is still not an easy task. Because CT scans are mostly of relatively low resolution, it is not easy for radiologists to read the details of the scan image. In the past few years, the continuing rapid growth of CT scan analysis systems has generated a pressing need for advanced computational tools that extract useful features to assist radiologists in the reading process. Computer-aided detection (CAD) systems have been developed to reduce observational oversights by identifying the suspicious features that a radiologist looks for during case review. Most previous CAD systems rely on low-level, non-texture imaging features such as the intensity, shape, size, or volume of the pulmonary nodules. However, pulmonary nodules vary widely in shape and size, and benign and malignant patterns are visually very similar, so relying on non-texture imaging features makes diagnosing the nodule type difficult. To overcome this limitation, more recent CAD systems adopt supervised or unsupervised learning schemes to translate the content of the nodules into discriminative, high-level imaging features highly correlated with shape and texture. Convolutional neural networks (ConvNets), supervised deep-learning methods, have improved rapidly in recent years. Owing to their great success in computer vision tasks, they are also expected to be helpful in medical imaging. In this thesis, a CAD system based on a deep convolutional neural network (ConvNet) is designed and evaluated for classifying malignant pulmonary nodules on computerized tomography.
The proposed ConvNet, the core component of the proposed CAD system, is trained on the LUNGx challenge database to classify benign and malignant pulmonary nodules on CT. Its architecture consists of 3 convolutional layers with max-pooling operations and rectified linear unit (ReLU) activations, followed by 2 fully-connected layers, and it is carefully tailored for pulmonary nodule classification by considering over-fitting, the receptive field, and imbalanced data. The proposed CAD system achieved a sensitivity of 0.896 and a specificity of 0.878 at the optimal cut-off point of the receiver operating characteristic (ROC) curve, with an area under the curve (AUC) of 0.920. The testing results showed that the proposed ConvNet achieves a 10% higher AUC than the state-of-the-art unsupervised method. By integrating the proposed highly accurate ConvNet, the proposed CAD system also outperformed other state-of-the-art ConvNets explicitly designed for pulmonary nodule detection or classification.
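The evaluation metrics reported above (sensitivity, specificity, AUC) can be computed from classifier scores as follows (a self-contained NumPy sketch with made-up labels and scores, not the thesis's evaluation code):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation: the fraction of
    (positive, negative) pairs the classifier ranks correctly; ties count 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity at a given cut-off on the scores."""
    pred = (scores >= threshold).astype(int)
    tp = np.sum((pred == 1) & (labels == 1))  # malignant correctly flagged
    tn = np.sum((pred == 0) & (labels == 0))  # benign correctly cleared
    return tp / np.sum(labels == 1), tn / np.sum(labels == 0)

# Hypothetical malignancy scores for 6 nodules (1 = malignant, 0 = benign)
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.1])
auc = roc_auc(labels, scores)
sens, spec = sens_spec(labels, scores, threshold=0.45)
```

The "optimal cut-off point" of an ROC curve is typically chosen by sweeping the threshold and maximizing a criterion such as the Youden index (sensitivity + specificity − 1).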

    Artificial Intelligence in Radiation Therapy

    Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy: image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarized and categorized the recently published methods, followed by a discussion of the challenges, concerns, and future developments. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy in the future could be substantially improved through intelligent automation of its various aspects.

    Data-Driven Deep Learning-Based Analysis on THz Imaging

    Breast cancer affects about 12.5% of the female population in the United States. Surgical operations are often needed after diagnosis. Breast-conserving surgery can remove malignant tumors while preserving as much of the healthy tissue as possible. Owing to the lack of effective real-time tumor analysis tools and of a unified operation standard, the re-excision rate can exceed 30% among breast-conserving surgery patients, imposing significant physical, psychological, and financial burdens on those patients. This work designs deep learning-based segmentation algorithms that detect the tissue types in excised tissues using pulsed THz technology, and evaluates these algorithms on the tissue-type classification task among freshly excised tumor samples. Freshly excised tumor samples are more challenging than their formalin-fixed, paraffin-embedded (FFPE) block counterparts because of excess fluid, image registration difficulties, and the lack of trustworthy pixelwise labels for each tissue sample. Additionally, evaluating freshly excised tumor samples is a meaningful step toward applying pulsed THz scan technology to breast-conserving cancer surgery in the operating room. Recently, deep learning techniques have been heavily researched as GPU-based computation has become more economical and powerful. This dissertation revisits breast cancer tissue segmentation problems using the pulsed terahertz wave scan technique on murine samples and applies recent deep learning frameworks to enhance performance on various tasks. The study first performs pixelwise classification on terahertz scans with CNN-based neural networks and time-frequency feature tensors obtained via the wavelet transform. It then explores neural network-based semantic segmentation of terahertz scans, taking spatial information into account and handling noisy labels with label correction techniques.
Additionally, this study performs resolution restoration for visual enhancement of terahertz scans using an unsupervised, generative image-to-image translation methodology. This work also proposes a novel data processing pipeline that trains a semantic segmentation network using only neurally generated synthetic terahertz scans. Performance is evaluated using various metrics across the different tasks.
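The wavelet-based time-frequency features mentioned above can be illustrated with a single level of the Haar wavelet transform applied to a per-pixel time trace (a toy sketch with a made-up 8-sample signal, not the dissertation's feature pipeline):

```python
import numpy as np

def haar_level(signal):
    """One level of the Haar wavelet transform: pairwise averages (approximation,
    low-frequency content) and pairwise differences (detail, high-frequency content)."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

# Hypothetical 8-sample THz time trace for a single pixel
trace = np.array([1.0, 1.0, 2.0, 2.0, 0.0, 4.0, 3.0, 1.0])
approx, detail = haar_level(trace)
feature = np.concatenate([approx, detail])  # per-pixel feature vector for a classifier
```

Because the Haar transform is orthonormal, the feature vector preserves the signal's energy while separating slow and fast variations, which is what makes such coefficients usable as discriminative per-pixel inputs to a CNN.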