
    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    A Fusion of Image Processing and Deep Learning for COVID-19 Detection Using 2D Iterative Convolutional Neural Network

    COVID-19 continues to be a cataclysmic danger to humankind even after the discovery of vaccines, because ongoing mutation keeps producing new variants. In this work, image processing techniques are fused with a deep learning model to detect COVID-19. A Raw Low Dose CT Database of Images (RLD-CTDI) is used together with a computer-aided diagnosis (CAD) approach to build a novel automatic framework. Raw CT images generally contain noise such as Gaussian, salt-and-pepper, and speckle noise, and may also be affected by unstable-voltage disturbances. To remove these, a 2D Improved Anisotropic Diffusion Bilateral Filter (2D IADBF) is applied to restore the image. The image is further pre-processed with 2D Edge Preservation Efficient Histogram Processing to preserve edges. After these pre-processing steps, a clear noise-free image is available for subsequent clustering and thresholding. Clustering with the 2D Hybrid Fuzzy C-Means algorithm (2D HFCM) yields disease clusters, and 2D Adaptive Otsu Thresholding extracts the Region of Interest (ROI). From the ROI, features are extracted via a Gray-Level Co-occurrence Matrix and Histogram of Gradients (GLCM-HOG) calculation. These features are fed as input to the deep learning model: a 2D Iterative Convolutional Neural Network classifies each CT image as COVID-affected or non-COVID-affected.
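The thresholding step in this pipeline builds on the classical Otsu criterion (maximize between-class variance of the gray-level histogram); the abstract does not detail the authors' 2D Adaptive variant, so the sketch below shows only plain Otsu thresholding on a synthetic bimodal image. All names and data here are illustrative assumptions, not the paper's code.

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # background mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal "CT slice": dark background with one bright region.
rng = np.random.default_rng(0)
img = rng.integers(0, 60, size=(64, 64))
img[20:40, 20:40] = rng.integers(180, 240, size=(20, 20))
t = otsu_threshold(img.astype(np.uint8))
roi = img >= t   # binary mask approximating the region of interest
```

On a cleanly bimodal image like this one, the selected threshold falls in the gap between the two modes, so the mask isolates the bright region.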

    Recent Progress in Transformer-based Medical Image Analysis

    The transformer is primarily used in the field of natural language processing. Recently, it has been adopted and shows promise in the computer vision (CV) field. Medical image analysis (MIA), as a critical branch of CV, also greatly benefits from this state-of-the-art technique. In this review, we first recap the core component of the transformer, the attention mechanism, and the detailed structures of the transformer. After that, we depict the recent progress of the transformer in the field of MIA. We organize the applications in a sequence of different tasks, including classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. A large number of experiments studied in this review illustrate that the transformer-based method outperforms existing methods through comparisons with multiple evaluation metrics. Finally, we discuss the open challenges and future opportunities in this field. This task-modality review with the latest contents, detailed information, and comprehensive comparison may greatly benefit the broad MIA community. Comment: Accepted by Computers in Biology and Medicine
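The attention mechanism the review recaps is standard scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. The NumPy sketch below is a generic illustration of that formula, not code from any of the surveyed works; the shapes and random inputs are assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.standard_normal((4, 8))   # 4 query tokens, d_k = 8
K = rng.standard_normal((6, 8))   # 6 key tokens
V = rng.standard_normal((6, 8))   # one value vector per key
out, w = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of the value vectors, with weights given by how strongly the corresponding query matches each key; vision transformers apply exactly this operation to sequences of image-patch embeddings.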

    Multi-Scale Feature Fusion using Parallel-Attention Block for COVID-19 Chest X-ray Diagnosis

    Under the global COVID-19 crisis, accurate diagnosis of COVID-19 from Chest X-ray (CXR) images is critical. To reduce intra- and inter-observer variability during the radiological assessment, computer-aided diagnostic tools have been utilized to supplement medical decision-making and subsequent disease management. Computational methods with high accuracy and robustness are required for rapid triaging of patients and aiding radiologists in the interpretation of the collected data. In this study, we propose a novel multi-feature fusion network using parallel attention blocks to fuse the original CXR images and local-phase feature-enhanced CXR images at multiple scales. We evaluate our model on various COVID-19 datasets acquired from different organizations to assess its generalization ability. Our experiments demonstrate that our method achieves state-of-the-art performance and improved generalization capability, which is crucial for widespread deployment. Comment: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA) https://melba-journal.org/2023:00
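The abstract does not specify the internals of the parallel-attention fusion block, so the following is only a sketch of one plausible scheme: a learned per-channel gate that blends features from the original image with features from the enhanced image. The gate matrix `w`, the shapes, and the pooling choice are all assumptions for illustration, not the authors' architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_fuse(f_orig, f_enh, w):
    """Fuse two same-shape feature maps with a learned per-channel gate.

    f_orig, f_enh : (C, H, W) features from the original / enhanced image.
    w             : (C, 2C) projection producing one gate value per channel.
    """
    # Global average pooling summarizes each channel of both branches.
    g = np.concatenate([f_orig.mean(axis=(1, 2)), f_enh.mean(axis=(1, 2))])
    gate = sigmoid(w @ g)[:, None, None]          # (C, 1, 1), values in (0, 1)
    return gate * f_orig + (1.0 - gate) * f_enh   # channel-wise convex blend

rng = np.random.default_rng(2)
C, H, W = 8, 16, 16
f1 = rng.standard_normal((C, H, W))
f2 = rng.standard_normal((C, H, W))
w = rng.standard_normal((C, 2 * C)) * 0.1
fused = attention_fuse(f1, f2, w)
```

Because the gate lies in (0, 1), every fused element is a convex combination of the two branches; applying such a block at several resolutions is one way to realize multi-scale fusion.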

    A Distance Transformation Deep Forest Framework With Hybrid-Feature Fusion for CXR Image Classification

    Detecting pneumonia, especially coronavirus disease 2019 (COVID-19), from chest X-ray (CXR) images is one of the most effective ways for disease diagnosis and patient triage. The application of deep neural networks (DNNs) for CXR image classification is limited due to the small sample size of the well-curated data. To tackle this problem, this article proposes a distance transformation-based deep forest framework with hybrid-feature fusion (DTDF-HFF) for accurate CXR image classification. In our proposed method, hybrid features of CXR images are extracted in two ways: hand-crafted feature extraction and multigrained scanning. Different types of features are fed into different classifiers in the same layer of the deep forest (DF), and the prediction vector obtained at each layer is transformed into a distance vector based on a self-adaptive scheme. The distance vectors obtained by different classifiers are fused and concatenated with the original features, then input into the corresponding classifier at the next layer. The cascade grows until DTDF-HFF can no longer gain benefits from the new layer. We compare the proposed method with other methods on public CXR datasets, and the experimental results show that the proposed method achieves state-of-the-art (SOTA) performance. The code will be made publicly available at https://github.com/hongqq/DTDF-HFF
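The cascade mechanics described above (per-layer prediction vectors turned into distance vectors, then concatenated with the original features for the next layer) can be sketched with a toy nearest-centroid classifier standing in for the forests. Everything below, including the simple `1 - p` distance transform, is an illustrative assumption, not the DTDF-HFF implementation or its self-adaptive scheme.

```python
import numpy as np

def centroid_proba(X, centroids):
    """Toy stand-in for a forest: softmax over negative distances to centroids."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)  # (n, k)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def cascade_layer(X_aug, y, X_orig):
    """One cascade layer: fit a toy classifier, turn its predictions into a
    distance vector, and concatenate that with the original features."""
    k = y.max() + 1
    centroids = np.stack([X_aug[y == c].mean(axis=0) for c in range(k)])
    proba = centroid_proba(X_aug, centroids)
    dist = 1.0 - proba                       # naive prediction-to-distance transform
    return np.hstack([X_orig, dist]), proba  # augmented features for next layer

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)

X_aug = X
for _ in range(3):                           # grow three cascade layers
    X_aug, proba = cascade_layer(X_aug, y, X)
pred = proba.argmax(axis=1)
```

In the real framework, multiple classifiers per layer each contribute a distance vector and the cascade depth is chosen by validation performance rather than fixed at three.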