
    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, in medical image segmentation and edge detection for visual content analysis, and in medical image registration for pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail through inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure can be applied to solve a medical imaging problem; (ii) how medical images can be analysed, processed, and characterised by neural networks; and (iii) how neural networks can be extended to solve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.
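The survey's first theme, applying a known network with a fixed structure to a medical imaging problem, can be made concrete with a minimal sketch. The example below (PyTorch; the architecture, the 128x128 grayscale input, and the binary diagnostic labels are illustrative assumptions, not drawn from any surveyed paper) trains a small CNN classifier for one gradient step on dummy data.

```python
# Minimal sketch (illustrative only): a small fixed-architecture CNN applied to a
# binary computer-aided-diagnosis task on single-channel images. Layer sizes and
# the 128x128 input resolution are assumptions.
import torch
import torch.nn as nn

class SmallDiagnosisCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 64), nn.ReLU(),   # 128x128 input -> 32x32 feature maps
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One gradient step on a dummy batch of 128x128 grayscale "images".
model = SmallDiagnosisCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
images = torch.randn(4, 1, 128, 128)        # placeholder for medical image patches
labels = torch.randint(0, 2, (4,))          # placeholder diagnostic labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```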

    Deep learning in medical imaging and radiation therapy

    Full text (peer reviewed):
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases

    Cardiothoracic and pulmonary diseases are a significant cause of mortality and morbidity worldwide. The COVID-19 pandemic has highlighted the lack of access to clinical care, the overburdened medical system, and the potential of artificial intelligence (AI) to improve medicine. A variety of diseases affect the cardiopulmonary system, including lung cancers, heart disease, and tuberculosis (TB), in addition to COVID-19-related diseases. Screening, diagnosis, and management of cardiopulmonary diseases have become difficult owing to the limited availability of diagnostic tools and experts, particularly in resource-limited regions. Early screening, accurate diagnosis, and staging of these diseases could play a crucial role in treatment and care and potentially aid in reducing mortality. Radiographic imaging methods such as computed tomography (CT), chest X-rays (CXRs), and echo ultrasound (US) are widely used in screening and diagnosis. Research on image-based AI and machine learning (ML) methods can help in rapid assessment, serve as a surrogate for expert assessment, and reduce variability in human performance. In this Special Issue, “Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases”, we highlight exemplary primary research studies and literature reviews focusing on novel AI/ML methods and their application in image-based screening, diagnosis, and clinical management of cardiopulmonary diseases. We hope that these articles will help establish the advancements in AI in this field.

    Synthesize and Segment: Towards Improved Catheter Segmentation via Adversarial Augmentation

    Automatic catheter and guidewire segmentation plays an important role in robot-assisted interventions guided by fluoroscopy. Existing learning-based methods for segmentation or tracking are often limited by the scarcity of annotated samples and the difficulty of data collection; for deep-learning-based methods, the demand for large amounts of labeled data further impedes successful application. To address this, we propose a synthesize-and-segment approach with plug-in possibilities for segmentation. We show that an adversarially learned image-to-image translation network can synthesize catheters in X-ray fluoroscopy, enabling data augmentation that alleviates the low-data regime. To make the synthesized images realistic, we train the translation network with a perceptual loss coupled with similarity constraints. Existing segmentation networks are then used to learn accurate localization of catheters in a semi-supervised setting with the generated images. Empirical results on collected medical datasets show the value of our approach, with significant improvements over existing translation baseline methods.
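As a rough illustration of how a perceptual loss can be coupled with a similarity constraint when training an image-to-image translation network, the sketch below combines a frozen VGG-16 feature distance with an L1 pixel term. The choice of VGG-16, the relu3_3 feature layer, and the loss weights are assumptions made for illustration; they are not the values used in the paper.

```python
# Minimal sketch (PyTorch, illustrative only) of a perceptual + similarity loss
# for a fluoroscopy translation network; to be added to the adversarial loss.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualSimilarityLoss(nn.Module):
    def __init__(self, perceptual_weight: float = 1.0, similarity_weight: float = 10.0):
        super().__init__()
        # Frozen VGG-16 features up to relu3_3 as the perceptual feature extractor.
        self.vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.l1 = nn.L1Loss()
        self.perceptual_weight = perceptual_weight
        self.similarity_weight = similarity_weight

    def forward(self, synthesized: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Replicate single-channel fluoroscopy frames to 3 channels for VGG.
        syn3, tgt3 = synthesized.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        perceptual = self.l1(self.vgg(syn3), self.vgg(tgt3))    # feature-space distance
        similarity = self.l1(synthesized, target)               # pixel-space constraint
        return self.perceptual_weight * perceptual + self.similarity_weight * similarity

# Usage with dummy generator output and a reference frame.
loss_fn = PerceptualSimilarityLoss()
fake = torch.rand(2, 1, 256, 256)   # generator output with synthesized catheter
real = torch.rand(2, 1, 256, 256)   # reference fluoroscopy frame
loss = loss_fn(fake, real)
```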

    3D Shape Reconstruction of Knee Bones from Low Radiation X-ray Images Using Deep Learning

    Understanding the bone kinematics of the human knee during dynamic motions is necessary to evaluate pathological conditions and to design knee prostheses, orthoses, and surgical treatments such as knee arthroplasty. Knee bone kinematics is also essential for assessing the biofidelity of computational models. Kinematics of the human knee has been reported in the literature using either in vitro or in vivo methodologies. In vivo methodology is widely preferred for its biomechanical accuracy; however, it is challenging to obtain kinematic data in vivo owing to limitations in existing methods. One such method is X-ray fluoroscopy imaging, which allows non-invasive quantification of bone kinematics. Among fluoroscopy imaging methods, single-plane fluoroscopy (SF) is the preferred tool for studying the in vivo kinematics of the knee joint because of its procedural simplicity and low radiation exposure. Evaluation of three-dimensional (3D) kinematics from SF imagery is possible only if prior knowledge of the shape of the knee bones is available. The standard technique for acquiring the knee shape is to segment either Magnetic Resonance (MR) images, which are expensive to procure, or Computed Tomography (CT) images, which expose the subjects to a heavy dose of ionizing radiation. Additionally, both segmentation procedures are time-consuming and labour-intensive. An alternative, rarely used technique is to reconstruct the knee shape from the SF images themselves. It is less expensive than MR imaging, exposes the subjects to less radiation than CT imaging, and, since the kinematic study and the shape reconstruction can be carried out using the same device, it can save a considerable amount of time for researchers and subjects. However, owing to low exposure levels, SF images are often characterized by a low signal-to-noise ratio, making it difficult to extract the information required to reconstruct the shape accurately. Compared with conventional X-ray images, SF images are of lower quality and contain less detail. Additionally, existing methods for reconstructing the knee shape remain generally inconvenient because they require a highly controlled system: images must be captured with a calibrated device, care must be taken while positioning the subject's knee in the X-ray field to ensure image consistency, and user intervention and expert knowledge are required for 3D reconstruction. To simplify this process, this thesis proposes a new methodology to reconstruct the 3D shape of the knee bones from multiple uncalibrated SF images using deep learning. During image acquisition with the SF device, subjects can freely rotate their leg (in a fully extended, knee-locked position), resulting in several images captured in arbitrary poses. Relevant features are extracted from these images using a novel feature extraction technique before being fed to a custom-built Convolutional Neural Network (CNN). The network, without further optimization, directly outputs a meshed 3D surface model of the subject's knee joint. The whole procedure can be completed in a few minutes, and the robust feature extraction technique can effectively extract relevant information from a range of image qualities.
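As a very rough sketch of the kind of network described above, the example below (PyTorch) maps several single-plane fluoroscopy views to the 3D vertex coordinates of a fixed-topology template mesh. The number of views, the vertex count, and the architecture are assumptions; the thesis's custom feature-extraction step and network design are not reproduced here.

```python
# Minimal sketch (illustrative only): regress template-mesh vertex coordinates
# from a stack of uncalibrated fluoroscopy views treated as input channels.
import torch
import torch.nn as nn

class MultiViewShapeRegressor(nn.Module):
    def __init__(self, num_views: int = 4, num_vertices: int = 2000):
        super().__init__()
        # Views stacked along the channel axis; a small shared encoder pools them.
        self.encoder = nn.Sequential(
            nn.Conv2d(num_views, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regress x, y, z for every template vertex from the pooled feature vector.
        self.head = nn.Linear(128, num_vertices * 3)
        self.num_vertices = num_vertices

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, H, W) -> (batch, num_vertices, 3)
        return self.head(self.encoder(views)).view(-1, self.num_vertices, 3)

# Dummy forward pass on four 256x256 fluoroscopy views.
model = MultiViewShapeRegressor()
vertices = model(torch.randn(1, 4, 256, 256))
print(vertices.shape)   # torch.Size([1, 2000, 3])
```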
    When tested on eight unseen sets of SF images with known true geometry, the network reconstructed knee shape models with a shape error (RMSE) of 1.91 ± 0.30 mm for the femur, 2.3 ± 0.36 mm for the tibia, and 3.3 ± 0.53 mm for the patella. The error was calculated after rigidly aligning (scale, rotation, and translation) each reconstructed shape model with the corresponding known true geometry (obtained through MRI segmentation). Based on a previous study that examined the influence of reconstructed shape accuracy on the precision of tibiofemoral kinematics evaluation, the shape accuracy of the proposed methodology might be adequate for precisely tracking the bone kinematics, although further investigation is required.
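A minimal sketch of the reported evaluation step, assuming one-to-one vertex correspondence between the reconstructed and reference meshes: align the reconstruction to the MRI-derived geometry with a similarity transform (scale, rotation, translation; Umeyama's closed-form solution) and report the RMSE of the residual vertex distances. The thesis's exact alignment tooling is not specified here.

```python
# Minimal sketch (NumPy, illustrative only) of similarity-transform alignment
# followed by RMSE of the remaining vertex-to-vertex distances (in mm).
import numpy as np

def aligned_rmse(recon: np.ndarray, reference: np.ndarray) -> float:
    """recon, reference: (N, 3) arrays of corresponding vertex coordinates in mm."""
    mu_r, mu_t = recon.mean(axis=0), reference.mean(axis=0)
    rc, tc = recon - mu_r, reference - mu_t

    # Optimal rotation and scale via SVD of the cross-covariance (Umeyama, 1991).
    cov = tc.T @ rc / len(recon)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # guard against reflections
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / (rc ** 2).sum(axis=1).mean()
    t = mu_t - scale * R @ mu_r

    aligned = scale * recon @ R.T + t
    return float(np.sqrt(((aligned - reference) ** 2).sum(axis=1).mean()))

# Example: a point set recovered up to scale/pose should give ~0 mm error.
rng = np.random.default_rng(0)
true_pts = rng.normal(size=(500, 3)) * 30.0   # synthetic "reference" vertices in mm
recon_pts = 0.9 * true_pts @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + 5.0
print(f"shape RMSE: {aligned_rmse(recon_pts, true_pts):.3f} mm")
```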

    The Effectiveness of Transfer Learning Systems on Medical Images

    Deep neural networks have revolutionized the performance of many machine learning tasks such as medical image classification and segmentation. Current deep learning (DL) algorithms, specifically convolutional neural networks, are increasingly becoming the methodological choice for most medical image analysis. However, training these deep neural networks requires high computational resources and very large amounts of labeled data, which are often expensive and laborious to obtain. Meanwhile, recent studies have shown the transfer learning (TL) paradigm to be an attractive choice, offering promising solutions to the shortage of labeled medical images. TL enables us to leverage knowledge learned from related data to solve a new problem. The objective of this dissertation is to examine the effectiveness of TL systems on medical images. First, a comprehensive systematic literature review was performed to provide an up-to-date status of TL systems on medical images; specifically, we proposed a novel conceptual framework to organize the review. Second, a novel DL network was pretrained on natural images and used to evaluate the effectiveness of TL on a very large medical image dataset, specifically chest X-ray images. Lastly, domain adaptation using an autoencoder was evaluated on the medical image dataset, and the results confirmed the effectiveness of TL through fine-tuning strategies. We make several contributions to TL systems for medical image analysis. Firstly, we present a novel survey of TL on medical images and propose a new conceptual framework to organize the findings. Secondly, we propose a novel DL architecture to improve learned representations of medical images while mitigating the problem of vanishing gradients. Additionally, we identified the optimal cut-off layer (OCL) that provided the best model performance and found that the higher layers of the proposed deep model give a better feature representation for our medical image task. Finally, we analyzed the effect of domain adaptation by fine-tuning an autoencoder on our medical images and provide theoretical contributions on the application of the transductive TL approach. The contributions herein reveal several research gaps to motivate future research and contribute to the body of literature in this active research area of TL systems on medical image analysis.
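A minimal sketch of the fine-tuning setup such a study evaluates: start from a network pretrained on natural images, freeze the layers below a chosen cut-off, and train the remaining layers together with a new classification head on chest X-ray labels. ResNet-18, the cut-off at "layer3", and the 14-class output are illustrative assumptions, not the dissertation's actual architecture or identified OCL.

```python
# Minimal sketch (PyTorch, illustrative only) of transfer learning with a
# chosen cut-off layer: freeze early layers, fine-tune the rest plus a new head.
import torch.nn as nn
from torchvision.models import resnet18

def build_finetune_model(cutoff_layer: str = "layer3", num_classes: int = 14) -> nn.Module:
    model = resnet18(weights="IMAGENET1K_V1")   # pretrained on natural images

    # Freeze everything before the cut-off layer; leave the rest trainable.
    freeze = True
    for name, child in model.named_children():
        if name == cutoff_layer:
            freeze = False
        if freeze:
            for p in child.parameters():
                p.requires_grad = False

    # Replace the natural-image head with one sized for the medical task.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_finetune_model()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```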