5,304 research outputs found

    A 3D Coarse-to-Fine Framework for Volumetric Medical Image Segmentation

    In this paper, we adopt 3D Convolutional Neural Networks to segment volumetric medical images. Although deep neural networks have been proven to be very effective on many 2D vision tasks, it is still challenging to apply them to 3D tasks due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework to tackle these challenges effectively and efficiently. The proposed 3D-based framework outperforms its 2D counterpart by a large margin, since it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets which include healthy and pathological pancreases respectively, and achieve the current state-of-the-art in terms of Dice-Sørensen Coefficient (DSC). On the NIH pancreas segmentation dataset, we outperform the previous best by an average of over 2%, and the worst case is improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications.
    Comment: 9 pages, 4 figures, Accepted to 3D
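
    For reference, the Dice-Sørensen Coefficient reported above is the standard overlap measure between a predicted mask and a ground-truth mask. The sketch below is not taken from the paper; it is a minimal NumPy illustration of the metric on binary 3D volumes, and all names and the toy data are purely illustrative.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice-Sørensen Coefficient: 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy example on random binary volumes standing in for CT segmentation masks
rng = np.random.default_rng(0)
prediction = rng.random((64, 64, 64)) > 0.5
ground_truth = rng.random((64, 64, 64)) > 0.5
print(f"DSC = {dice_coefficient(prediction, ground_truth):.3f}")
```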

    A comparative study of different pre-trained deep learning models and custom CNN for pancreatic tumor detection

    Artificial Intelligence and its sub-branches, such as Machine Learning (ML) and Deep Learning (DL), have the potential to produce positive effects that directly impact human life. Medical imaging, briefly, is the practice of making the internal structure of the human body visible with various methods. With deep learning models, detection of cancer, one of the most lethal diseases in the world, becomes possible with high accuracy. Pancreatic tumor detection, one of the cancer types with the highest fatality rate, is the main target of this project, together with a dataset of computed tomography images, a medical imaging technique well suited to pancreatic cancer imaging. In image classification, a computer vision task, the transfer learning technique has gained popularity in recent years and has been applied quite frequently. Taking models pre-trained on a fairly large dataset and using them on medical images is common nowadays. The main objective of this article is to use this method, which is very popular in the medical imaging field, for the detection of PDAC, one of the deadliest types of pancreatic cancer, and to investigate how it performs compared to a custom model created and trained from scratch. The pre-trained models used in this project for the pancreatic tumor detection task are VGG-16 and ResNet, two popular Convolutional Neural Network models. With these models, early diagnosis of pancreatic cancer, which progresses insidiously, may be possible while the disease has not yet spread to neighboring tissues and organs at the start of the treatment process. Because the large volume of medical images reviewed by medical professionals is one of the main causes of the heavy workload on healthcare systems, this application can assist radiologists and other specialists in pancreatic tumor detection by providing a faster and more accurate method.
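
    The transfer-learning setup described here (ImageNet-pretrained VGG-16/ResNet backbones adapted to tumor classification) can be sketched as below. This is a hedged PyTorch illustration, not the authors' code: the specific ResNet variant, the layer freezing, the hyperparameters, and the dummy batch standing in for preprocessed CT slices are all assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 (torchvision >= 0.13 API) and replace the
# final fully connected layer with a 2-class head (tumor vs. no tumor).
# A VGG-16 backbone could be swapped in the same way.
model = models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False                   # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # new trainable classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch (placeholder for CT slices)
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```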

    Feasibility of automated 3-dimensional magnetic resonance imaging pancreas segmentation.

    Purpose: With the advent of MR-guided radiotherapy, internal organ motion can be imaged simultaneously during treatment. In this study, we evaluate the feasibility of pancreas MRI segmentation using state-of-the-art segmentation methods. Methods and materials: T2-weighted HASTE and T1-weighted VIBE images were acquired on 3 patients and 2 healthy volunteers for a total of 12 imaging volumes. A novel dictionary learning (DL) method was used to segment the pancreas and compared to mean-shift merging (MSM), distance regularized level set (DRLS), and graph cuts (GC); the segmentation results were compared to manual contours using Dice's index (DI), Hausdorff distance, and shift of the center of the organ (SHIFT). Results: All VIBE images were successfully segmented by at least one of the auto-segmentation methods with DI > 0.83 and SHIFT ≤ 2 mm using the best automated segmentation method. The automated segmentation error on HASTE images was significantly greater. DL is statistically superior to the other methods in Dice's overlapping index. For the Hausdorff distance and SHIFT measurements, DRLS and DL performed slightly better than the GC method, and substantially better than MSM. DL required the least human supervision and was faster to compute. Conclusion: Our study demonstrated the potential feasibility of automated segmentation of the pancreas on MRI images with minimal human supervision at the beginning of image acquisition. The achieved accuracy is promising for organ localization.
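
    The agreement metrics used above (Hausdorff distance and center-of-organ SHIFT) can be computed from binary masks as in the following sketch. This is not the study's code; it assumes SciPy, a known voxel spacing, and operates on the masks' voxel sets directly rather than on extracted surfaces.

```python
import numpy as np
from scipy.ndimage import center_of_mass
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (mm) between two binary 3D masks."""
    pts_a = np.argwhere(mask_a) * np.asarray(spacing)
    pts_b = np.argwhere(mask_b) * np.asarray(spacing)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

def center_shift_mm(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Euclidean shift (mm) between the centers of mass of two masks."""
    c_a = np.asarray(center_of_mass(mask_a)) * np.asarray(spacing)
    c_b = np.asarray(center_of_mass(mask_b)) * np.asarray(spacing)
    return float(np.linalg.norm(c_a - c_b))

# Toy usage on two shifted cubes standing in for pancreas masks
a = np.zeros((32, 32, 32), bool); a[8:16, 8:16, 8:16] = True
b = np.zeros((32, 32, 32), bool); b[10:18, 8:16, 8:16] = True
print(hausdorff_mm(a, b, spacing=(2.0, 2.0, 2.0)),
      center_shift_mm(a, b, spacing=(2.0, 2.0, 2.0)))
```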

    Using Quantitative Imaging for Personalized Medicine in Pancreatic Cancer: A Review of Radiomics and Deep Learning Applications

    As the most lethal major cancer, pancreatic cancer is a global healthcare challenge. Personalized medicine utilizing cutting-edge multi-omics data holds potential for major breakthroughs in tackling this critical problem. Radiomics and deep learning, two trendy quantitative imaging methods that take advantage of data science and modern medical imaging, have shown increasing promise in advancing the precision management of pancreatic cancer via diagnosis of precursor diseases, early detection, accurate diagnosis, and treatment personalization and optimization. Radiomics employs manually crafted features, while deep learning applies computer-generated automatic features. These two methods aim to mine hidden information in medical images that is missed by conventional radiology and to gain insights by systematically comparing quantitative image information across different patients in order to characterize unique imaging phenotypes. Both methods have been studied and applied in various pancreatic cancer clinical applications. In this review, we begin with an introduction to the clinical problems and the technology. After providing technical overviews of the two methods, this review focuses on the current progress of clinical applications in precancerous lesion diagnosis, pancreatic cancer detection and diagnosis, prognosis prediction, treatment stratification, and radiogenomics. The limitations of current studies and methods are discussed, along with future directions. With better standardization and optimization of the workflow from image acquisition to analysis, and with larger and especially prospective high-quality datasets, radiomics and deep learning methods could show real hope in the battle against pancreatic cancer through big data-based high-precision personalization.
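
    As a rough illustration of the distinction drawn above, hand-crafted radiomics features are explicit statistics computed over an ROI, whereas deep features are learned by the network. The sketch below computes a few standard first-order features with NumPy; it is a simplified stand-in, not any particular radiomics pipeline (dedicated libraries such as PyRadiomics implement the full feature sets).

```python
import numpy as np

def first_order_features(image: np.ndarray, roi: np.ndarray) -> dict:
    """A few standard first-order (hand-crafted) radiomics features inside an ROI mask."""
    vals = image[roi.astype(bool)].astype(float)
    hist, _ = np.histogram(vals, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "skewness": float(((vals - vals.mean()) ** 3).mean() / (vals.std() ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Example on a toy image with a square ROI
img = np.random.default_rng(0).normal(100, 20, (64, 64))
roi = np.zeros((64, 64), bool); roi[16:48, 16:48] = True
print(first_order_features(img, roi))
```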

    Acquiring Weak Annotations for Tumor Localization in Temporal and Volumetric Data

    Creating large-scale and well-annotated datasets to train AI algorithms is crucial for automated tumor detection and localization. However, with limited resources, it is challenging to determine the best type of annotations when annotating massive amounts of unlabeled data. To address this issue, we focus on polyps in colonoscopy videos and pancreatic tumors in abdominal CT scans; both applications require significant effort and time for pixel-wise annotation due to the high-dimensional nature of the data, involving either temporal or spatial dimensions. In this paper, we develop a new annotation strategy, termed Drag&Drop, which simplifies the annotation process to drag and drop. This annotation strategy is more efficient, particularly for temporal and volumetric imaging, than other types of weak annotations, such as per-pixel, bounding boxes, scribbles, ellipses, and points. Furthermore, to exploit our Drag&Drop annotations, we develop a novel weakly supervised learning method based on the watershed algorithm. Experimental results show that our method achieves better detection and localization performance than alternative weak annotations and, more importantly, achieves similar performance to that trained on detailed per-pixel annotations. Interestingly, we find that, with limited resources, allocating weak annotations from a diverse patient population can foster models more robust to unseen images than allocating per-pixel annotations for a small set of images. In summary, this research proposes an efficient annotation strategy for tumor detection and localization that is less accurate than per-pixel annotations but useful for creating large-scale datasets for screening tumors in various medical modalities.
    Comment: Published in Machine Intelligence Research
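
    The paper's Drag&Drop pipeline is not reproduced here, but the general idea of turning a single dragged box into a rough mask with the watershed algorithm can be illustrated as follows. This is a hedged 2D scikit-image sketch under assumed seeding rules (foreground seed at the box centre, background seeds on the image border); the actual method, its markers, and its post-processing may differ.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def rough_mask_from_box(image: np.ndarray, r0, c0, r1, c1) -> np.ndarray:
    """Turn a dragged box (r0, c0)-(r1, c1) into a rough mask with a seeded watershed."""
    gradient = sobel(image)                        # edge strength guides the flooding
    markers = np.zeros(image.shape, dtype=int)
    markers[(r0 + r1) // 2, (c0 + c1) // 2] = 2    # foreground seed at the box centre
    markers[0, :] = markers[-1, :] = 1             # background seeds on the image border
    markers[:, 0] = markers[:, -1] = 1
    labels = watershed(gradient, markers)
    return labels == 2

# Toy usage: a bright blob "annotated" by a drag from (20, 20) to (44, 44)
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
mask = rough_mask_from_box(img, 20, 20, 44, 44)
print(mask.sum())
```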

    Tumor Segmentation and Classification Using Machine Learning Approaches

    Medical image processing has recently developed progressively in terms of methodologies and applications to increase serviceability in health care management. Modern medical image processing employs various methods to diagnose tumors due to the burgeoning demand in the related industry. This study uses PG-DBCWMF, the HV region method, and CTSIFT extraction to identify brain tumors and pancreatic tumors. In terms of efficiency, precision, creativity, and other factors, these strategies offer improved performance in therapeutic settings. The suggested method combines three techniques: PG-DBCWMF, the HV region algorithm, and CTSIFT extraction. PG-DBCWMF (Patch Group Decision Couple Window Median Filter) works well in the preprocessing stage and eliminates noise. The HV region technique precisely calculates the vertical and horizontal angles of the known images. CTSIFT is a feature extraction method that recognizes the affected area of tumor images. The experimental evaluation used brain tumor and pancreatic tumor databases, on which the method produces the best PSNR, MSE, and other results.
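
    The image-quality figures cited above (PSNR and MSE) follow their standard definitions. The sketch below computes them with NumPy and uses a plain median filter as a stand-in for the paper's PG-DBCWMF preprocessing; the toy phantom, noise level, and filter choice are illustrative assumptions rather than the study's implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    return float(10.0 * np.log10(peak ** 2 / (mse(a, b) + 1e-12)))

# Toy pipeline: clean phantom -> noisy image -> denoised image -> quality metrics.
# A plain median filter stands in for the paper's PG-DBCWMF denoiser.
rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[40:90, 40:90] = 200.0
noisy = np.clip(clean + 25.0 * rng.standard_normal(clean.shape), 0, 255)
denoised = median_filter(noisy, size=3)
print(f"MSE = {mse(denoised, clean):.2f}, PSNR = {psnr(denoised, clean):.2f} dB")
```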

    Artificial Intelligence-based Motion Tracking in Cancer Radiotherapy: A Review

    Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Recently, artificial intelligence (AI) has demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for chest, abdomen, and pelvic tumor motion management/tracking in radiotherapy and provides a literature summary on the topic. We also discuss the limitations of these algorithms and propose potential improvements.
    Comment: 36 pages, 5 figures, 4 tables