    Development of a 3D model of clinically relevant microcalcifications

    Full text link
    A realistic 3D anthropomorphic software model of microcalcifications may serve as a useful tool to assess the performance of breast imaging applications through simulations. We present a method for simulating visually realistic microcalcifications with large morphological variability. Principal component analysis (PCA) was used to analyze the shapes of 281 biopsied microcalcifications imaged with a micro-CT scanner. The PCA requires the same number of shape components for each input microcalcification; therefore, the voxel-based microcalcifications were converted to surface meshes with the same number of vertices using a marching cubes algorithm. The vertices were registered using an iterative closest point algorithm and a simulated annealing algorithm. To evaluate the approach, input microcalcifications were reconstructed by progressively adding principal components, and the input and reconstructed microcalcifications were compared visually and quantitatively. New microcalcifications were then simulated using randomly sampled principal components drawn from the PCA of the input microcalcifications, and their realism was judged through visual assessment. Preliminary results show that input microcalcifications can be reconstructed with high visual fidelity using 62 principal components, representing 99.5% of the variance. Under that condition, the average L2 norm and Dice coefficient were 10.5 μm and 0.93, respectively. Newly generated microcalcifications built from 62 principal components were visually similar, though not identical, to the input microcalcifications. The proposed PCA model of microcalcification shapes successfully reconstructs input microcalcifications and generates new, visually realistic microcalcifications with various morphologies.
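    The pipeline the abstract describes (PCA on registered vertex coordinates, reconstruction from k components, sampling new shapes) can be sketched as follows. This is a minimal numpy illustration with random stand-in data, not the authors' implementation; the mesh extraction and registration steps are assumed to have already produced corresponding vertices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for the registered meshes: n_shapes microcalcifications,
    # each described by n_vertices corresponding 3D vertices flattened into
    # one row (hypothetical data; the paper uses 281 micro-CT meshes).
    n_shapes, n_vertices = 50, 100
    shapes = rng.normal(size=(n_shapes, n_vertices * 3))

    # PCA via SVD on the mean-centred shape vectors.
    mean_shape = shapes.mean(axis=0)
    centred = shapes - mean_shape
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)

    # Keep enough components to explain 99.5% of the variance
    # (62 components in the paper's experiments).
    explained = S**2 / (S**2).sum()
    k = int(np.searchsorted(np.cumsum(explained), 0.995)) + 1

    # Reconstruct the input shapes from their first k principal components.
    scores = centred @ Vt[:k].T            # per-shape PC coefficients
    recon = mean_shape + scores @ Vt[:k]   # back-projection to vertex space

    # Generate a new shape by sampling PC coefficients from a Gaussian with
    # the per-component spread observed in the input population.
    new_coeffs = rng.normal(scale=scores.std(axis=0))
    new_shape = mean_shape + new_coeffs @ Vt[:k]
    ```

    Because k is chosen to cover 99.5% of the variance, the reconstruction residual is small relative to the shape variation, mirroring the paper's high-fidelity reconstructions.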

    The stochastic digital human is now enrolling for in silico imaging trials -- Methods and tools for generating digital cohorts

    Full text link
    Randomized clinical trials, while often viewed as the highest evidentiary bar by which to judge the quality of a medical intervention, are far from perfect. In silico imaging trials are computational studies that seek to ascertain the performance of a medical device entirely via computer simulations. The benefits of in silico trials for evaluating new technology include significant resource and time savings, minimization of subject risk, the ability to study devices that are not achievable in the physical world, rapid and effective investigation of new technologies, and guaranteed representation from all relevant subgroups. To conduct in silico trials, digital representations of humans are needed. We review the latest developments in methods and tools for obtaining digital humans for in silico imaging studies. First, we introduce terminology and a classification of digital human models. Second, we survey available methodologies for generating digital humans with healthy and diseased status, and briefly examine the role of augmentation methods. Finally, we discuss the trade-offs of four approaches for sampling digital cohorts and the potential for study bias associated with selecting specific patient distributions.

    Simulation of Brain Resection for Cavity Segmentation Using Self-Supervised and Semi-Supervised Learning

    Get PDF
    Resective surgery may be curative for drug-resistant focal epilepsy, but only 40% to 70% of patients achieve seizure freedom after surgery. Retrospective quantitative analysis could elucidate patterns in resected structures and patient outcomes to improve resective surgery. However, the resection cavity must first be segmented on the postoperative MR image. Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique, but they require large amounts of annotated data for training. Annotation of medical images is a time-consuming process that requires highly trained raters and often suffers from high inter-rater variability. Self-supervised learning can be used to generate training instances from unlabeled data. We developed an algorithm to simulate resections on preoperative MR images. We curated a new dataset, EPISURG, comprising 431 postoperative and 269 preoperative MR images from 431 patients who underwent resective surgery. In addition to EPISURG, we used three public datasets comprising 1813 preoperative MR images for training. We trained a 3D CNN on artificially resected images created on the fly during training, using images from 1) EPISURG, 2) the public datasets and 3) both. To evaluate the trained models, we calculated the Dice score (DSC) between model segmentations and 200 manual annotations performed by three human raters. The model trained on data with manual annotations obtained a median (interquartile range) DSC of 65.3 (30.6). The DSC of our best-performing model, trained with no manual annotations, was 81.7 (14.2). For comparison, inter-rater agreement between human annotators was 84.0 (9.9). We demonstrate a training method for CNNs, using simulated resection cavities, that can accurately segment real resection cavities without manual annotations.
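    The evaluation above scores segmentations with the Dice coefficient. A minimal sketch of that metric, using small hypothetical binary masks in place of real cavity segmentations:

    ```python
    import numpy as np

    def dice_score(pred, target):
        """Dice similarity coefficient between two binary masks, in [0, 1]."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        total = pred.sum() + target.sum()
        return 2.0 * intersection / total if total else 1.0

    # Hypothetical 3D masks: a "model segmentation" vs. a rater's annotation.
    a = np.zeros((10, 10, 10), dtype=bool)
    b = np.zeros((10, 10, 10), dtype=bool)
    a[2:6, 2:6, 2:6] = True   # 4x4x4 cube
    b[3:7, 2:6, 2:6] = True   # same cube shifted by one voxel
    print(dice_score(a, b))   # overlap 3*4*4 = 48 voxels -> 2*48/128 = 0.75
    ```

    Identical masks score 1.0 and disjoint masks score 0.0, which is why the paper reports DSC values such as 81.7 on a 0-100 scale.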

    Application of Deep Learning to Optimize Computer-Aided Detection and Diagnosis of Medical Images

    Get PDF
    The field of medical imaging informatics has experienced significant advancements with the integration of artificial intelligence (AI), especially in tasks like detecting abnormalities in retinal fundus images. This dissertation focuses on four interrelated research contributions that address crucial aspects of AI in medical imaging, offering a comprehensive overview of various innovative approaches and methodologies. The first contribution involves developing a two-stage deep learning model. This model significantly improves the accuracy of identifying high-quality retinal fundus images by eliminating those with severe artifacts, highlighting the critical role of an optimal training dataset in enhancing the performance of deep learning models. The second contribution presents an innovative algorithm for synthetic data generation. This algorithm enhances the effectiveness of deep learning models in medical image analysis by augmenting datasets with synthesized annotated diseased regions pasted onto disease-free images, leading to notable improvements in disease classification accuracy. The third contribution is a novel joint deep learning model for medical image segmentation and classification. Combining a U-Net architecture with an image classification model, it demonstrates substantial accuracy improvements as the training dataset size increases. Lastly, a comparative analysis is conducted between radiomics-based and deep transfer learning-based computer-aided detection (CAD) schemes for classifying breast lesions in digital mammograms. The findings reveal the superiority of deep transfer learning methods in achieving higher classification accuracy. Collectively, these contributions offer valuable insights and practical methodologies for enhancing the efficiency and diagnostic accuracy of AI applications in medical imaging, marking a significant step forward in this rapidly evolving field.
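    The second contribution's synthetic-data idea, pasting annotated diseased regions onto disease-free images, can be sketched as follows. The circular lesion, the hard-paste blending rule, and the `paste_lesion` helper are illustrative assumptions, not the dissertation's actual algorithm:

    ```python
    import numpy as np

    def paste_lesion(image, lesion, mask, top_left):
        """Paste a lesion patch onto a disease-free image under a binary
        mask, and return the augmented image plus its annotation mask."""
        out = image.copy()
        ann = np.zeros(image.shape, dtype=bool)
        r, c = top_left
        h, w = lesion.shape
        region = out[r:r + h, c:c + w]
        region[mask] = lesion[mask]      # copy lesion pixels under the mask
        ann[r:r + h, c:c + w] = mask     # the paste location is the label
        return out, ann

    # Hypothetical data: a flat background and a bright circular "lesion".
    background = np.full((64, 64), 0.2)
    yy, xx = np.mgrid[:15, :15]
    mask = (yy - 7) ** 2 + (xx - 7) ** 2 <= 49
    lesion = np.where(mask, 0.9, 0.0)

    aug, ann = paste_lesion(background, lesion, mask, top_left=(20, 30))
    ```

    Because the annotation mask is produced alongside the image, every synthetic example comes with a free pixel-level label, which is the appeal of this style of augmentation.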

    Developing and Applying Hybrid Deep Learning Models for Computer-Aided Diagnosis of Medical Image Data

    Get PDF
    The dissertation discusses three methods to address the challenges of applying deep learning models to medical imaging. The first method involves the development of a new joint deep learning model, J-Net, that performs lesion segmentation and classification simultaneously; J-Net outperforms the individual models in accuracy on small datasets. The second method uses a two-stage deep learning model to automatically detect and remove low-quality images, producing clean training data. The third method involves multi-stage deep learning algorithms that generate synthetic medical image data, which can be used to overcome the lack of large, diverse datasets. Together, these methods demonstrate that building enhanced training datasets can play a vital role in improving the performance of deep learning models in medical imaging applications.

    Image Augmentation Techniques for Mammogram Analysis

    Get PDF
    Research in the medical imaging field has become progressively dependent on deep learning approaches. Studies show that the performance of supervised deep learning methods depends heavily on training set size, and the training images must be manually annotated by expert radiologists, which is a tiring and time-consuming task. Consequently, most freely accessible biomedical image datasets are small. Furthermore, it is challenging to build large medical image datasets due to privacy and legal issues. As a result, many supervised deep learning models are prone to overfitting and cannot produce generalized output. One of the most popular methods to mitigate this issue is data augmentation. This technique increases the training set size by applying various transformations and has been shown to improve model performance when tested on new data. This article surveys the data augmentation techniques employed on mammogram images. The study aims to provide insights into both classical and deep learning-based augmentation techniques.
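    A minimal sketch of the classical, transformation-based augmentation such surveys cover, assuming simple flips and 90-degree rotations applied to a numpy array standing in for a mammogram patch:

    ```python
    import numpy as np

    def augment(image, rng):
        """Apply a random combination of classical geometric
        transformations: vertical/horizontal flips and a 90-degree
        rotation, each chosen independently per call."""
        if rng.random() < 0.5:
            image = np.flip(image, axis=0)   # vertical flip
        if rng.random() < 0.5:
            image = np.flip(image, axis=1)   # horizontal flip
        k = rng.integers(0, 4)               # 0, 90, 180 or 270 degrees
        return np.rot90(image, k)

    rng = np.random.default_rng(2)
    img = np.arange(16.0).reshape(4, 4)      # stand-in for a mammogram patch
    batch = [augment(img, rng) for _ in range(8)]
    ```

    Each call yields a geometrically transformed copy with the same pixel values, so one annotated image can populate many training instances, which is exactly the dataset-size effect the survey discusses.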

    Deep learning to find colorectal polyps in colonoscopy: A systematic literature review

    Get PDF
    Colorectal cancer has a high incidence rate worldwide, but its early detection significantly increases the survival rate. Colonoscopy is the gold-standard procedure for diagnosing and removing colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor of colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization and segmentation. Through a systematic search, 35 works were retrieved. This systematic review analyzes these methods, stating advantages and disadvantages of the different categories used; reviews seven publicly available datasets of colonoscopy images; analyzes the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with an important presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. For detection and localization tasks, the most reported metric is recall, while Intersection over Union is widely used for segmentation. One of the major concerns is the difficulty of fair comparison and reproducibility of methods. Even despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated and publicly available database, which also includes the most convenient metrics to report results.
    Finally, it is also important to highlight that future efforts should be focused on proving the clinical value of deep learning-based methods by increasing the adenoma detection rate.
    This work was partially supported by the PICCOLO project, which has received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 732111. The sole responsibility for this publication lies with the authors; the European Union is not responsible for any use that may be made of the information contained therein. The authors would also like to thank Dr. Federico Soria for his support on this manuscript, and Dr. José Carlos Marín, from Hospital 12 de Octubre, and Dr. Ángel Calderón and Dr. Francisco Polo, from Hospital de Basurto, for the images in Fig. 4.
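    The two metrics the review highlights, recall for detection/localization and Intersection over Union for segmentation, can be sketched as follows; the counts and masks are hypothetical:

    ```python
    import numpy as np

    def recall(tp, fn):
        """Fraction of true polyps that the detector found."""
        return tp / (tp + fn)

    def iou(pred, target):
        """Intersection over Union between two binary segmentation masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        union = np.logical_or(pred, target).sum()
        return np.logical_and(pred, target).sum() / union if union else 1.0

    # Hypothetical detection result: 90 polyps found, 10 missed.
    print(recall(90, 10))      # 0.9

    # Hypothetical segmentation: two overlapping 4x4 squares on an 8x8 grid.
    pred = np.zeros((8, 8), dtype=bool); pred[:4, :4] = True
    target = np.zeros((8, 8), dtype=bool); target[2:6, :4] = True
    print(iou(pred, target))   # intersection 8, union 24 -> about 0.333
    ```

    Recall penalizes missed lesions only, while IoU also penalizes over-segmentation, which is why the two tasks report different metrics.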

    Deep learning in medical imaging and radiation therapy

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    A Textbook of Advanced Oral and Maxillofacial Surgery

    Get PDF
    The scope of oral and maxillofacial (OMF) surgery has expanded, encompassing treatment of diseases, disorders, defects and injuries of the head, face, jaws and oral cavity. This internationally recognized specialty is evolving with advancements in technology and instrumentation. Specialists in this discipline treat patients with impacted teeth, facial pain, misaligned jaws, facial trauma, oral cancer, cysts and tumors; they also perform facial cosmetic surgery and place dental implants. The contents of this volume complement Volume 1, with chapters that cover both basic and advanced concepts on complex topics in oral and maxillofacial surgery.