
    End to End Colonic Content Assessment: ColonMetry Application

    Colon segmentation; Colonic content; Intestinal gas

    The analysis of colonic contents is a valuable tool for the gastroenterologist and has multiple applications in clinical routine. Among magnetic resonance imaging (MRI) modalities, T2-weighted images allow segmentation of the colonic lumen, whereas fecal and gas contents can only be distinguished in T1-weighted images. In this paper, we present an end-to-end quasi-automatic framework that comprises all the steps needed to accurately segment the colon in T2 and T1 images and to extract and quantify colonic content and morphology data. As a consequence, physicians have gained new insights into the effects of diets and the mechanisms of abdominal distension.

    This work was supported by the Spanish Ministry of Science and Innovation (Proyectos de Generación de Conocimiento, PID2021-122295OB-I00) and by the Agencia Estatal de Investigación and Fondos FEDER (PID2021-122136OB-C21); Ciberehd is funded by the Instituto de Salud Carlos III.
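    Once the colon has been segmented and its contents labelled, the quantification step reduces to counting voxels per label and scaling by voxel size. A minimal sketch of that step, assuming a labelled segmentation is already available (the label values and voxel spacing below are illustrative, not taken from the paper):

```python
import numpy as np

def content_volumes(label_map, voxel_spacing_mm, labels={"gas": 1, "fecal": 2}):
    """Convert a labelled colon segmentation into per-content volumes (mL).

    label_map: integer array where each voxel carries a content label.
    voxel_spacing_mm: (dx, dy, dz) voxel dimensions in millimetres.
    """
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> mL
    return {name: int(np.sum(label_map == lbl)) * voxel_ml
            for name, lbl in labels.items()}

# Toy 3D label map: 4 gas voxels and 6 fecal voxels, 2x2x2 mm voxels (8 mm^3 each)
seg = np.zeros((4, 4, 4), dtype=int)
seg[0, 0, :4] = 1          # gas
seg[1, 0, :3] = 2          # fecal
seg[1, 1, :3] = 2
vols = content_volumes(seg, (2.0, 2.0, 2.0))
print(vols)  # {'gas': 0.032, 'fecal': 0.048}
```

    The same per-label counting extends directly to morphology measures such as per-segment diameters once the lumen mask is available.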

    Challenges and Promises of Radiomics for Rectal Cancer

    Moreira, J. M., Santiago, I., Santinha, J., Figueiredo, N., Marias, K., Figueiredo, M., ... Papanikolaou, N. (2019). Challenges and Promises of Radiomics for Rectal Cancer. Current Colorectal Cancer Reports, 15(6), 175-180. https://doi.org/10.1007/s11888-019-00446-y

    Purpose of Review: This literature review gathers the relevant works published on the topic of Radiomics in rectal cancer. Research on this topic has focused on finding predictors of rectal cancer staging and chemoradiation treatment response from medical images. The methods presented may, in principle, aid clinicians with appropriate treatment planning; finding suitable automatic tools for this task is important, since rectal cancer has been considered one of the most challenging oncological pathologies in recent years. Recent Findings: Radiomics is a class of methods based on the extraction of mineable, high-dimensional features from routine, standard-of-care medical imaging. These data are then fed to machine learning algorithms with the goal of automatically predicting disease stage and therapeutic response. Summary: The literature reviewed suggests that Radiomics will remain part of oncology research in the coming years. However, with very few exceptions, proper validation of method performance (mainly with external datasets) is still one of the main limitations of the field and strongly limits clinical applicability. Progress will only occur if the community opens itself to collaboration between groups, as limited data availability and shareability remain the barrier to development. Radiomics is now applied to nearly every type of cancer; for rectal cancer in particular, the need to predict treatment response will continue to demand and boost research in this field.
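    The radiomics workflow the review describes (mineable features extracted from routine imaging, fed to a model, scored against a clinical endpoint) can be sketched in a few lines. All values below are synthetic; real pipelines extract hundreds of shape, intensity, and texture features per lesion with dedicated toolkits and validate on external cohorts:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
response = rng.integers(0, 2, size=n)        # 1 = responder (hypothetical label)
# one synthetic "texture" feature, shifted upward for responders
feature = rng.normal(size=n) + 1.5 * response

def auc(scores, labels):
    """Rank-based AUC: probability a responder scores above a non-responder."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float((pos[:, None] > neg[None, :]).mean())

print(f"feature AUC for response prediction: {auc(feature, response):.2f}")
```

    The review's central caveat maps onto this sketch directly: an AUC computed on the same cohort the feature was selected from is optimistic, which is why external validation is emphasised.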

    Segment Anything Model for Medical Images?

    The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It introduced a novel promptable segmentation task, enabling zero-shot image segmentation with the pre-trained model via two main modes: automatic everything and manual prompt. SAM has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging due to the complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-ranging object scales. Meanwhile, zero-shot and efficient MIS can greatly reduce annotation time and boost the development of medical image analysis, so SAM seems a potential tool whose performance on large medical datasets should be further validated. We collected and sorted 52 open-source datasets and built a large medical segmentation dataset with 16 modalities, 68 objects, and 553K slices, named COSMOS 553K. We conducted a comprehensive analysis of different SAM testing strategies on this dataset. Extensive experiments show that SAM performs better with manual hints like points and boxes for object perception in medical images, leading to better performance in prompt mode than in everything mode. Additionally, SAM shows remarkable performance on some specific objects and modalities but is imperfect or even fails entirely in other situations. Finally, we analyze the influence of different factors (e.g., the Fourier-based boundary complexity and the size of the segmented objects) on SAM's segmentation performance, and conclude that SAM's zero-shot segmentation capability is not sufficient to ensure its direct application to MIS.

    Comment: 23 pages, 14 figures, 12 tables
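    The prompt-mode-versus-everything-mode comparison in the abstract rests on an overlap metric between each predicted mask and the ground truth, typically the Dice similarity coefficient. A minimal sketch with toy masks (the masks are illustrative, not SAM outputs):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1                              # ground truth: 16 pixels

prompt_pred = np.zeros_like(gt)
prompt_pred[2:6, 2:5] = 1                     # box-prompted: 12 px, all inside gt

auto_pred = np.zeros_like(gt)
auto_pred[0:4, 0:4] = 1                       # "everything" mode: partial overlap

print(dice(prompt_pred, gt))  # ~0.857
print(dice(auto_pred, gt))    # 0.25
```

    Aggregating such scores per object and per modality is what lets the study pinpoint where SAM's zero-shot masks hold up and where they fail.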

    Radiomics analyses for outcome prediction in patients with locally advanced rectal cancer and glioblastoma multiforme using multimodal imaging data

    Personalized treatment strategies for oncological patient management can improve outcomes in patient populations with heterogeneous treatment response. Implementing such a concept requires the identification of biomarkers that can precisely predict treatment outcome. In this thesis, we develop and validate biomarkers from multimodal imaging data for predicting outcome after treatment in patients with locally advanced rectal cancer (LARC) and in patients with newly diagnosed glioblastoma multiforme (GBM), using conventional feature-based radiomics and deep-learning (DL) based radiomics. For LARC patients, we identify promising radiomics signatures combining computed tomography (CT) and T2-weighted (T2-w) magnetic resonance imaging (MRI) with clinical parameters to predict tumour response to neoadjuvant chemoradiotherapy (nCRT). Further, analyses of externally available radiomics models for LARC reveal a lack of reproducibility and the need for standardization of the radiomics process. For patients with GBM, we use postoperative [11C] methionine positron emission tomography (MET-PET) and gadolinium-enhanced T1-w MRI to detect residual tumour status and to prognosticate time-to-recurrence (TTR) and overall survival (OS). We show that DL models built on MET-PET have improved diagnostic and prognostic value compared to MRI.
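    The multimodal signatures described here rest on early fusion: features from each modality and the clinical parameters are concatenated into one input vector before modelling. A minimal sketch, with entirely illustrative feature names and values (the thesis's actual feature sets are not reproduced here):

```python
import numpy as np

# Hypothetical per-patient feature vectors, assuming radiomics features have
# already been extracted from co-registered CT and T2-w MRI, and clinical
# parameters are numerically encoded.
ct_features = np.array([0.12, 1.8, 0.4])    # e.g. CT texture features
mri_features = np.array([2.3, 0.07])        # e.g. T2-w intensity features
clinical = np.array([64.0, 3.0])            # e.g. age, clinical T stage

# Early fusion: one combined vector per patient, fed to the outcome model.
signature_input = np.concatenate([ct_features, mri_features, clinical])
print(signature_input.shape)  # (7,)
```

    Because the modalities live on different scales, per-feature standardisation across the training cohort is normally applied before the fused vector reaches the model.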

    Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training

    Harnessing the power of pre-training on large-scale datasets like ImageNet is a fundamental building block for the progress of representation-learning-driven solutions in computer vision. Medical images are inherently different from natural images, as they are acquired in many modalities (CT, MR, PET, ultrasound, etc.) and contain granular information such as tissue, lesions, and organs. These characteristics of medical images require special attention to learning features representative of local context. In this work, we focus on designing an effective pre-training framework for 3D radiology images. First, we propose a new masking strategy called local masking, where the masking is performed across channel embeddings instead of tokens to improve the learning of local feature representations. We combine this with classical low-level perturbations, such as adding noise and downsampling, to further enable low-level representation learning. To this end, we introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations. Additionally, we devise a cross-modal contrastive loss (CMCL) to accommodate the pre-training of multiple modalities in a single framework. We curate a large-scale dataset to enable pre-training on 3D medical radiology images (MRI and CT). The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance. Notably, our proposed method tops the public test leaderboard of the BTCV multi-organ segmentation challenge.

    Comment: Preprint
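    The distinction between local masking and standard token masking can be made concrete: instead of zeroing whole tokens, a random subset of channel dimensions is zeroed inside every token. A minimal sketch of that operation (the tensor shapes and mask ratio are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(7)

def local_channel_masking(embeddings, mask_ratio=0.5):
    """Mask a random subset of channels *within each token* (zeroing them),
    rather than dropping whole tokens as in standard masked autoencoders.

    embeddings: (num_tokens, channels) array of patch embeddings.
    """
    tokens, channels = embeddings.shape
    masked = embeddings.copy()
    n_mask = int(channels * mask_ratio)
    for t in range(tokens):
        idx = rng.choice(channels, size=n_mask, replace=False)
        masked[t, idx] = 0.0
    return masked

emb = rng.normal(size=(4, 8))                 # 4 tokens, 8 channel dims
masked = local_channel_masking(emb, mask_ratio=0.5)
print((masked == 0).sum(axis=1))              # every token keeps half its channels
```

    Because every token retains part of its signal, the reconstruction target forces the model to recover fine local detail rather than infer whole missing patches from context.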

    Weakly Supervised Learning with Automated Labels from Radiology Reports for Glioma Change Detection

    Gliomas are the most frequent primary brain tumors in adults. Glioma change detection aims at finding the relevant parts of the image that change over time. Although Deep Learning (DL) shows promising performance in similar change detection tasks, the creation of large annotated datasets represents a major bottleneck for supervised DL applications in radiology. To overcome this, we propose a combined use of weak labels (imprecise but fast-to-create annotations) and Transfer Learning (TL). Specifically, we explore inductive TL, where source and target domains are identical but tasks differ due to a label shift: our target labels are created manually by three radiologists, whereas our source weak labels are generated automatically from radiology reports via NLP. We frame knowledge transfer as hyperparameter optimization, thus avoiding the heuristic choices frequent in related works. We investigate the relationship between model size and TL, comparing a low-capacity VGG with a higher-capacity ResNeXt model. We evaluate our models on 1693 T2-weighted magnetic resonance imaging difference maps created from 183 patients, classifying them as stable or unstable according to tumor evolution. The weak labels extracted from radiology reports allowed us to increase dataset size more than 3-fold and to improve VGG classification results from 75% to 82% AUC. Mixed training from scratch led to higher performance than fine-tuning or feature extraction. To assess generalizability, we ran inference on an open dataset (BraTS-2015: 15 patients, 51 difference maps), reaching up to 76% AUC. Overall, the results suggest that medical imaging problems may benefit from smaller models and different TL strategies than those used for computer vision datasets, and that report-generated weak labels are effective in improving model performance. Code, the in-house dataset, and BraTS labels are released.

    Comment: This work has been submitted as an Original Paper to a Journal
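    Two ingredients of this pipeline are easy to sketch: the difference map between two timepoints, and a weak stable/unstable label mined from report text. Both implementations below are toy stand-ins under stated assumptions (co-registered, intensity-normalised scans; a keyword rule in place of the paper's full NLP pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)

def difference_map(scan_t0, scan_t1):
    """Voxel-wise difference between two co-registered, intensity-normalised
    T2-w volumes; large values flag candidate regions of tumour change."""
    norm = lambda v: (v - v.mean()) / v.std()
    return norm(scan_t1) - norm(scan_t0)

def weak_label_from_report(report_text):
    """Toy keyword rule: mark a follow-up 'unstable' (1) if the report mentions
    progression-like terms. The paper uses a full report-mining NLP pipeline."""
    unstable_terms = ("progression", "increase", "growth", "new lesion")
    return int(any(t in report_text.lower() for t in unstable_terms))

t0 = rng.normal(size=(16, 16, 16))
t1 = t0.copy()
t1[4:8, 4:8, 4:8] += 3.0                     # simulated growing lesion
dmap = difference_map(t0, t1)

print(weak_label_from_report("Interval increase in lesion size."))        # 1
print(weak_label_from_report("Stable appearance of the resection cavity."))  # 0
```

    The weak labels are noisy by construction, which is exactly why the study treats them as a source task and transfers to the radiologist-annotated target labels rather than training on them alone.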

    A robust framework for medical image segmentation through adaptable class-specific representation

    Medical image segmentation is an increasingly important component of virtual pathology, diagnostic imaging and computer-assisted surgery. Better hardware for image acquisition and a variety of advanced visualisation methods have paved the way for the development of computer-based tools for medical image analysis and interpretation. The routine use of medical imaging scans of multiple modalities has grown over recent decades, and data sets such as the Visible Human Project have introduced a new modality in the form of colour cryo-section data. These developments have given rise to an increasing need for better automatic and semi-automatic segmentation methods. The work presented in this thesis concerns the development of a new framework for robust semi-automatic segmentation of medical imaging data of multiple modalities. Following the specification of a set of conceptual and technical requirements, the framework, known as ACSR (Adaptable Class-Specific Representation), is developed first for 2D colour cryo-section segmentation. This is achieved through a novel algorithm for adaptable class-specific sampling of point neighbourhoods, known as the PGA (Path Growing Algorithm), combined with Learning Vector Quantization. The framework is extended to accommodate 3D volume segmentation of cryo-section data and subsequently segmentation of single- and multi-channel greyscale MRI data. For the latter, the issues of inhomogeneity and noise are specifically addressed. Evaluation is based on comparison with previously published results on standard simulated and real data sets, using visual presentation, ground-truth comparison and human observer experiments. ACSR provides the user with a simple and intuitive visual initialisation process followed by a fully automatic segmentation. Results on both cryo-section and MRI data compare favourably to existing methods, demonstrating robustness both to common artefacts and to multiple user initialisations. Further developments into specific clinical applications are discussed in the future work section.
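    The Learning Vector Quantization component that ACSR's class-specific sampling feeds can be sketched in its simplest (LVQ1) form: the prototype nearest to a training sample moves toward it when the class matches and away when it does not. The prototype positions and learning rate below are illustrative:

```python
import numpy as np

def lvq1_update(prototypes, proto_labels, sample, label, lr=0.1):
    """One LVQ1 step: move the winning (nearest) prototype toward the sample
    if its class matches the sample's label, away from it otherwise."""
    d = np.linalg.norm(prototypes - sample, axis=1)
    w = int(np.argmin(d))                       # index of winning prototype
    sign = 1.0 if proto_labels[w] == label else -1.0
    prototypes[w] += sign * lr * (sample - prototypes[w])
    return w

protos = np.array([[0.0, 0.0], [1.0, 1.0]])     # one prototype per class
labels = [0, 1]
winner = lvq1_update(protos, labels, np.array([0.2, 0.0]), label=0, lr=0.5)
print(winner)  # 0 -- the class-0 prototype won and moved toward the sample
```

    After training, classifying a voxel reduces to a nearest-prototype lookup, which is what makes the fully automatic segmentation stage fast once the user's visual initialisation has seeded the class samples.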