
    Segmentation of Pathology Images: A Deep Learning Strategy with Annotated Data

    Cancer has significantly threatened human life and health for many years. In the clinic, histopathology image segmentation is the gold standard for evaluating patient prognosis and treatment outcome. Manually labelling tumour regions in hundreds of high-resolution histopathological images is time-consuming and expensive for pathologists. Recently, advances in hardware and computer vision have made deep-learning-based methods the mainstream approach to segmenting tumours automatically, significantly reducing the workload of pathologists. However, most current methods rely on large-scale labelled histopathological images. This research therefore studies label-efficient tumour segmentation methods using deep-learning paradigms to relieve the annotation burden.
Chapter 3 proposes an ensemble framework for fully-supervised tumour segmentation. The performance of an individually trained network is usually limited by the significant morphological variance of histopathological images. We propose a fully-supervised ensemble fusion model that combines shallow and deep U-Nets, trained on images of different resolutions and on different subsets of the data, for robust prediction of tumour regions. Noise is eliminated with convolutional Conditional Random Fields. Two open datasets are used to evaluate the proposed method: the ACDC@LungHP challenge at ISBI 2019 and the DigestPath challenge at MICCAI 2019. With a Dice coefficient of 79.7%, the proposed method took third place in ACDC@LungHP; on DigestPath 2019 it achieves a Dice coefficient of 77.3%. Well-annotated images are indispensable for training fully-supervised segmentation strategies, but large-scale histopathology images are rarely finely annotated in clinical practice: labels are often of poor quality, or only a few images are manually marked by experts. Consequently, fully-supervised methods cannot perform well in these cases.
Chapter 4 proposes a self-supervised contrastive learning framework for tumour segmentation, designed to reduce label dependency. An innovative contrastive learning scheme is developed to represent tumour features from unlabelled images. Unlike a standard U-Net, the backbone is a patch-based segmentation network. Data augmentation and contrastive losses are applied to improve the discriminability of tumour features, and a convolutional Conditional Random Field is used to smooth predictions and eliminate noise. Three labelled and fourteen unlabelled images are collected from a private skin cancer dataset called BSS. Experimental results show that the proposed method achieves better tumour segmentation performance than other popular self-supervised methods. However, when evaluated on the same public datasets as Chapter 3, the self-supervised method struggles with fine-grained segmentation around tumour boundaries compared to our supervised method. Chapter 5 proposes a sketch-based weakly-supervised tumour segmentation method. To segment tumour regions precisely from coarse annotations, the method combines a dual CNN-Transformer network with a global normalised class activation map. The CNN-Transformer network models global and local tumour features simultaneously, and with the global normalised class activation map a gradient-based tumour representation can be obtained from the dual network predictions. We invited experts to provide fine and coarse annotations on the private BSS dataset and the public PAIP2019 dataset to facilitate reproducible performance comparisons. On the BSS dataset, the proposed method achieves 76.686% IoU and 86.6% Dice, outperforming state-of-the-art methods; on the PAIP2019 dataset it achieves a Dice gain of 8.372% over U-Net.
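The thesis does not spell out its contrastive objective here; as a rough illustration of the general idea (pulling two augmented views of the same patch together while pushing other patches apart), the following is a minimal NumPy sketch of the standard InfoNCE / NT-Xent loss. The function name and the choice of this particular loss are assumptions, not the thesis's exact formulation.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE (NT-Xent) loss over a batch of embedding pairs.

    anchors, positives: (N, D) arrays; row i of `positives` is the
    augmented view of row i of `anchors`. Returns the mean loss.
    """
    # L2-normalise so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Diagonal entries are the positive pairs; softmax cross-entropy
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

Matched view pairs drive the loss toward zero, while mismatched pairs are penalised; the temperature controls how sharply hard negatives are weighted.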
The thesis presents three approaches to segmenting cancers from histology images: fully-supervised, self-supervised, and weakly-supervised methods. This research effectively segments tumour regions from histopathological annotations using well-designed modules, and our studies comprehensively demonstrate label-efficient automatic histopathological image segmentation. Experimental results show that our methods achieve state-of-the-art segmentation performance on private and public datasets. In the future, we plan to integrate more tumour feature representation technologies with other medical modalities and apply them to clinical research.
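The results above are reported as Dice coefficients. For reference, the Dice score between two binary segmentation masks can be computed as follows; this is a generic sketch, not the challenges' official evaluation code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks of any (matching) shape.

    Dice = 2|A ∩ B| / (|A| + |B|); eps guards against two empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score ~1.0, disjoint masks score ~0.0, and partially overlapping masks fall in between.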

    Segment Anything Model (SAM) for Radiation Oncology

    In this study, we evaluate the performance of the Segment Anything Model (SAM) in clinical radiotherapy. We collected real clinical cases from four regions at the Mayo Clinic: prostate, lung, gastrointestinal, and head & neck, which are typical treatment sites in radiation oncology. For each case, we selected the organs-at-risk (OARs) of concern in radiotherapy planning and compared the Dice and Jaccard outcomes between clinical manual delineation, automatic segmentation using SAM's "segment anything" mode, and automatic segmentation using SAM with a box prompt. Our results indicate that SAM performs better in automatic segmentation for the prostate and lung regions, while its performance in the gastrointestinal and head & neck regions is relatively inferior. When considering organ size and boundary clarity, SAM performs better for larger organs with clear boundaries, such as the lung and liver, and worse for smaller organs with unclear boundaries, such as the parotid and cochlea. These findings align with the generally accepted variations in the difficulty of manually delineating different organs at different sites in clinical radiotherapy. Given that SAM, a single trained model, could handle the delineation of OARs in four regions, these results also demonstrate SAM's robust generalization capability in automatic segmentation for radiotherapy, i.e., achieving delineation of different radiotherapy OARs with one generic automatic segmentation model. SAM's generalization across different regions makes it technically feasible to develop a generic model for automatic segmentation in radiotherapy.
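Since the study reports both Dice and Jaccard, it is worth noting that for any single pair of masks the two metrics are algebraically interchangeable: J = D / (2 − D) and D = 2J / (1 + J). A small sketch of the conversion:

```python
def jaccard_from_dice(d):
    """Convert a Dice score to the equivalent Jaccard (IoU) score
    for the same pair of masks: J = D / (2 - D)."""
    return d / (2.0 - d)

def dice_from_jaccard(j):
    """Inverse conversion: D = 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)
```

Note that the identity holds per case only; it does not hold for scores averaged over a cohort, which is why studies report both.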

    Is attention all you need in medical image analysis? A review

    Medical imaging is a key component of clinical diagnosis, treatment planning and clinical trial design, accounting for almost 90% of all healthcare data. Convolutional neural networks (CNNs) have achieved performance gains in medical image analysis (MIA) over recent years. CNNs efficiently model local pixel interactions and can be trained on small-scale MIA data. Their main disadvantage is that typical CNN models ignore global pixel relationships within images, which limits their ability to generalise to out-of-distribution data with different 'global' information. Recent progress in artificial intelligence gave rise to Transformers, which can learn global relationships from data; however, full Transformer models need to be trained on large-scale data and involve tremendous computational complexity. Attention and Transformer compartments (Transf/Attention), which retain the capacity to model global relationships, have been proposed as lighter alternatives to full Transformers. Recently, there has been an increasing trend to cross-pollinate the complementary local-global properties of CNN and Transf/Attention architectures, leading to a new era of hybrid models. The past years have witnessed substantial growth in hybrid CNN-Transf/Attention models across diverse MIA problems. In this systematic review, we survey existing hybrid CNN-Transf/Attention models, review and unravel key architectural designs, analyse breakthroughs, and evaluate current and future opportunities as well as challenges. We also introduce a comprehensive analysis framework on generalisation opportunities of scientific and clinical impact, from which new data-driven domain generalisation and adaptation methods can be stimulated.

    Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives

    Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address this issue, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey. Comment: Under Review.

    Novel 129Xe Magnetic Resonance Imaging and Spectroscopy Measurements of Pulmonary Gas-Exchange

    Gas-exchange is the primary function of the lungs and involves removing carbon dioxide from the body and exchanging it within the alveoli for inhaled oxygen. Several different pulmonary, cardiac and cardiovascular abnormalities have negative effects on pulmonary gas-exchange. Unfortunately, clinical tests do not always pinpoint the problem; sensitive and specific measurements are needed to probe the individual components participating in gas-exchange for a better understanding of pathophysiology, disease progression and response to therapy. In vivo Xenon-129 gas-exchange magnetic resonance imaging (129Xe gas-exchange MRI) has the potential to overcome these challenges. When participants inhale hyperpolarized 129Xe gas, it exhibits different MR spectral properties as a gas, as it diffuses through the alveolar membrane, and as it binds to red blood cells. 129Xe MR spectroscopy and imaging provide a way to tease out the different anatomic components of gas-exchange simultaneously and supply spatial information about where abnormalities may occur. In this thesis, I developed and applied 129Xe MR spectroscopy and imaging to measure gas-exchange in the lungs alongside other clinical and imaging measurements. I measured 129Xe gas-exchange in asymptomatic congenital heart disease and in prospective, controlled studies of long-COVID. I also developed mathematical tools to model 129Xe MR signals during acquisition and reconstruction. The insights gained from my work underscore the potential of 129Xe gas-exchange MRI biomarkers for a better understanding of cardiopulmonary disease. My work also provides a way to generate a deeper imaging and physiologic understanding of gas-exchange in vivo in healthy participants and in patients with chronic lung and heart disease.

    Volumetric memory network for interactive medical image segmentation

    Despite recent progress in automatic medical image segmentation techniques, fully automatic results usually fail to meet clinically acceptable accuracy and thus typically require further refinement. To this end, we propose a novel Volumetric Memory Network, dubbed VMN, to enable segmentation of 3D medical images in an interactive manner. Given user hints on an arbitrary slice, a 2D interaction network is first employed to produce an initial 2D segmentation for the chosen slice. The VMN then propagates the initial segmentation mask bidirectionally to all slices of the entire volume. Subsequent refinement based on additional user guidance on other slices can be incorporated in the same manner. To facilitate smooth human-in-the-loop segmentation, a quality assessment module is introduced to suggest the next slice for interaction based on the segmentation quality of each slice produced in the previous round. Our VMN demonstrates two distinctive features: first, the memory-augmented network design offers our model the ability to quickly encode past segmentation information, which is retrieved later for the segmentation of other slices; second, the quality assessment module enables the model to directly estimate the quality of each segmentation prediction, which allows for an active learning paradigm where users preferentially label the lowest-quality slice for multi-round refinement. The proposed network leads to a robust interactive segmentation engine, which can generalize well to various types of user annotations (e.g., scribble, bounding box, extreme clicking). Extensive experiments have been conducted on three public medical image segmentation datasets (i.e., MSD, KiTS19, CVC-ClinicDB), and the results clearly confirm the superiority of our approach in comparison with state-of-the-art segmentation models. The code is made publicly available at https://github.com/0liliulei/Mem3D

    Antimicrobial Peptides Aka Host Defense Peptides – From Basic Research to Therapy

    This Special Issue reprint addresses the most current and innovative developments in the field of HDP research across a range of topics, such as structure and function analysis, modes of action, antimicrobial effects, cell and animal model systems, the discovery of novel host defense peptides, and drug development.

    Multimodality Imaging in Prostate Cancer

    ABSTRACT
Prostate cancer is the most common cancer in men in Finland. Its aggressiveness varies widely, from indolent to fatal disease. Accurate characterization of prostate cancer is essential to prevent overtreatment while sustaining good survivorship and high quality of life. This is feasible using novel imaging technology and automatic tools in treatment planning. The first part of this thesis evaluated anti-1-amino-3-18F-fluorocyclobutane-1-carboxylic acid (18F-FACBC) PET/CT, PET/MRI, and multiparametric MRI (mpMRI) in the detection of primary prostate cancer. The uptake of 18F-FACBC was significantly stronger in tumors with a higher Gleason score, and it may therefore assist in targeted biopsies when combined with MRI. 18F-FACBC PET/MRI outperformed PET/CT but did not demonstrate higher diagnostic performance than mpMRI performed separately. Furthermore, PET/MRI and mpMRI failed to detect pelvic lymph node metastases measuring less than 8 mm. 18F-FACBC PET/MRI is promising for characterizing primary prostate cancer, especially if ablative treatments are planned, but it is not likely to replace mpMRI in clinical practice. The second study assessed multimodality imaging for detecting bone metastases in high-risk prostate cancer and breast cancer patients. All patients underwent 99mTc-HDP bone scintigraphy (BS), 99mTc-HDP SPECT, 99mTc-HDP SPECT/CT, 18F-NaF PET/CT, and whole-body MRI with diffusion-weighted imaging (wbMRI+DWI). 99mTc-HDP SPECT/CT, 18F-NaF PET/CT, and wbMRI+DWI had superior sensitivity compared to conventional nuclear imaging; in particular, the non-BS techniques showed fewer equivocal findings. wbMRI+DWI was as accurate as 18F-NaF PET/CT for detecting bone metastases and may be considered a potential "single-step" imaging modality for the detection of bone metastases in high-risk patients with prostate and breast cancer.
The purpose of the third study was to evaluate and validate the performance of a fully automated segmentation tool (AST) in MRI-based radiotherapy planning of prostate cancer. The tool showed high agreement with manual contouring for delineating the prostate, bladder, and rectum, supporting the adoption of AST in clinical practice. Finally, the fourth study investigated long-term toxicity after biologically guided radiotherapy in men with localized prostate cancer. Carbon-11 acetate (11C-ACE) PET/CT was used to guide dose escalation into metabolically active intraprostatic lesions. 11C-ACE PET-guided radiotherapy was feasible and well tolerated: although erectile dysfunction was relatively common, severe gastrointestinal symptoms were very rare, and no grade 3 genitourinary symptoms were present at five years after radiotherapy. The findings of this thesis have the potential to improve diagnostic imaging and radiotherapy planning in primary and metastatic prostate cancer and, ultimately, patients' quality of life and survival.
KEYWORDS: prostate cancer, magnetic resonance imaging, positron emission tomography, radiotherapy planning, toxicity, bone metastasis

    A new biomarker combining multimodal MRI radiomics and clinical indicators for differentiating inverted papilloma from nasal polyp invaded the olfactory nerve possibly

    Background and purpose: Inverted papilloma (IP) and nasal polyp (NP) are two benign lesions that are difficult to distinguish on MRI and clinically, especially when predicting whether the olfactory nerve is damaged, an important aspect of treatment and prognosis. We aim to establish a new biomarker to distinguish IP and NP that may invade the olfactory nerve, and to analyze its diagnostic efficacy.
    Materials and methods: A total of 74 cases of IP and 55 cases of NP were collected. Eighty percent of the 129 patients were used as the training set (59 IP and 44 NP); the remainder formed the testing set. As a multimodal study (two MRI sequences plus clinical indicators), preoperative MR images, including T2-weighted (T2-WI) and contrast-enhanced T1-weighted (CE-T1WI) sequences, were collected. Radiomic features were extracted from the MR images, and the least absolute shrinkage and selection operator (LASSO) regression method was used to reduce redundancy and irrelevance. The radiomics model was then constructed from a Rad-score formula. The area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the model were calculated. Finally, decision curve analysis (DCA) was used to evaluate the clinical practicability of the model.
    Results: There were significant differences in age, nasal bleeding, and hyposmia between the two lesions (p < 0.05). In total, 1,906 radiomic features were extracted from the T2-WI and CE-T1WI images; after feature selection, 12 key features were used to build the model. On the testing cohort, the optimal model achieved an AUC of 0.9121, with a sensitivity of 0.828, a specificity of 0.9091, and an accuracy of 0.899.
    Conclusion: A new biomarker combining multimodal MRI radiomics and clinical indicators can effectively distinguish between IP and NP that may invade the olfactory nerve, providing a valuable decision basis for individualized treatment.
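The LASSO step described above shrinks most radiomic coefficients to exactly zero, retaining only a handful of key features (12 in this study). The following is a minimal NumPy sketch of the underlying coordinate-descent soft-thresholding idea; the function name, hyperparameters, and data are illustrative only and do not reproduce the authors' pipeline.

```python
import numpy as np

def lasso_select(X, y, alpha=0.1, n_iter=200):
    """Coordinate-descent LASSO regression; returns the coefficient vector.

    Features whose coefficients are driven to zero by the L1 penalty
    (controlled by alpha) are effectively discarded from the model.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)  # per-feature squared norms
    for _ in range(n_iter):
        for j in range(d):
            # partial residual that excludes feature j's contribution
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # soft-thresholding: small correlations collapse to zero
            w[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
    return w
```

On synthetic data where the target depends on a single feature, the L1 penalty keeps that coefficient and zeroes out the irrelevant ones, which is exactly the redundancy-reduction role LASSO plays in a radiomics pipeline.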

    Learning strategies for improving neural networks for image segmentation under class imbalance

    This thesis aims to improve convolutional neural networks (CNNs) for image segmentation under class imbalance, i.e., when the class distributions of the training dataset are unequal. We particularly focus on medical image segmentation because of its imbalanced nature and clinical importance. Based on our observations of model behaviour, we argue that CNNs cannot generalize well on imbalanced segmentation tasks for two counterintuitive reasons. First, CNNs are prone to overfit the under-represented foreground classes, memorizing the regions of interest (ROIs) in the training data precisely because they are so rare. Second, CNNs can underfit the heterogeneous background classes, as it is difficult to learn from samples with diverse and complex characteristics. These behaviours are not limited to specific loss functions. To address these limitations, we first propose novel asymmetric variants of popular loss functions and regularization techniques, explicitly designed to increase the variance of foreground samples and counter overfitting under class imbalance. Second, we propose context label learning (CoLab) to tackle background underfitting by automatically decomposing the background class into several subclasses; this is achieved by optimizing an auxiliary task generator to produce context labels such that the main network achieves good ROI segmentation performance. Third, we propose a meta-learning-based automatic data augmentation framework that balances foreground and background samples to alleviate class imbalance. Specifically, we learn class-specific training-time data augmentation (TRA) and jointly optimize TRA with test-time data augmentation (TEA), effectively aligning the training and test data distributions for better generalization. Finally, we explore how to estimate model performance under domain shift when training with imbalanced datasets. We propose class-specific variants of existing confidence-based model evaluation methods which adapt separate parameters per class, enabling class-wise calibration to reduce model bias towards the minority classes.
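The abstract does not give the exact asymmetric losses proposed in the thesis. As an illustration of the general idea behind asymmetric loss variants for class imbalance (applying the down-weighting "focusing" term only to the well-represented background, so the rare foreground keeps its full cross-entropy gradient), here is a hedged NumPy sketch of an asymmetric focal variant of binary cross-entropy; the function name and formulation are assumptions, not the thesis's definition.

```python
import numpy as np

def asymmetric_focal_loss(p_fg, target, gamma=2.0):
    """Asymmetric focal variant of binary cross-entropy.

    p_fg:   predicted foreground probabilities, shape (N,)
    target: binary labels, 1 = foreground (the rare class)
    The focusing term p^gamma is applied only to background pixels,
    so easy background predictions are down-weighted while foreground
    pixels contribute plain cross-entropy.
    """
    p = np.clip(np.asarray(p_fg, dtype=float), 1e-7, 1 - 1e-7)
    fg = np.asarray(target) == 1
    loss = np.empty_like(p)
    loss[fg] = -np.log(p[fg])                            # plain CE on foreground
    loss[~fg] = -(p[~fg] ** gamma) * np.log(1 - p[~fg])  # focused CE on background
    return float(loss.mean())
```

Compared with symmetric focal loss, this keeps the gradient on the minority class undamped, which matches the thesis's stated goal of countering foreground overfitting without sacrificing foreground sensitivity.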