
    Learning to Segment from Scribbles using Multi-scale Adversarial Attention Gates

    Large, fine-grained image segmentation datasets, annotated at pixel-level, are difficult to obtain, particularly in medical imaging, where annotations also require expert knowledge. Weakly-supervised learning can train models by relying on weaker forms of annotation, such as scribbles. Here, we learn to segment using scribble annotations in an adversarial game. With unpaired segmentation masks, we train a multi-scale GAN to generate realistic segmentation masks at multiple resolutions, while we use scribbles to learn their correct position in the image. Central to the model's success is a novel attention gating mechanism, which we condition with adversarial signals to act as a shape prior, resulting in better object localization at multiple scales. Subject to adversarial conditioning, the segmentor learns attention maps that are semantic, suppress the noisy activations outside the objects, and reduce the vanishing gradient problem in the deeper layers of the segmentor. We evaluated our model on several medical (ACDC, LVSC, CHAOS) and non-medical (PPSS) datasets, and we report performance levels matching those achieved by models trained with fully annotated segmentation masks. We also demonstrate extensions in a variety of settings: semi-supervised learning; combining multiple scribble sources (a crowdsourcing scenario) and multi-task learning (combining scribble and mask supervision). We release expert-made scribble annotations for the ACDC dataset, and the code used for the experiments, at https://vios-s.github.io/multiscale-adversarial-attention-gates. Comment: Paper accepted for publication at IEEE Transactions on Medical Imaging. Project page: https://vios-s.github.io/multiscale-adversarial-attention-gate
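    The core idea of adversarially conditioned attention gating can be illustrated with a minimal sketch. This is a hypothetical PyTorch module, not the authors' released code: the layer choices, class name, and the use of the per-scale segmentation as the discriminator input are assumptions for illustration.

```python
# Minimal sketch of an attention gate at one scale (illustrative only; see the
# project page for the authors' actual implementation).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Gates a feature map with a learned soft attention mask and emits a coarse
    segmentation at this scale, which an adversarial discriminator can critique
    so that the attention acts as a shape prior."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # 1x1 conv predicts per-pixel, per-class logits at this resolution
        self.seg_head = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features: torch.Tensor):
        coarse_seg = self.seg_head(features)                          # (B, C, H, W) logits
        # Attention: probability that a pixel belongs to any foreground class
        attention = torch.sigmoid(coarse_seg).max(dim=1, keepdim=True).values
        gated = features * attention                                  # suppress activations outside objects
        return gated, coarse_seg                                      # coarse_seg can be fed to the discriminator

# Usage sketch: gate deep features and collect a per-scale prediction for the GAN loss
gate = AttentionGate(in_channels=64, num_classes=4)
gated_feats, seg_at_scale = gate(torch.randn(2, 64, 32, 32))
```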

    Convolutional Neural Networks for the segmentation of microcalcification in Mammography Imaging

    Clusters of microcalcifications can be an early sign of breast cancer. In this paper we propose a novel approach based on convolutional neural networks for the detection and segmentation of microcalcification clusters. In this work we used 283 mammograms to train and validate our model, obtaining an accuracy of 98.22% in the detection of preliminary suspect regions and of 97.47% in the segmentation task. Our results show how deep learning could be an effective tool to support radiologists during mammogram examination. Comment: 13 pages, 7 figures
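    A two-stage pipeline of this kind (flag suspect regions, then segment them) could look roughly like the sketch below. The architectures and names here are assumptions for illustration; the paper's actual layer configuration is not reproduced.

```python
# Hypothetical two-stage sketch: patch-level detection of suspect regions,
# then pixel-level segmentation of microcalcifications (illustrative only).
import torch
import torch.nn as nn

class PatchDetector(nn.Module):
    """Classifies small mammogram patches as suspect / not suspect."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, 1))

    def forward(self, patch):                               # patch: (B, 1, 32, 32)
        return self.classifier(self.features(patch))        # one logit per patch

class PatchSegmenter(nn.Module):
    """Produces a per-pixel microcalcification mask for suspect patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),                 # per-pixel logit
        )

    def forward(self, patch):
        return self.net(patch)

# Usage: only patches flagged as suspect are passed to the segmenter
detector, segmenter = PatchDetector(), PatchSegmenter()
patches = torch.randn(4, 1, 32, 32)
suspect = torch.sigmoid(detector(patches)).squeeze(1) > 0.5
masks = segmenter(patches[suspect])
```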

    Development of a Deep Learning system for the segmentation of mammography images

    Breast cancer is one of the leading causes of cancer mortality among women. Mammography is the reference examination for breast cancer screening in women over 40: indeed, several meta-analyses have shown a 30% reduction in breast cancer mortality. Microcalcifications can be an early sign for the diagnosis of breast cancer, detectable in mammography images, but they are often difficult for radiologists to interpret because malignant and benign lesions overlap in appearance. They appear in the mammogram as regions of high intensity with respect to the local background, with shapes ranging from circular to highly irregular geometries and with more or less sharp contours. The Breast Imaging Reporting and Data System (BIRADS) has standardised the interpretation of microcalcifications: typically benign (BIRADS 2), intermediate (BIRADS 3), with a high probability of malignancy (BIRADS 4), highly suspicious for malignancy (BIRADS 5). The classification of microcalcifications is based on the analysis of their shape, density and distribution within the breast. Unfortunately, microcalcifications are often difficult to detect because the breast contains varying amounts of connective, glandular and adipose tissue, arranged in ever-different structures, which results in a wide variety of patterns within the images. The variability of breast tissue and the projective acquisition geometry of the image make it impossible to detect calcifications automatically with a simple density-based thresholding operation. The detection process is further complicated by the large variability in microcalcification geometry, which rules out a purely morphological search. A wide variety of algorithms has been proposed for their automatic detection, including methods based on the wavelet transform, morphological filtering systems, multi-resolution analysis, Bayesian networks and SVMs. Given the difficulties that classical methods show on this particular problem, this thesis proposes an approach based on a deep convolutional neural network

    Semi-supervised and weakly-supervised learning with spatio-temporal priors in medical image segmentation

    Over the last decades, medical imaging techniques have played a crucial role in healthcare, supporting radiologists and facilitating patient diagnosis. With the advent of faster and higher-quality imaging technologies, the amount of data it is possible to collect for each patient is paving the way toward personalised medicine. As a result, automating simple image analysis operations, such as lesion localisation and quantification, would greatly help clinicians focus energy and attention on tasks best done by human intelligence. Most recently, Artificial Intelligence (AI) research has been accelerating in healthcare, providing tools that often perform on par with, or even better than, humans in conceptually simple image processing operations. In our work, we pay special attention to the problem of automating semantic segmentation, where an image is partitioned into multiple semantically meaningful regions, separating the anatomical components of interest. Unfortunately, developing effective AI segmentation tools usually needs large quantities of annotated data. Conversely, obtaining large-scale annotated datasets is difficult in medical imaging, as it requires experts and is time-consuming. For this reason, we develop automated methods to reduce the need for collecting high-quality annotated data, both in terms of the number and type of required annotations. We make this possible by constraining the data representation learned by our method to be semantic or by regularising the model predictions to satisfy data-driven spatio-temporal priors. In the thesis, we also open new avenues for future research using AI with limited annotations, which we believe is key to developing robust AI models for medical image analysis.
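    A training objective of this kind might combine a supervised term on the few annotated images with a prior-based regulariser on unannotated data. The sketch below is illustrative only: the temporal-smoothness prior and the function names are assumptions, not the specific priors developed in the thesis.

```python
# Illustrative sketch: combining annotation-driven and prior-driven loss terms
# (hypothetical names; not the thesis' actual priors).
import torch
import torch.nn.functional as F

def supervised_loss(pred_logits, target_mask):
    """Standard cross-entropy on the few annotated images."""
    return F.cross_entropy(pred_logits, target_mask)

def temporal_smoothness_prior(pred_t, pred_t_plus_1):
    """Example prior (assumed): predictions on neighbouring frames of a cine
    sequence should change smoothly over time."""
    return F.mse_loss(pred_t.softmax(dim=1), pred_t_plus_1.softmax(dim=1))

def total_loss(labelled_batch, unlabelled_pair, model, lam=0.1):
    images, masks = labelled_batch          # annotated images and pixel-level masks
    frame_t, frame_t1 = unlabelled_pair     # consecutive unannotated frames
    sup = supervised_loss(model(images), masks)
    prior = temporal_smoothness_prior(model(frame_t), model(frame_t1))
    return sup + lam * prior                # lam trades annotation-driven vs prior-driven signal
```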

    Re-using Adversarial Mask Discriminators for Test-time Training under Distribution Shifts

    Thanks to their ability to learn flexible data-driven losses, Generative Adversarial Networks (GANs) are an integral part of many semi- and weakly-supervised methods for medical image segmentation. GANs jointly optimise a generator and an adversarial discriminator on a set of training data. After training is complete, the discriminator is usually discarded, and only the generator is used for inference. But should we discard discriminators? In this work, we argue that training stable discriminators produces expressive loss functions that we can re-use at inference to detect and correct segmentation mistakes. First, we identify key challenges and suggest possible solutions to make discriminators re-usable at inference. Then, we show that we can combine discriminators with image reconstruction costs (via decoders) to endow a causal perspective to test-time training and further improve the model. Our method is simple and improves the test-time performance of pre-trained GANs. Moreover, we show that it is compatible with standard post-processing techniques and has the potential to be used for Online Continual Learning. With our work, we open new research avenues for re-using adversarial discriminators at inference. Our code is available at https://vios-s.github.io/adversarial-test-time-training. Comment: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA) https://www.melba-journal.org/papers/2022:014.htm
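    One way such a test-time training step could be structured is sketched below. This is a minimal, assumed formulation (function and variable names are illustrative, not the released code): the frozen discriminator scores the predicted mask, a decoder reconstructs the image from the mask, and only the segmentor is updated.

```python
# Minimal sketch of test-time training with a re-used mask discriminator
# (illustrative; names and loss weighting are assumptions).
import torch
import torch.nn.functional as F

def test_time_training_step(image, segmentor, discriminator, decoder, optimiser, alpha=1.0):
    """Adapt the segmentor on one test image by (i) pushing its mask towards the
    discriminator's notion of a realistic mask and (ii) asking a decoder to
    reconstruct the image from the predicted mask (reconstruction cost)."""
    optimiser.zero_grad()
    mask = segmentor(image)

    # Adversarial term: the frozen discriminator scores how realistic the mask looks.
    adv_loss = -discriminator(mask).mean()      # higher discriminator score = better mask

    # Reconstruction term: decode the mask back to image space and compare.
    rec_loss = F.l1_loss(decoder(mask), image)

    loss = adv_loss + alpha * rec_loss
    loss.backward()
    optimiser.step()    # the optimiser should wrap segmentor parameters only;
                        # discriminator and decoder stay frozen
    return loss.item()
```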