
    Learning to Segment from Scribbles using Multi-scale Adversarial Attention Gates

    Large, fine-grained image segmentation datasets, annotated at the pixel level, are difficult to obtain, particularly in medical imaging, where annotations also require expert knowledge. Weakly-supervised learning can train models by relying on weaker forms of annotation, such as scribbles. Here, we learn to segment using scribble annotations in an adversarial game. With unpaired segmentation masks, we train a multi-scale GAN to generate realistic segmentation masks at multiple resolutions, while we use scribbles to learn their correct position in the image. Central to the model's success is a novel attention gating mechanism, which we condition with adversarial signals to act as a shape prior, resulting in better object localization at multiple scales. Subject to adversarial conditioning, the segmentor learns attention maps that are semantic, suppress the noisy activations outside the objects, and reduce the vanishing-gradient problem in the deeper layers of the segmentor. We evaluated our model on several medical (ACDC, LVSC, CHAOS) and non-medical (PPSS) datasets, and we report performance levels matching those achieved by models trained with fully annotated segmentation masks. We also demonstrate extensions in a variety of settings: semi-supervised learning, combining multiple scribble sources (a crowdsourcing scenario), and multi-task learning (combining scribble and mask supervision). We release expert-made scribble annotations for the ACDC dataset, and the code used for the experiments, at https://vios-s.github.io/multiscale-adversarial-attention-gates
    Comment: Paper accepted for publication at IEEE Transactions on Medical Imaging - Project page: https://vios-s.github.io/multiscale-adversarial-attention-gate
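    The attention gating idea described above can be illustrated with a minimal NumPy sketch: a per-pixel gate in (0, 1) is computed from the feature map and multiplied back onto it, suppressing activations outside the attended regions. This is a hypothetical simplification (a single 1x1 projection, no multi-scale structure, no adversarial conditioning), not the paper's model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(features, w, b):
    """Gate a feature map with a per-pixel attention map.

    features: (H, W, C) feature map
    w:        (C,) weights of a 1x1 projection (illustrative)
    b:        scalar bias
    Returns the gated features and the attention map.
    """
    logits = features @ w + b           # (H, W) attention logits
    attn = sigmoid(logits)              # per-pixel gate in (0, 1)
    gated = features * attn[..., None]  # broadcast gate over channels
    return gated, attn

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 8, 4))
gated, attn = attention_gate(feats, w=np.ones(4), b=0.0)
```

    In the paper, such gates are additionally conditioned by adversarial signals at multiple scales so that the attention maps act as a shape prior; the sketch only shows the gating multiplication itself.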

    Convolutional Neural Networks for the segmentation of microcalcification in Mammography Imaging

    Clusters of microcalcifications can be an early sign of breast cancer. In this paper, we propose a novel approach based on convolutional neural networks for the detection and segmentation of microcalcification clusters. In this work, we used 283 mammograms to train and validate our model, obtaining an accuracy of 98.22% in the detection of preliminary suspect regions and of 97.47% in the segmentation task. Our results show how deep learning could be an effective tool to support radiologists during mammogram examination.
    Comment: 13 pages, 7 figures
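    Segmentation quality is usually scored by comparing a predicted binary mask against a ground-truth mask. A standard overlap metric for this is the Dice coefficient, sketched below in NumPy; this is a generic illustration of the metric, not the paper's evaluation code, and the toy masks are made up for the example.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 3x3 masks: 2 overlapping pixels, |pred|=3, |target|=2 -> 4/5 = 0.8
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
target = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])
score = dice_coefficient(pred, target)
```

    The `eps` term keeps the ratio defined when both masks are empty, a common convention when evaluating many small regions such as microcalcifications.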

    Controllable Image Synthesis of Industrial Data Using Stable Diffusion

    Training supervised deep neural networks that perform defect detection and segmentation requires large-scale, fully-annotated datasets, which can be hard or even impossible to obtain in industrial environments. Generative AI offers opportunities to enlarge small industrial datasets artificially, thus enabling the use of state-of-the-art supervised approaches in industry. Unfortunately, good generative models also need a lot of data to train, while industrial datasets are often tiny. Here, we propose a new approach for reusing general-purpose pre-trained generative models on industrial data, ultimately allowing the generation of self-labelled defective images. First, we let the model learn the new concept, entailing the novel data distribution. Then, we force it to learn to condition the generative process, producing industrial images that satisfy well-defined topological characteristics and show defects with a given geometry and location. To highlight the advantage of our approach, we use the synthetic dataset to optimise a crack segmentor for a real industrial use case. When the available data is small, we observe a considerable performance increase under several metrics, showing the method's potential in production environments.
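    The key property of "self-labelled defective images" is that a generated image comes with its pixel-level label for free, because the generator controls where the defect is placed. The toy NumPy sketch below illustrates that idea by rendering a crack-like polyline at a chosen location and recording its mask; it is a stand-in illustration, not the paper's diffusion-based pipeline, and all names and values are hypothetical.

```python
import numpy as np

def render_crack(size=64, start=(8, 10), length=40, seed=0):
    """Draw a jagged crack-like polyline on a clean background.

    Returns the synthetic image and its pixel-level label mask:
    the label is known exactly because we place the defect ourselves.
    """
    rng = np.random.default_rng(seed)
    img = np.full((size, size), 0.9)           # bright, defect-free surface
    mask = np.zeros((size, size), dtype=np.uint8)
    r, c = start
    for _ in range(length):
        if 0 <= r < size and 0 <= c < size:
            img[r, c] = 0.1                    # dark crack pixel
            mask[r, c] = 1                     # label comes for free
        r += 1                                 # crack grows downwards
        c += rng.integers(-1, 2)               # random lateral wander
    return img, mask

img, mask = render_crack()
```

    In the paper this role is played by a conditioned generative model producing realistic images; the principle that generation and labelling happen in one step is the same.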

    Development of a Deep Learning System for the Segmentation of Mammographic Images

    Breast cancer is one of the leading causes of cancer mortality among women. Mammography is the reference examination for breast-cancer screening in women over 40: indeed, several meta-analyses have demonstrated a 30% reduction in breast-cancer mortality. Microcalcifications can be an early sign of breast cancer, detectable in mammographic images, but they are often difficult for radiologists to interpret because malignant and benign lesions overlap in appearance. They appear in mammograms as regions of high intensity relative to the local background, with shapes ranging from circular to highly irregular geometries and with more or less sharp contours. The Breast Imaging Reporting and Data System (BIRADS) has standardised the interpretation of microcalcifications: typically benign (BIRADS2), intermediate (BIRADS3), with a high probability of malignancy (BIRADS4), and highly suspicious for malignancy (BIRADS5). The classification of microcalcifications is based on the analysis of their shape, density, and distribution within the breast. Unfortunately, microcalcifications are often hard to detect, since the breast contains varying amounts of connective, glandular, and adipose tissue, organised in ever-different structures, which produces a wide variety of patterns in the images. The variability of breast tissue and the projective acquisition geometry of the image make it impossible to detect calcifications automatically with a simple density-based thresholding operation. Detection is further complicated by the great variability in the geometry of the microcalcifications, which rules out a purely morphological search.
    A wide variety of algorithms have been proposed so far for their automatic detection, including methods based on the wavelet transform, morphological filtering, multi-resolution analysis, Bayesian networks, and SVMs. Given the difficulties that classical methods show on this particular problem, this thesis proposes an approach based on a deep convolutional neural network.

    Semi-supervised and weakly-supervised learning with spatio-temporal priors in medical image segmentation

    Over the last decades, medical imaging techniques have played a crucial role in healthcare, supporting radiologists and facilitating patient diagnosis. With the advent of faster and higher-quality imaging technologies, the amount of data that can be collected for each patient is paving the way toward personalised medicine. As a result, automating simple image-analysis operations, such as lesion localisation and quantification, would greatly help clinicians focus energy and attention on tasks best done by human intelligence. Most recently, Artificial Intelligence (AI) research is accelerating in healthcare, providing tools that often perform on par with, or even better than, humans in conceptually simple image-processing operations. In our work, we pay special attention to the problem of automating semantic segmentation, where an image is partitioned into multiple semantically meaningful regions, separating the anatomical components of interest. Unfortunately, developing effective AI segmentation tools usually needs large quantities of annotated data. Conversely, obtaining large-scale annotated datasets is difficult in medical imaging, as it requires experts and is time-consuming. For this reason, we develop automated methods to reduce the need for collecting high-quality annotated data, both in terms of the number and the type of required annotations. We make this possible by constraining the data representation learned by our method to be semantic, or by regularising the model predictions to satisfy data-driven spatio-temporal priors. In the thesis, we also open new avenues for future research using AI with limited annotations, which we believe is key to developing robust AI models for medical image analysis.