A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Segmentation in large-scale cellular electron microscopy with deep learning: A literature survey
Electron microscopy (EM) enables high-resolution imaging of tissues and cells based on 2D and 3D imaging techniques. Due to the laborious and time-consuming nature of manual segmentation of large-scale EM datasets, automated segmentation approaches are crucial. This review focuses on the progress of deep learning-based segmentation techniques in large-scale cellular EM throughout the last six years, during which significant progress has been made in both semantic and instance segmentation. A detailed account is given of the key datasets that contributed to the proliferation of deep learning in 2D and 3D EM segmentation. The review covers supervised, unsupervised, and self-supervised learning methods and examines how these algorithms were adapted to the task of segmenting cellular and sub-cellular structures in EM images. The special challenges posed by such images, such as heterogeneity and spatial complexity, and the network architectures that overcame some of them are described. Moreover, an overview of the evaluation measures used to benchmark EM datasets in various segmentation tasks is provided. Finally, an outlook on current trends and future prospects of EM segmentation is given, especially with regard to large-scale models and the use of unlabeled images to learn generic features across EM datasets.
Capsule networks: a new approach for brain imaging
In the field of neural networks for image recognition, one of the most recent and promising innovations is the use of Capsule Networks (CapsNets).
The aim of this thesis is to study the CapsNet approach for image analysis, in particular for neuroanatomical images. Modern optical microscopy techniques pose significant data-analysis challenges, owing to the sheer volume of available images and their ever finer resolution. With the goal of obtaining structural information about the cerebral cortex, new segmentation proposals can prove very useful.
To date, the most widely used approaches in this field are based on the Convolutional Neural Network (CNN), the architecture that achieves the best performance and represents the state of the art in Deep Learning results.
With this study, we aim to pave the way for a new approach that can overcome the limitations of CNNs, such as the number of parameters used and the accuracy of the result.
Applying CapsNets, which are built on the idea of emulating how vision and image processing work in the human brain, to neuroscience realizes a stimulating research paradigm aimed at pushing past the limits of our knowledge of nature, and of nature itself.
Computational Models for Automated Histopathological Assessment of Colorectal Liver Metastasis Progression
Histopathology imaging is a type of microscopy imaging commonly used for the micro-level clinical examination of a patient’s pathology. Due to the extremely large size of histopathology images, especially whole slide images (WSIs), it is difficult for pathologists to make a quantitative assessment by inspecting the details of a WSI. Hence, a computer-aided system is necessary to provide an objective and consistent assessment of the WSI for personalised treatment decisions. In this thesis, a deep learning framework for the automatic analysis of whole slide histopathology images is presented for the first time, which aims to address the challenging task of assessing and grading colorectal liver metastasis (CRLM). Quantitative evaluations of a patient’s condition with CRLM are conducted by quantifying different tissue components in resected tumorous specimens. This study mimics the visual examination process of human experts by focusing on three levels of information, the tissue level, cell level and pixel level, to achieve the step-by-step segmentation of histopathology images. At the tissue level, patches with category information are utilised to analyse the WSIs. Both classification-based and segmentation-based approaches are investigated to locate the metastasis region and quantify different components of the WSI. For the classification-based method, different factors that might affect the classification accuracy are explored using state-of-the-art deep convolutional neural networks (DCNNs). Furthermore, a novel network is proposed to merge the information from different magnification levels, incorporating contextual information to support the final decision. For the segmentation-based method, edge information from the image is integrated into the proposed fully convolutional neural network to further enhance the segmentation results.
At the cell level, nuclei-related information is examined to tackle the challenge of inadequate annotations. The problem is approached from two directions: a weakly supervised nuclei detection and classification method is presented to model the nuclei in the CRLM by integrating a traditional image processing method and a variational auto-encoder (VAE), and a novel nuclei instance segmentation framework is proposed to boost the accuracy of nuclei detection and segmentation using the idea of transfer learning. Afterwards, a fusion framework is proposed to enhance the tissue-level segmentation results by leveraging the statistical and spatial properties of the cells. At the pixel level, the segmentation problem is tackled by introducing information from the immunohistochemistry (IHC) stained images. Firstly, two data augmentation approaches, synthesis-based and transfer-based, are proposed to address the problem of insufficient pixel-level annotations. Once paired images and masks have been obtained, an end-to-end model is trained to achieve pixel-level segmentation. Secondly, another novel weakly supervised approach based on the generative adversarial network (GAN) is proposed to explore the feasibility of transforming unpaired haematoxylin and eosin (HE) images into IHC stained images. Extensive experiments reveal that the virtually stained images can also be used for pixel-level segmentation.
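As a sketch of the tissue-level, patch-based analysis described above, the snippet below tiles a whole-slide image region into fixed-size patches for patch-wise classification. This is a minimal NumPy illustration under an assumed 224-pixel patch size, not code from the thesis:

```python
import numpy as np

def extract_patches(region, patch=224, stride=224):
    """Tile an (H, W, 3) image region into non-overlapping patches
    for patch-wise classification; edge remainders are dropped."""
    h, w, _ = region.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(region[y:y + patch, x:x + patch])
            coords.append((y, x))
    return np.stack(patches), coords

# Stand-in for a region read from a WSI at a fixed magnification.
region = np.zeros((1000, 1200, 3), dtype=np.uint8)
P, coords = extract_patches(region)
print(P.shape)  # (20, 224, 224, 3): a 4 x 5 grid of patches
```

A classifier would then assign each patch a tissue category, and the `coords` list allows the per-patch predictions to be stitched back into a slide-level map.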
AI-Enabled Contextual Representations for Image-based Integration in Health and Safety
Recent advancements in the area of Artificial Intelligence (AI) have made it the field of choice for automatically processing and summarizing information in big-data domains such as high-resolution images. This approach, however, is not a one-size-fits-all solution, and must be tailored to each application. Furthermore, each application comes with its own unique set of challenges including technical variations, validation of AI solutions, and contextual information. These challenges are addressed in three human-health and safety related applications: (i) an early warning system of slope failures in open-pit mining operations; (ii) the modeling and characterization of 3D cell culture models imaged with confocal microscopy; and (iii) precision medicine of biomarker discovery from patients with glioblastoma multiforme through digital pathology. The methodologies and results in each of these domains show how tailor-made AI solutions can be used for automatically extracting and summarizing pertinent information from big-data applications for enhanced decision making.
The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis
Recently, deep learning frameworks have rapidly become the main methodology for analyzing medical images. Due to their powerful learning ability and advantages in dealing with complex patterns, deep learning algorithms are ideal for image analysis challenges, particularly in the field of digital pathology. The variety of image analysis tasks in the context of deep learning includes classification (e.g., healthy vs. cancerous tissue), detection (e.g., lymphocytes and mitosis counting), and segmentation (e.g., nuclei and glands segmentation). The majority of recent machine learning methods in digital pathology have a pre- and/or post-processing stage which is integrated with a deep neural network. These stages, based on traditional image processing methods, are employed to make the subsequent classification, detection, or segmentation problem easier to solve. Several studies have shown how the integration of pre- and post-processing methods within a deep learning pipeline can further increase the model's performance when compared to the network by itself. The aim of this review is to provide an overview of the types of methods that are used within deep learning frameworks either to optimally prepare the input (pre-processing) or to improve the results of the network output (post-processing), focusing on digital pathology image analysis. Many of the techniques presented here, especially the post-processing methods, are not limited to digital pathology but can be extended to almost any image analysis field.
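As a concrete illustration of the kind of post-processing stage the review surveys, the sketch below cleans a binary nuclei mask produced by a network: it fills holes and removes small connected components. It uses SciPy's `ndimage` module and is a generic example of a typical pipeline step, not a method from any specific paper:

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask, min_size=20):
    """Post-process a binary segmentation mask: fill holes, then
    discard connected components smaller than min_size pixels."""
    filled = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(filled)
    # Pixel count of each labeled component (labels start at 1).
    sizes = ndimage.sum(filled, labels, index=np.arange(1, n + 1))
    kept_labels = np.nonzero(sizes >= min_size)[0] + 1
    return np.isin(labels, kept_labels)

mask = np.zeros((64, 64), dtype=bool)
mask[10:30, 10:30] = True      # large nucleus
mask[18:22, 18:22] = False     # hole inside it (gets filled)
mask[50:52, 50:52] = True      # 4-pixel speck (gets removed)
out = clean_mask(mask, min_size=20)
print(out.sum())  # 400: the filled 20x20 nucleus survives, the speck is gone
```

Equivalent morphological clean-ups are often the difference between a raw network output and a mask usable for downstream counting or grading.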
Semi-Weakly Supervised Learning for Label-efficient Semantic Segmentation in Expert-driven Domains
With the help of deep learning, semantic segmentation systems have achieved impressive results, but on the basis of supervised learning, which is limited by the availability of costly, pixel-wise annotated images.
When the performance of these segmentation systems is examined in settings where hardly any annotations are available, they fall short of the high expectations raised by their performance in annotation-rich scenarios.
This dilemma weighs especially heavily when the annotations must be produced by highly trained personnel, e.g. physicians, process experts, or scientists.
Bringing well-performing segmentation models to these annotation-scarce, expert-driven domains requires new solutions.
To this end, we first investigate how poorly current segmentation models cope with extremely annotation-scarce scenarios in expert-driven imaging domains.
This leads directly to the question of whether the costly pixel-wise annotation with which segmentation models are usually trained can be bypassed entirely, or whether, used sparingly, it can conversely serve as a cost-effective impulse to get segmentation started.
We then address the question of whether different kinds of annotations, weak and pixel-wise annotations with differing costs, can be used jointly to make the annotation process more flexible.
Expert-driven domains often suffer not only from a shortage of annotations but also exhibit entirely different image characteristics, for example volumetric image data.
The transition from 2D to 3D semantic segmentation leads to voxel-wise annotation processes, which multiplies the time required for annotation by the additional dimension.
To arrive at a more manageable annotation effort, we investigate training strategies for segmentation models that require only cheaper, partial annotations or raw, unannotated volumes.
This shift in the type of training supervision makes the application of volume segmentation in expert-driven domains more realistic, since annotation costs are drastically reduced and annotators are freed from annotating whole volumes, which by their nature would also contain many visually redundant regions.
Finally, we ask whether it is possible to free the annotation experts from the strict requirement of delivering a single, specific annotation type, and to develop a training strategy that works with a broad variety of semantic information.
Such a method was developed for this purpose, and our extensive experimental evaluation brings to light interesting properties of different mixes of annotation types with respect to their segmentation performance.
Our investigations led to new research directions in semi-weakly supervised segmentation, to novel, more annotation-efficient methods and training strategies, and to experimental insights for improving annotation processes by making them annotation-efficient, expert-centered, and flexible.
Few-shot hypercolumn-based mitochondria segmentation in cardiac and outer hair cells in focused ion beam-scanning electron microscopy (FIB-SEM) data
We present a novel AI-based approach to the few-shot automated segmentation of mitochondria in large-scale electron microscopy images. Our framework leverages convolutional features from a pre-trained deep multilayer convolutional neural network, such as VGG-16. We then train a binary gradient boosting classifier on the resulting high-dimensional feature hypercolumns. We extract VGG-16 features from the first four convolutional blocks and apply bilinear upsampling to resize the obtained maps to the input image size. This procedure yields a 2688-dimensional feature hypercolumn for each pixel in a 224 × 224 input image. We then apply L1-regularized logistic regression for supervised active feature selection to reduce dependencies among the features, limit overfitting, and speed up gradient boosting-based training. During inference we process large 1728 × 2022 microscopy images in blocks. Our experiments show that in such a formulation of transfer learning our processing pipeline is able to achieve high-accuracy results on very challenging datasets containing a large number of irregularly shaped mitochondria in cardiac and outer hair cells. Our proposed few-shot training approach achieves performance competitive with the state of the art using far less training data.
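The hypercolumn construction described in the abstract above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the randomly generated toy feature maps stand in for VGG-16 activations from the first four convolutional blocks (64+64 + 128+128 + 256·3 + 512·3 = 2688 channels), which are bilinearly upsampled to a common resolution and stacked into one feature vector per pixel:

```python
import numpy as np

def bilinear_upsample(fmap, out_h, out_w):
    """Bilinearly resize a (C, h, w) feature map to (C, out_h, out_w)."""
    c, h, w = fmap.shape
    # Source-map sampling positions for each output pixel.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[None, :, None]
    wx = (xs - x0)[None, None, :]
    top = fmap[:, y0][:, :, x0] * (1 - wx) + fmap[:, y0][:, :, x1] * wx
    bot = fmap[:, y1][:, :, x0] * (1 - wx) + fmap[:, y1][:, :, x1] * wx
    return top * (1 - wy) + bot * wy

def hypercolumns(feature_maps, out_h, out_w):
    """Upsample each (C_i, h_i, w_i) map to (out_h, out_w) and stack
    them channel-wise; returns (out_h * out_w, sum C_i), one row
    (hypercolumn) per pixel."""
    ups = [bilinear_upsample(f, out_h, out_w) for f in feature_maps]
    stacked = np.concatenate(ups, axis=0)            # (sum C_i, H, W)
    return stacked.reshape(stacked.shape[0], -1).T   # (H*W, sum C_i)

# Toy stand-ins for VGG-16 block outputs at reduced resolutions.
rng = np.random.default_rng(0)
maps = [rng.standard_normal((c, s, s))
        for c, s in [(64, 32), (64, 32), (128, 16), (128, 16),
                     (256, 8), (256, 8), (256, 8),
                     (512, 4), (512, 4), (512, 4)]]
X = hypercolumns(maps, 32, 32)
print(X.shape)  # (1024, 2688)
```

A gradient boosting classifier would then be trained on the rows of `X`, one 2688-dimensional hypercolumn per pixel, after the L1-based feature selection step mentioned in the abstract.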
Development and application of molecular and computational tools to image copper in cells
Copper is a trace element that is essential for many biological processes. A deficiency or excess of copper(I) ions, copper's main oxidation state in the cellular environment, is increasingly linked to the development of neurodegenerative diseases such as Parkinson’s and Alzheimer’s disease (PD and AD). The regulatory mechanisms for copper(I) are under active investigation, and lysosomes, best known as cellular “incinerators”, have been found to play an important role in the trafficking of copper inside the cell. Therefore, it is important to develop reliable experimental methods to detect, monitor and visualise this metal in cells, and to develop tools that improve the data quality of microscopy recordings. This would enable the detailed exploration of cellular processes related to copper trafficking through lysosomes. The research presented in this thesis aimed to develop chemical and computational tools that can help to investigate concentration changes of copper(I) in cells (particularly in lysosomes), and it presents a preliminary case study that uses the microscopy image quality enhancement tools developed here to investigate lysosomal mobility changes upon treatment of cells with different PD or AD drugs.
Chapter I first reports the synthesis of a previously reported copper(I) probe (CS3). The photophysical properties of this probe and its functionality in different cell lines were tested, and it was found that this copper(I) sensor predominantly localized in lipid droplets and that its photostability and quantum yield were insufficient for long-term investigations of cellular copper trafficking. Therefore, building on the insights from this probe, a new copper(I)-selective fluorescent probe (FLCS1) was designed, synthesized, and characterized, which showed superior photophysical properties (photostability, quantum yield) over CS3. The probe showed selectivity for copper(I) over other physiologically relevant metals and showed strong colocalization with lysosomes in SH-SY5Y cells. This probe was then used to study and monitor lysosomal copper(I) levels via fluorescence lifetime imaging microscopy (FLIM); to the best of my knowledge this is the first copper(I) probe based on emission lifetime.
Chapter II explores different computational deep learning approaches for improving the quality of recorded microscopy images. In total, two existing networks (fNET, CARE) were tested, and four new networks were implemented, tested, and benchmarked for their ability to improve the signal-to-noise ratio, upscale the image size (GMFN, SRFBN-S, Zooming SlowMo), and interpolate image sequences (DAIN, Zooming SlowMo) along the z- and t-dimensions of multidimensional simulated and real-world datasets. The best-performing networks of each category were then tested in combination by sequentially applying them to a low signal-to-noise ratio, low resolution, and low frame-rate image sequence, establishing an image enhancement workstream for investigating lysosomal mobility. Additionally, the new frame interpolation networks were implemented in user-friendly Google Colab notebooks and were made publicly available to the scientific community on the ZeroCostDL4Mic platform.
Chapter III provides a preliminary case study in which the newly developed fluorescent copper(I) probe, in combination with the computational enhancement algorithms, was used to investigate the effects of five potential Parkinson’s disease drugs (rapamycin, digoxin, curcumin, trehalose, bafilomycin A1) on the mobility of lysosomes in live cells.