
    ADNet++: A few-shot learning framework for multi-class medical image volume segmentation with uncertainty-guided feature refinement

    A major barrier to applying deep segmentation models in the medical domain is their typical data-hungry nature, requiring experts to collect and label large amounts of data for training. As a reaction, prototypical few-shot segmentation (FSS) models have recently gained traction as data-efficient alternatives. Nevertheless, despite the recent progress of these models, they still have some essential shortcomings that must be addressed. In this work, we focus on three of these shortcomings: (i) the lack of uncertainty estimation, (ii) the lack of a guiding mechanism to help locate edges and encourage spatial consistency in the segmentation maps, and (iii) the models’ inability to do one-step multi-class segmentation. Without modifying or requiring a specific backbone architecture, we propose a modified prototype extraction module that facilitates the computation of uncertainty maps in prototypical FSS models, and show that the resulting maps are useful indicators of the model uncertainty. To improve the segmentation around boundaries and to encourage spatial consistency, we propose a novel feature refinement module that leverages structural information in the input space to help guide the segmentation in the feature space. Furthermore, we demonstrate how uncertainty maps can be used to automatically guide this feature refinement. Finally, to avoid ambiguous voxel predictions that occur when images are segmented class-by-class, we propose a procedure to perform one-step multi-class FSS. The efficiency of our proposed methodology is evaluated on two representative datasets for abdominal organ segmentation (CHAOS dataset and BTCV dataset) and one dataset for cardiac segmentation (MS-CMRSeg dataset). 
The results show that our proposed methodology significantly (one-sided Wilcoxon signed-rank test, p < 0.05) improves upon the baseline, increasing the overall Dice score by 5.2, 5.1, and 2.8 percentage points on the CHAOS, BTCV, and MS-CMRSeg datasets, respectively.
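The significance claim above rests on a paired, one-sided Wilcoxon signed-rank test over per-case Dice scores. As a minimal sketch of how such a comparison is run (the score values below are made up for illustration, not taken from the paper):

```python
from scipy.stats import wilcoxon

# Hypothetical per-case Dice scores for a baseline and a proposed model
baseline_dice = [0.71, 0.68, 0.75, 0.70, 0.66, 0.73, 0.69, 0.72]
proposed_dice = [0.76, 0.74, 0.79, 0.75, 0.70, 0.78, 0.72, 0.77]

# alternative="greater" tests whether the proposed scores tend to be higher
# than the paired baseline scores (one-sided test)
stat, p_value = wilcoxon(proposed_dice, baseline_dice, alternative="greater")
print(f"W = {stat}, p = {p_value:.4f}")
```

A p-value below the chosen threshold (conventionally 0.05) indicates the paired improvement is unlikely under the null hypothesis of no difference.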

    Automatic identification of chemical moieties

    In recent years, the prediction of quantum mechanical observables with machine learning methods has become increasingly popular. Message-passing neural networks (MPNNs) solve this task by constructing atomic representations, from which the properties of interest are predicted. Here, we introduce a method to automatically identify chemical moieties (molecular building blocks) from such representations, enabling a variety of applications beyond property prediction, which otherwise rely on expert knowledge. The required representation can either be provided by a pretrained MPNN or be learned from scratch using only structural information. Beyond the data-driven design of molecular fingerprints, the versatility of our approach is demonstrated by enabling the selection of representative entries in chemical databases, the automatic construction of coarse-grained force fields, as well as the identification of reaction coordinates.
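The core idea, that atoms in similar chemical environments end up with similar learned representations, can be illustrated with a generic clustering step. This is a toy sketch of that general principle, not the paper's actual method: the per-atom feature vectors are fabricated here, whereas in practice they would come from an MPNN, and plain k-means stands in for whatever grouping procedure is used.

```python
import numpy as np

# Fabricated per-atom representations: two well-separated "environments"
# (e.g. ring carbons vs. hydroxyl oxygens), 4-dimensional for simplicity.
rng = np.random.default_rng(0)
ring_atoms = rng.normal(loc=0.0, scale=0.1, size=(6, 4))
hydroxyl_atoms = rng.normal(loc=1.0, scale=0.1, size=(3, 4))
features = np.vstack([ring_atoms, hydroxyl_atoms])

# Crude k-means (Lloyd iterations) to group atoms into candidate moieties
centroids = features[[0, -1]]          # initialize from two atoms
for _ in range(10):
    dist = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
    labels = dist.argmin(axis=1)
    centroids = np.stack([features[labels == k].mean(axis=0) for k in range(2)])

print(labels)  # atoms grouped by learned chemical environment
```

Atoms whose representations cluster together are candidates for belonging to the same moiety type.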

    Advancing Segmentation and Unsupervised Learning Within the Field of Deep Learning

    Due to the large improvements that deep learning based models have brought to a variety of tasks, they have in recent years received large amounts of attention. However, these improvements are to a large extent achieved in supervised settings, where labels are available, and initially focused on traditional computer vision tasks such as visual object recognition. Specific application domains that consider images of large size and multi-modal images, as well as applications where labeled training data is challenging to obtain, have instead received less attention. This thesis aims to fill these gaps from two overall perspectives. First, we advance segmentation approaches specifically targeted towards the applications of remote sensing and medical imaging. Second, inspired by the lack of labeled data in many high-impact domains, such as medical imaging, we advance four unsupervised deep learning tasks: domain adaptation, clustering, representation learning, and zero-shot learning. The works on segmentation address the challenges of class imbalance, missing data modalities, and the modeling of uncertainty in remote sensing. Founded on the idea of pixel connectivity, we further propose a novel approach to saliency segmentation, a common pre-processing task. We illustrate that phrasing the problem as a connectivity prediction problem allows us to achieve good performance while keeping the model simple. Finally, connecting our work on segmentation and unsupervised deep learning, we propose an approach to unsupervised domain adaptation in a segmentation setting in the medical domain. Besides unsupervised domain adaptation, we further propose a novel approach to clustering based on integrating ideas from kernel methods and information-theoretic learning, achieving promising results. Based on our intuition that meaningful representations should incorporate similarities between data points, we further propose a kernelized autoencoder. Finally, we address the task of zero-shot learning based on improving knowledge propagation in graph convolutional neural networks, achieving state-of-the-art performance on the 21K-class ImageNet dataset.

    Parallelization of the Alternating-Least-Squares Algorithm With Weighted Regularization for Efficient GPU Execution in Recommender Systems

    Collaborative filtering recommender systems have become essential to many Internet services, providing, for instance, book recommendations in Amazon's online e-commerce service, music recommendations in Spotify, and movie recommendations in Netflix. Matrix factorization and Restricted Boltzmann Machines (RBMs) are two popular methods for implementing recommender systems, both providing superior accuracy over common neighborhood models. Both methods also shift much of the computation from the prediction phase to the model training phase, which enables fast predictions once the model has been trained. This thesis suggests a novel approach for performing matrix factorization using the Alternating-Least-Squares with Weighted-Lambda-Regularization (ALS-WR) algorithm on CUDA (ALS-CUDA). The algorithm is implemented and evaluated in the context of recommender systems by comparing it to other commonly used approaches, including an RBM and a stochastic gradient descent (SGD) approach. Our evaluation shows that significant speedups can be achieved by using CUDA and GPUs for training recommender systems. The ALS-CUDA algorithm implemented in this thesis provided speedup factors of up to 175.4 over the sequential CPU ALS implementation and scales linearly with the number of CUDA threads assigned to it until the GPU's shared memory has been saturated. Comparing the performance of the ALS-CUDA algorithm to CUDA implementations of the SGD and the RBM algorithms shows that the ALS-CUDA algorithm outperformed the RBM. For a sparse dataset, results indicate that the ALS-CUDA algorithm performs slightly worse than the SGD implementation, while for a dense dataset, ALS-CUDA outperforms the SGD. However, the advantage of the ALS-CUDA algorithm does not necessarily lie in its speed alone, but also in the fact that it requires fewer parameters than the SGD. It therefore represents a viable option when some speed can be traded off for algorithmic stability, or when the dataset is dense.
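For context on what each ALS-WR sweep computes: every user's factor vector is the closed-form ridge-regression solution against the items that user rated, with the penalty scaled by the user's rating count n_i (the "weighted lambda"). The sketch below shows one such user-side sweep on a tiny made-up rating matrix; all names, sizes, and the regularization value are illustrative, not from the thesis, and the GPU parallelization is of course absent here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k, lam = 5, 7, 3, 0.1
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # 1-5 star ratings
mask = rng.random((n_users, n_items)) < 0.6                    # observed entries
V = rng.normal(size=(n_items, k))                              # item factors (fixed)

def update_users(R, mask, V, lam, k):
    """One ALS-WR half-sweep: solve each user's regularized least squares."""
    U_new = np.zeros((R.shape[0], k))
    for i in range(R.shape[0]):
        obs = mask[i]
        n_i = obs.sum()
        if n_i == 0:
            continue                                   # no ratings: leave at zero
        Vi = V[obs]                                    # factors of rated items
        A = Vi.T @ Vi + lam * n_i * np.eye(k)          # weighted-lambda regularization
        b = Vi.T @ R[i, obs]
        U_new[i] = np.linalg.solve(A, b)
    return U_new

U = update_users(R, mask, V, lam, k)  # the item half-sweep is symmetric
```

Because each user's solve is independent, the per-user (and per-item) subproblems map naturally onto parallel GPU threads, which is what the CUDA implementation exploits.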

    Numerical modeling of microwave interactions with sea ice

    Remote sensing is a key instrument for monitoring sea ice surface properties over large areas. Synthetic Aperture Radar (SAR) and Real Aperture Radar (RAR) are two types of radar that are extensively used in this context; both measure the backscatter of the surface that they illuminate. Backscattering of waves from rough surfaces is complicated and depends, among other things, on the roughness of the illuminated surface and the surface's material properties. This thesis focuses on modeling the backscattering cross section of sea ice layers with rough surfaces on top of sea water, by designing a model that builds on the physical basis of electromagnetic wave theory and combines it with the Finite Element Method (FEM) approach. The model is designed to be as general as possible and can be adapted to various sea ice scenarios by modifying the chosen surface and material properties. Temperature, Density and Salinity (TDS) fieldwork measurements from Van Keulenfjord on Svalbard have been used to estimate realistic continuous permittivity profiles of sea ice using the Polder-van-Santen/de Loor mixture model, and these profiles have been incorporated into the model. The model has been validated by comparing its results for a perfectly flat surface to the Fresnel equations, and a perfect agreement was achieved. It was also successfully validated using the Bragg scattering phenomenon for periodic surfaces. Furthermore, a comparison between the results of the model and the Small Perturbation Model (SPM) was done for a slightly rough surface at different frequencies and permittivities, and clear similarities were observed. Based on the confidence from these validations, the backscatter cross section of a sea ice/sea water scenario with continuous permittivity profiles has then been modeled.
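The flat-surface validation mentioned above compares the FEM model against the Fresnel equations, which have a simple closed form. As a minimal sketch, the normal-incidence amplitude reflection coefficient for air above a half-space of complex relative permittivity eps_r is r = (1 - sqrt(eps_r)) / (1 + sqrt(eps_r)); the permittivity value below is a rough illustrative figure for sea ice at microwave frequencies, not one of the fieldwork-derived profiles.

```python
import numpy as np

# Illustrative complex relative permittivity of a sea-ice half-space
eps_ice = 3.15 - 0.002j

n = np.sqrt(eps_ice)        # complex refractive index of the lower medium
r = (1 - n) / (1 + n)       # normal-incidence Fresnel amplitude coefficient

print(f"|r|^2 = {abs(r)**2:.4f}")  # power reflectivity of the flat interface
```

A FEM solution for the same perfectly flat interface should reproduce this reflectivity, which is the kind of agreement the validation checks.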

    Quantifying the climate impact of emissions from land-based transport in Germany

    Although climate change is a global problem, specific mitigation measures are frequently applied on regional or national scales only. This is the case in particular for measures to reduce the emissions of land-based transport, which is largely characterized by regional or national systems with independent infrastructure, organization, and regulation. The climate perturbations caused by regional transport emissions are small compared to those resulting from global emissions. Consequently, they can be smaller than the detection limits in global three-dimensional chemistry-climate model simulations, hampering the evaluation of the climate benefit of mitigation strategies. Hence, we developed a new approach to solve this problem. The approach is based on a combination of a detailed three-dimensional global chemistry-climate model system, aerosol-climate response functions, and a zero-dimensional climate response model. For demonstration purposes, the approach was applied to results from a transport and emission modeling suite, which was designed to quantify the present-day and possible future transport activities in Germany and the resulting emissions. The results show that, in a baseline scenario, German transport emissions result in an increase in global mean surface temperature of the order of 0.01 K during the 21st century. This effect is dominated by the CO2 emissions, in contrast to the impact of global transport emissions, where non-CO2 species make a larger relative contribution to transport-induced climate change than in the case of German emissions. Our new approach is ready for operational use to evaluate the climate benefit of mitigation strategies to reduce the impact of transport emissions.
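The zero-dimensional climate response model in this chain works, in essence, by convolving an emission pathway with a temperature impulse-response function, which keeps small national signals well above numerical noise. This is a toy sketch of that general principle only: the response function, its decay time, and the emission numbers below are invented for illustration and are not the study's calibrated values.

```python
import numpy as np

years = np.arange(2020, 2101)
emissions = np.full(years.size, 0.15)     # GtC/yr, made-up national pathway

# Toy temperature impulse-response function: warming per unit emission,
# decaying over a few decades (illustrative amplitude and timescale)
t = np.arange(years.size)
irf = 1.5e-3 * np.exp(-t / 40.0)          # K per GtC

# Delta-T is the emissions pathway convolved with the response function
delta_T = np.convolve(emissions, irf)[: years.size]
print(f"Delta T in 2100 ~ {delta_T[-1]:.4f} K")
```

Because the convolution is linear and deterministic, even a perturbation far below the detection limit of a full three-dimensional simulation yields a well-defined temperature signal.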