Uncertainty Estimation using the Local Lipschitz for Deep Learning Image Reconstruction Models
The use of supervised deep neural network approaches has been investigated to
solve inverse problems in all domains, especially radiology where imaging
technologies are at the heart of diagnostics. However, in deployment, these
models are exposed to input distributions that are widely shifted from training
data, due in part to data biases or drifts. It becomes crucial to know whether
a given input lies outside the training data distribution before relying on the
reconstruction for diagnosis. The goal of this work is three-fold: (i)
demonstrate use of the local Lipschitz value as an uncertainty estimation
threshold for determining suitable performance, (ii) provide a method for
identifying out-of-distribution (OOD) images on which the model may not have
generalized, and (iii) use the local Lipschitz values to guide proper data
augmentation by identifying false positives, thereby decreasing epistemic
uncertainty. We provide results for both MRI reconstruction and sparse-view to
full-view CT reconstruction using the AUTOMAP and UNET architectures, since it
is pertinent in the medical domain that reconstructed images remain
diagnostically accurate.
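The abstract's central quantity, the local Lipschitz value at a given input, can be approximated empirically. A minimal sketch, assuming a generic reconstruction function `f` and a finite-difference estimate over random perturbations; the function name, sampling radius `eps`, and sample count are illustrative choices, not the paper's stated procedure:

```python
import numpy as np

def local_lipschitz(f, x, eps=1e-3, n_samples=32, rng=None):
    """Estimate the local Lipschitz value of f at x by sampling small
    random perturbations d of radius eps and taking the largest observed
    ratio ||f(x + d) - f(x)|| / ||d||."""
    rng = np.random.default_rng() if rng is None else rng
    fx = f(x)
    best = 0.0
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d *= eps / np.linalg.norm(d)  # rescale the perturbation to radius eps
        best = max(best, np.linalg.norm(f(x + d) - fx) / np.linalg.norm(d))
    return best
```

A large estimate at test time would flag an input where small perturbations move the reconstruction a lot, i.e. a candidate OOD input in the sense described above; for a linear map `f(x) = 2x` the estimate recovers the true Lipschitz constant 2.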
Learning Robust Node Representations on Graphs
Graph neural networks (GNNs), a popular methodology for node representation
learning on graphs, currently focus mainly on preserving the smoothness and
identifiability of node representations. A robust node representation on graphs
should additionally satisfy a stability property, meaning the representation is
resistant to slight perturbations of the input. In this paper, we introduce the
stability of node representations in addition to the smoothness and
identifiability, and develop a novel method called contrastive graph neural
networks (CGNN) that learns robust node representations in an unsupervised
manner. Specifically, CGNN maintains the stability and identifiability by a
contrastive learning objective, while preserving the smoothness with existing
GNN models. Furthermore, the proposed method is a generic framework that can be
equipped with many other backbone models (e.g. GCN, GraphSage and GAT).
Extensive experiments on four benchmarks under both transductive and inductive
learning setups demonstrate the effectiveness of our method in comparison with
recent supervised and unsupervised models.
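The abstract does not spell out CGNN's contrastive objective, but the stability idea can be illustrated with a generic InfoNCE-style loss over two views of the node embeddings (the original and a perturbed copy): each node's embedding should match its own counterpart in the other view and differ from other nodes'. All names and the temperature `tau` are hypothetical, not the paper's formulation:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Toy InfoNCE-style contrastive loss between two views of node
    embeddings: row i of z1 and row i of z2 form the positive pair,
    all other rows of z2 serve as negatives."""
    # cosine-similarity matrix between the two views, scaled by temperature
    a = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    b = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (a @ b.T) / tau
    # numerically stable softmax cross-entropy with the diagonal as targets
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing such a loss pushes the two views of the same node together, which is one way to encourage the stability property described above; pairing each node with the wrong counterpart (e.g. a shifted copy of the embeddings) yields a strictly larger loss.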