Confidence Estimation Using Unlabeled Data
Overconfidence is a common issue for deep neural networks, limiting their
deployment in real-world applications. To better estimate confidence, existing
methods mostly focus on fully-supervised scenarios and rely on training labels.
In this paper, we propose the first confidence estimation method for the
semi-supervised setting, in which most training labels are unavailable. We
stipulate that, even with limited training labels, we can still reasonably
approximate the model's confidence on unlabeled samples by inspecting the
consistency of its predictions throughout training. We use training
consistency as a surrogate function and propose a consistency ranking loss for
confidence estimation. On both image classification and segmentation tasks, our
method achieves state-of-the-art performances in confidence estimation.
Furthermore, we show the benefit of the proposed method through a downstream
active learning task. The code is available at
https://github.com/TopoXLab/consistency-ranking-loss
Comment: Accepted by ICLR'2
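The abstract does not spell out the loss itself; the following is only a rough sketch of the idea, in which the function names, the margin value, and the toy data are illustrative assumptions rather than the paper's actual formulation. Per-sample training consistency is computed from epoch-wise predictions and then used as a ranking target for predicted confidences:

```python
# Rough sketch of the idea in the abstract; function names, the margin
# value, and the toy data are illustrative, not from the paper.

def training_consistency(pred_history):
    """Fraction of epochs whose prediction agrees with the final one."""
    final = pred_history[-1]
    return sum(p == final for p in pred_history) / len(pred_history)

def consistency_ranking_loss(confidences, consistencies, margin=0.05):
    """Pairwise hinge loss: if sample i was more consistent during
    training than sample j, its predicted confidence should exceed
    j's by at least `margin`."""
    loss, pairs = 0.0, 0
    n = len(confidences)
    for i in range(n):
        for j in range(n):
            if consistencies[i] > consistencies[j]:
                loss += max(0.0, margin - (confidences[i] - confidences[j]))
                pairs += 1
    return loss / max(pairs, 1)

# Toy epoch-wise predicted labels for three unlabeled samples.
history = [[0, 1, 1, 1], [2, 0, 1, 2], [1, 1, 1, 1]]
cons = [training_consistency(h) for h in history]
print(consistency_ranking_loss([0.9, 0.4, 0.96], cons))
```

The ranking formulation matters here: because consistency is only a surrogate, penalizing the ordering of confidences rather than their absolute values avoids forcing the network to regress noisy surrogate targets directly.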
Learning Probabilistic Topological Representations Using Discrete Morse Theory
Accurate delineation of fine-scale structures is a very important yet
challenging problem. Existing methods use topological information as an
additional training loss, but are ultimately making pixel-wise predictions. In
this paper, we propose the first deep learning based method to learn
topological/structural representations. We use discrete Morse theory and
persistent homology to construct a one-parameter family of structures as the
topological/structural representation space. Furthermore, we learn a
probabilistic model that can perform inference tasks in such a
topological/structural representation space. Our method generates true
structures rather than pixel-maps, leading to better topological integrity in
automatic segmentation tasks. It also facilitates semi-automatic interactive
annotation/proofreading via the sampling of structures and structure-aware
uncertainty.
Comment: 16 pages, 11 figures
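The paper's actual construction via discrete Morse theory is too involved for a short snippet; as a loose stand-in for the idea of a one-parameter family of structures, the sketch below builds a superlevel-set filtration of a predicted likelihood map and uses a crude connected-component count as a proxy for the H0 information that persistent homology would track. All names and the toy grid are illustrative assumptions:

```python
# Loose illustration (not the paper's discrete-Morse construction):
# thresholding a scalar likelihood map at decreasing levels yields a
# one-parameter family of binary structures, indexed by the threshold.

def superlevel_family(grid, thresholds):
    """Return {t: binary mask of cells with value >= t}."""
    return {t: [[1 if v >= t else 0 for v in row] for row in grid]
            for t in thresholds}

def count_components(mask):
    """4-connected components via DFS -- a crude H0 summary."""
    h, w = len(mask), len(mask[0])
    seen, comps = set(), 0
    for si in range(h):
        for sj in range(w):
            if mask[si][sj] and (si, sj) not in seen:
                comps += 1
                stack = [(si, sj)]
                while stack:
                    i, j = stack.pop()
                    if ((i, j) in seen or not (0 <= i < h and 0 <= j < w)
                            or not mask[i][j]):
                        continue
                    seen.add((i, j))
                    stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return comps

# Toy 3x3 likelihood map; the structure count changes along the family.
grid = [[0.9, 0.2, 0.8],
        [0.1, 0.1, 0.7],
        [0.9, 0.6, 0.6]]
family = superlevel_family(grid, [0.5, 0.8])
print(count_components(family[0.5]), count_components(family[0.8]))
```

Sampling different members of such a family is what allows the probabilistic model to output whole structures, rather than independent pixel-wise decisions, together with structure-aware uncertainty.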
Exploring the total Galactic extinction with SDSS BHB stars
Aims: We used 12,530 photometrically-selected blue horizontal branch (BHB)
stars from the Sloan Digital Sky Survey (SDSS) to estimate the total extinction
of the Milky Way at high Galactic latitudes in each line of sight.
Methods: A Bayesian method was developed to estimate the reddening
values in the given lines of sight. Based on the most likely values of
reddening in multiple colors, we were able to derive the corresponding
extinction parameters.
Results: We selected 94 zero-reddened BHB stars from seven globular clusters
as the template. The reddening in the four SDSS colors for the northern
Galactic cap was estimated by comparing the field BHB stars with the template
stars. The accuracy of this estimation is around 0.01 mag for most lines of
sight. We also obtained an extinction coefficient of around 2.40 and an
extinction map within an uncertainty of 0.1 mag. The results, including the
reddening values in the four SDSS colors in each line of sight, are released
online. In this work, we employ an up-to-date GPU-based parallel technique to
overcome time-consuming computations. We plan to release online the C++ CUDA
code used for this analysis.
Conclusions: The extinction map derived from BHB stars is highly consistent
with that from Schlegel, Finkbeiner & Davis (1998). The derived extinction
coefficient is around 2.40; contamination probably makes it larger.
Comment: 16 pages, 13 figures, 4 tables, accepted for publication in A&
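As an illustrative sketch of the Bayesian step described in the Methods (the prior, error model, grid range, and all numbers below are assumptions, not the paper's actual model), the reddening in one line of sight can be estimated by comparing observed field-star colors against a zero-reddening template color under Gaussian photometric errors and a flat prior:

```python
# Illustrative sketch only: the prior, error model, grid range, and all
# numbers are assumptions, not the paper's actual Bayesian model.

def log_posterior(reddening, observed_colors, template_color, sigma=0.02):
    """Flat prior on reddening >= 0, Gaussian photometric errors."""
    if reddening < 0:
        return float("-inf")
    return sum(-0.5 * ((c - template_color - reddening) / sigma) ** 2
               for c in observed_colors)

def most_likely_reddening(observed_colors, template_color, step=0.001):
    """Grid search over 0 .. 0.5 mag for the posterior mode."""
    grid = [i * step for i in range(501)]
    return max(grid, key=lambda r: log_posterior(r, observed_colors,
                                                 template_color))

# Toy observed colors of field BHB stars in one line of sight,
# compared against a hypothetical zero-reddening template color.
obs = [0.115, 0.125, 0.120]
print(most_likely_reddening(obs, template_color=0.10))
```

A grid search like this is trivially parallel across lines of sight, which is consistent with the authors' choice of a GPU-based implementation for the full survey.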