Learning to predict crisp boundaries
Recent methods for boundary or edge detection built on Deep Convolutional
Neural Networks (CNNs) typically suffer from thick predicted edges that
require post-processing to obtain crisp boundaries. The highly imbalanced
ratio of boundary to background pixels in training data is one of the main
reasons for this problem. In this work, the aim is to make CNNs produce
sharp boundaries without post-processing. We introduce a novel loss for
boundary detection, which is very effective for classifying imbalanced data and
allows CNNs to produce crisp boundaries. Moreover, we propose an end-to-end
network which adopts the bottom-up/top-down architecture to tackle the task.
The proposed network effectively leverages hierarchical features and produces
a pixel-accurate boundary mask, which is critical for reconstructing the edge map.
Our experiments show that directly making crisp predictions not only improves
the visual quality of CNN outputs, but also achieves better results than the
state of the art on the BSDS500 dataset (ODS F-score of .815) and the NYU
Depth dataset (ODS F-score of .762).
Comment: Accepted to European Conf. Computer Vision (ECCV) 2018
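The abstract does not spell out its loss. As an illustrative sketch only (not the paper's actual formulation), a class-balanced binary cross-entropy, in the spirit of HED's balanced loss, shows one standard way to handle the boundary/background imbalance described above: each class's term is weighted by the opposite class's pixel frequency, so the rare edge pixels are not drowned out.

```python
import numpy as np

def balanced_bce(pred, target, eps=1e-7):
    """Class-balanced binary cross-entropy for edge maps.

    pred   : array of sigmoid probabilities in (0, 1)
    target : binary edge map of the same shape (1 = boundary pixel)

    Edge pixels are typically a tiny fraction of the image, so each
    class is weighted by the other's frequency. This is an illustrative
    baseline, not the loss proposed in the paper.
    """
    pos = target.sum()
    beta = 1.0 - pos / target.size  # fraction of background pixels
    loss = -(beta * target * np.log(pred + eps)
             + (1.0 - beta) * (1.0 - target) * np.log(1.0 - pred + eps))
    return loss.mean()
```

With this weighting, a mistake on a boundary pixel costs roughly as much in total as mistakes on all background pixels combined, which counteracts the imbalance that makes unweighted CNN predictions thick and blurry.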
Generative and Discriminative Learning with Unknown Labeling Bias
We apply robust Bayesian decision theory to improve both generative and discriminative learners under bias in class proportions in labeled training data, when the true class proportions are unknown. For the generative case, we derive an entropy-based weighting that maximizes expected log likelihood under the worst-case true class proportions. For the discriminative case, we derive a multinomial logistic model that minimizes worst-case conditional log loss. We apply our theory to the modeling of species geographic distributions from presence data, an extreme case of labeling bias since there is no absence data. On a benchmark dataset, we find that entropy-based weighting offers an improvement over constant estimates of class proportions, consistently reducing log loss on unbiased test data.
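The correction underlying this setup can be sketched with plain importance weighting: if training labels arrive with biased class proportions p_train(y) while the true proportions are q(y), weighting each example by w(y) = q(y)/p_train(y) makes the reweighted training distribution match the target. The sketch below assumes q is given; the abstract's method instead chooses q by a worst-case (entropy-based) criterion rather than assuming it, so `assumed_true_props` is a hypothetical stand-in.

```python
import numpy as np

def class_weights(labels, assumed_true_props):
    """Per-class importance weights w(y) = q(y) / p_train(y).

    labels             : iterable of class labels in the biased training set
    assumed_true_props : dict mapping class -> assumed true proportion q(y)
                         (a stand-in here; the paper derives worst-case
                         proportions instead of assuming them)
    """
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    p_train = counts / labels.size                      # empirical proportions
    q = np.asarray([assumed_true_props[c] for c in classes])
    return dict(zip(classes.tolist(), (q / p_train).tolist()))
```

Under-represented classes receive weights above 1 and over-represented ones below 1, which is the basic mechanism a worst-case choice of q then makes robust when the true proportions are unknown.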