
    Hinge-Wasserstein: Mitigating Overconfidence in Regression by Classification

    Modern deep neural networks are prone to being overconfident despite their drastically improved performance. In ambiguous or even unpredictable real-world scenarios, this overconfidence can pose a major risk to the safety of applications. For regression tasks, the regression-by-classification approach has the potential to alleviate these ambiguities by instead predicting a discrete probability density over the desired output. However, a density estimator still tends to be overconfident when trained with the common NLL loss. To mitigate the overconfidence problem, we propose a loss function, hinge-Wasserstein, based on the Wasserstein distance. This loss significantly improves the quality of both aleatoric and epistemic uncertainty, compared to previous work. We demonstrate the capabilities of the new loss on a synthetic dataset, where both types of uncertainty are controlled separately. Moreover, as a demonstration for real-world scenarios, we evaluate our approach on the benchmark dataset Horizon Lines in the Wild. On this benchmark, using the hinge-Wasserstein loss reduces the Area Under Sparsification Error (AUSE) for the horizon parameters slope and offset by 30.47% and 65.00%, respectively.
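    The abstract does not spell out the loss in closed form. The sketch below is one plausible reading, not the paper's definition: a 1D Wasserstein-1 distance computed as the L1 distance between the predicted and target CDFs over a shared bin grid, with a hypothetical per-bin hinge margin (`margin`) that forgives small CDF discrepancies. All names and the exact hinge mechanism are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def hinge_wasserstein_1d(logits, target_density, margin=0.1):
    # Predicted discrete density over K bins (regression by classification).
    pred_density = F.softmax(logits, dim=-1)              # (B, K)
    # In 1D, the Wasserstein-1 distance equals the L1 distance between CDFs.
    cdf_pred = torch.cumsum(pred_density, dim=-1)
    cdf_target = torch.cumsum(target_density, dim=-1)
    # Hypothetical hinge: per-bin CDF gaps smaller than `margin` incur no
    # loss, so the network is not driven toward a single overconfident peak.
    per_bin = F.relu((cdf_pred - cdf_target).abs() - margin)
    return per_bin.sum(dim=-1).mean()

# Toy usage: 8 samples, 64 bins, targets given as one-hot densities.
logits = torch.randn(8, 64, requires_grad=True)
target = F.one_hot(torch.randint(0, 64, (8,)), num_classes=64).float()
loss = hinge_wasserstein_1d(logits, target)
loss.backward()
```

    Under this reading, clamping small CDF differences to zero removes the incentive to concentrate all probability mass in one bin, which is one way a hinge could counteract the overconfidence that plain NLL training encourages.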

    Representation and Learning of Invariance

    A robust, fast and general method for estimation of object properties is proposed. It is based on a representation of these properties in terms of channels. Each channel represents a particular value of a property, resembling the activity of biological neurons. Furthermore, each processing unit, corresponding to an artificial neuron, is a linear perceptron which operates on outer products of input data. This implies a more complex space of invariances than in the case of first-order characteristics, without abandoning linear theory. In general, the specific function of each processing unit has to be learned, and a fast and simple learning rule is presented. The channel representation, the processing structure and the learning rule have been tested on stereo image data showing a cube in various 3D positions and orientations. The system was able to learn a channel representation for the horizontal position, the depth, and the orientation of the cube, each property invariant to the other two.
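    To illustrate the two ideas in the abstract, the sketch below encodes a scalar property value as the activities of overlapping channels and then forms the outer-product (second-order) features on which a linear perceptron would operate. The cos^2 kernel, the channel spacing, and all names are assumptions made for the sketch; the abstract fixes neither the kernel choice nor the details of the learning rule.

```python
import numpy as np

def channel_encode(x, centers, width):
    # Each channel is a localized basis function centered at one property
    # value; its activity resembles a tuned biological neuron. A cos^2
    # kernel with compact support is a common choice in the channel-
    # representation literature (an assumption here, not from the abstract).
    d = (x - centers) / width
    act = np.cos(np.pi * d / 2.0) ** 2
    act[np.abs(d) >= 1.0] = 0.0   # channels outside their support stay silent
    return act

def outer_features(v):
    # Second-order features: the processing unit is a linear perceptron
    # operating on outer products of the input, giving a richer space of
    # invariances while remaining linear in the learned weights.
    return np.outer(v, v).ravel()

centers = np.linspace(0.0, 1.0, 8)              # 8 channels spanning [0, 1]
a = channel_encode(0.37, centers, width=0.25)   # channel activities for x = 0.37
phi = outer_features(a)                         # input to the linear perceptron
```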