Differential geometric regularization for supervised learning of classifiers
We study the problem of supervised learning for both binary and multiclass classification from a unified geometric perspective. In particular, we propose a geometric regularization technique to find the submanifold corresponding to an estimator of the class probability P(y|\vec x). The regularization term measures the volume of this submanifold, based on the intuition that overfitting produces rapid local oscillations and hence a large volume of the estimator. This technique can be applied to regularize any classification function that satisfies two requirements: first, an estimator of the class probability can be obtained; second, the first and second derivatives of this estimator can be calculated. In experiments, we apply our regularization technique to standard loss functions for classification; our RBF-based implementation compares favorably to widely used regularization methods for both binary and multiclass classification. http://proceedings.mlr.press/v48/baia16.pdf (published version)
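The volume intuition can be sketched for a scalar estimator sampled on a 2-D grid. This is a minimal illustration only: the paper's regularizer is defined on the submanifold of the full class-probability estimator, and `volume_regularizer` is a hypothetical helper using the graph-area element sqrt(1 + |grad P|^2).

```python
import numpy as np

def volume_regularizer(P, h=1.0):
    """Approximate the volume (area) of the graph of a class-probability
    estimator P sampled on a regular 2-D grid with spacing h.

    Overfit estimators oscillate rapidly, so their graph has large area;
    a flat estimator has area equal to that of the domain."""
    gy, gx = np.gradient(P, h)                 # finite-difference gradient
    area_element = np.sqrt(1.0 + gx**2 + gy**2)
    return float(np.sum(area_element) * h * h)

# A flat estimator has minimal graph area; a rapidly oscillating one does not.
flat = np.full((64, 64), 0.5)
x = np.linspace(0, 8 * np.pi, 64)
wiggly = 0.5 + 0.4 * np.sin(x)[None, :] * np.sin(x)[:, None]
assert volume_regularizer(flat) < volume_regularizer(wiggly)
```

In practice this quantity would be added to the classification loss with a trade-off weight, penalizing estimators whose graphs fold and oscillate.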
An extension of min/max flow framework
In this paper, the min/max flow scheme for image restoration is revisited. The novelty consists of the following three parts. The first is to analyze the cause of speckle generation and then to modify the original scheme. The second is to point out that continued application of this scheme cannot result in an adaptive stopping of the curvature flow; the original scheme is therefore modified through the introduction of the Gradient Vector Flow (GVF) field and a zero-crossing detector, so as to control the smoothing effect. Our experimental results on image restoration show that the proposed schemes reach a steady-state solution while preserving the essential structures of objects. The third is to extend the min/max flow scheme to deal with the boundary leaking problem, which is an intrinsic shortcoming of the familiar geodesic active contour model. The min/max flow framework provides an effective way to approximate the optimal solution, and from an implementation point of view the extended scheme makes the speed function simpler and more flexible. Experimental results on segmentation and region tracking show that the boundary leaking problem can be effectively suppressed.
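The min/max selection principle behind the scheme can be sketched in a minimal form. This assumes the classical Malladi-Sethian formulation of min/max curvature flow; the paper's GVF field and zero-crossing extensions are not reproduced here, and `minmax_flow_step` is a hypothetical helper.

```python
import numpy as np

def minmax_flow_step(I, dt=0.1, window=3):
    """One explicit step of min/max curvature flow for image restoration.

    The switch: where the local window average is below a global threshold
    the flow uses min(kappa, 0), otherwise max(kappa, 0), so that smoothing
    can stop adaptively instead of shrinking every level set."""
    Iy, Ix = np.gradient(I)
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    grad2 = Ix**2 + Iy**2 + 1e-12
    # curvature of the image level sets
    kappa = (Ixx * Iy**2 - 2 * Ix * Iy * Ixy + Iyy * Ix**2) / grad2**1.5
    # local average over a small window decides the min/max switch
    pad = window // 2
    Ip = np.pad(I, pad, mode='edge')
    local = np.zeros_like(I)
    for dy in range(window):
        for dx in range(window):
            local += Ip[dy:dy + I.shape[0], dx:dx + I.shape[1]]
    local /= window * window
    F = np.where(local < I.mean(),
                 np.minimum(kappa, 0.0),
                 np.maximum(kappa, 0.0))
    return I + dt * F * np.sqrt(grad2)
```

Iterating this step smooths speckle-like oscillations while the switch suppresses further motion once the local configuration no longer favors either the min or the max branch.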
Forecasting constraints from the cosmic microwave background on eternal inflation
We forecast the ability of cosmic microwave background (CMB) temperature and
polarization datasets to constrain theories of eternal inflation using cosmic
bubble collisions. Using the Fisher matrix formalism, we determine both the
overall detectability of bubble collisions and the constraints achievable on
the fundamental parameters describing the underlying theory. The CMB signatures
considered are based on state-of-the-art numerical relativistic simulations of
the bubble collision spacetime, evolved using the full temperature and
polarization transfer functions. Comparing a theoretical
cosmic-variance-limited experiment to the WMAP and Planck satellites, we find
that there is no improvement to be gained from future temperature data, that
adding polarization improves detectability by approximately 30%, and that
cosmic-variance-limited polarization data offer only marginal improvements over
Planck. The fundamental parameter constraints achievable depend on the precise
values of the tensor-to-scalar ratio and energy density in (negative) spatial
curvature. For a tensor-to-scalar ratio of and spatial curvature at the
level of , using cosmic-variance-limited data it is possible to
measure the width of the potential barrier separating the inflating false
vacuum from the true vacuum down to , and the initial proper
distance between colliding bubbles to a factor of the false vacuum
horizon size (at three sigma). We conclude that very near-future data will have
the final word on bubble collisions in the CMB. Comment: 14 pages, 6 figures
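The Fisher-matrix formalism used for these forecasts can be sketched generically. The derivative templates and covariance below are placeholders, not the paper's simulated bubble-collision signatures; `fisher_forecast` is a hypothetical helper.

```python
import numpy as np

def fisher_forecast(dmodel_dtheta, cov):
    """Fisher-matrix parameter forecast.

    F_ij = (dm/dtheta_i)^T C^-1 (dm/dtheta_j), and the marginalized
    1-sigma errors are sqrt((F^-1)_ii).

    dmodel_dtheta: (n_params, n_data) derivatives of the observable
        (e.g. a CMB collision template) w.r.t. each model parameter.
    cov: (n_data, n_data) data covariance (cosmic variance plus noise)."""
    Cinv = np.linalg.inv(cov)
    F = dmodel_dtheta @ Cinv @ dmodel_dtheta.T
    sigma = np.sqrt(np.diag(np.linalg.inv(F)))
    return F, sigma

# Toy example: two independent parameters, unit-variance data.
F, sigma = fisher_forecast(np.array([[1.0, 0.0], [0.0, 2.0]]), np.eye(2))
```

Shrinking the covariance (adding polarization, approaching the cosmic-variance limit) tightens the forecast errors, which is exactly the comparison the abstract carries out between WMAP, Planck, and an ideal experiment.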
Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks
In this paper we propose and investigate a novel nonlinear unit, called the L_p
unit, for deep neural networks. The proposed unit receives signals from
several projections of a subset of units in the layer below and computes a
normalized L_p norm. We note two interesting interpretations of the L_p
unit. First, the proposed unit can be understood as a generalization of a
number of conventional pooling operators, such as the average, root-mean-square
and max pooling widely used in, for instance, convolutional neural networks
(CNNs), HMAX models and neocognitrons. Furthermore, the L_p unit is, to a
certain degree, similar to the recently proposed maxout unit (Goodfellow et
al., 2013), which achieved state-of-the-art object recognition results on a
number of benchmark datasets. Second, we provide a geometrical interpretation
of the activation function, based on which we argue that the L_p unit is more
efficient at representing complex, nonlinear separating boundaries. Each L_p
unit defines a superelliptic boundary, with its exact shape defined by the
order p. We claim that this makes it possible to model arbitrarily shaped,
curved boundaries more efficiently by combining a few units of different
orders. This insight justifies the need to learn a different order for each
L_p unit in the model. We empirically evaluate the proposed units on a number
of datasets and show that multilayer perceptrons (MLPs) consisting of L_p
units achieve state-of-the-art results on a number of benchmark datasets.
Furthermore, we evaluate the proposed unit on the recently proposed deep
recurrent neural networks (RNNs). Comment: ECML/PKDD 2014
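Assuming the tokens stripped from the abstract are the L_p unit of the title, a single unit can be sketched as follows; W is a hypothetical projection matrix and `lp_unit` an illustrative helper, not the authors' implementation.

```python
import numpy as np

def lp_unit(x, W, p):
    """A single L_p unit: project the input through W, then return the
    normalized L_p norm of the projection magnitudes.

    p = 1 gives average pooling over magnitudes, p = 2 root-mean-square
    pooling, and p -> infinity approaches max pooling, illustrating how
    the unit generalizes conventional pooling operators."""
    z = np.abs(W @ x)                      # magnitudes of the projections
    return float(np.mean(z ** p) ** (1.0 / p))

x = np.array([1.0, -2.0, 0.5])
W = np.eye(3)
# a large order p approximates max pooling over the projection magnitudes
assert abs(lp_unit(x, W, 50.0) - 2.0) < 0.1
```

Making p a learned parameter, one per unit, lets each unit pick the superelliptic boundary shape that best fits its region of the decision surface.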