Sparse Hopfield network reconstruction with $\ell_1$ regularization
We propose an efficient strategy to infer a sparse Hopfield network based on
magnetizations and pairwise correlations measured through Glauber sampling.
This strategy incorporates the $\ell_1$ regularization into the Bethe
approximation by a quadratic approximation to the log-likelihood, and is able
to further reduce the inference error of the Bethe approximation without the
regularization. The optimal regularization parameter is observed to be of the
order of $M^{-\nu}$, where $M$ is the number of independent samples. The value
of the scaling exponent $\nu$ depends on the performance measure, taking
different values for the root mean squared error measure and for the
misclassification rate measure. The efficiency of this strategy is demonstrated
for the sparse Hopfield model, but the method is generally applicable to other
diluted mean-field models. In particular, it is simple to implement without
heavy computational cost.
Comment: 9 pages, 3 figures, Eur. Phys. J. B (in press)
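To make this kind of sparse inference concrete, here is a minimal sketch assuming the Glauber samples are available as an (M, N) array of +/-1 spins. It substitutes a naive mean-field inversion followed by soft-thresholding for the paper's $\ell_1$-regularized Bethe-approximation update; the function name infer_couplings and the placeholder samples are illustrative only.

```python
# Minimal sketch: sparse coupling inference from spin samples. Naive
# mean-field inversion plus soft-thresholding stands in for the paper's
# l1-regularized Bethe-approximation method (an assumption, not the
# original algorithm).
import numpy as np

def infer_couplings(samples, lam):
    """samples: (M, N) array of +/-1 spins; lam: regularization strength."""
    C = np.cov(samples, rowvar=False)   # connected pairwise correlations
    J = -np.linalg.inv(C)               # naive mean-field coupling estimate
    np.fill_diagonal(J, 0.0)
    # Soft-thresholding, the proximal step of an l1 penalty: couplings with
    # magnitude below lam are set exactly to zero, yielding a sparse network.
    return np.sign(J) * np.maximum(np.abs(J) - lam, 0.0)

rng = np.random.default_rng(0)
M, N = 5000, 20
samples = rng.choice([-1.0, 1.0], size=(M, N))   # placeholder spin samples
J_hat = infer_couplings(samples, lam=M ** -0.5)  # lam of order M^{-nu}
```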
Generalized incompressible flows, multi-marginal transport and Sinkhorn algorithm
Starting from Brenier's relaxed formulation of the incompressible Euler
equation in terms of geodesics in the group of measure-preserving
diffeomorphisms, we propose a numerical method based on Sinkhorn's algorithm
for the entropic regularization of optimal transport. We also make a detailed
comparison of this entropic regularization with the so-called Bredinger
entropic interpolation problem. Numerical results in dimensions one and two
illustrate the feasibility of the method.
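For reference, the following is a minimal sketch of the standard two-marginal Sinkhorn iteration for entropically regularized optimal transport; the paper applies this scheme in a multi-marginal setting, which is not reproduced here. The function name sinkhorn and the toy one-dimensional marginals are illustrative.

```python
# Minimal sketch of the two-marginal Sinkhorn iteration for entropic
# optimal transport between discrete measures mu and nu.
import numpy as np

def sinkhorn(mu, nu, cost, eps, n_iter=500):
    """mu, nu: marginal weights; cost: (n, m) cost matrix; eps: entropic reg."""
    K = np.exp(-cost / eps)     # Gibbs kernel of the entropic problem
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)      # scale to match the second marginal
        u = mu / (K @ v)        # scale to match the first marginal
    return u[:, None] * K * v[None, :]   # approximate transport plan

n = 50
x = np.linspace(0.0, 1.0, n)
mu = np.full(n, 1.0 / n)                 # uniform source measure
nu_w = np.full(n, 1.0 / n)               # uniform target measure
cost = (x[:, None] - x[None, :]) ** 2    # quadratic cost on [0, 1]
plan = sinkhorn(mu, nu_w, cost, eps=1e-2)
```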
Capacity Control of ReLU Neural Networks by Basis-path Norm
Recently, path norm was proposed as a new capacity measure for neural
networks with Rectified Linear Unit (ReLU) activation function, which takes the
rescaling-invariant property of ReLU into account. It has been shown that the
generalization error bound in terms of the path norm explains the empirical
generalization behaviors of the ReLU neural networks better than that of other
capacity measures. Moreover, optimization algorithms which take path norm as
the regularization term in the loss function, like Path-SGD, have been shown to
achieve better generalization performance. However, the path norm counts the
values of all paths, and hence the capacity measure based on path norm could be
improperly influenced by the dependency among different paths. It is also known
that each path of a ReLU network can be represented by a small group of
linearly independent basis paths via multiplication and division operations,
which indicates that the generalization behavior of the network depends on
only a few basis paths. Motivated by this, we propose a new norm, the
\emph{Basis-path Norm}, based on a group of linearly independent paths to
measure the capacity of neural networks more accurately. We establish a
generalization error bound based on this basis-path norm, and show that it explains
the generalization behaviors of ReLU networks more accurately than previous
capacity measures via extensive experiments. In addition, we develop
optimization algorithms which minimize the empirical risk regularized by the
basis-path norm. Our experiments on benchmark datasets demonstrate that the
proposed regularization method achieves clearly better performance on the test
set than the previous regularization approaches.
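For intuition about what a path-based capacity measure computes, below is a minimal sketch of the plain $\ell_1$ path norm for a bias-free ReLU MLP: the sum over all input-output paths of the products of absolute weight values. The paper's basis-path norm restricts this sum to a set of linearly independent basis paths, a construction not reproduced here; the helper path_norm and the toy network are illustrative.

```python
# Minimal sketch of the l1 path norm of a bias-free ReLU MLP, computed by
# pushing a vector of ones through the absolute-valued weight matrices.
# The basis-path norm of the paper is a restriction of this quantity.
import numpy as np

def path_norm(weights):
    """weights: list of (fan_in, fan_out) weight matrices of an MLP."""
    v = np.ones(weights[0].shape[0])
    for W in weights:
        v = v @ np.abs(W)   # accumulate products of |w| along every path
    return float(v.sum())

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 1))]
print(path_norm(weights))   # toy two-layer network
```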