Network Synchronization with Convexity
In this paper, we establish several new synchronization conditions for complex
networks of nodes with nonlinear, nonidentical self-dynamics, coupled over
switching directed communication graphs. In light of recent work on distributed
subgradient methods, we impose an integral convexity condition on the nonlinear
node self-dynamics, in the sense that the self-dynamics of a given node is the
gradient of some concave function associated with that node. The node couplings are assumed to
be linear but with switching directed communication graphs. Several sufficient
and/or necessary conditions are established for exact or approximate
synchronization over the considered complex networks. These results show when
and how nonlinear node self-dynamics may cooperate with the linear diffusive
coupling, which eventually leads to network synchronization conditions under
relaxed connectivity requirements.
Comment: Based on our previous manuscript arXiv:1210.6685. SIAM Journal on Control and Optimization, in press 201
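To make the setup concrete, here is a minimal simulation sketch, assuming simple forms for the ingredients named in the abstract: each node's self-dynamics is the gradient of its own concave function (a hypothetical quadratic below), the coupling is linear diffusion, and the directed communication graph switches over time. The functions, graphs, and step size are illustrative assumptions, not the paper's construction or its synchronization conditions.

```python
import numpy as np

# Illustrative sketch (assumed forms, not the paper's model): node i has scalar state x_i,
# self-dynamics grad f_i(x_i) for a concave f_i(x) = -0.5*c_i*(x - b_i)^2, i.e. -c_i*(x_i - b_i),
# plus linear diffusive coupling over a directed graph that switches at every step.

def simulate(A_sequence, b, c, x0, dt=0.01, steps=4000):
    """Euler integration of dx_i/dt = grad f_i(x_i) + sum_j A[i, j] * (x_j - x_i),
    where A_sequence[k % len(A_sequence)] is the adjacency matrix used at step k."""
    x = np.array(x0, dtype=float)
    for k in range(steps):
        A = A_sequence[k % len(A_sequence)]
        self_dynamics = -c * (x - b)                 # gradient of the concave quadratic f_i
        coupling = A @ x - A.sum(axis=1) * x         # sum_j a_ij * (x_j - x_i)
        x = x + dt * (self_dynamics + coupling)
    return x

# Two switching directed graphs on three nodes, jointly connected over one switching period.
A1 = np.array([[0., 1., 0.],
               [0., 0., 1.],
               [0., 0., 0.]])        # edges 2 -> 1 and 3 -> 2
A2 = np.array([[0., 0., 0.],
               [0., 0., 0.],
               [1., 0., 0.]])        # edge 1 -> 3
x_final = simulate([A1, A2],
                   b=np.array([1.0, 2.0, 3.0]),      # nonidentical self-dynamics
                   c=np.array([0.2, 0.2, 0.2]),
                   x0=[5.0, -2.0, 0.0])
print(x_final)   # under suitable conditions the node states approach a common value
```

In this reading, exact versus approximate synchronization corresponds to whether the spread max over i, j of |x_i - x_j| vanishes or merely remains bounded as the coupling and the switching signal vary.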
Optimality of Orthogonal Access for One-dimensional Convex Cellular Networks
It is shown that a greedy orthogonal access scheme achieves the sum degrees
of freedom of all one-dimensional (all nodes placed along a straight line)
convex cellular networks (where cells are convex regions) when no channel
knowledge is available at the transmitters except the knowledge of the network
topology. In general, optimality of orthogonal access holds neither for
two-dimensional convex cellular networks nor for one-dimensional non-convex
cellular networks, thus revealing a fundamental limitation that exists only
when both one-dimensional and convex properties are simultaneously enforced, as
is common in canonical information theoretic models for studying cellular
networks. The result also establishes the capacity of the corresponding class
of index coding problems.
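As an illustration of what "greedy orthogonal access" can look like in the one-dimensional convex setting, the sketch below models each cell as an interval on the line and greedily activates a set of mutually non-overlapping (hence non-interfering) cells in one orthogonal round. The interval model and the earliest-finishing-time rule are assumptions for illustration only, not the scheme analyzed in the paper.

```python
# Hypothetical sketch: cells of a one-dimensional convex cellular network modeled as
# intervals on a line. One round of orthogonal access activates a maximal set of
# mutually non-overlapping cells; earliest-finishing-time greedy selection is optimal
# for picking a maximum set of pairwise non-overlapping intervals.

def greedy_orthogonal_round(cells):
    """cells: list of (start, end) intervals; returns indices of cells activated in one round."""
    order = sorted(range(len(cells)), key=lambda i: cells[i][1])   # sort by right endpoint
    active, last_end = [], float("-inf")
    for i in order:
        start, end = cells[i]
        if start >= last_end:           # does not overlap any already-active cell
            active.append(i)
            last_end = end
    return active

cells = [(0, 2), (1, 3), (3, 5), (4, 6), (6, 8)]
print(greedy_orthogonal_round(cells))   # e.g. [0, 2, 4]
```

Repeating such rounds until every cell has been served yields a purely orthogonal (time-sharing) schedule that needs only the network topology, which matches the knowledge assumption in the abstract.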
Adaptive Normalized Risk-Averting Training For Deep Neural Networks
This paper proposes a set of new error criteria and learning approaches,
Adaptive Normalized Risk-Averting Training (ANRAT), to attack the non-convex
optimization problem in training deep neural networks (DNNs). Theoretically, we
demonstrate its effectiveness on global and local convexity lower-bounded by
the standard norm error. By analyzing the gradient with respect to the convexity
index, we explain why learning this index adaptively by gradient descent works.
In practice, we show how this method improves training
of deep neural networks to solve visual recognition tasks on the MNIST and
CIFAR-10 datasets. Without using pretraining or other tricks, we obtain results
comparable to or better than those reported in the recent literature on the same tasks
using standard ConvNets + MSE/cross entropy. Performance on deep/shallow
multilayer perceptrons and Denoising Auto-encoders is also explored. ANRAT can
be combined with other quasi-Newton training methods, innovative network
variants, regularization techniques, and other specific tricks in DNNs. As an
alternative to unsupervised pretraining, it provides a new perspective on the
non-convex optimization problem in DNNs.
Comment: AAAI 2016, 0.39%~0.4% error rate on MNIST with a single 32-32-256-10 ConvNet, code available at https://github.com/cauchyturing/ANRA
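The abstract does not spell out the criterion, so the following is only a rough sketch of a normalized risk-averting style loss with a trainable convexity index, under the assumption that the index is learned jointly with the network weights by gradient descent. The class name, formula, and hyperparameters are hypothetical and may differ from the ANRAT criterion defined in the paper and repository.

```python
import math
import torch

class RiskAvertingLoss(torch.nn.Module):
    """Hypothetical sketch of a normalized risk-averting style error: a log-sum-exp of
    per-sample squared errors scaled by a trainable convexity index `lam`. A large `lam`
    convexifies the loss landscape; as `lam` -> 0 the criterion approaches the mean error."""

    def __init__(self, init_lam=1.0):
        super().__init__()
        self.lam = torch.nn.Parameter(torch.tensor(float(init_lam)))

    def forward(self, pred, target):
        per_sample = (pred - target).pow(2).sum(dim=1)        # per-sample squared error e_i
        lam = torch.clamp(self.lam, min=1e-3)                 # keep the convexity index positive
        n = per_sample.numel()
        # (1/lam) * log( (1/n) * sum_i exp(lam * e_i) )
        return (torch.logsumexp(lam * per_sample, dim=0) - math.log(n)) / lam

# Usage: include the convexity index in the optimizer so it adapts by gradient descent.
model = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10))
criterion = RiskAvertingLoss()
optimizer = torch.optim.SGD(list(model.parameters()) + list(criterion.parameters()), lr=0.01)

x, y = torch.randn(64, 784), torch.randn(64, 10)    # placeholder batch
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

The design point this is meant to illustrate is the one the abstract emphasizes: the error criterion itself carries a tunable convexity parameter, and that parameter is adapted by the same gradient descent that trains the network rather than being fixed in advance.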