Complexity of Model Testing for Dynamical Systems with Toric Steady States
In this paper we investigate the complexity of model selection and model
testing for dynamical systems with toric steady states. Such systems frequently
arise in the study of chemical reaction networks. We do this by formulating
these tasks as a constrained optimization problem in Euclidean space. This
optimization problem is known as a Euclidean distance problem; the complexity
of solving this problem is measured by an invariant called the Euclidean
distance (ED) degree. We determine closed-form expressions for the ED degree of
the steady states of several families of chemical reaction networks with toric
steady states and arbitrarily many reactions. To illustrate the utility of this
work we show how the ED degree can be used as a tool for estimating the
computational cost of solving the model testing and model selection problems.
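As a toy illustration of a Euclidean distance problem (using a simple parabola rather than one of the paper's steady-state varieties, and assuming NumPy is available), one can count the critical points of the squared distance from a generic data point to the variety; the data point below is chosen arbitrarily:

```python
import numpy as np

# Squared distance from a generic point (u, v) to the parabola y = x^2:
#   f(x) = (x - u)^2 + (x^2 - v)^2
# Its critical equation f'(x) = 4x^3 + (2 - 4v)x - 2u = 0 is a cubic,
# so a generic data point has 3 complex critical points: ED degree 3.
u, v = 0.31, 0.77  # arbitrary (hypothetical) generic data point
coeffs = [4.0, 0.0, 2.0 - 4.0 * v, -2.0 * u]
crit = np.roots(coeffs)
print(len(crit))  # 3 complex critical points -> ED degree 3
```

The ED degree counts these complex critical points for generic data, which is why it serves as a proxy for the cost of solving the optimization problem exactly.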
A Tutorial on Clique Problems in Communications and Signal Processing
Since its first use by Euler on the problem of the seven bridges of
Königsberg, graph theory has shown excellent abilities in solving and
unveiling the properties of multiple discrete optimization problems. The study
of the structure of some integer programs reveals equivalence with graph theory
problems making a large body of the literature readily available for solving
and characterizing the complexity of these problems. This tutorial presents a
framework for utilizing a particular graph theory problem, known as the clique
problem, for solving communications and signal processing problems. In
particular, the paper aims to illustrate the structural properties of integer
programs that can be formulated as clique problems through multiple examples in
communications and signal processing. To that end, the first part of the
tutorial provides various optimal and heuristic solutions for the maximum
clique, maximum weight clique, and k-clique problems. The tutorial further
illustrates the use of the clique formulation through numerous contemporary
examples in communications and signal processing, mainly in maximum access for
non-orthogonal multiple access networks, throughput maximization using index
and instantly decodable network coding, collision-free radio frequency
identification networks, and resource allocation in cloud-radio access
networks. Finally, the tutorial sheds light on the recent advances of such
applications, and provides technical insights on ways of dealing with mixed
discrete-continuous optimization problems.
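A minimal sketch of the maximum clique problem on a toy graph (the graph below is hypothetical, not taken from the tutorial). Brute-force enumeration like this is exponential in the number of vertices, which is precisely why the optimal and heuristic solutions surveyed in the tutorial matter:

```python
from itertools import combinations

# Toy undirected graph as adjacency sets (hypothetical example).
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}

def max_clique(adj):
    """Return one maximum clique by brute-force subset enumeration."""
    nodes = list(adj)
    for r in range(len(nodes), 0, -1):  # try larger subsets first
        for subset in combinations(nodes, r):
            # a clique requires every pair of vertices to be adjacent
            if all(b in adj[a] for a, b in combinations(subset, 2)):
                return list(subset)
    return []

print(max_clique(adj))  # [0, 1, 2]
```

For the weighted variant, one would maximize the sum of vertex weights over cliques instead of the cardinality, with the same enumeration skeleton.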
How degenerate is the parametrization of neural networks with the ReLU activation function?
Neural network training is usually accomplished by solving a non-convex
optimization problem using stochastic gradient descent. Although one optimizes
over the network's parameters, the main loss function generally only depends on
the realization of the neural network, i.e. the function it computes. Studying
the optimization problem over the space of realizations opens up new ways to
understand neural network training. In particular, usual loss functions like
mean squared error and categorical cross entropy are convex on spaces of neural
network realizations, which themselves are non-convex. Approximation
capabilities of neural networks can be used to deal with the latter
non-convexity, which allows us to establish that for sufficiently large
networks local minima of a regularized optimization problem on the realization
space are almost optimal. Note, however, that each realization has many
different, possibly degenerate, parametrizations. In particular, a local
minimum in the parametrization space need not correspond to a local minimum in
the realization space. To establish such a connection, inverse stability of the
realization map is required, meaning that proximity of realizations must imply
proximity of corresponding parametrizations. We present pathologies which
prevent inverse stability in general, and, for shallow networks, proceed to
establish a restricted space of parametrizations on which we have inverse
stability w.r.t. a Sobolev norm. Furthermore, we show that by optimizing
over such restricted sets, it is still possible to learn any function which can
be learned by optimization over unrestricted sets. Comment: Accepted at NeurIPS 2019
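The degeneracy of the parametrization can already be seen for a single ReLU neuron: positively rescaling the inner weight and compensating in the outer weight changes the parameters but not the realization. A minimal sketch (the specific weights below are arbitrary):

```python
import math

def relu(z):
    return max(z, 0.0)

def net(w, a, x):
    # one-neuron network: realization f(x) = a * relu(w * x)
    return a * relu(w * x)

# positive rescaling (w, a) -> (c*w, a/c) leaves the realization unchanged,
# since relu(c*w*x) = c * relu(w*x) for c > 0
c = 3.0
for x in [-1.5, 0.0, 0.7, 2.0]:
    assert math.isclose(net(2.0, 5.0, x), net(2.0 * c, 5.0 / c, x))
```

This one-parameter family of distinct parametrizations with identical realizations is exactly the kind of degeneracy that obstructs inverse stability of the realization map.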
Minimum-cost multicast over coded packet networks
We consider the problem of establishing minimum-cost multicast connections over coded packet networks, i.e., packet networks where the contents of outgoing packets are arbitrary, causal functions of the contents of received packets. We consider both wireline and wireless packet networks as well as both static multicast (where membership of the multicast group remains constant for the duration of the connection) and dynamic multicast (where membership of the multicast group changes in time, with nodes joining and leaving the group). For static multicast, we reduce the problem to a polynomial-time solvable optimization problem, and we present decentralized algorithms for solving it. These algorithms, when coupled with existing decentralized schemes for constructing network codes, yield a fully decentralized approach for achieving minimum-cost multicast. By contrast, establishing minimum-cost static multicast connections over routed packet networks is a very difficult problem even using centralized computation, except in the special cases of unicast and broadcast connections. For dynamic multicast, we reduce the problem to a dynamic programming problem and apply the theory of dynamic programming to suggest how it may be solved.
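For the unicast special case mentioned above, establishing a minimum-cost connection reduces to a shortest-path computation. A minimal sketch on a hypothetical four-node network (the topology and edge costs are invented for illustration, not taken from the paper):

```python
import heapq

# Toy directed network: node -> [(neighbor, edge cost), ...] (hypothetical).
graph = {'s': [('a', 1), ('b', 4)],
         'a': [('b', 1), ('t', 5)],
         'b': [('t', 1)],
         't': []}

def min_cost_unicast(graph, src, dst):
    """Dijkstra's algorithm: minimum-cost unicast as a shortest path."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float('inf')

print(min_cost_unicast(graph, 's', 't'))  # 3  (path s -> a -> b -> t)
```

The general static multicast case over coded networks instead optimizes per-receiver flows coupled through shared coded capacity, which is what makes it a (polynomial-time solvable) optimization problem rather than a plain shortest-path problem.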