Two Optimal Strategies for Active Learning of Causal Models from Interventional Data
From observational data alone, a causal DAG is only identifiable up to Markov
equivalence. Interventional data generally improves identifiability; however,
the gain of an intervention strongly depends on the intervention target, that
is, the intervened variables. We present active learning (that is, optimal
experimental design) strategies calculating optimal interventions for two
different learning goals. The first one is a greedy approach using
single-vertex interventions that maximizes the number of edges that can be
oriented after each intervention. The second one yields in polynomial time a
minimum set of targets of arbitrary size that guarantees full identifiability.
This second approach proves a conjecture of Eberhardt (2008) on the number of
unbounded intervention targets that is sufficient, and in the worst case
necessary, for full identifiability. In a simulation study, we compare our
two active learning approaches to random interventions and an existing
approach, and analyze the influence of estimation errors on the overall
performance of active learning.
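As a rough illustration of the first (greedy) strategy, the sketch below repeatedly picks the vertex incident to the most still-undirected edges, since a single-vertex intervention orients all edges incident to its target. This is a deliberate simplification of the paper's approach: the function name is ours, and the propagation of orientations via Meek rules is omitted.

```python
# Toy sketch of a greedy single-vertex intervention strategy (simplified:
# ignores Meek-rule propagation of edge orientations).

def greedy_interventions(undirected_edges):
    """Return an intervention order until no undirected edges remain."""
    edges = {frozenset(e) for e in undirected_edges}
    order = []
    while edges:
        # undirected degree of each vertex
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # greedy choice: the vertex whose intervention orients the most edges
        target = max(degree, key=degree.get)
        order.append(target)
        # an intervention on `target` orients every edge incident to it
        edges = {e for e in edges if target not in e}
    return order

# Example: a star a-b, b-c, b-d; intervening on b orients all three edges
print(greedy_interventions([("a", "b"), ("b", "c"), ("b", "d")]))  # ['b']
```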
Distributed Design for Decentralized Control using Chordal Decomposition and ADMM
We propose a distributed design method for decentralized control by
exploiting the underlying sparsity properties of the problem. Our method is
based on chordal decomposition of sparse block matrices and the alternating
direction method of multipliers (ADMM). We first apply a classical
parameterization technique to restrict the optimal decentralized control
problem to a convex one that inherits the sparsity pattern of the original
problem. The
parameterization relies on a notion of strongly decentralized stabilization,
and sufficient conditions are discussed to guarantee this notion. Then, chordal
decomposition allows us to decompose the convex restriction into a problem with
partially coupled constraints, and the framework of ADMM enables us to solve
the decomposed problem in a distributed fashion. Consequently, each subsystem
only needs to share its model data with its direct neighbours, and no central
computation is required. Numerical experiments demonstrate the effectiveness of the
proposed method.
Comment: 11 pages, 8 figures. Accepted for publication in the IEEE
Transactions on Control of Network Systems.
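The ADMM machinery behind such distributed solvers can be illustrated on a toy consensus problem. This is a generic scaled-form ADMM sketch under our own simplifications, not the paper's decentralized-control formulation: each "subsystem" i keeps a local copy x_i of a shared variable and a private target a_i, and the algorithm drives every copy to the common minimizer (the average of the targets).

```python
# Minimal consensus ADMM (scaled form) for min sum_i (x - a_i)^2,
# split as: local variables x_i, consensus variable z, duals u_i.

def consensus_admm(targets, rho=1.0, iters=100):
    n = len(targets)
    x = [0.0] * n   # local copies, one per subsystem
    u = [0.0] * n   # scaled dual variables
    z = 0.0         # shared consensus variable
    for _ in range(iters):
        # local step: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(2 * a + rho * (z - ui)) / (2 + rho)
             for a, ui in zip(targets, u)]
        # consensus step: average of x_i + u_i
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # dual update enforcing x_i = z
        u = [ui + xi - z for xi, ui in zip(x, u)]
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # approaches the mean 3.0
```

Each local step uses only that subsystem's own data, mirroring (in miniature) how the decomposed problem avoids a central computation.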
Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations
It is well-known that any sum of squares (SOS) program can be cast as a
semidefinite program (SDP) of a particular structure and that therein lies the
computational bottleneck for SOS programs, as the SDPs generated by this
procedure are large and costly to solve when the polynomials involved in the
SOS programs have a large number of variables and degree. In this paper, we
review SOS optimization techniques and present two new methods for improving
their computational efficiency. The first method leverages the sparsity of the
underlying SDP to obtain computational speed-ups. Further improvements can be
obtained if the coefficients of the polynomials that describe the problem have
a particular sparsity pattern, called chordal sparsity. The second method
bypasses semidefinite programming altogether and relies instead on solving a
sequence of more tractable convex programs, namely linear and second order cone
programs. This raises the question of how well the cone of SOS polynomials
can be approximated by second-order-cone-representable cones. In the last part
of the paper, we present some recent negative results related to this question.
Comment: Tutorial for CDC 201
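A minimal illustration of the idea behind the second, SDP-free method (presumably in the spirit of "diagonally dominant" SOS relaxations, though the abstract does not name them): a symmetric matrix that is diagonally dominant with nonnegative diagonal is positive semidefinite, so a polynomial admitting such a Gram matrix is a sum of squares, and diagonal dominance can be certified with linear inequalities alone, i.e. by a linear program rather than an SDP.

```python
# LP-checkable sufficient condition for a Gram matrix to be PSD:
# diagonal dominance with nonnegative diagonal.

def is_diagonally_dominant(Q):
    """Check Q[i][i] >= sum_{j != i} |Q[i][j]| for every row i."""
    n = len(Q)
    return all(
        Q[i][i] >= sum(abs(Q[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )

# Gram matrix of p(x, y) = x^2 - 2xy + 2y^2 in the monomial basis (x, y).
# It is diagonally dominant, certifying p is SOS: p = (x - y)^2 + y^2.
Q = [[1.0, -1.0],
     [-1.0, 2.0]]
print(is_diagonally_dominant(Q))  # True
```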