Block Factor-width-two Matrices and Their Applications to Semidefinite and Sum-of-squares Optimization
Semidefinite and sum-of-squares (SOS) optimization are fundamental
computational tools in many areas, including linear and nonlinear systems
theory. However, the scale of problems that can be addressed reliably and
efficiently is still limited. In this paper, we introduce a new notion of
\emph{block factor-width-two matrices} and build a new hierarchy of inner and
outer approximations of the cone of positive semidefinite (PSD) matrices. This
notion is a block extension of the standard factor-width-two matrices, and
allows for an improved inner-approximation of the PSD cone. In the context of
SOS optimization, this leads to a block extension of the \emph{scaled
diagonally dominant sum-of-squares (SDSOS)} polynomials. By varying the matrix
partition, block factor-width-two matrices can balance the trade-off between
computational scalability and solution quality when solving semidefinite and
SOS optimization problems. Numerical experiments on large-scale instances
confirm our theoretical findings.
Comment: 26 pages, 5 figures. Added a new section on the approximation quality
analysis using block factor-width-two matrices. Code is available through
https://github.com/zhengy09/SDPf
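The standard (non-block) factor-width-two cone that this paper generalizes coincides with the scaled diagonally dominant (SDD) matrices: every A = DMD with D a positive diagonal and M symmetric diagonally dominant with nonnegative diagonal is PSD, which is what makes it an inner approximation of the PSD cone. A minimal numpy sketch of that containment (the construction and variable names here are ours, not from the paper):

```python
import numpy as np

# Factor-width-two = scaled diagonally dominant (SDD): A = D M D with D a
# positive diagonal matrix and M symmetric, diagonally dominant, with a
# nonnegative diagonal. Any such A is PSD (Gershgorin + congruence), so the
# SDD cone sits inside the PSD cone. Illustrative sketch only.

rng = np.random.default_rng(0)
n = 5

# Build a random symmetric, diagonally dominant M with nonnegative diagonal.
M = rng.standard_normal((n, n))
M = (M + M.T) / 2
np.fill_diagonal(M, np.abs(M).sum(axis=1))  # enforce diagonal dominance

D = np.diag(rng.uniform(0.5, 2.0, size=n))  # positive diagonal scaling
A = D @ M @ D                               # A is SDD, hence PSD

print(np.linalg.eigvalsh(A).min())          # smallest eigenvalue is >= 0
```

The block notion of the paper replaces the scalar entries of D and the 2x2 support pattern with matrix blocks, enlarging this inner approximation.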
Convex Optimization for Linear Query Processing under Approximate Differential Privacy
Differential privacy enables organizations to collect accurate aggregates
over sensitive data with strong, rigorous guarantees on individuals' privacy.
Previous work has found that under differential privacy, computing multiple
correlated aggregates as a batch, using an appropriate \emph{strategy}, may
yield higher accuracy than computing each of them independently. However,
finding the best strategy that maximizes result accuracy is non-trivial, as it
involves solving a complex constrained optimization program that appears to be
non-linear and non-convex. Hence, much effort has previously been devoted to
solving this non-convex optimization program. Existing approaches include
various sophisticated heuristics and expensive numerical solutions. None of
them, however, guarantees to find the optimal solution of this optimization
problem.
This paper points out that under $(\epsilon, \delta)$-differential privacy,
the optimal solution of the above constrained optimization problem in search of
a suitable strategy can be found, rather surprisingly, by solving a simple and
elegant convex optimization program. Then, we propose an efficient algorithm
based on Newton's method, which we prove to always converge to the optimal
solution with linear global convergence rate and quadratic local convergence
rate. Empirical evaluations demonstrate the accuracy and efficiency of the
proposed solution.
Comment: to appear in ACM SIGKDD 201
Quantum theory in finite dimension cannot explain every general process with finite memory
Arguably, the largest class of stochastic processes generated by means of a
finite memory consists of those that are sequences of observations produced by
sequential measurements in a suitable generalized probabilistic theory (GPT).
These are constructed from a finite-dimensional memory evolving under a set of
possible linear maps, and with probabilities of outcomes determined by linear
functions of the memory state. Examples include classical hidden Markov
processes, where the memory state is a probability distribution that evolves
at each step according to a non-negative matrix, and hidden quantum Markov
processes, where the memory state is a finite-dimensional quantum state that
evolves at each step according to a completely positive map. Here we show that
processes admitting a finite-dimensional explanation need not be explainable
in terms of either classical probability or quantum mechanics. To wit, we
exhibit families of processes that
have a finite-dimensional explanation, defined manifestly by the dynamics of
an explicitly given GPT, but that do not admit a quantum, and therefore not even
classical, explanation in finite dimension. Furthermore, we present a family of
quantum processes on qubits and qutrits that do not admit a classical
finite-dimensional realization, which includes examples introduced earlier by
Fox, Rubin, Dharmadhikari, and Nadkarni as functions of infinite-dimensional
Markov chains, and lower bound the size of the memory of a classical model
realizing a noisy version of the qubit processes.
Comment: 18 pages, 0 figure
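The classical special case mentioned in the abstract can be sketched concretely: the memory is a probability vector m, each outcome a has a nonnegative matrix T[a], the matrices sum to a column-stochastic matrix, P(a | m) is the total mass of T[a] m, and the memory is renormalized after each observation. The toy matrices below are our own illustrative numbers:

```python
import numpy as np

# A classical hidden Markov process in the form described above: memory
# state m (a probability vector), one nonnegative matrix T[a] per outcome a,
# with T[0] + T[1] column-stochastic. The outcome probability is
# P(a | m) = 1^T T[a] m, and on observing a the memory updates to
# T[a] m / P(a | m). Toy numbers chosen by us for illustration.

rng = np.random.default_rng(1)

T = {0: np.array([[0.5, 0.1],
                  [0.1, 0.3]]),
     1: np.array([[0.2, 0.4],
                  [0.2, 0.2]])}
assert np.allclose((T[0] + T[1]).sum(axis=0), 1.0)  # column-stochastic sum

m = np.array([0.5, 0.5])  # initial memory state
seq = []
for _ in range(10):
    probs = np.array([T[a].dot(m).sum() for a in (0, 1)])
    a = rng.choice(2, p=probs / probs.sum())
    m = T[a].dot(m) / probs[a]            # renormalized memory update
    seq.append(int(a))
print(seq)                                # a sampled length-10 process
```

The quantum and GPT cases replace the probability vector with a density matrix (or a general state in an ordered vector space) and the matrices T[a] with completely positive (or merely positive-on-the-cone linear) maps.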
Generalized power cones: optimal error bounds and automorphisms
Error bounds are a requisite for trusting or distrusting solutions in an
informed way. Until recently, provable error bounds in the absence of
constraint qualifications were unattainable for many classes of cones that do
not admit projections with known succinct expressions. We build such error
bounds for the generalized power cones, using the recently developed framework
of one-step facial residual functions. We also show that our error bounds are
tight in the sense of that framework. Besides their utility for understanding
solution reliability, the error bounds we discover have additional applications
to the algebraic structure of the underlying cone, which we describe. In
particular we use the error bounds to compute the dimension of the automorphism
group for the generalized power cones, and to identify a set of generalized
power cones that are self-dual, irreducible, nonhomogeneous, and perfect.
Comment: 24 pages, title change, some minor fixes throughout the paper and
removed the appendix. Comments welcome
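For concreteness, a commonly used definition of the generalized power cone is P_alpha = { (x, z) in R^n_+ x R^m : prod_i x_i^alpha_i >= ||z||_2 } with alpha > 0 summing to one; the abstract does not restate the definition, so the sketch below assumes this standard form (the function name is ours):

```python
import numpy as np

# Membership test for the generalized power cone
#   P_alpha = { (x, z) : prod_i x_i**alpha_i >= ||z||_2, x >= 0 },
# where alpha > 0 and sum(alpha) == 1. Assumed standard definition;
# this is an illustrative sketch, not code from the paper.

def in_gen_power_cone(x, z, alpha, tol=1e-12):
    x, z, alpha = map(np.asarray, (x, z, alpha))
    assert np.all(alpha > 0) and abs(alpha.sum() - 1.0) < 1e-9
    if np.any(x < -tol):
        return False
    return np.prod(np.maximum(x, 0.0) ** alpha) >= np.linalg.norm(z) - tol

alpha = np.array([0.5, 0.5])
# sqrt(4 * 1) = 2 >= ||(1.5, 1)|| ~ 1.803, so this point is in the cone:
print(in_gen_power_cone([4.0, 1.0], [1.5, 1.0], alpha))
# 1 < ||(2, 0)|| = 2, so this point is not:
print(in_gen_power_cone([1.0, 1.0], [2.0, 0.0], alpha))
```

Points on the boundary of this cone (where the inequality is tight) are exactly where error bounds without constraint qualifications become delicate, which is the regime the facial residual function framework addresses.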