Subspace System Identification via Weighted Nuclear Norm Optimization
We present a subspace system identification method based on weighted nuclear
norm approximation. The weight matrices used in the nuclear norm minimization
are the same as those used in standard subspace identification methods. We
show that the inclusion of the weights improves the performance in terms of fit
on validation data. As a second benefit, the weights reduce the size of the
optimization problems that need to be solved. Experimental results from
randomly generated examples as well as from the Daisy benchmark collection are
reported. The key to an efficient implementation is the use of the alternating
direction method of multipliers to solve the optimization problem.
Comment: Submitted to the IEEE Conference on Decision and Control.
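The ADMM approach mentioned above repeatedly applies the proximal operator of the nuclear norm, i.e., singular value soft-thresholding. A minimal numpy sketch of that single step, assuming a dense matrix (the function name `svt` is ours, not the paper's):

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the proximal operator of
    tau * ||.||_* (nuclear norm) evaluated at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrinking a diagonal matrix shows the effect directly:
# singular values (3, 1) become (2, 0), so the result has lower rank.
print(svt(np.diag([3.0, 1.0]), 1.0))
```

Inside a full ADMM solver this step alternates with a projection onto the problem's structural constraints; the sketch only isolates the thresholding.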
As the semidefinite programs that result from integral quadratic constraints are usually large, it is important to implement efficient algorithms. The interior-point algorithms in this paper are primal-dual potential reduction methods and handle multiple constraints. We take two approaches. In the first approach, the computational cost is dominated by a least-squares problem that has to be solved in each iteration; this least-squares problem is solved with an iterative method, namely the conjugate gradient method. The computational effort of the second approach is dominated by forming a linear system of equations, which is used to compute the search direction in each iteration. If the number of variables is reduced by solving a smaller subproblem, the resulting system has a favorable structure and can be solved efficiently. The first approach is more efficient for larger problems but is less numerically stable.
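The conjugate gradient inner solver used in the first approach can be sketched as follows. This is generic textbook CG applied to the normal equations of a least-squares problem, not the paper's tuned implementation; all names are ours:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive definite A by CG."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Least squares min ||C x - d|| via the normal equations C^T C x = C^T d.
rng = np.random.default_rng(1)
C = rng.standard_normal((20, 5))
d = rng.standard_normal(20)
A, b = C.T @ C, C.T @ d
x = conjugate_gradient(A, b)
```

In the interior-point setting the matrix-vector product `A @ p` would be supplied implicitly, which is what makes the iterative method attractive for large problems.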
Semidefinite programs (SDPs) originating from the Kalman-Yakubovich-Popov lemma often have a large number of variables. Standard solvers for semidefinite programs cannot handle problems of this size, and much research has been invested in developing customized solvers for them. In this paper we show that it is possible to use standard primal-dual SDP solvers if we reduce the number of variables in the dual SDP. The variables of interest in the primal SDP can be recovered from the solution.
T-optimal designs for multi-factor polynomial regression models via a semidefinite relaxation method
We consider T-optimal experiment design problems for discriminating multi-factor polynomial regression models where the design space is defined by polynomial inequalities and the regression parameters are constrained to given convex sets. Our proposed optimality criterion is formulated as a convex optimization problem with a moment cone constraint. When the regression models have one factor, an exact semidefinite representation of the moment cone constraint can be applied to obtain an equivalent semidefinite program. When there are two or more factors in the models, we apply a moment relaxation technique and approximate the moment cone constraint by a hierarchy of semidefinite-representable outer approximations. When the relaxation hierarchy converges, an optimal discrimination design can be recovered from the optimal moment matrix, and its optimality can be additionally confirmed by an equivalence theorem. The methodology is illustrated with several examples.
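As a small illustration of the moment-matrix object such methods optimize over, the following sketch assembles the moment matrix of a discrete design in the one-factor monomial basis. The function, design points, and weights are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def moment_matrix(points, weights, degree):
    """Moment matrix sum_i w_i m(x_i) m(x_i)^T for the one-factor
    monomial basis m(x) = (1, x, ..., x^degree)."""
    M = np.zeros((degree + 1, degree + 1))
    for x, w in zip(points, weights):
        m = x ** np.arange(degree + 1)
        M += w * np.outer(m, m)
    return M

# A three-point design on [-1, 1] with weights summing to one.
M = moment_matrix([-1.0, 0.0, 1.0], [0.25, 0.5, 0.25], degree=2)
# Any valid design yields a symmetric positive semidefinite moment matrix.
print(np.linalg.eigvalsh(M).min() >= 0)  # True
```

The semidefinite relaxations in the paper constrain matrices of exactly this kind, with the PSD requirement as one of the defining conditions of the moment cone.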
Linear optimization over homogeneous matrix cones
A convex cone is homogeneous if its automorphism group acts transitively on
the interior of the cone, i.e., for every pair of points in the interior of the
cone, there exists a cone automorphism that maps one point to the other. Cones
that are homogeneous and self-dual are called symmetric. The symmetric cones
include the positive semidefinite matrix cone and the second order cone as
important practical examples. In this paper, we consider the less well-studied
conic optimization problems over cones that are homogeneous but not necessarily
self-dual. We start with cones of positive semidefinite symmetric matrices with
a given sparsity pattern. Homogeneous cones in this class are characterized by
nested block-arrow sparsity patterns, a subset of the chordal sparsity
patterns. We describe transitive subsets of the automorphism groups of the
cones and their duals, and important properties of the composition of log-det
barrier functions with the automorphisms in this set. Next, we consider
extensions to linear slices of the positive semidefinite cone, i.e.,
intersections of the positive semidefinite cone with a linear subspace, and
review conditions that make the cone homogeneous. In the third part of the
paper we give a high-level overview of the classical algebraic theory of
homogeneous cones due to Vinberg and Rothaus. A fundamental consequence of this
theory is that every homogeneous cone admits a spectrahedral (linear matrix
inequality) representation. We conclude by discussing the role of homogeneous
cone structure in primal-dual symmetric interior-point methods.
Comment: 59 pages, 10 figures; to appear in Acta Numerica.
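The spectrahedral-representation result can be made concrete for the second-order cone mentioned above, which has the well-known arrow-matrix LMI representation: (t, x) lies in the second-order cone if and only if the arrow matrix [[t, x^T], [x, t I]] is positive semidefinite. A small numpy check (function names are ours):

```python
import numpy as np

def soc_arrow(t, x):
    """Arrow-matrix LMI representation of the second-order cone:
    (t, x) in SOC  <=>  [[t, x^T], [x, t*I]] is PSD."""
    n = len(x)
    M = np.empty((n + 1, n + 1))
    M[0, 0] = t
    M[0, 1:] = x
    M[1:, 0] = x
    M[1:, 1:] = t * np.eye(n)
    return M

def in_soc_via_lmi(t, x):
    # PSD check via the smallest eigenvalue (small tolerance for roundoff).
    return np.linalg.eigvalsh(soc_arrow(t, np.asarray(x))).min() >= -1e-12

print(in_soc_via_lmi(2.0, [1.0, 1.0]))  # ||x|| = sqrt(2) <= 2 -> True
print(in_soc_via_lmi(1.0, [1.0, 1.0]))  # ||x|| = sqrt(2) >  1 -> False
```

The eigenvalues of the arrow matrix are t ± ||x|| and t, so the PSD condition reduces exactly to the cone inequality ||x|| <= t.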
We present a system identification method for problems with partially missing inputs and outputs. The method is based on a subspace formulation and uses the nuclear norm heuristic for structured low-rank matrix approximation, with the missing input and output values as the optimization variables. We also present a fast implementation of the alternating direction method of multipliers (ADMM) to solve regularized or non-regularized nuclear norm optimization problems with Hankel structure. This makes it possible to solve quite large system identification problems. Experimental results show that the nuclear norm optimization approach to subspace identification is comparable to the standard subspace methods when no inputs and outputs are missing, and that the performance degrades gracefully as the percentage of missing inputs and outputs increases.
Funding: National Science Foundation, grant 1128817.
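The Hankel structure this abstract relies on can be illustrated with a scalar signal: the outputs of a noise-free low-order linear system produce a low-rank Hankel matrix, and that low rank is exactly what the nuclear norm heuristic promotes. A sketch under those assumptions (function name ours):

```python
import numpy as np

def hankel(y, rows):
    """Hankel matrix H[i, j] = y[i + j] built from a scalar signal y."""
    cols = len(y) - rows + 1
    return np.array([[y[i + j] for j in range(cols)] for i in range(rows)])

# Output of a noise-free second-order system: y_t = Re(lambda^t)
# with lambda = 0.9 * exp(0.5j), so the Hankel matrix has rank 2.
t = np.arange(30)
y = 0.9 ** t * np.cos(0.5 * t)
H = hankel(y, rows=6)
print(np.linalg.matrix_rank(H, tol=1e-8))  # 2
```

With missing entries, the method described above treats the unknown values as optimization variables and minimizes the nuclear norm of this Hankel matrix instead of its (non-convex) rank.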