
    Effective Condition Number Bounds for Convex Regularization

    We derive bounds relating Renegar's condition number to quantities that govern the statistical performance of convex regularization in settings that include the ℓ1-analysis setting. Using results from conic integral geometry, we show that the bounds can be made to depend only on a random projection, or restriction, of the analysis operator to a lower-dimensional space, and can still be effective if these operators are ill-conditioned. As an application, we get new bounds for the undersampling phase transition of composite convex regularizers. Key tools in the analysis are Slepian's inequality and the kinematic formula from integral geometry. Comment: 17 pages, 4 figures. arXiv admin note: text overlap with arXiv:1408.301

    Minimizing the Euclidean Condition Number

    This paper considers the problem of determining the row and/or column scaling of a matrix A that minimizes the condition number of the scaled matrix. This problem has been studied by many authors. For the ∞-norm and the 1-norm, the scaling problem was completely solved in the 1960s. It is the Euclidean-norm case that has widespread application in robust control analyses. For example, it is used for integral controllability tests based on steady-state information, for the selection of sensors and actuators based on dynamic information, and for studying the sensitivity of stability to uncertainty in control systems. Minimizing the scaled Euclidean condition number has been an open question: researchers have proposed approaches to solving the problem numerically, but none of these approaches guaranteed convergence to the true minimum. This paper provides a convex optimization procedure to determine the scalings that minimize the Euclidean condition number. This optimization can be solved in polynomial time with off-the-shelf software.
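
    A brute-force numerical illustration of the scaling problem described above (not the paper's convex procedure; the test matrix, the log-scaling parametrization, and the Nelder-Mead solver are all illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# A deliberately badly column-scaled 4 x 4 test matrix (illustrative choice)
A = rng.standard_normal((4, 4)) @ np.diag([1.0, 10.0, 100.0, 1000.0])

def scaled_cond(logd):
    # 2-norm condition number of D1 A D2 with positive diagonal scalings,
    # parametrized by logarithms so the diagonals stay positive
    d1 = np.exp(logd[:4])
    d2 = np.exp(logd[4:])
    return np.linalg.cond(np.diag(d1) @ A @ np.diag(d2))

# Local derivative-free search over the 8 log-scaling parameters
res = minimize(scaled_cond, np.zeros(8), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-10, "xatol": 1e-10})
print(f"cond(A) = {np.linalg.cond(A):.3e}, scaled cond = {res.fun:.3e}")
```

    For a matrix this badly scaled, the search drives the scaled condition number well below cond(A); unlike the paper's convex formulation, however, a local search of this kind carries no guarantee of reaching the true minimum.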

    Convexity properties of the condition number

    We define, in the space of n × m matrices of rank n (n ≤ m), the condition Riemannian structure as follows: for a given matrix A, the tangent space at A is equipped with the Hermitian inner product obtained by multiplying the usual Frobenius inner product by the inverse of the square of the smallest singular value of A, denoted σ_n(A). When this smallest singular value has multiplicity 1, the function A → log(σ_n(A)^(-2)) is convex with respect to the condition Riemannian structure; that is, t → log(σ_n(A(t))^(-2)) is convex in the usual sense for any geodesic A(t). In a more abstract setting, a function α defined on a Riemannian manifold (M, ⟨·,·⟩) is said to be self-convex when log α(γ(t)) is convex for any geodesic γ in (M, ⟨·,·⟩). Necessary and sufficient conditions for self-convexity are given when α is C^2. When α(x) = d(x,N)^(-2), where d(x,N) is the distance from x to a C^2 submanifold N of R^j, we prove that α is self-convex when restricted to the largest open set of points x having a unique closest point in N. We also show, using this more general notion, that the square of the condition number ||A||_F / σ_n(A) is self-convex in projective space and in the solution variety. Comment: This article was improved for readability, following referee suggestions.
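
    The quantities in the definitions above are easy to evaluate numerically; a minimal sketch (the matrix A and tangent vector U are arbitrary illustrative choices, and this only computes the metric and the function α, it does not verify the convexity theorem):

```python
import numpy as np

def sigma_min(A):
    # smallest singular value sigma_n(A) of an n x m matrix, n <= m
    return np.linalg.svd(A, compute_uv=False)[-1]

def condition_inner(A, U, V):
    # condition Riemannian metric at A: the Frobenius inner product
    # rescaled by sigma_n(A)^(-2)
    return np.sum(U * V) / sigma_min(A) ** 2

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # full-rank 3 x 5 matrix, so n = 3 <= m = 5
U = rng.standard_normal((3, 5))   # a tangent vector at A

alpha = sigma_min(A) ** -2        # the function A -> sigma_n(A)^(-2)
print(alpha, condition_inner(A, U, U))
```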

    The condition number for circulant networks

    In this paper we prove that it is possible to use techniques specific to electromagnetic field synthesis in the study of some electrical circuits. The definition of a circulant network is presented. The system matrix of such a network is a circulant matrix, which allows an analytical evaluation of all the eigenvalues and all the singular values. Resonance frequencies can then be calculated exactly, as is demonstrated on passive and active circulant networks.
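
    The analytic evaluation mentioned above rests on a standard fact: the eigenvalues of a circulant matrix are the DFT of its first column. A minimal sketch (the vector c is an arbitrary illustrative choice, not taken from the paper):

```python
import numpy as np

def circulant(c):
    # Circulant matrix with first column c: C[i, j] = c[(i - j) mod n]
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

c = np.array([4.0, 1.0, 0.0, 1.0])
C = circulant(c)

# Eigenvalues of a circulant matrix are the DFT of its first column
lam = np.fft.fft(c)              # here: 6, 4, 2, 4
# A circulant matrix is normal, so its singular values are the eigenvalue
# magnitudes and the 2-norm condition number follows in closed form
cond = np.abs(lam).max() / np.abs(lam).min()
print(np.sort(lam.real), cond)
```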

    The condition number of join decompositions

    The join set of a finite collection of smooth embedded submanifolds of a mutual vector space is defined as their Minkowski sum. Join decompositions generalize some ubiquitous decompositions in multilinear algebra, namely tensor rank, Waring, partially symmetric rank, and block term decompositions. This paper examines the numerical sensitivity of join decompositions to perturbations; specifically, we consider the condition number for general join decompositions. It is characterized as a distance to a set of ill-posed points in a supplementary product of Grassmannians. We prove that this condition number can be computed efficiently as the smallest singular value of an auxiliary matrix. For some special join sets, we characterize the behavior of sequences in the join set converging to its boundary points. Finally, we specialize our discussion to the tensor rank and Waring decompositions and provide several numerical experiments confirming the key results.

    A condition number for the tensor rank decomposition

    The tensor rank decomposition problem consists of recovering the unique set of parameters representing a robustly identifiable low-rank tensor when the coordinate representation of the tensor is presented as input. A condition number for this problem, measuring the sensitivity of the parameters to an infinitesimal change of the tensor, is introduced and analyzed. It is demonstrated that the absolute condition number coincides with the inverse of the least singular value of Terracini's matrix. Several basic properties of this condition number are investigated. Comment: 45 pages, 4 figures.
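
    A rough numerical sketch of the inverse-least-singular-value characterization, for a random rank-2 tensor in R^(3×3×3). Assumptions to note: the factors are random illustrative choices, and Terracini's matrix is assembled here by concatenating orthonormal bases of the tangent spaces of the rank-1 (Segre) variety at each term, which may differ from the paper's exact normalization:

```python
import numpy as np

def tangent_basis(a, b, c):
    # Jacobian of (a, b, c) -> a (x) b (x) c, vectorized; its columns span
    # the tangent space of the rank-1 (Segre) variety at a (x) b (x) c
    n1, n2, n3 = len(a), len(b), len(c)
    I1, I2, I3 = np.eye(n1), np.eye(n2), np.eye(n3)
    cols = []
    for j in range(n1):
        cols.append(np.einsum('i,j,k->ijk', I1[j], b, c).ravel())
    for j in range(n2):
        cols.append(np.einsum('i,j,k->ijk', a, I2[j], c).ravel())
    for j in range(n3):
        cols.append(np.einsum('i,j,k->ijk', a, b, I3[j]).ravel())
    J = np.array(cols).T
    # The Jacobian has a 2-dim kernel from the scaling indeterminacies,
    # so extract an orthonormal basis of its column span via the SVD
    Q, s, _ = np.linalg.svd(J, full_matrices=False)
    r = int(np.sum(s > 1e-10))          # generically n1 + n2 + n3 - 2
    return Q[:, :r]

rng = np.random.default_rng(1)
terms = [tuple(rng.standard_normal(3) for _ in range(3)) for _ in range(2)]

# Concatenate the per-term tangent bases into one auxiliary matrix
N = np.hstack([tangent_basis(a, b, c) for a, b, c in terms])
sigma_min = np.linalg.svd(N, compute_uv=False)[-1]
kappa = 1.0 / sigma_min                 # condition number = 1 / sigma_min
print(N.shape, kappa)
```

    The better-conditioned the decomposition (the more "transversal" the tangent spaces of the two rank-1 terms), the larger sigma_min and the smaller kappa.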
