
    An optimal subgradient algorithm for large-scale convex optimization in simple domains

    This paper shows that the optimal subgradient algorithm, OSGA, proposed in \cite{NeuO} can be used for solving structured large-scale convex constrained optimization problems. Only first-order information is required, and the optimal complexity bounds for both smooth and nonsmooth problems are attained. More specifically, we consider two classes of problems: (i) a convex objective over a simple closed convex domain, where the orthogonal projection onto this feasible domain is efficiently available; (ii) a convex objective with a simple convex functional constraint. If OSGA is equipped with an appropriate prox-function, its subproblem can be solved either in closed form or by a simple iterative scheme, which is especially important for large-scale problems. We report numerical results for several applications to show the efficiency of the proposed scheme. A software package implementing OSGA for the above domains is available.
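    As a hedged illustration of problem class (i), the sketch below runs a plain projected subgradient method on f(x) = ||Ax - b||_1 over the box [0, 1]^n, where the orthogonal projection is a simple clip. This is not OSGA itself, which maintains additional estimates to attain the optimal complexity bounds; every name, matrix, and parameter here is illustrative.

```python
import numpy as np

# Minimal projected-subgradient sketch for problem class (i): a convex
# objective over a simple closed convex set with a cheap projection.
# NOT the OSGA method itself -- only an illustration of the setting:
# minimize f(x) = ||Ax - b||_1 over the box [0, 1]^n.

def projected_subgradient(A, b, n_iters=500):
    m, n = A.shape
    x = np.full(n, 0.5)                   # start in the interior of the box
    best_x, best_f = x.copy(), np.inf
    for k in range(1, n_iters + 1):
        g = A.T @ np.sign(A @ x - b)      # a subgradient of ||Ax - b||_1
        step = 1.0 / (np.linalg.norm(g) * np.sqrt(k) + 1e-12)  # diminishing
        x = np.clip(x - step * g, 0.0, 1.0)  # orthogonal projection onto box
        f = np.abs(A @ x - b).sum()
        if f < best_f:                    # track the best iterate
            best_x, best_f = x.copy(), f
    return best_x, best_f

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.clip(rng.random(10), 0.0, 1.0)
b = A @ x_true                            # feasible problem: optimum f* = 0
x_hat, f_hat = projected_subgradient(A, b)
```

The diminishing step 1/(||g|| sqrt(k)) with best-iterate tracking is the textbook choice for nonsmooth subgradient schemes; OSGA replaces it with a prox-function-based subproblem.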

    Multidimensional Scaling with Regional Restrictions for Facet Theory: An Application to Levi's Political Protest Data

    Multidimensional scaling (MDS) is often used for the analysis of correlation matrices of items generated by a facet theory design. The emphasis of the analysis is on regional hypotheses on the location of the items in the MDS solution. An important regional hypothesis is the axial constraint, where the items from different levels of a facet are assumed to be located in different parallel slices. The simplest approach is to do an MDS and then draw the parallel lines separating the slices as well as possible by hand. Alternatively, Borg and Shye (1995) proposed to automate this second step. Borg and Groenen (1997, 2005) proposed a simultaneous approach for ordered facets when the number of MDS dimensions equals the number of facets. In this paper, we propose a new algorithm that estimates an MDS solution subject to axial constraints without the restriction that the number of facets equals the number of dimensions. The algorithm is based on the constrained iterative majorization of De Leeuw and Heiser (1980) with special constraints. This algorithm is applied to Levi’s (1983) data on political protests.
    Keywords: Axial Partitioning; Constrained Estimation; Facet Theory; Iterative Majorization; Multidimensional Scaling; Regional Restrictions
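    For orientation, the following minimal sketch performs unconstrained stress majorization (SMACOF-style, in the spirit of De Leeuw and Heiser's iterative majorization). The axial regional constraints that the paper adds are omitted, and all names, data, and parameters are illustrative.

```python
import numpy as np

# Bare-bones stress majorization (SMACOF) with unit weights: each
# iteration applies the Guttman transform, which monotonically
# decreases raw stress. The paper's axial constraints are NOT included.

def smacof(delta, dim=2, n_iters=200, seed=0):
    n = delta.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, dim))     # random initial configuration
    for _ in range(n_iters):
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        D = np.maximum(D, 1e-12)          # guard against division by zero
        B = -delta / D                    # off-diagonal entries of B(X)
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))
        X = (B @ X) / n                   # Guttman transform (unit weights)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    stress = (np.triu(delta - D, 1) ** 2).sum()   # raw stress
    return X, stress

# Toy dissimilarities: exact distances among five points on a line.
pts = np.arange(5, dtype=float)[:, None]
delta = np.abs(pts - pts.T)
X, stress = smacof(delta)
```

Because exact line distances are perfectly embeddable in two dimensions, the final stress should be near zero; the constrained variant in the paper would additionally force parallel slices in X.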

    Blind adaptive constrained reduced-rank parameter estimation based on constant modulus design for CDMA interference suppression

    This paper proposes a multistage decomposition for blind adaptive parameter estimation in the Krylov subspace with the code-constrained constant modulus (CCM) design criterion. Based on constrained optimization of the constant modulus cost function and utilizing the Lanczos algorithm and Arnoldi-like iterations, a multistage decomposition is developed for blind parameter estimation. A family of computationally efficient blind adaptive reduced-rank stochastic gradient (SG) and recursive least squares (RLS) type algorithms, along with an automatic rank selection procedure, is also devised and evaluated against existing methods. An analysis of the convergence properties of the method is carried out and convergence conditions for the reduced-rank adaptive algorithms are established. Simulation results consider the application of the proposed techniques to the suppression of multiaccess and intersymbol interference in DS-CDMA systems.
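    To illustrate the constant modulus criterion this design builds on, the sketch below runs a plain full-rank stochastic-gradient constant-modulus equalizer on real BPSK symbols. The paper's actual contributions (the code constraint, the Krylov-subspace multistage decomposition, and automatic rank selection) are not reproduced here, and the toy channel and parameters are purely illustrative.

```python
import numpy as np

# Full-rank stochastic-gradient constant-modulus equalizer for real
# BPSK: minimize J = E[(|y|^2 - R)^2] with y = w @ x. This shows only
# the CM criterion, not the paper's reduced-rank CCM scheme.

def cma_equalize(r, taps=5, mu=1e-3, modulus=1.0):
    w = np.zeros(taps)
    w[taps // 2] = 1.0                    # center-spike initialization
    for n in range(taps, len(r)):
        x = r[n - taps:n][::-1]           # regressor, most recent first
        y = w @ x
        e = y * y - modulus               # constant-modulus error
        w = w - mu * e * y * x            # SG step on (y^2 - R)^2
    return w

rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=20000)   # BPSK symbols
h = np.array([1.0, 0.4, 0.2])             # toy dispersive channel
r = np.convolve(s, h)[:len(s)] + 0.01 * rng.standard_normal(len(s))
w = cma_equalize(r)
y = np.convolve(r, w)[:len(s)]            # equalized output
```

The criterion is blind: the update uses only the deviation of |y|^2 from the known modulus, never the transmitted symbols, which is what makes CM designs attractive for interference suppression.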

    Alternating least squares as moving subspace correction

    In this note we take a new look at the local convergence of alternating optimization methods for low-rank matrices and tensors. Our abstract interpretation as sequential optimization on moving subspaces yields insightful reformulations of some known convergence conditions, focusing on the interplay between the contractivity of classical multiplicative Schwarz methods with overlapping subspaces and the curvature of low-rank matrix and tensor manifolds. While the verification of the abstract conditions in concrete scenarios remains open in most cases, we are able to provide an alternative and conceptually simple derivation of the asymptotic convergence rate of the two-sided block power method of numerical linear algebra for computing the dominant singular subspaces of a rectangular matrix. This method is equivalent to an alternating least squares method applied to a distance function. The theoretical results are illustrated and validated by numerical experiments.
    Comment: 20 pages, 4 figures
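    The two-sided block power method mentioned above can be sketched in a few lines: alternately orthonormalize A V and A^T U to approximate the dominant left and right singular subspaces of a rectangular matrix. The test matrix and all parameters below are illustrative.

```python
import numpy as np

# Two-sided block power method: each sweep multiplies by A and A^T and
# re-orthonormalizes via QR, converging to the dominant k-dimensional
# left (U) and right (V) singular subspaces when sigma_k > sigma_{k+1}.

def block_power(A, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.standard_normal((A.shape[1], k)))[0]
    for _ in range(n_iters):
        U = np.linalg.qr(A @ V)[0]        # update left subspace basis
        V = np.linalg.qr(A.T @ U)[0]      # update right subspace basis
    return U, V

# Test matrix with a clear spectral gap after the third singular value,
# so convergence is fast (rate governed by sigma_4 / sigma_3).
m, n, k = 12, 8, 3
rng = np.random.default_rng(2)
U0 = np.linalg.qr(rng.standard_normal((m, m)))[0][:, :n]
V0 = np.linalg.qr(rng.standard_normal((n, n)))[0]
s = np.array([9.0, 7.0, 5.0, 0.9, 0.5, 0.3, 0.2, 0.1])
A = U0 @ np.diag(s) @ V0.T
U, V = block_power(A, k)
```

Each half-sweep is exactly a least squares solve for one factor of the rank-k approximation with the other held fixed (followed by orthonormalization), which is the alternating least squares viewpoint the note exploits.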