
    An efficient sieving based secant method for sparse optimization problems with least-squares constraints

    In this paper, we propose an efficient sieving based secant method to address the computational challenges of solving sparse optimization problems with least-squares constraints. A level-set method was introduced in [X. Li, D.F. Sun, and K.-C. Toh, SIAM J. Optim., 28 (2018), pp. 1842--1866] that solves these problems by using the bisection method to find a root of a univariate nonsmooth equation φ(λ) = ϱ for some ϱ > 0, where φ(·) is the value function computed from a solution of the corresponding regularized least-squares optimization problem. When the objective function in the constrained problem is a polyhedral gauge function, we prove that (i) for any positive integer k, φ(·) is piecewise C^k in an open interval containing the solution λ* to the equation φ(λ) = ϱ; (ii) the Clarke Jacobian of φ(·) is always positive. These results allow us to establish the essential ingredients of the fast convergence rates of the secant method. Moreover, an adaptive sieving technique is incorporated into the secant method to effectively reduce the dimension of the level-set subproblems for computing the value of φ(·). The high efficiency of the proposed algorithm is demonstrated by extensive numerical results.
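    The outer loop of such a level-set scheme is easy to sketch. Below is a minimal Python illustration of a secant iteration on the univariate equation φ(λ) = ϱ, with an ISTA-based Lasso value function standing in for φ(·). The names `secant_root` and `make_phi` and the inner solver are illustrative stand-ins, not the paper's method, and the adaptive sieving that shrinks each subproblem is omitted.

```python
import numpy as np

def secant_root(phi, varrho, lam0, lam1, tol=1e-8, max_iter=50):
    """Solve phi(lam) = varrho by the secant method.

    phi is assumed strictly monotone near the root, as is typical
    for value functions of regularized least-squares problems.
    """
    f0, f1 = phi(lam0) - varrho, phi(lam1) - varrho
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        denom = f1 - f0
        if abs(denom) < 1e-16:       # guard against a flat segment of phi
            break
        lam2 = lam1 - f1 * (lam1 - lam0) / denom
        lam0, f0 = lam1, f1
        lam1 = max(lam2, 1e-12)      # keep the iterate positive
        f1 = phi(lam1) - varrho
    return lam1

def make_phi(A, b):
    """Illustrative value function phi(lam) = ||A x(lam) - b||, where
    x(lam) solves min_x 0.5*||A x - b||^2 + lam*||x||_1 (via ISTA)."""
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the gradient
    def phi(lam):
        x = np.zeros(A.shape[1])
        for _ in range(300):
            g = A.T @ (A @ x - b)
            z = x - g / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return np.linalg.norm(A @ x - b)
    return phi
```

    The point of replacing bisection with a secant update is that, once φ(·) is piecewise smooth near λ* with positive generalized derivatives, the secant iterates converge superlinearly rather than halving the bracket at each step.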

    Large Scale Computational Problems in Numerical Optimization

    Our work under this support falls broadly into five categories: automatic differentiation, sparsity, constraints, parallel computation, and applications. Automatic Differentiation (AD): We developed practical, effective methods for computing the sparse Jacobian and Hessian matrices that arise frequently in large-scale optimization problems [10,35]. In addition, we developed a novel view of "structure" in applied problems, along with AD techniques that allow sparse AD methods to be applied efficiently to dense but structured problems. Our AD work included the development of freely available MATLAB AD software. Sparsity: We developed new, effective, and practical techniques for exploiting sparsity when solving a variety of optimization problems, including bound-constrained problems, robust regression problems, the null space problem, and sparse orthogonal factorization. Our sparsity work included the development of freely available, published software [38,39]. Constraints: Effectively handling constraints in large-scale optimization remains a challenge. We developed a number of new approaches to constrained problems, with emphasis on trust-region methodologies. Parallel Computation: We developed specifically parallel techniques for the linear algebra tasks underpinning optimization algorithms, contributing to nonlinear least-squares problems, nonlinear equations, triangular systems, orthogonalization, and linear programming. Applications: Our optimization work is broadly applicable across numerous application domains. Nevertheless, we have worked in several specific application areas, including molecular conformation, molecular energy minimization, computational finance, and bone remodeling.

    Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications where perturbations appear both in the data vector and in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least-absolute shrinkage and selection operator (Lasso), and endow TLS approaches with the ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing, and in robust direction-of-arrival estimation using antenna arrays.
    Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing.
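    A generic block-coordinate sketch conveys the flavor of a sparse-TLS formulation. Assuming an objective of the form min over (x, E) of ||y - (A+E)x||^2 + ||E||_F^2 + lam*||x||_1, one can alternate a Lasso step in x with a closed-form rank-one update of the perturbation E. This is an illustrative assumption about the formulation, not the paper's near-optimum or suboptimum S-TLS algorithms.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_tls(A, y, lam, n_outer=50, n_ista=200):
    """Block-coordinate sketch for
        min_{x,E} ||y - (A+E)x||^2 + ||E||_F^2 + lam*||x||_1
    (an illustrative S-TLS-type objective)."""
    m, n = A.shape
    x = np.zeros(n)
    E = np.zeros((m, n))
    for _ in range(n_outer):
        # x-step: Lasso with the perturbed matrix A + E, solved by ISTA.
        B = A + E
        L = np.linalg.norm(B, 2) ** 2 + 1e-12   # ||B||_2^2
        for _ in range(n_ista):
            g = B.T @ (B @ x - y)
            x = soft_threshold(x - g / L, lam / (2 * L))
        # E-step: with r0 = y - A x, minimizing ||r0 - E x||^2 + ||E||_F^2
        # over E has the closed-form rank-one solution below.
        r0 = y - A @ x
        E = np.outer(r0, x) / (1.0 + x @ x)
    return x, E
```

    The E-step follows from setting the gradient with respect to E to zero, which gives E(I + x x^T) = r0 x^T and, by the Sherman-Morrison identity, E = r0 x^T / (1 + ||x||^2).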

    Correntropy Maximization via ADMM - Application to Robust Hyperspectral Unmixing

    In hyperspectral images, some spectral bands suffer from a low signal-to-noise ratio due to noisy acquisition and atmospheric effects, thus requiring robust techniques for the unmixing problem. This paper presents a robust supervised spectral unmixing approach for hyperspectral images. The robustness is achieved by writing the unmixing problem as the maximization of the correntropy criterion subject to the most commonly used constraints. Two unmixing problems are derived: the first considers fully constrained unmixing, with both the non-negativity and sum-to-one constraints, while the second deals with non-negativity and sparsity promotion of the abundances. The corresponding optimization problems are solved efficiently using an alternating direction method of multipliers (ADMM) approach. Experiments on synthetic and real hyperspectral images validate the performance of the proposed algorithms under different scenarios, demonstrating that correntropy-based unmixing is robust to outlier bands.
    Comment: 23 pages.
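    The correntropy criterion itself can be illustrated with a half-quadratic (iteratively reweighted) sketch of the fully constrained problem: freeze the weights w_l = exp(-r_l^2 / (2 sigma^2)) induced by the current residuals, then solve the resulting weighted least-squares problem over the probability simplex. The paper solves the problems with ADMM; the projected-gradient inner solver and all names below are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def correntropy_unmix(M, y, sigma=0.1, n_outer=30, n_pg=100):
    """Half-quadratic sketch for the fully constrained problem
        max_a sum_l exp(-(y_l - (M a)_l)^2 / (2 sigma^2))
        s.t. a >= 0, 1^T a = 1,
    where M holds the endmember spectra column-wise and a is the
    abundance vector. (Illustrative IRLS variant, not the ADMM solver.)"""
    R = M.shape[1]
    a = np.full(R, 1.0 / R)
    for _ in range(n_outer):
        r = y - M @ a
        w = np.exp(-r**2 / (2 * sigma**2))   # small weight on outlier bands
        Mw = M * w[:, None]                  # diag(w) @ M
        step = 1.0 / (np.linalg.norm(Mw.T @ M, 2) + 1e-12)
        for _ in range(n_pg):
            grad = M.T @ (w * (M @ a - y))   # gradient of the weighted LS term
            a = project_simplex(a - step * grad)
    return a
```

    The reweighting is what confers the robustness claimed in the abstract: bands with large residuals receive exponentially small weights, so noisy or outlier bands barely influence the abundance estimate.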