
    Robust Adaptive Beamforming for General-Rank Signal Model with Positive Semi-Definite Constraint via POTDC

    The robust adaptive beamforming (RAB) problem for the general-rank signal model with an additional positive semi-definite constraint is considered. Under the principle of worst-case performance optimization, this RAB problem leads to a difference-of-convex-functions (DC) optimization problem. Existing approaches for solving the resulting non-convex DC problem rely on approximations and find only suboptimal solutions. Here we solve the non-convex DC problem rigorously and give arguments suggesting that the solution is globally optimal. Specifically, we rewrite the problem as the minimization of a one-dimensional optimal value function whose corresponding optimization problem is non-convex. The optimal value function is then replaced with an equivalent one for which the corresponding optimization problem is convex. The new one-dimensional optimal value function is minimized iteratively via the polynomial-time DC (POTDC) algorithm. We show that our solution satisfies the Karush-Kuhn-Tucker (KKT) optimality conditions, and there is strong evidence that it is also globally optimal. Towards this conclusion, we conjecture that the new optimal value function is convex. The new RAB method shows superior performance compared with other state-of-the-art general-rank RAB methods.
    Comment: 29 pages, 7 figures, 2 tables, submitted to IEEE Trans. Signal Processing, August 201
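    As a structural illustration only, not the paper's formulation: the key device above is a one-dimensional optimal value function h(alpha), where each evaluation of h solves an inner convex problem and the outer problem is a one-dimensional minimization over alpha. The sketch below uses a toy inner problem (equality-constrained least squares whose right-hand side is alpha, so its optimal value is convex in alpha, mirroring the paper's conjecture) and minimizes h by golden-section search; the data, the inner problem, and the outer search are all hypothetical stand-ins, and POTDC itself iterates a convex linearization rather than a bracketing search.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
c = rng.standard_normal(5)

def h(alpha):
    """Optimal value of a toy inner convex problem parameterized by alpha.
    The optimal value of a convex problem with a right-hand-side parameter
    is a convex function of that parameter."""
    x = cp.Variable(5)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [c @ x == alpha])
    prob.solve()
    return prob.value

# Outer one-dimensional minimization of h over a bracket [lo, hi] by
# golden-section search (a stand-in for the POTDC iteration).
lo, hi = -5.0, 5.0
phi = (np.sqrt(5.0) - 1.0) / 2.0
a_, b_ = lo, hi
for _ in range(40):
    c_, d_ = b_ - phi * (b_ - a_), a_ + phi * (b_ - a_)
    if h(c_) < h(d_):
        b_ = d_
    else:
        a_ = c_
alpha_star = 0.5 * (a_ + b_)
print("approx. minimizer:", alpha_star, "value:", h(alpha_star))
```

    Because h here is convex, hence unimodal, the one-dimensional search converges to its global minimizer; the paper's contribution is establishing an analogous convex structure for the non-convex RAB problem.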

    A note on preconditioning weighted linear least squares, with consequences for weakly-constrained variational data assimilation

    The effect of preconditioning linear weighted least-squares problems using an approximation of the model matrix is analyzed, showing the interplay of the eigenstructures of the model and weighting matrices. A small example illustrates the potential inefficiency of such preconditioners. Consequences of these results in the context of the weakly-constrained 4D-Var data assimilation problem are finally discussed.
    Comment: 10 pages, 2 figures
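    A minimal numerical sketch of the phenomenon described, on assumed stand-in data: precondition the weighted normal matrix A^T W A with the matrix built from a perturbed approximation of A, and compare condition numbers. Whether the preconditioner helps depends on how the weighting matrix stretches the approximation error; all matrices below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 10
A = rng.standard_normal((m, n))                    # model matrix
A_tilde = A + 0.05 * rng.standard_normal((m, n))   # cheap approximation of A (hypothetical)
w = np.geomspace(1.0, 1e4, m)                      # ill-scaled weights
W = np.diag(w)

H = A.T @ W @ A                # weighted normal matrix
M = A_tilde.T @ W @ A_tilde    # preconditioner built from the approximation

def cond(B):
    ev = np.linalg.eigvalsh(B)   # ascending eigenvalues of a symmetric matrix
    return ev[-1] / ev[0]

# Spectrum of M^{-1} H (similar to the symmetrically preconditioned matrix,
# so its eigenvalues are real and positive).
ev_pre = np.linalg.eigvals(np.linalg.solve(M, H)).real
print("cond(H)        =", cond(H))
print("cond(M^{-1} H) =", ev_pre.max() / ev_pre.min())
```

    Varying the perturbation size or the spread of the weights exposes the effect the note analyzes: a preconditioner that is excellent for the unweighted problem can degrade sharply once the weighting is ill-scaled.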

    A second derivative SQP method: local convergence

    In [19], we gave global convergence results for a second-derivative SQP method for minimizing the exact ℓ1-merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps intended to improve the efficiency of the algorithm.

    Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining Bk: a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1-penalty function over a sequence of increasing values of the penalty parameter.

    Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, they must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and therefore achieves asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set.
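    The penalty-parameter strategy can be illustrated generically. This is not the paper's SQP algorithm, only the textbook ℓ1 exact-penalty mechanism on a hypothetical toy problem: minimize the ℓ1-merit function for a fixed sigma, then increase sigma until the constraint violation vanishes; exactness of the ℓ1 penalty means a finite sigma suffices.

```python
import numpy as np
from scipy.optimize import minimize

# Toy equality-constrained problem: min f(x) s.t. c(x) = 0 (hypothetical data).
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
c = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0])  # unit-circle constraint

def l1_merit(x, sigma):
    """Exact l1-merit function: f(x) + sigma * ||c(x)||_1."""
    return f(x) + sigma * np.abs(c(x)).sum()

# Re-minimize over an increasing sequence of penalty parameters until the
# constraint violation is (approximately) zero.  Nelder-Mead is used only
# because the l1 merit function is nonsmooth at feasibility.
x, sigma = np.zeros(2), 1.0
for _ in range(8):
    x = minimize(l1_merit, x, args=(sigma,), method="Nelder-Mead",
                 options={"xatol": 1e-9, "fatol": 1e-9}).x
    if np.abs(c(x)).sum() < 1e-6:
        break
    sigma *= 10.0
print("x* =", x, " sigma =", sigma, " violation =", np.abs(c(x)).sum())
```

    For this toy problem the constrained minimizer is the projection of (2, 1) onto the unit circle; the multiplier there is about 1.24, so the loop must raise sigma past that value before the penalty becomes exact, which is precisely the behavior an automatic update strategy has to detect.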

    The Matrix Ridge Approximation: Algorithms and Applications

    We are concerned with an approximation problem for a symmetric positive semidefinite matrix, motivated by a class of nonlinear machine learning methods. We discuss an approximation approach that we call matrix ridge approximation. In particular, we define the matrix ridge approximation as an incomplete matrix factorization plus a ridge term. Moreover, we present probabilistic interpretations of this approach using a normal latent variable model and a Wishart model. The latent variable model in turn leads us to an efficient EM iterative method for handling the matrix ridge approximation problem. Finally, we illustrate applications of the approach in multivariate data analysis. Empirical studies in spectral clustering and Gaussian process regression show that the matrix ridge approximation with the EM iteration is potentially useful.
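    For concreteness, here is a sketch of the approximation itself, S ≈ A A^T + delta I, computed via the closed form known from probabilistic PCA (top-m eigenvectors, with the ridge delta set to the mean of the trailing eigenvalues) rather than the paper's EM iteration; the data are synthetic.

```python
import numpy as np

def matrix_ridge(S, m):
    """Approximate an SPSD matrix S by A @ A.T + delta * I with rank(A) = m,
    using the eigendecomposition closed form known from probabilistic PCA."""
    evals, evecs = np.linalg.eigh(S)              # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]    # sort descending
    delta = evals[m:].mean()                      # ridge = mean of trailing eigenvalues
    # Scale the leading eigenvectors; clip in case delta exceeds an eigenvalue.
    A = evecs[:, :m] * np.sqrt(np.maximum(evals[:m] - delta, 0.0))
    return A, delta

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 40))
S = X @ X.T / 40                                  # synthetic SPSD matrix
A, delta = matrix_ridge(S, m=2)
print("delta =", delta)
print("Frobenius error =", np.linalg.norm(S - (A @ A.T + delta * np.eye(6))))
```

    The EM route developed in the paper targets the same latent variable model but avoids a full eigendecomposition, which is presumably what makes it attractive for large matrices.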

    A new family of high-resolution multivariate spectral estimators

    In this paper, we extend the Beta divergence family to multivariate power spectral densities. As in the scalar case, we show that it smoothly connects the multivariate Kullback-Leibler divergence with the multivariate Itakura-Saito distance. We then study a spectrum approximation problem, based on the Beta divergence family, which is related to a multivariate extension of the THREE spectral estimation technique. It is then possible to characterize a family of solutions to the problem. An upper bound on the complexity of these solutions is also provided. Simulations suggest that the most suitable solution in this family depends on the specific features required by the estimation problem.
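    The scalar Beta divergence that the paper extends can be written down directly. The sketch below (with hypothetical toy spectra) evaluates the generic formula and its two limits, showing numerically how it interpolates between generalized Kullback-Leibler (beta -> 1) and Itakura-Saito (beta -> 0).

```python
import numpy as np

def beta_div(x, y, beta):
    """Pointwise Beta divergence d_beta(x, y).
    beta = 1 gives generalized Kullback-Leibler, beta = 0 gives
    Itakura-Saito; both are limits of the generic formula."""
    if np.isclose(beta, 1.0):
        return x * np.log(x / y) - x + y
    if np.isclose(beta, 0.0):
        return x / y - np.log(x / y) - 1.0
    return (x ** beta + (beta - 1.0) * y ** beta
            - beta * x * y ** (beta - 1.0)) / (beta * (beta - 1.0))

# Two toy scalar power spectral densities on a uniform frequency grid.
theta = np.linspace(-np.pi, np.pi, 512, endpoint=False)
Phi1 = 1.0 / (1.05 - np.cos(theta))
Phi2 = 1.0 / (1.20 - 0.5 * np.cos(theta))

for beta in (0.0, 1e-4, 0.5, 1.0 - 1e-4, 1.0):
    # Riemann average over the grid approximates (1/2pi) * integral of d_beta.
    d = beta_div(Phi1, Phi2, beta).mean()
    print(f"beta = {beta:7.4f}   divergence = {d:.6f}")
```

    The values at beta = 1e-4 and beta = 1 - 1e-4 match the closed-form Itakura-Saito and Kullback-Leibler limits, which is the smooth-connection property the abstract claims for the multivariate extension.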