59 research outputs found
Multiscale methods for the solution of the Helmholtz and Laplace equations
This paper presents some numerical results about applications of multiscale techniques to boundary integral equations. The numerical schemes developed here are to some extent based on the results of the papers [6]–[10]. Section 2 gives a short description of the theory of generalized Petrov-Galerkin methods for elliptic periodic pseudodifferential equations, in a framework covering classical Galerkin schemes, collocation, and other methods. A general setting of multiresolution analysis generated by periodized scaling functions, as well as a general stability and convergence theory for such a framework, is outlined. The key to the stability analysis is a local principle due to one of the authors. Its applicability relies here on a sufficiently general version of a so-called discrete commutator property of wavelet bases (see [6]). These results establish important prerequisites for developing and analysing methods for the fast solution of the resulting linear systems (Section 2.4). The crucial fact exploited by these methods is that the stiffness matrix relative to an appropriate wavelet basis can be approximated well by a sparse matrix, while the solution to the perturbed problem still exhibits the same asymptotic accuracy as the solution to the full discrete problem. It can be shown (see [7]) that the overall computational work needed to realize a required accuracy is of the order N (log N)^b, where N is the number of unknowns and b is some real number.
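The compression idea in this abstract can be illustrated with a toy sketch (hypothetical, not the authors' actual scheme): represent a dense matrix with smooth off-diagonal kernel decay in an orthonormal Haar wavelet basis, drop entries below a threshold, and check that the sparsified system still yields nearly the same solution. The kernel, sizes, and threshold below are all made-up illustration choices.

```python
import numpy as np

def haar_matrix(n):
    # Orthonormal Haar wavelet transform matrix for n a power of two,
    # built recursively from averaging (top) and differencing (bottom) rows.
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
K = 1.0 / (1.0 + np.abs(i - j))   # toy kernel with smooth off-diagonal decay
K += n * np.eye(n)                # dominant diagonal keeps the system well conditioned

W = haar_matrix(n)
A = W @ K @ W.T                   # "stiffness matrix" in the wavelet basis

# Compress: zero out entries below a small fraction of the largest entry.
thresh = 1e-4 * np.abs(A).max()
A_sparse = np.where(np.abs(A) >= thresh, A, 0.0)

b = W @ np.ones(n)
x_full = np.linalg.solve(A, b)
x_comp = np.linalg.solve(A_sparse, b)

density = (A_sparse != 0).mean()                                  # fraction of kept entries
rel_err = np.linalg.norm(x_full - x_comp) / np.linalg.norm(x_full)
print(f"kept {density:.0%} of entries, relative solution error {rel_err:.1e}")
```

Many wavelet-basis entries between well-separated basis functions are tiny because the kernel is smooth away from the diagonal, so most of the matrix can be dropped while the solution changes only slightly, which is the essence of the asymptotic-accuracy claim above.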
A range division and contraction approach for nonconvex quadratic program with quadratic constraints
Complexity Issues in Global Optimization: A Survey
Introduction Complexity theory refers to the asymptotic analysis of problems and algorithms. How efficient is an algorithm for a particular optimization problem as the number of variables gets large? Are there problems for which no efficient algorithm exists? These are the questions that complexity theory attempts to address. The theory originated in work by Hartmanis and Stearns (1965). By now much is known about complexity issues in nonlinear optimization. In particular, our recent book, Vavasis (1991), contains the details of many of the results surveyed in this chapter. We begin the discussion with a look at convex problems in the next section. These problems generally have efficient algorithms. In Section 3 we study the complexity of two nonconvex problems that also have efficient algorithms because of special structure. In Section 4, we look into hardness results (proofs of the nonexistence of efficient algorithms) for general nonconvex problems. Finally, in Se
Non-negative Spectral Learning for Linear Sequential Systems
The method of moments (MoM) has recently become an appealing alternative to standard iterative approaches like Expectation Maximization (EM) for learning latent variable models. In addition, MoM-based algorithms come with global convergence guarantees in the form of finite sample bounds. However, given enough computation time, iterative approaches that use restarts and heuristics to avoid local optima often achieve better performance. We believe that this performance gap is in part due to the fact that MoM-based algorithms can output negative probabilities. By constraining the search space, we propose a non-negative spectral algorithm (NNSpectral) that avoids computing negative probabilities by design. NNSpectral is compared to other MoM-based algorithms and EM on synthetic problems of the PAutomaC challenge. Not only does NNSpectral outperform other MoM-based algorithms, it also achieves very competitive results in comparison to EM.
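The negative-probability issue described above can be sketched on a toy example (this is an illustration of the general idea, not the NNSpectral algorithm itself): a truncated SVD of a noisy non-negative moment matrix is not sign-constrained and may produce negative entries, whereas a non-negative factorization, here standard Lee-Seung multiplicative updates, stays non-negative by construction. All sizes, noise levels, and iteration counts are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a rank-2 non-negative "probability" matrix (toy stand-in
# for a Hankel matrix of string probabilities).
U, V = rng.random((8, 2)), rng.random((2, 8))
P = U @ V
P /= P.sum()

# Noisy empirical estimate, as a moment-based method would observe it.
P_hat = P + rng.normal(scale=2e-3, size=P.shape)

# Unconstrained spectral estimate: rank-2 truncated SVD, no sign constraint.
u, s, vt = np.linalg.svd(P_hat)
P_svd = (u[:, :2] * s[:2]) @ vt[:2]
print(f"smallest entry of SVD estimate: {P_svd.min():.2e}")

# Non-negative alternative: multiplicative-update NMF (Frobenius loss).
# Updates only multiply non-negative factors, so entries never go negative.
X = np.clip(P_hat, 1e-12, None)
W, H = rng.random((8, 2)), rng.random((2, 8))
for _ in range(500):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ (H @ H.T) + 1e-12)
P_nmf = W @ H
```

The constrained estimate trades the closed-form guarantee of the SVD for non-negativity by design, which mirrors the trade-off the abstract describes.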
- …