
    Conjugate gradient acceleration of iteratively re-weighted least squares methods

    Iteratively Re-weighted Least Squares (IRLS) is a method for solving minimization problems involving non-quadratic cost functions, possibly non-convex and non-smooth, which can nevertheless be described as the infimum over a family of quadratic functions. This characterization suggests an algorithmic scheme that solves a sequence of quadratic problems, each of which can be tackled efficiently by tools of numerical linear algebra. Its general scope and its usually simple implementation, transforming the initial non-convex and non-smooth minimization problem into a more familiar and easily solvable quadratic optimization problem, make it a versatile algorithm. However, despite its simplicity, versatility, and elegant analysis, the complexity of IRLS strongly depends on how the solution of the successive quadratic optimizations is addressed. For the important special case of compressed sensing and sparse recovery problems in signal processing, we investigate theoretically and numerically how accurately the quadratic problems need to be solved by the conjugate gradient (CG) method in each iteration in order to guarantee convergence. The CG method may significantly speed up the numerical solution of the quadratic subproblems, in particular when fast matrix-vector multiplication (exploiting, for instance, the FFT) is available for the matrix involved. In addition, we study convergence rates. Our modified IRLS method outperforms state-of-the-art first-order methods such as Iterative Hard Thresholding (IHT) and the Fast Iterative Soft-Thresholding Algorithm (FISTA) in many situations, especially in large dimensions. Moreover, IRLS is often able to recover sparse vectors from fewer measurements than required for IHT and FISTA.
    Comment: 40 pages
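    To make the scheme concrete, here is a minimal sketch, in Python, of one common IRLS variant for the noiseless sparse-recovery problem min ||x||_1 s.t. Ax = b, with each inner weighted least-squares system solved matrix-free by CG. The smoothing schedule and stopping rules are simplified assumptions, not the paper's exact choices.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def irls_cg(A, b, outer_iters=30, eps=1.0):
            """IRLS for min ||x||_1 s.t. Ax = b; inner systems solved by CG."""
            m, n = A.shape
            x = A.T @ np.linalg.solve(A @ A.T, b)  # least-norm start; assumes full row rank
            for _ in range(outer_iters):
                d = np.sqrt(x**2 + eps**2)         # smoothed |x_i|; keeps weights finite
                # Weighted system (A D A^T) y = b with D = diag(d), applied matrix-free,
                # so a fast matvec for A (e.g. exploiting an FFT) could be plugged in.
                op = LinearOperator((m, m), matvec=lambda v: A @ (d * (A.T @ v)))
                y, _ = cg(op, b, maxiter=10 * m)
                x = d * (A.T @ y)                  # minimizer of sum_i x_i^2/d_i s.t. Ax = b
                eps = max(eps / 2.0, 1e-9)         # crude smoothing schedule (an assumption)
            return x

        # Example: recover a 5-sparse vector from 40 Gaussian measurements
        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 100))
        x_true = np.zeros(100)
        x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
        print(np.linalg.norm(irls_cg(A, A @ x_true) - x_true))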

    The Module Isomorphism Problem Reconsidered

    Algorithms to decide isomorphism of modules have been honed continually over the last 30 years, and their range of applicability has been extended to include modules over a wide range of rings. Highly efficient computer implementations of these algorithms form the bedrock of systems such as GAP and MAGMA, at least with regard to computations with groups and algebras. By contrast, the fundamental problem of testing for isomorphism between other types of algebraic structures -- such as groups, and almost any type of algebra -- seems today as intractable as ever. What explains the vastly different complexity status of the module isomorphism problem? This paper argues that the apparent discrepancy is explained by nomenclature. Current algorithms to solve module isomorphism, while efficient and immensely useful, actually solve a highly constrained version of the problem. We report that module isomorphism in its general form is as hard as algebra isomorphism and graph isomorphism, both well-studied problems that are widely regarded as difficult. On a more positive note, for cyclic rings we describe a polynomial-time algorithm for the general module isomorphism problem. We also report on a MAGMA implementation of our algorithm.
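    As a hedged illustration of the cyclic-ring case (this is the classical Smith-normal-form argument over Z, not a reconstruction of the paper's algorithm): two finitely presented Z-modules are isomorphic exactly when their relation matrices yield the same free rank and the same nontrivial invariant factors.

        from sympy import Matrix, ZZ
        from sympy.matrices.normalforms import smith_normal_form

        def module_invariants(M):
            """Invariants of the Z-module Z^n / col-span(M), M an n x k integer matrix."""
            M = Matrix(M)
            snf = smith_normal_form(M, domain=ZZ)
            diag = [abs(snf[i, i]) for i in range(min(snf.shape))]
            rank = sum(1 for d in diag if d != 0)
            torsion = sorted(d for d in diag if d not in (0, 1))
            return M.rows - rank, torsion      # (free rank, invariant factors > 1)

        def z_modules_isomorphic(M1, M2):
            return module_invariants(M1) == module_invariants(M2)

        # Z/2 x Z/6 vs Z/12: same order (12) but different invariant factors
        print(z_modules_isomorphic([[2, 0], [0, 6]], [[12]]))  # False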

    Efficient implementation of linear programming decoding

    While linear programming (LP) decoding provides more flexibility for finite-length performance analysis than iterative message-passing (IMP) decoding, it is computationally more complex to implement in its original form, due both to the large size of the relaxed LP problem and to the inefficiency of general-purpose LP solvers. This paper explores ideas for fast LP decoding of low-density parity-check (LDPC) codes. We first prove, by modifying the previously reported adaptive LP decoding scheme to allow removal of unnecessary constraints, that LP decoding can be performed by solving a sequence of LP problems that contain at most one linear constraint derived from each parity-check constraint. Exploiting this property, we study a sparse interior-point implementation for solving this sequence of linear programs. Since the most complex part of each interior-point iteration is the solution of a (usually ill-conditioned) system of linear equations that determines the step direction, we propose a preconditioning algorithm to facilitate the iterative solution of such systems. The proposed preconditioning algorithm is similar to the encoding procedure of LDPC codes, and we demonstrate its effectiveness via both analytical methods and computer simulation results.
    Comment: 44 pages, submitted to IEEE Transactions on Information Theory, Dec. 200
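    As a rough sketch of the adaptive idea (the cut-search rule below is an assumption in the spirit of adaptive LP decoding, not this paper's exact procedure), one can start from the box-constrained LP and add only violated parity-check inequalities between solves:

        import numpy as np
        from scipy.optimize import linprog

        def alp_decode(H, gamma, max_rounds=50):
            """H: (m, n) binary parity-check matrix; gamma: per-bit LLR cost vector."""
            m, n = H.shape
            A_ub, b_ub = [], []                    # accumulated cut constraints
            for _ in range(max_rounds):
                res = linprog(gamma,
                              A_ub=np.array(A_ub) if A_ub else None,
                              b_ub=np.array(b_ub) if b_ub else None,
                              bounds=[(0, 1)] * n, method="highs")
                x = res.x
                new_cut = False
                for j in range(m):
                    nb = np.flatnonzero(H[j])      # variables in check j
                    V = [i for i in nb if x[i] > 0.5]
                    if len(V) % 2 == 0:            # V must have odd size:
                        k = nb[np.argmin(np.abs(x[nb] - 0.5))]  # flip bit nearest 1/2
                        V = [i for i in V if i != k] if k in V else V + [k]
                    row = np.zeros(n)
                    row[V] = 1.0
                    row[[i for i in nb if i not in V]] = -1.0
                    if row @ x > len(V) - 1 + 1e-9:  # violated odd-set inequality
                        A_ub.append(row)
                        b_ub.append(len(V) - 1)
                        new_cut = True
                if not new_cut:                    # integral x certifies the ML codeword
                    return x
            return x

        # Tiny demo: (7,4) Hamming check matrix, all-zero codeword, one unreliable bit
        H = np.array([[1,1,0,1,1,0,0], [1,0,1,1,0,1,0], [0,1,1,1,0,0,1]])
        gamma = np.array([1.0, 1.2, -0.4, 0.9, 1.1, 0.8, 1.3])
        print(alp_decode(H, gamma))                # decodes to the all-zero codeword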

    A Predual Proximal Point Algorithm solving a Non Negative Basis Pursuit Denoising model

    This paper develops an implementation of a Predual Proximal Point Algorithm (PPPA) solving a Non Negative Basis Pursuit Denoising model. The model imposes a constraint on the ℓ2 norm of the residual instead of penalizing it. The PPPA solves the predual of the problem with a Proximal Point Algorithm (PPA), and the minimization that needs to be performed at each PPA iteration is solved with a dual method. We prove that these dual variables converge to a solution of the initial problem. Our analysis shows that we turn a constrained, non-differentiable, convex problem into a short sequence of nice concave maximization problems. By nice, we mean that the functions being maximized are differentiable and their gradients are Lipschitz. The algorithm is easy to implement, easier to tune, and more general than the algorithms found in the literature. In particular, it can be applied to Basis Pursuit Denoising (BPDN) and Non Negative Basis Pursuit Denoising (NNBPDN), and it makes no assumption on the dictionary. We prove its convergence to the set of solutions of the model and provide some convergence rates. Experiments on image approximation show that the performance of the PPPA is at the current state of the art for BPDN.
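    For reference, a hedged statement of the two models in standard notation (the dictionary A, data y, noise level ε, and trade-off weight λ are assumed symbols, not necessarily the paper's):

        \[
        \text{(NNBPDN)}\quad \min_{x \ge 0} \|x\|_1 \quad \text{s.t.} \quad \|Ax - y\|_2 \le \varepsilon,
        \qquad
        \text{(BPDN)}\quad \min_{x} \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda \|x\|_1 .
        \]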

    From Parameter Tuning to Dynamic Heuristic Selection

    The balance between exploration and exploitation plays a crucial role in solving combinatorial optimization problems. This balance is reached by two general techniques: using an appropriate problem solver and setting its parameters properly. Both problems have been studied widely in the past, and research on them continues. The latest studies in the field of automated machine learning propose merging both problems, solving them at design time and later strengthening the results at runtime. To the best of our knowledge, a generalized approach for solving the parameter setting problem in heuristic solvers has not yet been proposed, and consequently the concept of merging heuristic selection and parameter control has not been introduced. In this thesis, we propose an approach for generic parameter control in meta-heuristics by means of reinforcement learning (RL). Going a step further, we suggest a technique for merging the heuristic selection and parameter control problems and solving them at runtime using an RL-based hyper-heuristic; a toy sketch of this joint selection idea follows the table of contents below. The evaluation of the proposed parameter control technique on the symmetric traveling salesman problem (TSP) demonstrated its applicability: it reaches the performance of the underlying meta-heuristic tuned online and used in isolation. Our approach provides results on par with the best underlying heuristics with tuned parameters.
    Contents:
    1 Introduction
      1.1 Motivation
      1.2 Research objective
      1.3 Solution overview
    2 Background and Related Work Analysis
      2.1 Optimization Problems and their Solvers
      2.2 Heuristic Solvers for Optimization Problems
      2.3 Setting Algorithm Parameters
      2.4 Combined Algorithm Selection and Hyper-Parameter Tuning Problem
      2.5 Conclusion on Background and Related Work Analysis
    3 Online Selection Hyper-Heuristic with Generic Parameter Control
      3.1 Combined Parameter Control and Algorithm Selection Problem
      3.2 Search Space Structure
      3.3 Parameter Prediction Process
      3.4 Low-Level Heuristics
      3.5 Conclusion of Concept
    4 Implementation Details
      4.2 Search Space
      4.3 Prediction Process
      4.4 Low-Level Heuristics
      4.5 Conclusion
    5 Evaluation
      5.1 Optimization Problem
      5.2 Environment Setup
      5.3 Meta-heuristics Tuning
      5.4 Concept Evaluation
      5.5 Analysis of HH-PC Settings
      5.6 Conclusion
    6 Conclusion
    7 Future Work
      7.1 Prediction Process
      7.2 Search Space
      7.3 Evaluations and Benchmarks
    Bibliography
    A Evaluation Results
      A.1 Results in Figures
      A.2 Results in numbers
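    A toy sketch of the joint selection idea promised above (the class, the arm encoding, and the improvement-based reward are illustrative assumptions; the thesis uses an RL-based hyper-heuristic, of which this epsilon-greedy bandit is a minimal instance):

        import random

        class SelectionHyperHeuristic:
            """Jointly picks a low-level heuristic and one of its parameter values."""
            def __init__(self, arms, epsilon=0.1):
                self.arms = arms                   # list of (heuristic_fn, param) pairs
                self.epsilon = epsilon
                self.value = [0.0] * len(arms)     # running mean reward per arm
                self.count = [0] * len(arms)

            def pick(self):
                if random.random() < self.epsilon: # explore
                    return random.randrange(len(self.arms))
                return max(range(len(self.arms)), key=lambda a: self.value[a])

            def step(self, solution, cost_fn):
                a = self.pick()
                heuristic, param = self.arms[a]
                candidate = heuristic(solution, param)
                reward = cost_fn(solution) - cost_fn(candidate)  # cost improvement
                self.count[a] += 1
                self.value[a] += (reward - self.value[a]) / self.count[a]
                return candidate if reward > 0 else solution

        # Hypothetical usage on a TSP tour (two_opt, segment_reverse, tour_length
        # are stand-in names, not from the thesis):
        #   arms = [(two_opt, 5), (two_opt, 20), (segment_reverse, 3)]
        #   hh = SelectionHyperHeuristic(arms)
        #   tour = hh.step(tour, tour_length)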

    Optimal impulse control problems and linear programming

    Optimal impulse control problems are, in general, difficult to solve. A current research goal is to isolate those problems that lead to tractable solutions. In this paper, we identify a special class of optimal impulse control problems that are easy to solve, meaning that the solution algorithms run in polynomial time and are therefore suitable for on-line implementation in real-time problems. We do this using a paradigm borrowed from the field of Operations Research. As our main result, we present a solution algorithm that converges to the exact solution in polynomial time. Our approach consists of approximating the optimal impulse control problem by a binary linear programming problem with a totally unimodular constraint matrix; solving the binary linear program is then equivalent to solving its linear relaxation. It turns out that any solution of the linear relaxation is a feasible solution for the optimal impulse control problem. Starting from the feasible solution obtained by solving the linear relaxation, we then find the optimal solution via local search.
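    A hedged illustration of the key fact exploited above: when the constraint matrix is totally unimodular and the right-hand side is integral, the LP relaxation of the binary program already has an integral optimal vertex. The data here are made up; an interval (consecutive-ones) matrix is a standard example of a totally unimodular matrix.

        import numpy as np
        from scipy.optimize import linprog

        c = np.array([3.0, 2.0, 4.0, 1.0])     # illustrative impulse costs
        A = np.array([[1, 1, 0, 0],            # consecutive-ones rows: totally unimodular
                      [0, 1, 1, 0],
                      [0, 0, 1, 1]])
        b = np.ones(3)                         # cover every window at least once

        # Relaxation of:  min c.x  s.t.  A x >= 1,  x in {0,1}^4
        res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, 1)] * 4, method="highs")
        print(res.x)                           # integral vertex, here [0, 1, 0, 1]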