
    An inexact Newton-Krylov algorithm for constrained diffeomorphic image registration

    We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on H^1- or H^2-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field, rendering the deformation incompressible and thus ensuring that the determinant of the deformation gradient is equal to one up to numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. We use a preconditioned, globalized, matrix-free, inexact Newton-Krylov method for numerical optimization. A parameter continuation scheme is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton-Krylov methods with a globalized preconditioned gradient descent. We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation regularity, and computational efficiency, with an optional control on local mass conservation. The Newton-Krylov methods clearly outperform the Picard method if high accuracy of the inversion is required. Comment: 32 pages; 10 figures; 9 tables
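
    The abstract describes the formulation only in words; as a hedged sketch of the kind of PDE-constrained problem it outlines (notation assumed here: m is the transported image intensity, m_0 the template, m_1 the reference image, v the velocity control, β the regularization weight), it can be written as

```latex
\begin{aligned}
\min_{v}\quad & \frac{1}{2}\int_{\Omega}\bigl(m(x,1)-m_{1}(x)\bigr)^{2}\,dx
                \;+\;\frac{\beta}{2}\,\lVert v\rVert_{H^{k}}^{2},\qquad k\in\{1,2\},\\
\text{s.t.}\quad & \partial_{t} m + v\cdot\nabla m = 0 \quad\text{in }\Omega\times(0,1],\\
                 & m(x,0) = m_{0}(x),\\
                 & \nabla\cdot v = 0 \quad\text{(optional incompressibility constraint)}.
\end{aligned}
```

    The H^1- or H^2-seminorm regularization corresponds to k = 1 or k = 2; the divergence-free constraint is the optional control on local mass conservation mentioned above.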

    Shape Matching and Object Recognition

    We approach recognition in the framework of deformable shape matching, relying on a new algorithm for finding correspondences between feature points. This algorithm sets up correspondence as an integer quadratic programming problem, where the cost function has terms based on similarity of corresponding geometric blur point descriptors as well as the geometric distortion between pairs of corresponding feature points. The algorithm handles outliers, and thus enables matching of exemplars to query images in the presence of occlusion and clutter. Given the correspondences, we estimate an aligning transform, typically a regularized thin plate spline, resulting in a dense correspondence between the two shapes. Object recognition is handled in a nearest neighbor framework where the distance between exemplar and query is the matching cost between corresponding points. We show results on two datasets. One is the Caltech 101 dataset (Fei-Fei, Fergus and Perona), a challenging dataset with large intraclass variation. Our approach yields a 45% correct classification rate in addition to localization. We also show results for localizing frontal and profile faces that are comparable to special-purpose approaches tuned to faces.
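
    The abstract fixes the structure of the cost (descriptor similarity plus pairwise geometric distortion) but not the IQP solver itself. The sketch below is a toy Python stand-in under assumptions of my own: generic descriptor distances in place of geometric blur, equal point counts in both shapes, and a swap-based local search seeded by a unary-only assignment instead of the paper's integer quadratic program.

```python
# Toy sketch of correspondence as a quadratic assignment cost:
# unary terms compare point descriptors, pairwise terms penalize
# geometric distortion between pairs of matched points.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def matching_cost(assign, desc_p, desc_q, pts_p, pts_q, lam=1.0):
    """Total cost of a candidate assignment (assign[i] = index in Q matched to point i in P)."""
    unary = np.sum(np.linalg.norm(desc_p - desc_q[assign], axis=1))
    d_p = cdist(pts_p, pts_p)                   # pairwise distances within shape P
    d_q = cdist(pts_q[assign], pts_q[assign])   # distances between the matched points in Q
    pairwise = np.sum(np.abs(d_p - d_q)) / 2.0  # each pair counted once
    return unary + lam * pairwise

def greedy_match(desc_p, desc_q, pts_p, pts_q, lam=1.0, sweeps=5):
    """Initialize from a unary-only assignment, then swap-improve the quadratic cost."""
    _, assign = linear_sum_assignment(cdist(desc_p, desc_q))
    assign = assign.copy()
    best = matching_cost(assign, desc_p, desc_q, pts_p, pts_q, lam)
    n = len(assign)
    for _ in range(sweeps):
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                assign[i], assign[j] = assign[j], assign[i]
                cost = matching_cost(assign, desc_p, desc_q, pts_p, pts_q, lam)
                if cost < best:
                    best, improved = cost, True
                else:
                    assign[i], assign[j] = assign[j], assign[i]  # undo the swap
        if not improved:
            break
    return assign, best
```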

    Similarity in metaheuristics: a gentle step towards a comparison methodology

    Metaheuristics have proven efficient in applications where exact algorithms fall short. In the last decade, many of these algorithms have been introduced and used in a wide range of applications. Nevertheless, most of these approaches share similar components, which raises concerns about their novelty or contribution. Thus, in this paper, a pool template is proposed and used to categorize algorithm components, permitting them to be analyzed in a structured way. We exemplify its use by means of continuous optimization metaheuristics, and provide measures and a methodology to identify their similarities and novelties. Finally, a discussion at the component level is provided in order to point out possible design differences and commonalities.
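
    The paper's similarity measures are not given in the abstract; as a minimal, assumption-labeled illustration of comparing two metaheuristics by the components they draw from a shared pool, one could score the overlap of their component sets. The component names below are invented placeholders, and the Jaccard index is my choice, not necessarily the paper's measure.

```python
# Minimal sketch: represent each metaheuristic as the set of pool-template
# components it uses, then score similarity as set overlap (Jaccard index).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Illustrative placeholder component sets, not the paper's actual pool.
pso = {"population", "velocity_update", "global_best_memory", "random_perturbation"}
de  = {"population", "difference_vector_recombination", "greedy_selection", "random_perturbation"}

print(f"PSO vs DE component similarity: {jaccard(pso, de):.2f}")
```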

    Convex Optimization in Julia

    This paper describes Convex, a convex optimization modeling framework in Julia. Convex translates problems from a user-friendly functional language into an abstract syntax tree describing the problem. This concise representation of the global structure of the problem allows Convex to infer whether the problem complies with the rules of disciplined convex programming (DCP), and to pass the problem to a suitable solver. These operations are carried out in Julia using multiple dispatch, which dramatically reduces the time required to verify DCP compliance and to parse a problem into conic form. Convex then automatically chooses an appropriate backend solver to solve the conic form problem. Comment: To appear in Proceedings of the Workshop on High Performance Technical Computing in Dynamic Languages (HPTCDL) 2014
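
    Convex itself is implemented in Julia and relies on multiple dispatch; as a language-agnostic toy of the core idea the abstract describes (bottom-up curvature inference over an expression tree), here is a small Python sketch. The node classes and the single composition rule are illustrative simplifications, not Convex's API; real DCP analysis also tracks sign and monotonicity.

```python
# Toy sketch of DCP-style curvature propagation over an expression tree.
CONVEX, CONCAVE, AFFINE, UNKNOWN = "convex", "concave", "affine", "unknown"

class Node:
    def curvature(self): ...

class Var(Node):
    def curvature(self): return AFFINE

class Sum(Node):
    def __init__(self, *args): self.args = args
    def curvature(self):
        curvs = {a.curvature() for a in self.args}
        if curvs <= {AFFINE}: return AFFINE            # sum of affine terms is affine
        if curvs <= {AFFINE, CONVEX}: return CONVEX    # convex + affine stays convex
        if curvs <= {AFFINE, CONCAVE}: return CONCAVE  # concave + affine stays concave
        return UNKNOWN                                 # mixing convex and concave: not DCP-verifiable

class Square(Node):  # convex atom; accepted here only when its argument is affine (a conservative rule)
    def __init__(self, arg): self.arg = arg
    def curvature(self):
        return CONVEX if self.arg.curvature() == AFFINE else UNKNOWN

x = Var()
expr = Sum(Square(x), x)   # x^2 + x
print(expr.curvature())    # "convex" -> acceptable as a minimization objective under DCP
```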

    New methods for generating populations in Markov network based EDAs: Decimation strategies and model-based template recombination

    Methods for generating a new population are a fundamental component of estimation of distribution algorithms (EDAs). They serve to transfer the information contained in the probabilistic model to the newly generated population. In EDAs based on Markov networks, methods for generating new populations usually discard information contained in the model to gain efficiency. Other methods, like Gibbs sampling, use information about all interactions in the model but are computationally very costly. In this paper we propose new methods for generating new solutions in EDAs based on Markov networks. We introduce approaches based on inference methods for computing the most probable configurations and on model-based template recombination. We show that the application of different variants of inference methods can increase the EDAs’ convergence rate and reduce the number of function evaluations needed to find the optimum of binary and non-binary discrete functions.
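
    The abstract does not specify which inference methods are used; as a minimal, assumption-labeled sketch of generating a solution from a model's most probable configuration, the following computes the MPC of a chain-structured pairwise Markov network with a Viterbi-style max-product recursion. The chain structure and the log-potential arrays are simplifications of my own.

```python
import numpy as np

def most_probable_chain(unary, pairwise):
    """Max-product (Viterbi-style) inference for a chain-structured pairwise Markov network.
    unary:    (n, k) log-potentials for n variables with k states each
    pairwise: (n-1, k, k) log-potentials between consecutive variables
    Returns the most probable configuration as a list of state indices."""
    n, k = unary.shape
    score = unary[0].copy()
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        cand = score[:, None] + pairwise[t - 1] + unary[t][None, :]
        back[t] = np.argmax(cand, axis=0)   # best predecessor state for each current state
        score = np.max(cand, axis=0)
    config = [int(np.argmax(score))]        # backtrack from the best final state
    for t in range(n - 1, 0, -1):
        config.append(int(back[t][config[-1]]))
    return config[::-1]

# Tiny example: 4 binary variables whose pairwise potentials reward agreement with neighbors.
rng = np.random.default_rng(0)
unary = rng.normal(size=(4, 2))
pairwise = np.tile(np.array([[1.0, -1.0], [-1.0, 1.0]]), (3, 1, 1))
print(most_probable_chain(unary, pairwise))
```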

    NATURAL ALGORITHMS IN DIGITAL FILTER DESIGN

    Digital filters are an important part of Digital Signal Processing (DSP), which plays a vital role in the modern world, but their design is a complex task requiring a great deal of specialised knowledge. An analysis of this design process is presented, which identifies opportunities for the application of optimisation. The Genetic Algorithm (GA) and Simulated Annealing are problem-independent and increasingly popular optimisation techniques. They do not require detailed prior knowledge of the nature of a problem, and are unaffected by a discontinuous search space, unlike traditional methods such as calculus and hill-climbing. Potential applications of these techniques to the filter design process are discussed and illustrated with practical results. Investigations into the design of Frequency Sampling (FS) Finite Impulse Response (FIR) filters using a hybrid GA/hill-climber proved especially successful, improving on published results. An analysis of the search space for FS filters provided useful information on the performance of the optimisation technique. The ability of the GA to trade off a filter's performance with respect to several design criteria simultaneously, without intervention by the designer, is also investigated. Methods of simplifying the design process by using this technique are presented, together with an analysis of the difficulty of the non-linear FIR filter design problem from a GA perspective. This gave insight into the fundamental nature of the optimisation problem, and also suggested future improvements. The results gained from these investigations allowed the framework for a potential 'intelligent' filter design system to be proposed, in which embedded expert knowledge, Artificial Intelligence techniques and traditional design methods work together. This could deliver a single tool capable of designing a wide range of filters with minimal human intervention, and of proposing solutions to incomplete problems. It could also provide the basis for the development of tools for other areas of DSP system design.
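
    The thesis is summarized here only at a high level; as a minimal, assumption-labeled sketch of the hybrid GA/hill-climber idea applied to Frequency Sampling FIR design, the toy below evolves two transition-band samples of a 33-tap linear-phase lowpass filter to minimize peak stopband ripple, then polishes the best individual with a coordinate hill-climb. Filter length, band edges, and GA settings are illustrative choices, not the thesis's.

```python
import numpy as np

N, GRID = 33, 4096             # odd filter length (type-I linear phase), frequency grid size
PASS_BINS, N_TRANS = 6, 2      # passband bins 0..5, transition bins 6..7; rest are stopband

def fs_filter(trans):
    """Frequency-sampling design: fixed passband/stopband samples plus evolved transition samples."""
    half = (N + 1) // 2
    A = np.zeros(half)
    A[:PASS_BINS] = 1.0
    A[PASS_BINS:PASS_BINS + N_TRANS] = trans
    A = np.concatenate([A, A[1:][::-1]])              # mirror so A[N-k] = A[k]
    k = np.arange(N)
    H = A * np.exp(-1j * np.pi * k * (N - 1) / N)     # linear-phase frequency samples
    return np.real(np.fft.ifft(H))                    # impulse response

def fitness(trans):
    """Peak stopband magnitude on a dense frequency grid (lower is better)."""
    mag = np.abs(np.fft.fft(fs_filter(trans), GRID))[:GRID // 2]
    stop = int(np.ceil((PASS_BINS + N_TRANS) / N * GRID))
    return mag[stop:].max()

def tournament(pop, scores, rng):
    i, j = rng.integers(len(pop), size=2)             # binary tournament selection
    return pop[i] if scores[i] < scores[j] else pop[j]

def ga_hillclimb(pop_size=30, gens=60, seed=1):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, size=(pop_size, N_TRANS))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        children = [pop[scores.argmin()].copy()]      # elitism
        while len(children) < pop_size:
            a, b = tournament(pop, scores, rng), tournament(pop, scores, rng)
            w = rng.uniform()
            child = w * a + (1.0 - w) * b             # arithmetic crossover
            child += rng.normal(0.0, 0.05, N_TRANS)   # Gaussian mutation
            children.append(np.clip(child, 0.0, 1.0))
        pop = np.array(children)
    best = min(pop, key=fitness)
    step = 0.01                                       # coordinate hill-climb to polish the GA's best
    while step > 1e-4:
        moved = False
        for i in range(N_TRANS):
            for d in (step, -step):
                cand = best.copy()
                cand[i] = np.clip(cand[i] + d, 0.0, 1.0)
                if fitness(cand) < fitness(best):
                    best, moved = cand, True
        if not moved:
            step /= 2.0
    return best, fitness(best)

if __name__ == "__main__":
    trans, ripple = ga_hillclimb()
    print("transition samples:", trans, "peak stopband magnitude:", ripple)
```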

    An Efficient Paradigm for Feasibility Guarantees in Legged Locomotion

    Developing feasible body trajectories for legged systems on arbitrary terrains is a challenging task. Given some contact points, the trajectories for the Center of Mass (CoM) and body orientation, designed to move the robot, must satisfy crucial constraints to maintain balance and to avoid violating physical actuation and kinematic limits. In this paper, we present a paradigm that allows feasible trajectories to be designed in an efficient manner. In continuation of our previous work, we extend the notion of the 2D feasible region, in which static balance and the satisfaction of actuation limits are guaranteed whenever the projection of the CoM lies inside the proposed admissible region. Here we develop a general formulation of the improved feasible region that guarantees dynamic balance alongside the satisfaction of both actuation and kinematic limits for arbitrary terrains in an efficient manner. To incorporate the feasibility of the kinematic limits, we introduce an algorithm that computes the reachable region of the CoM. Furthermore, we propose an efficient planning strategy that utilizes the improved feasible region to design feasible CoM and body orientation trajectories. Finally, we validate the capabilities of the improved feasible region and the effectiveness of the proposed planning strategy using simulations and experiments on the HyQ robot, comparing them to a previously developed heuristic approach. Various scenarios and terrains that mimic confined and challenging environments are used for the validation. Comment: 17 pages, 13 figures, submitted to Transactions on Robotics
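
    The improved feasible region encodes dynamic balance together with actuation and kinematic limits, which the abstract does not detail; as a minimal geometric stand-in for the test "is the CoM projection inside a 2D admissible region", the sketch below performs a point-in-convex-polygon check with made-up coordinates.

```python
import numpy as np

def com_inside_region(com_xy, region_vertices):
    """Check whether the CoM's ground projection lies inside a convex 2D feasibility
    polygon (vertices in counter-clockwise order). This is only a geometric stand-in;
    the paper's region also encodes actuation and kinematic limits, not modeled here."""
    v = np.asarray(region_vertices, dtype=float)
    p = np.asarray(com_xy, dtype=float)
    edges = np.roll(v, -1, axis=0) - v      # edge vectors around the polygon
    to_point = p - v                        # vectors from each vertex to the query point
    cross = edges[:, 0] * to_point[:, 1] - edges[:, 1] * to_point[:, 0]
    return bool(np.all(cross >= 0.0))       # inside (or on the boundary) if all cross products agree

# Example: a square region and two candidate CoM projections (coordinates are illustrative).
region = [(0.0, 0.0), (0.4, 0.0), (0.4, 0.4), (0.0, 0.4)]
print(com_inside_region((0.2, 0.1), region))   # True
print(com_inside_region((0.5, 0.1), region))   # False
```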