
    Refraction-corrected ray-based inversion for three-dimensional ultrasound tomography of the breast

    Ultrasound tomography (UST) has seen a revival of interest in the past decade, especially for breast imaging, due to improvements in both ultrasound and computing hardware. In particular, three-dimensional ultrasound tomography, a fully tomographic method in which the medium to be imaged is surrounded by ultrasound transducers, has become feasible. In this paper, a comprehensive derivation and study of a robust framework for large-scale bent-ray ultrasound tomography in 3D for a hemispherical detector array is presented. Two ray-tracing approaches are derived and compared. More significantly, the problem of linking the rays between emitters and receivers, which is challenging in 3D due to the high number of degrees of freedom for the trajectory of rays, is analysed both as a minimisation and as a root-finding problem. The ray-linking problem is parameterised for a convex detection surface, and three robust, accurate, and efficient ray-linking algorithms are formulated and demonstrated. To stabilise these methods, novel adaptive-smoothing approaches are proposed that control the conditioning of the update matrices to ensure accurate linking. The nonlinear UST problem of estimating the sound speed is recast as a series of linearised subproblems, each solved using the above ray-linking algorithms within a steepest-descent scheme. The whole imaging algorithm is demonstrated to be robust and accurate on realistic data simulated using a full-wave acoustic model and an anatomical breast phantom, incorporating the errors due to time-of-flight picking that would be present with measured data. This method can be used to produce low-artefact, quantitatively accurate 3D sound speed maps. In addition to being useful in their own right, such 3D sound speed maps can be used to initialise full-wave inversion methods, or as an input to photoacoustic tomography reconstructions.
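The root-finding view of ray linking can be illustrated with a deliberately simplified 2D sketch (not the paper's 3D hemispherical setup): in a two-layer medium, the takeoff angle of the ray is adjusted until the refracted ray lands on the receiver. The two-layer geometry and all numerical values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical 2-layer medium: thicknesses h and sound speeds c (illustrative values).
h = np.array([0.03, 0.03])      # layer thicknesses [m]
c = np.array([1500.0, 1550.0])  # sound speeds [m/s]

def landing_offset(theta):
    """Horizontal offset at which a ray with takeoff angle `theta` (measured from
    the vertical) exits the bottom of the medium, refracting at the interface
    according to Snell's law: sin(theta2)/c2 = sin(theta1)/c1."""
    x = h[0] * np.tan(theta)              # straight segment in layer 1
    theta2 = np.arcsin(np.sin(theta) * c[1] / c[0])
    return x + h[1] * np.tan(theta2)      # straight segment in layer 2

# Ray linking as root finding: choose theta so the ray lands on the receiver.
x_receiver = 0.02
theta_star = brentq(lambda t: landing_offset(t) - x_receiver, 1e-6, 0.6)
```

In 3D, the same mismatch-to-receiver residual becomes a system in two takeoff angles, which is what makes the minimisation and root-finding formulations discussed above nontrivial.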

    COMPASS

    This thesis presents COMPASS, a globally convergent algorithm for solving the mixed complementarity problem (MCP). The underlying mathematical theory is based on the PATH solver, the standard solver for this class of problems. COMPASS is released under the GNU General Public License and is therefore free software. The foundation of COMPASS is a stabilized Newton method: the MCP is reformulated as the problem of finding a zero of a nonsmooth vector-valued function, the normal equation. A general first-order approximation of this normal equation can be posed as a linear mixed complementarity problem and solved with a pivot technique. Its solution corresponds to the Newton point of the classical Newton method and is referred to as such here. Besides this solution, the pivot technique also generates a piecewise linear path connecting the previous iterate to the Newton point. Whether the Newton point is accepted as the next iterate depends on a non-monotone stabilization scheme that incorporates a watchdog technique. A smooth merit function, based on a modified Fischer-Burmeister function, measures progress and has a zero at a solution of the complementarity problem. Non-monotone descent criteria are defined in terms of the improvement of this merit function; they are not tested at every step, in order to minimize the number of function and gradient evaluations. If the Newton point satisfies the descent criteria, it becomes the next iterate. Otherwise, the algorithm returns to the last checkpoint, the most recent point that passed the descent test, and searches the path between this checkpoint and the Newton point computed after it (the Newton point is stored after each checkpoint) for a point satisfying the descent criteria. If no suitable point is found, the algorithm falls back to the bestpoint, the point with the lowest merit-function value so far, and takes a projected gradient step. A proof of global convergence is included. The algorithm was implemented in MATLAB/Octave as an integral part of this thesis and is freely available for download at http://www.mat.univie.ac.at/~neum/software/compass/COMPASS.html. Simulation results on randomly generated problems document that the algorithm successfully solves problems with at least up to 200 variables. The thesis also contains a short historical introduction to the connection between mixed complementarity problems and economic models, together with a worked example showing how such models can be formulated as complementarity problems.
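As a sketch of the merit-function idea (not the COMPASS implementation itself), the plain Fischer-Burmeister function is the standard building block: it vanishes exactly at complementarity, so half the squared norm of its componentwise application yields a smooth merit function that is zero at a solution. The tiny example problem below is an illustrative assumption.

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b.
    phi(a, b) = 0  <=>  a >= 0, b >= 0 and a*b = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def merit(x, F):
    """Smooth merit function 0.5 * ||phi(x, F(x))||^2 for the complementarity
    problem x >= 0, F(x) >= 0, x . F(x) = 0; zero exactly at a solution."""
    phi = fischer_burmeister(x, F(x))
    return 0.5 * np.dot(phi, phi)

# Tiny illustrative problem: F(x) = x - 1, solved by x = 1 (x > 0, F(x) = 0).
F = lambda x: x - 1.0
x_sol = np.array([1.0])
```

Descent criteria of the kind described above are then stated in terms of (non-monotone) decrease of this merit value along the piecewise linear path.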

    Méthodes sans factorisation pour l’optimisation non linéaire

    ABSTRACT: This thesis focuses on the mathematical formulation, analysis, and implementation of two factorization-free methods for nonlinear constrained optimization. In large-scale optimization, the Jacobian of the constraints may not be available in matrix form; only its action and that of its transpose on a vector are. Factorization-free optimization employs abstract linear operators representing the Jacobian or Hessian matrices. Therefore, only operator-vector products are allowed, and direct linear algebra is replaced by iterative methods. Besides these implementation restrictions, a difficulty inherent to factorization-free optimization methods is the control of the inaccuracy of linear system solves: the computed direction must be sufficiently accurate to ensure convergence. We first describe a factorization-free implementation of a classical augmented Lagrangian method that may use quasi-Newton approximations of second derivatives. This method is applied to problems with thousands of variables and constraints coming from aircraft structural design optimization, for which methods based on factorizations fail. Results show that it is a viable approach for these problems. To obtain a method with a faster convergence rate, we then present an algorithm that uses a proximal augmented Lagrangian as merit function and asymptotically turns into a stabilized sequential quadratic programming method. The use of limited-memory BFGS approximations of the Hessian of the Lagrangian, combined with regularization of the constraints, leads to symmetric quasi-definite linear systems. Because such systems may be interpreted as the KKT conditions of linear least-squares problems, they can be solved efficiently and inexactly with an appropriate Krylov method. The inaccuracy of their solutions is controlled by a stopping criterion that is easy to implement. Numerical tests demonstrate the effectiveness and robustness of our method, which compares very favorably with IPOPT, especially on degenerate problems for which the LICQ is not satisfied at the solution or during the minimization process. Finally, we present NLP.py, an ecosystem for optimization algorithm development in Python, aimed both at researchers in optimization and at students eager to discover or deepen their knowledge of optimization. NLP.py provides access to a set of building blocks constituting the most important elements of continuous optimization methods. With these blocks, users can implement their own algorithms while focusing on the logic of the algorithm rather than on the technicalities of its implementation.
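The operator-only setting can be sketched with SciPy (a sketch of the general idea, not the thesis's implementation): the Jacobian is exposed only through its products with vectors and those of its transpose, and a least-squares system is solved inexactly by a Krylov method whose tolerances play the role of the inexactness control. The dense backing matrix here exists purely for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

# Hypothetical constraint Jacobian, available only through its action on
# vectors (backed by a dense random matrix purely for this illustration).
rng = np.random.default_rng(0)
J_dense = rng.standard_normal((40, 25))

J = LinearOperator(
    shape=(40, 25),
    matvec=lambda v: J_dense @ v,     # action of J on a vector
    rmatvec=lambda w: J_dense.T @ w,  # action of J^T on a vector
)

# Inexact solve of min ||J x - b|| by a Krylov method (LSMR); the stopping
# tolerances atol/btol control how accurate the computed direction is.
b = rng.standard_normal(40)
x = lsmr(J, b, atol=1e-8, btol=1e-8)[0]
```

No factorization of J is ever formed; only the two operator-vector products are needed, which is exactly the restriction described above.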

    Truncated Nonsmooth Newton Multigrid for phase-field brittle-fracture problems

    We propose the Truncated Nonsmooth Newton Multigrid Method (TNNMG) as a solver for the spatial problems of the small-strain brittle-fracture phase-field equations. TNNMG is a nonsmooth multigrid method that can solve biconvex, block-separably nonsmooth minimization problems in roughly the time of solving one linear system of equations. It exploits the variational structure inherent in the problem, and handles the pointwise irreversibility constraint on the damage variable directly, without penalization or the introduction of a local history field. Memory consumption is significantly lower compared to approaches based on direct solvers. In this paper we introduce the method and show how it can be applied to several established models of phase-field brittle fracture. We then prove convergence of the solver to a solution of the nonsmooth Euler-Lagrange equations of the spatial problem for any load and initial iterate. Numerical comparisons to an operator-splitting algorithm show a speed increase of more than one order of magnitude, without loss of robustness.
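A minimal sketch of what "handling the irreversibility constraint directly" can mean (this is not the multigrid solver itself): at each point, the damage update minimizes a convex quadratic subject to the bound d_prev <= d <= 1, and the bound is enforced exactly by clamping the unconstrained minimizer rather than by a penalty term or a history field. The scalar energy and its coefficients are illustrative assumptions.

```python
import numpy as np

def damage_update(a, g, d_prev):
    """Pointwise damage update: minimize the convex quadratic
    0.5*a*d**2 - g*d subject to the irreversibility bound d_prev <= d <= 1.
    The constrained minimizer is the unconstrained one clamped to the bounds."""
    d_unconstrained = g / a              # argmin of 0.5*a*d^2 - g*d
    return np.clip(d_unconstrained, d_prev, 1.0)

# Illustrative pointwise data: three material points with previous damage d_prev.
d_prev = np.array([0.0, 0.4, 0.9])
a = np.array([2.0, 2.0, 2.0])            # quadratic coefficients
g = np.array([0.6, 0.2, 3.0])            # driving forces
d_new = damage_update(a, g, d_prev)      # -> [0.3, 0.4, 1.0]
```

The middle point shows the constraint in action: its unconstrained minimizer (0.1) would heal the damage, so the update stays at the previous value 0.4.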

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization (ICCOPT) took place on the campus of the Technical University of Berlin, August 3-8, 2019. ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a summer school and a conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program: the scientific program both in overview and in full detail, along with information on the social program, the venue, special meetings, and more.

    A Simple and Efficient Algorithm for Nonlinear Model Predictive Control

    We present PANOC, a new algorithm for solving optimal control problems arising in nonlinear model predictive control (NMPC). A common approach to problems of this type is sequential quadratic programming (SQP), which requires the solution of a quadratic program at every iteration and, consequently, inner iterative procedures. As a result, when the problem is ill-conditioned or the prediction horizon is large, each outer iteration becomes computationally very expensive. We propose a line-search algorithm that combines forward-backward (FB) iterations and Newton-type steps over the recently introduced forward-backward envelope (FBE), a continuous, real-valued, exact merit function for the original problem. The curvature information of Newton-type methods enables asymptotic superlinear rates under mild assumptions at the limit point, and the proposed algorithm is based on very simple operations: access to first-order information of the cost and dynamics, and low-cost direct linear algebra. No inner iterative procedure nor Hessian evaluation is required, making our approach computationally simpler than SQP methods. The low memory requirements and simple implementation make our method particularly suited for embedded NMPC applications.
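The forward-backward iterations that PANOC accelerates can be sketched on a toy problem (a sketch of the basic FB step only, not of PANOC's Newton-type acceleration): a smooth quadratic cost plus the indicator of a box, whose proximal operator is a simple clamp. The problem data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimize f(u) + g(u) with f(u) = 0.5*u^T Q u + q^T u smooth and g the
# indicator of the box [lo, hi]^2 (e.g. input constraints in NMPC).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, -2.0])
lo, hi = -1.0, 1.0

f_grad = lambda u: Q @ u + q
prox_g = lambda u: np.clip(u, lo, hi)   # prox of a box indicator = projection

gamma = 0.25                             # step size < 1/L with L = ||Q|| < 4
u = np.zeros(2)
for _ in range(200):
    u = prox_g(u - gamma * f_grad(u))    # forward (gradient) + backward (prox)
```

Each iteration uses only a gradient and a clamp, which is why FB-based methods are attractive for embedded NMPC; PANOC adds Newton-type steps on the FBE to speed up this basic iteration.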