22 research outputs found

    MR image reconstruction using deep density priors

    Full text link
    Algorithms for Magnetic Resonance (MR) image reconstruction from undersampled measurements exploit prior information to compensate for missing k-space data. Deep learning (DL) provides a powerful framework for extracting such information from existing image datasets through learning and then using it for reconstruction. Leveraging this, recent methods employed DL to learn mappings from undersampled to fully sampled images using paired datasets of undersampled and corresponding fully sampled images, integrating prior knowledge implicitly. In this article, we propose an alternative approach that learns the probability distribution of fully sampled MR images using unsupervised DL, specifically Variational Autoencoders (VAE), and uses this as an explicit prior term in reconstruction, completely decoupling the encoding operation from the prior. The resulting reconstruction algorithm enjoys a powerful image prior to compensate for missing k-space data without requiring paired datasets for training, and it is not prone to the associated sensitivities, such as deviations between the undersampling patterns used at training and test time or differences in coil settings. We evaluated the proposed method with T1-weighted images from a publicly available dataset, multi-coil complex images acquired from healthy volunteers (N=8), and images with white matter lesions. The proposed algorithm, using the VAE prior, produced visually high-quality reconstructions and achieved low RMSE values, outperforming most of the alternative methods on the same dataset. On multi-coil complex data, the algorithm yielded accurate magnitude and phase reconstruction results. In the experiments on images with white matter lesions, the method faithfully reconstructed the lesions. Keywords: reconstruction, MRI, prior probability, machine learning, deep learning, unsupervised learning, density estimation. Comment: Published in IEEE TMI. Main text and supplementary material, 19 pages total.
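    As a rough picture of how such an explicit prior enters reconstruction, the sketch below alternates a gradient step toward a learned image prior with a hard k-space data-consistency projection. This is a minimal illustration under stated assumptions, not the published algorithm: prior_grad stands in for the gradient of a trained VAE's log-density with respect to the image, and the names reconstruct_with_prior and mask are chosen for the sketch.

```python
import numpy as np

def reconstruct_with_prior(y, mask, prior_grad, n_iter=50, step=0.1):
    """Hypothetical sketch: alternate a gradient step on a learned image prior
    with a hard k-space data-consistency projection.

    y          : undersampled k-space measurements (2D complex array)
    mask       : binary sampling mask, 1 where k-space was acquired
    prior_grad : callable returning d log p(x) / dx for the current image
                 (e.g. backpropagated through a trained VAE)
    """
    x = np.fft.ifft2(y * mask)              # zero-filled initial image
    for _ in range(n_iter):
        x = x + step * prior_grad(x)        # move toward higher prior probability
        k = np.fft.fft2(x)
        k = mask * y + (1 - mask) * k       # keep acquired samples, fill the rest
        x = np.fft.ifft2(k)                 # enforce data consistency
    return x

# toy usage with a smoothness "prior" standing in for the VAE gradient
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((64, 64))
    mask = (rng.random((64, 64)) < 0.3).astype(float)
    y = np.fft.fft2(img) * mask
    smooth_grad = lambda x: np.real(np.roll(x, 1, 0) + np.roll(x, -1, 0)
                                    + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
    rec = reconstruct_with_prior(y, mask, smooth_grad)
```

    In the paper's setting the prior term would come from the trained VAE; here it is replaced by a Laplacian-like smoothness gradient only so the example runs without any trained model.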

    Proximal point algorithm in mathematical programming

    Get PDF
    Issued as Progress report, and Final report, Project no. G-37-61

    Dual coordinate ascent for problems with strictly convex costs and linear constraints : a unified approach

    Get PDF
    Caption title. "July 1988." Includes bibliographical references. Work supported by the National Science Foundation under grant NSF-ECS-8519058 and by the Army Research Office under grant DAAL03-86-K-0171. By Paul Tseng.

    The Sixth Copper Mountain Conference on Multigrid Methods, part 2

    Get PDF
    The Sixth Copper Mountain Conference on Multigrid Methods was held on April 4-9, 1993, at Copper Mountain, Colorado. This book is a collection of many of the papers presented at the conference and so represents the conference proceedings. NASA Langley graciously provided printing of this document so that all of the papers could be presented in a single forum. Each paper was reviewed by a member of the conference organizing committee under the coordination of the editors. The multigrid discipline continues to expand and mature, as is evident from these proceedings. The vibrancy in this field is amply expressed in these important papers, and the collection clearly shows its rapid trend to further diversity and depth

    A Multigrid Method for the Efficient Numerical Solution of Optimization Problems Constrained by Partial Differential Equations

    Get PDF
    We study the minimization of a quadratic functional subject to constraints given by a linear or semilinear elliptic partial differential equation with distributed control. Further, pointwise inequality constraints on the control are accounted for. In the linear-quadratic case, the discretized optimality conditions yield a large, sparse, and indefinite system with saddle point structure. One main contribution of this thesis consists in devising a coupled multigrid solver which avoids full constraint elimination. To this end, we define a smoothing iteration incorporating elements from constraint preconditioning. A local mode analysis shows that for discrete optimality systems, we can expect smoothing rates close to those obtained with respect to the underlying constraint PDE. Our numerical experiments include problems with constraints where standard pointwise smoothing is known to fail for the underlying PDE. In particular, we consider anisotropic diffusion and convection-diffusion problems. The framework of our method allows the inclusion of line smoothers or ILU factorizations, which are suitable for such problems. In all cases, numerical experiments show that convergence rates do not depend on the mesh size of the finest level and that discrete optimality systems can be solved with a small multiple of the computational cost required to solve the underlying constraint PDE. Employing the full multigrid approach, the computational cost is proportional to the number of unknowns on the finest grid level. We discuss the role of the regularization parameter in the cost functional and show that the convergence rates are robust with respect to both the fine grid mesh size and the regularization parameter under a mild restriction on the next-to-coarsest mesh size. Incorporating spectral filtering for the reduced Hessian in the control smoothing step allows us to weaken the mesh size restriction. As a result, problems with near-vanishing regularization parameter can be treated efficiently with a negligible amount of additional computational work. For fine discretizations, robust convergence is obtained with rates which are independent of the regularization parameter, the coarsest mesh size, and the number of levels. In order to treat linear-quadratic problems with pointwise inequality constraints on the control, the multigrid approach is modified to solve subproblems generated by a primal-dual active set strategy (PDAS). Numerical experiments demonstrate the high efficiency of this approach due to mesh-independent convergence of both the outer PDAS method and the inner multigrid solver. The PDAS-multigrid method is incorporated in the sequential quadratic programming (SQP) framework. Inexact Newton techniques further enhance the computational efficiency. Globalization is implemented with a line search based on the augmented Lagrangian merit function. Numerical experiments highlight the efficiency of the resulting SQP-multigrid approach. In all cases, locally superlinear convergence of the SQP method is observed. In combination with the mesh-independent convergence rate of the inner solver, a solution method with optimal efficiency is obtained.
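    To make the inequality-constrained part concrete, the snippet below sketches a primal-dual active set iteration for a generic bound-constrained quadratic program min 1/2 x'Ax - b'x subject to x <= psi, solved with dense linear algebra. It is an illustrative stand-alone example, not the thesis's PDAS-multigrid solver: the function name pdas, the parameter c, and the toy 1D Laplacian usage are assumptions made for the sketch.

```python
import numpy as np

def pdas(A, b, psi, c=1.0, max_iter=50):
    """Primal-dual active set sketch for  min 1/2 x'Ax - b'x  s.t.  x <= psi,
    with A symmetric positive definite (illustrative, dense linear algebra)."""
    x = np.minimum(np.linalg.solve(A, b), psi)      # clipped unconstrained minimizer
    lam = b - A @ x                                 # multiplier estimate
    active_prev = None
    for _ in range(max_iter):
        active = lam + c * (x - psi) > 0            # predicted active constraints
        inactive = ~active
        x = np.where(active, psi, 0.0)              # active components sit on the bound
        if inactive.any():                          # solve the reduced system
            AII = A[np.ix_(inactive, inactive)]
            rhs = b[inactive] - A[np.ix_(inactive, active)] @ psi[active]
            x[inactive] = np.linalg.solve(AII, rhs)
        lam = np.where(inactive, 0.0, b - A @ x)    # multipliers only on the active set
        if active_prev is not None and np.array_equal(active, active_prev):
            break                                   # active set settled: KKT point found
        active_prev = active
    return x, lam

if __name__ == "__main__":
    n = 8
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian-like SPD matrix
    b = np.linspace(-1, 1, n)
    x, lam = pdas(A, b, psi=np.zeros(n))
    print(x, lam)
```

    In the thesis the reduced systems of this kind are not solved directly but by the coupled multigrid method, which is what yields the reported mesh-independent overall convergence.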

    Doctor of Philosophy

    Get PDF
    X-ray computed tomography (CT) is a widely popular medical imaging technique that allows for viewing of in vivo anatomy and physiology. In order to produce high-quality images and provide reliable treatment, CT imaging requires the precise knowledge of t

    A direct method for the numerical solution of optimization problems with time-periodic PDE constraints

    Get PDF
    In this dissertation, we develop a new numerical method, based on direct multiple shooting, for optimal control problems (OCPs) with time-periodic partial differential equations (PDEs). The proposed method features asymptotically optimal scaling of the numerical effort in the number of spatial discretization points. It consists of a Linear Iterative Splitting Approach (LISA) within a Newton-type iteration, together with a globalization strategy based on natural level functions. We analyze the LISA-Newton method within the framework of Bock's kappa-theory and develop reliable a-posteriori kappa-estimators. We then extend the LISA-Newton method to the case of inexact Sequential Quadratic Programming (SQP) for inequality-constrained problems and study its local convergence behavior. In addition, we develop classical and two-grid Newton-Picard preconditioners for LISA and prove mesh-independent convergence of the classical variant for a model problem. Numerical results show that, compared with the classical variant, the two-grid variant is even more efficient for typical application problems. Furthermore, we develop a two-grid approximation of the Hessian of the Lagrangian which fits well into the two-grid Newton-Picard framework and which, compared with the exact Hessian, reduces runtime by 68% on a nonlinear benchmark problem. We further show that the quality of the fine grid determines the accuracy of the solution, while the quality of the coarse grid determines the asymptotic linear convergence rate, i.e., Bock's kappa. Reliable kappa-estimators enable automatic control of coarse grid refinement for fast convergence. For the solution of the large Quadratic Programming problems (QPs) that arise, we choose a structure-exploiting two-stage approach. In the first stage, we exploit the structures induced by the multiple shooting approach and the Newton-Picard preconditioners to reduce the large QPs to equivalent QPs whose size is independent of the number of spatial discretization points. For the second stage, we develop extensions of a Parametric Active Set Method (PASM) that yield a reliable and efficient solver for the resulting, possibly nonconvex QPs. Moreover, we construct three illustrative, counter-intuitive problems which show that convergence of a one-shot one-step optimization method is neither necessary nor sufficient for convergence of the corresponding method for the forward problem. Our analysis of three regularization approaches shows that a de-facto loss of convergence cannot be prevented even with these approaches. We have implemented the presented methods in a computer code named MUSCOP, which provides automatic first- and second-order derivative generation for model functions and solutions of the dynamic systems, parallelization over the multiple shooting structure, and a hybrid language programming paradigm, in order to minimize the time needed to set up and solve new application problems.
    We demonstrate the applicability, reliability, and effectiveness of MUSCOP, and thus of the proposed numerical methods, on a series of PDE OCPs of increasing difficulty, ranging from linear academic problems, through highly nonlinear academic problems from mathematical biology, up to a highly nonlinear application problem from chemical engineering in the field of preparative chromatography based on real data: the Simulated Moving Bed (SMB) process.
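    The core numerical idea above, a Newton-type outer iteration whose linear systems are solved inexactly by a preconditioned linear iterative splitting, can be sketched in a few lines. This is an illustrative simplification under assumed interfaces, not MUSCOP's implementation: F, J, M_solve, and lisa_newton are hypothetical names, the preconditioner is a crude diagonal stand-in, and the globalization by natural level functions is omitted.

```python
import numpy as np

def lisa_newton(F, J, M_solve, x0, newton_tol=1e-8, lisa_iters=5, max_newton=30):
    """Illustrative sketch of a Newton-type method whose linear systems are
    solved inexactly by a linear iterative splitting (Richardson) scheme
    preconditioned with M_solve.

    F       : residual function F(x)
    J       : callable returning the Jacobian matrix at x
    M_solve : callable applying an approximate inverse of the Jacobian
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_newton):
        f = F(x)
        if np.linalg.norm(f) < newton_tol:
            break
        A = J(x)
        d = np.zeros_like(x)
        for _ in range(lisa_iters):           # inner splitting iteration
            r = -f - A @ d                    # residual of the Newton system
            d = d + M_solve(r)                # preconditioned correction
        x = x + d                             # full step; globalization omitted
    return x

if __name__ == "__main__":
    # toy nonlinear system F(x) = 0 with a crude diagonal preconditioner
    F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
    J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
    M_solve = lambda r: r / 4.0
    print(lisa_newton(F, J, M_solve, x0=[1.0, 1.5]))
```

    In the dissertation's setting, the role played here by M_solve corresponds to the classical or two-grid Newton-Picard preconditioners described above.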

    From convex feasibility to convex constrained optimization using block action projection methods and underrelaxation

    No full text
    We describe the evolution of projection methods for solving convex feasibility problems to optimization methods when inconsistency arises, finally deriving from them, in a natural way, a general block method for convex constrained optimization. We present convergence results. Work supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), grants 476825/2004-0 and 304820/2006-7.
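    As a concrete illustration of the kind of method discussed, the sketch below runs a block-action projection sweep with underrelaxation on a small linear feasibility problem A x <= b. It is a minimal, assumption-laden example: the function name block_projection, the restriction to half-space constraints, the cyclic block ordering, and the within-block averaging are illustrative choices, not taken from the paper.

```python
import numpy as np

def block_projection(A, b, x0, blocks, lam=0.8, n_iter=200):
    """Illustrative block-action projection method with underrelaxation
    for the feasibility problem  A x <= b.

    blocks : list of index arrays; each sweep processes one block of
             constraints simultaneously, cycling through the blocks.
    lam    : underrelaxation parameter in (0, 2).
    """
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        idx = blocks[k % len(blocks)]
        res = A[idx] @ x - b[idx]                       # constraint residuals in the block
        viol = res > 0
        if not np.any(viol):
            continue                                    # block already satisfied
        rows = A[idx][viol]
        steps = res[viol] / np.sum(rows**2, axis=1)     # distances to the half-spaces
        correction = -(rows * steps[:, None]).mean(axis=0)
        x = x + lam * correction                        # underrelaxed block step
    return x

# toy usage: find a point in the intersection of a few half-spaces
if __name__ == "__main__":
    A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
    b = np.array([1.0, 1.0, 0.5])
    x = block_projection(A, b, x0=np.array([5.0, -3.0]),
                         blocks=[np.array([0, 1]), np.array([2])])
    print(x, A @ x - b)
```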

    Towards Reduced-order Model Accelerated Optimization for Aerodynamic Design

    Get PDF
    The adoption of mathematically formal simulation-based optimization approaches within aerodynamic design depends upon a delicate balance of affordability and accessibility. Techniques are needed to accelerate the simulation-based optimization process, but they must remain approachable enough for the implementation time to not eliminate the cost savings or act as a barrier to adoption. This dissertation introduces a reduced-order model technique for accelerating fixed-point iterative solvers (e.g. such as those employed to solve primal equations, sensitivity equations, design equations, and their combination). The reduced-order model-based acceleration technique collects snapshots of early iteration (pre-convergent) solutions and residuals and then uses them to project to significantly more accurate solutions, i.e. smaller residual. The technique can be combined with other convergence schemes like multigrid and adaptive timestepping. The technique is generalizable and in this work is demonstrated to accelerate steady and unsteady flow solutions; continuous and discrete adjoint sensitivity solutions; and one-shot design optimization solutions. This final application, reduced-order model accelerated one-shot optimization approach, in particular represents a step towards more efficient aerodynamic design optimization. Through this series of applications, different basis vectors were considered and best practices for snapshot collection procedures were outlined. The major outcome of this dissertation is the development and demonstration of this reduced-order model acceleration technique. This work includes the first application of the reduced-order model-based acceleration method to an explicit one-shot iterative optimization process
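    The snapshot-and-project idea described above can be illustrated on a generic fixed-point iteration. The sketch below is a hedged approximation of the general technique rather than the dissertation's method: it collects a few pre-convergent iterates and their residuals and then jumps to the affine combination of snapshots that minimizes the linearized residual in a least-squares sense; the name rom_accelerated_fixed_point and this specific projection are illustrative assumptions.

```python
import numpy as np

def rom_accelerated_fixed_point(G, x0, n_snapshots=5, n_outer=10):
    """Illustrative snapshot-based acceleration of a fixed-point iteration
    x <- G(x): run a few plain iterations, collect solution and residual
    snapshots, then project to the residual-minimizing affine combination."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        X, R = [], []
        for _ in range(n_snapshots):            # pre-convergent snapshots
            gx = G(x)
            X.append(x)
            R.append(gx - x)                    # fixed-point residual
            x = gx
        X, R = np.stack(X, axis=1), np.stack(R, axis=1)
        # coefficients summing to 1 that minimize ||R c|| (affine combination)
        ones = np.ones((1, n_snapshots))
        KKT = np.block([[R.T @ R, ones.T], [ones, np.zeros((1, 1))]])
        rhs = np.concatenate([np.zeros(n_snapshots), [1.0]])
        c = np.linalg.lstsq(KKT, rhs, rcond=None)[0][:n_snapshots]
        x = X @ c + R @ c                       # projected (accelerated) iterate
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = 0.9 * rng.random((20, 20)) / 20         # contractive linear map
    f = rng.random(20)
    G = lambda x: M @ x + f
    x = rom_accelerated_fixed_point(G, np.zeros(20))
    print(np.linalg.norm(G(x) - x))             # residual after acceleration
```

    The affine-combination constraint keeps the projection consistent for affine fixed-point maps; for nonlinear solvers it acts as an inexpensive extrapolation between blocks of plain iterations.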