14 research outputs found

    Approximation of Fractional Harmonic Maps

    The article of record as published may be located at https://doi.org/10.48550/arXiv.2104.10049. Funded by Naval Postgraduate School.
    This paper addresses the approximation of fractional harmonic maps. Besides a unit-length constraint, one has to tackle the difficulty of nonlocality. We establish weak compactness results for critical points of the fractional Dirichlet energy on unit-length vector fields. We devise and analyze numerical methods for the approximation of various partial differential equations related to fractional harmonic maps. The compactness results imply the convergence of numerical approximations. Numerical examples on spin chain dynamics and point defects are presented to demonstrate the effectiveness of the proposed methods.
    HA is partially supported by NSF grants DMS-1818772 and DMS-1913004, the Air Force Office of Scientific Research under Award No. FA9550-19-1-0036, and the Department of the Navy, Naval Postgraduate School under Award No. N00244-20-1-0005. SB acknowledges support by the DFG via the Research Unit FOR 3013 "Vector- and Tensor-Valued Surface PDEs". AS is supported by NSF CAREER grant DMS-2044898 and Simons Foundation grant no. 579261.
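    For orientation, the fractional Dirichlet energy mentioned above is commonly written via the Gagliardo seminorm; the following is a minimal sketch in which the normalization constant and the domain setup are assumptions that may differ from the paper's conventions:

    ```latex
    % Fractional Dirichlet energy of order s in (0,1) for a vector field u,
    % with the pointwise unit-length constraint; normalizations vary by reference.
    \[
      E_s(u) = \frac{1}{2} \int_{\Omega}\!\int_{\Omega}
        \frac{|u(x)-u(y)|^2}{|x-y|^{d+2s}} \, dx \, dy,
      \qquad |u(x)| = 1 \ \text{a.e. in } \Omega.
    \]
    % Fractional harmonic maps arise as critical points of E_s among unit-length fields.
    ```

    The double integral is what makes the problem nonlocal: the energy couples every pair of points in the domain, which is the difficulty the abstract highlights alongside the unit-length constraint.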

    Bilevel optimization, deep learning and fractional Laplacian regularization with applications in tomography

    The article of record as published may be located at https://doi.org/10.1088/1361-6420/ab80d7. Funded by Naval Postgraduate School.
    In this work we consider a generalized bilevel optimization framework for solving inverse problems. We introduce the fractional Laplacian as a regularizer to improve the reconstruction quality and compare it with total variation regularization. We emphasize that the key advantage of using the fractional Laplacian as a regularizer is that it leads to a linear operator, as opposed to total variation regularization, which results in a nonlinear degenerate operator. Inspired by residual neural networks, we develop a bilevel optimization neural network with a variable depth for a general regularized inverse problem, which learns the optimal strength of regularization and the exponent of the fractional Laplacian. We illustrate how to incorporate various regularizer choices into our proposed network. As an example, we consider tomographic reconstruction as a model problem and show an improvement in reconstruction quality, especially for limited data, via fractional Laplacian regularization. We successfully learn the regularization strength and the fractional exponent via our proposed bilevel optimization neural network. We observe that fractional Laplacian regularization outperforms total variation regularization. This is especially encouraging, and important, in the case of limited and noisy data.
    The first and third authors are partially supported by NSF grants DMS-1818772 and DMS-1913004, the Air Force Office of Scientific Research under Award No. FA9550-19-1-0036, and the Department of the Navy, Naval Postgraduate School under Award No. N00244-20-1-0005. The third author is also partially supported by a Provost award at George Mason University under the Industrial Immersion Program. The second author is partially supported by the DOE Office of Science under Contract No. DE-AC02-06CH11357.
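    To make the "linear operator" point concrete, here is a minimal, hypothetical Python sketch of fractional Laplacian (Tikhonov-style) regularization for the simplest inverse problem, denoising on a periodic grid; the FFT-based spectral definition and all names are illustrative assumptions, not the paper's tomography setup:

    ```python
    import numpy as np

    def frac_laplacian_denoise(f, alpha, s):
        """Sketch: solve (I + alpha * (-Delta)^s) u = f on a periodic grid.

        The fractional Laplacian is diagonal in Fourier space with symbol
        |xi|^(2s), so the regularized problem is solved exactly mode by mode.
        The identity forward operator (denoising) is an illustrative special
        case, not the paper's tomography operator.
        """
        n = f.shape[0]
        xi = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # integer angular frequencies
        KX, KY = np.meshgrid(xi, xi, indexing="ij")
        symbol = (KX**2 + KY**2) ** s                     # spectral symbol |xi|^(2s)
        f_hat = np.fft.fft2(f)
        u_hat = f_hat / (1.0 + alpha * symbol)            # per-mode linear solve
        return np.real(np.fft.ifft2(u_hat))

    # Usage: noisy data f, regularization strength alpha, fractional exponent s.
    rng = np.random.default_rng(0)
    f = rng.standard_normal((64, 64))
    u = frac_laplacian_denoise(f, alpha=0.1, s=0.75)
    ```

    Once alpha and s are fixed, the reconstruction map f -> u is linear, which is the contrast with total variation regularization the abstract draws; in a bilevel setting, alpha and s would play the role of the outer variables learned by the network.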

    Fractional Deep Neural Network via Constrained Optimization

    This paper introduces a novel algorithmic framework for a deep neural network (DNN) which, in a mathematically rigorous manner, allows us to incorporate history (or memory) into the network -- it ensures that all layers are connected to one another. This DNN, called Fractional-DNN, can be viewed as a time discretization of a nonlinear ordinary differential equation (ODE) that is fractional in time. The learning problem is then a minimization problem subject to that fractional ODE as a constraint. We emphasize that the analogy between existing DNNs and ODEs with a standard time derivative is by now well known; the focus of our work is the Fractional-DNN. Using the Lagrangian approach, we derive the backward propagation and the design equations. We test our network on several datasets for classification problems. Fractional-DNN offers various advantages over existing DNNs. The key benefits are a significant improvement of the vanishing-gradient issue, due to the memory effect, and better handling of nonsmooth data, due to the network's ability to approximate nonsmooth functions.
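    To illustrate why a fractional time derivative connects every layer to all earlier ones, here is a hypothetical sketch of a forward pass based on the standard L1 discretization of a Caputo derivative of order gamma in (0, 1); the tanh layer map and all names are assumptions for illustration, not the paper's exact scheme:

    ```python
    import numpy as np
    from math import gamma as Gamma

    def fractional_dnn_forward(x0, weights, biases, gam=0.5, tau=1.0):
        """Sketch: forward pass of a fractional-in-time network via the L1 scheme.

        Discretizes the Caputo equation D^gam x = sigma(W x + b) with order
        gam in (0, 1). Each update mixes ALL earlier states -- the "memory"
        that connects every layer to all previous ones.
        """
        sigma = np.tanh
        c = Gamma(2.0 - gam) * tau**gam
        xs = [x0]                                   # full state history (the memory)
        for k, (W, b) in enumerate(zip(weights, biases), start=1):
            # L1 history weights: (j+1)^(1-gam) - j^(1-gam), j = 1..k-1
            hist = sum(
                ((j + 1) ** (1 - gam) - j ** (1 - gam)) * (xs[k - j] - xs[k - j - 1])
                for j in range(1, k)
            )
            x_new = xs[-1] - hist + c * sigma(W @ xs[-1] + b)
            xs.append(x_new)
        return xs[-1]

    # Usage: 3 layers of width 4 with random parameters.
    rng = np.random.default_rng(0)
    Ws = [rng.standard_normal((4, 4)) * 0.1 for _ in range(3)]
    bs = [np.zeros(4) for _ in range(3)]
    out = fractional_dnn_forward(rng.standard_normal(4), Ws, bs, gam=0.6)
    ```

    As gam approaches 1, the history weights (j+1)^(1-gam) - j^(1-gam) vanish for j >= 1 and the update collapses to the familiar ResNet-style step x_k = x_{k-1} + tau * sigma(W x_{k-1} + b), matching the well-known DNN/ODE analogy the abstract mentions.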

    Oral vs. Vaginal Misoprostol: Which route for induction of term labor?

    No full text
    Aim: To compare the effectiveness of oral misoprostol with vaginal misoprostol for induction of labour at or beyond 40 weeks of pregnancy.