125 research outputs found

    Simultaneous Source for non-uniform data variance and missing data

    The use of simultaneous sources in geophysical inverse problems has revolutionized the ability to deal with large-scale data sets obtained from multiple source experiments. However, the technique breaks down when the data have non-uniform standard deviation or when some data are missing. In this paper we develop, study, and compare a number of techniques that make it possible to retain the advantages of the simultaneous source framework in these cases. We show that the inverse problem can still be solved efficiently using these new techniques. We demonstrate our new approaches on the Direct Current Resistivity inverse problem.
    Comment: 16 pages
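The core trick behind simultaneous sources can be illustrated for a linear forward map (a hedged sketch; the names F, Q, W and the Rademacher weighting are illustrative, not taken from the paper): because the map is linear in the sources, a few random superpositions of source columns produce the same data as the corresponding superpositions of the per-source data, so far fewer simulations are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

n_model, n_src, n_rcv = 8, 32, 6
F = rng.standard_normal((n_rcv, n_model))    # receiver sampling of fields
Q = rng.standard_normal((n_model, n_src))    # one column per source

# Full data: one simulation per source column.
D = F @ Q                                    # shape (n_rcv, n_src)

# Simultaneous sources: k << n_src random superpositions of the columns.
k = 4
W = rng.choice([-1.0, 1.0], size=(n_src, k)) # Rademacher weights
D_sim = F @ (Q @ W)                          # only k simulations needed

# Superposing sources first gives the same data as superposing
# the per-source data afterwards.
assert np.allclose(D_sim, D @ W)
```

Non-uniform noise or missing entries break exactly this identity, since a per-datum weighting or mask no longer commutes with the random superposition, which is the gap the paper's techniques address.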

    Stable Architectures for Deep Neural Networks

    Deep neural networks have become invaluable tools for supervised machine learning, e.g., classification of text or images. While often offering superior results over traditional techniques and successfully expressing complicated patterns in data, deep architectures are known to be challenging to design and train so that they generalize well to new data. Important issues with deep architectures are numerical instabilities in derivative-based learning algorithms, commonly called exploding or vanishing gradients. In this paper we propose new forward propagation techniques inspired by systems of Ordinary Differential Equations (ODEs) that overcome this challenge and lead to well-posed learning problems for arbitrarily deep networks. The backbone of our approach is our interpretation of deep learning as a parameter estimation problem for nonlinear dynamical systems. Given this formulation, we analyze the stability and well-posedness of deep learning and use this new understanding to develop new network architectures. We relate the exploding and vanishing gradient phenomena to the stability of the discrete ODE and present several strategies for stabilizing deep learning in very deep networks. While our new architectures restrict the solution space, several numerical experiments show their competitiveness with state-of-the-art networks.
    Comment: 23 pages, 7 figures
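The ODE interpretation can be sketched in a few lines (hedged: the step size h, the tanh nonlinearity, and the antisymmetric parameterization below are one illustrative stabilizing choice in this line of work, not the paper's exact architecture): a residual layer is read as a forward-Euler step of y'(t) = tanh(K y), and choosing K antisymmetric keeps the eigenvalues of the linearization on the imaginary axis, so signals neither explode nor vanish over many layers.

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, depth = 4, 0.1, 50

A = rng.standard_normal((n, n))
K = A - A.T                      # antisymmetric: K.T == -K

y = rng.standard_normal(n)
for _ in range(depth):
    y = y + h * np.tanh(K @ y)   # forward-Euler residual step

assert np.all(np.isfinite(y))    # no blow-up over many layers
```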

    Full waveform inversion guided by travel time tomography

    Full waveform inversion (FWI) is a process in which seismic numerical simulations are fit to observed data by changing the wave velocity model of the medium under investigation. The problem is non-linear, and therefore optimization techniques have been used to find a reasonable solution. The main difficulty in fitting the data is the lack of low spatial frequencies; this deficiency often leads to a local minimum and to non-plausible solutions. In this work we explore how to obtain low-frequency information for FWI. Our approach augments FWI with travel time tomography, which has low-frequency features. By jointly inverting these two problems we enrich FWI with information that can replace low-frequency data. In addition, we use high-order regularization in a preliminary inversion stage to prevent high-frequency features from polluting the model in the initial stages of the reconstruction. This regularization also promotes the non-dominant low-frequency modes that exist in the FWI sensitivity. By applying a joint FWI and travel time inversion we are able to obtain a smooth model that can later be used to recover a good approximation of the true model. A second contribution of this paper is the acceleration of the main computational bottleneck in FWI: the solution of the Helmholtz equation. We show that the solution time can be reduced by solving the equation for multiple right-hand sides using block multigrid-preconditioned Krylov methods.
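The multiple right-hand-sides point has a simple structural sketch (illustrative only: a dense solve stands in for the block multigrid-preconditioned Krylov solver; the matrix and sizes are made up): with one system matrix and one column per source, solving for all columns together amortizes the setup cost over every source.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_rhs = 50, 8
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant stand-in
B = rng.standard_normal((n, n_rhs))              # one right-hand side per source

X = np.linalg.solve(A, B)                        # all sources in one call

assert np.allclose(A @ X, B)
```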

    A fast marching algorithm for the factored eikonal equation

    The eikonal equation is instrumental in many applications in fields ranging from computer vision to geoscience. It can be solved efficiently using iterative Fast Sweeping (FS) methods or direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions because of a singularity at the source. In this case the factored eikonal equation is often preferred, as it yields a more accurate numerical solution. One application that requires solving the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While it has been solved with FS methods in the past, more recent work favors FM methods because of the efficiency with which sensitivities can be obtained from them. However, while several FS methods are available for solving the factored equation, an FM method exists only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first- and second-order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the accuracy achieved in computing the travel time. We also demonstrate the recovery of 2D and 3D heterogeneous media by travel time tomography, using the eikonal equation for forward modelling and Gauss-Newton for the inversion.
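A hedged sketch of why the factored form helps: for a point source the travel time behaves like distance over velocity, which is not smooth at the source. Writing T = T0 * tau, with T0 = r / c0 known analytically, leaves only the smooth factor tau for the grid solver (tau == 1 in a homogeneous medium). The check below confirms numerically that the analytic T0 already satisfies the eikonal equation |grad T| = 1/c away from the source; the grid and velocity values are illustrative.

```python
import numpy as np

h = 0.01
x = np.arange(0.5, 1.5, h)          # patch away from the source at the origin
y = np.arange(0.5, 1.5, h)
X, Y = np.meshgrid(x, y, indexing="ij")
c0 = 2.0
T0 = np.sqrt(X**2 + Y**2) / c0      # analytic point-source factor

gx, gy = np.gradient(T0, h)
grad_norm = np.sqrt(gx**2 + gy**2)

# Interior points (np.gradient is one-sided on the boundary):
assert np.allclose(grad_norm[1:-1, 1:-1], 1.0 / c0, atol=1e-3)
```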

    Fully Hyperbolic Convolutional Neural Networks

    Convolutional Neural Networks (CNNs) have recently seen tremendous success in various computer vision tasks. However, their application to problems with high-dimensional input and output, such as high-resolution image and video segmentation or 3D medical imaging, has been limited by various factors. Primarily, in the training stage it is necessary to store network activations for back propagation. In these settings, the memory required to store activations can exceed what is feasible with current hardware, especially for problems in 3D. Motivated by the propagation of signals over physical networks governed by the hyperbolic Telegraph equation, in this work we introduce a fully conservative hyperbolic network for problems with high-dimensional input and output. We introduce a coarsening operation that allows completely reversible CNNs by using a learnable Discrete Wavelet Transform and its inverse both to coarsen and interpolate the network state and to change the number of channels. We show that fully reversible networks are able to achieve results comparable to the state of the art in 4D time-lapse hyperspectral image segmentation and full 3D video segmentation, with a much lower memory footprint that is constant, independent of the network depth. We also extend the use of such networks to Variational Auto-Encoders with high-resolution input and output.
    Comment: 21 pages, 9 figures. Updated to include additional numerical experiments, a section about VAEs, and learnable wavelets
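An illustrative sketch of the invertible-coarsening idea (not the paper's learnable operator): a one-level Haar transform trades spatial resolution for channels while remaining exactly invertible, which is what lets a reversible network recompute activations in the backward pass instead of storing them.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(16)                   # 1D signal, even length

s = (x[0::2] + x[1::2]) / np.sqrt(2.0)        # coarse "average" channel
d = (x[0::2] - x[1::2]) / np.sqrt(2.0)        # "detail" channel

# Inverse: recombine the two channels to recover the signal exactly.
x_rec = np.empty_like(x)
x_rec[0::2] = (s + d) / np.sqrt(2.0)
x_rec[1::2] = (s - d) / np.sqrt(2.0)

assert np.allclose(x_rec, x)
```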

    jInv -- a flexible Julia package for PDE parameter estimation

    Estimating parameters of Partial Differential Equations (PDEs) from noisy and indirect measurements often requires solving ill-posed inverse problems. These so-called parameter estimation or inverse medium problems arise in a variety of applications such as geophysical imaging, medical imaging, and nondestructive testing. Their solution is computationally intensive, since the underlying PDEs need to be solved numerous times until the reconstruction of the parameters is sufficiently accurate. Typically, the computational demand grows significantly when more measurements are available, which poses severe challenges to inversion algorithms as measurement devices become more powerful. In this paper we present jInv, a flexible framework and open source software that provides parallel algorithms for solving parameter estimation problems with many measurements. Being written in the expressive programming language Julia, jInv is portable, easy to understand and extend, cross-platform tested, and well-documented. It provides novel parallelization schemes that exploit the inherent structure of many parameter estimation problems and can be used to solve multiphysics inversion problems, as is demonstrated using numerical experiments motivated by geophysical imaging.
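The structure such parallelization schemes typically exploit can be sketched as follows (in Python for illustration only; jInv itself is Julia, and the linear forward map and block split here are stand-ins): independent experiments contribute additively to the misfit and gradient, so each worker can handle a block of measurements and only small partial sums need to be combined.

```python
import numpy as np

rng = np.random.default_rng(4)
n_model, n_src = 6, 12
m = rng.standard_normal(n_model)
A = rng.standard_normal((n_src, n_model))     # stand-in linear forward map
d_obs = A @ m + 0.01 * rng.standard_normal(n_src)

def block_misfit_grad(rows, m):
    """Misfit and gradient for one block of measurements."""
    r = A[rows] @ m - d_obs[rows]
    return 0.5 * r @ r, A[rows].T @ r

blocks = np.array_split(np.arange(n_src), 3)  # e.g. one block per worker
parts = [block_misfit_grad(b, m) for b in blocks]
phi = sum(p[0] for p in parts)                # combine partial misfits
g = sum(p[1] for p in parts)                  # combine partial gradients

# Same result as the monolithic computation.
r = A @ m - d_obs
assert np.isclose(phi, 0.5 * r @ r)
assert np.allclose(g, A.T @ r)
```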

    A numerical method for efficient 3D inversions using Richards equation

    Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Saturation or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such data sets requires the ability to efficiently solve and deterministically optimize the nonlinear time-domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our algorithm does not store the Jacobian, but rather computes its product with a vector; this allows the inversion problem to become much larger than is feasible with methods such as finite differences or automatic differentiation, which are constrained by computation and memory, respectively. We demonstrate our algorithm in practice on a 3D inversion of saturated hydraulic conductivity using saturation data through time. The code to run our examples is open source, and the algorithm presented allows this inversion process to run on modest computational resources.
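The "compute the product, never store the Jacobian" interface can be sketched as follows (hedged: the paper derives these products analytically from the discretized PDE; the directional finite difference and the toy forward map here only illustrate the matrix-free idea).

```python
import numpy as np

def forward(m):
    # Stand-in nonlinear forward map, not the Richards equation.
    return np.array([m[0] * m[1], np.sin(m[0]), m[1] ** 2])

def jac_vec(m, v, eps=1e-6):
    # J @ v without ever forming J: directional derivative of the forward map.
    return (forward(m + eps * v) - forward(m)) / eps

m = np.array([1.0, 2.0])
v = np.array([0.3, -0.5])

# Analytic Jacobian of the toy map, only for checking the product.
J = np.array([[m[1], m[0]],
              [np.cos(m[0]), 0.0],
              [0.0, 2 * m[1]]])
assert np.allclose(jac_vec(m, v), J @ v, atol=1e-4)
```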

    IMEXnet: A Forward Stable Deep Neural Network

    Deep convolutional neural networks have revolutionized many machine learning and computer vision tasks; however, some remaining key challenges limit their wider use. These challenges include improving the network's robustness to perturbations of the input image and the limited "field of view" of convolution operators. We introduce IMEXnet, which addresses these challenges by adapting semi-implicit methods for partial differential equations. Compared to similar explicit networks, such as residual networks, our network is more stable, which has recently been shown to reduce the sensitivity to small changes in the input features and improve generalization. The addition of an implicit step connects all pixels in each channel of the image and therefore addresses the field-of-view problem, while still being comparable to standard convolutions in terms of the number of parameters and computational complexity. We also present a new dataset for semantic segmentation and demonstrate the effectiveness of our architecture using the NYU Depth dataset.
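A minimal sketch of a generic semi-implicit (IMEX) update, assuming a split into a cheap explicit nonlinearity and an implicit linear solve (the operator L, the tanh nonlinearity, and the sizes are illustrative, not the paper's parameterization): the solve with (I - h*L) couples all entries of the state in a single step, which is the mechanism behind the widened field of view.

```python
import numpy as np

rng = np.random.default_rng(5)
n, h = 8, 0.1

L = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stand-in linear operator

def imex_step(y):
    explicit = y + h * np.tanh(y)                   # explicit nonlinear part
    # Implicit part: one linear solve couples every entry of the state.
    return np.linalg.solve(np.eye(n) - h * L, explicit)

y = rng.standard_normal(n)
for _ in range(100):
    y = imex_step(y)

assert np.all(np.isfinite(y))                       # stays bounded over depth
```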

    Simultaneous shot inversion for nonuniform geometries using fast data interpolation

    Stochastic optimization is key to efficient inversion in PDE-constrained optimization. Using 'simultaneous shots', or random superpositions of source terms, works very well in simple acquisition geometries where all sources see all receivers, but this rarely occurs in practice. We develop an approach that interpolates data to an ideal acquisition geometry while solving the inverse problem using simultaneous shots. The approach is formulated as a joint inverse problem, combining ideas from low-rank interpolation with full-waveform inversion. Results using synthetic experiments illustrate the flexibility and efficiency of the approach.
    Comment: 16 pages, 10 figures
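The low-rank interpolation ingredient can be sketched generically (assuming, as such methods do, that the full source-by-receiver data matrix is approximately low rank; the hard-thresholded SVD iteration below is a textbook matrix-completion scheme, not the paper's solver): alternately project onto rank-r matrices and re-impose the observed entries.

```python
import numpy as np

rng = np.random.default_rng(6)
n_src, n_rcv, r = 30, 30, 2
D = rng.standard_normal((n_src, r)) @ rng.standard_normal((r, n_rcv))

mask = rng.random(D.shape) < 0.6          # 60% of entries observed
X = np.where(mask, D, 0.0)
for _ in range(300):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]       # project onto rank-r matrices
    X[mask] = D[mask]                     # re-impose the observed entries

# Missing entries are filled in to good relative accuracy.
assert np.linalg.norm(X - D) / np.linalg.norm(D) < 0.05
```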

    LeanResNet: A Low-cost Yet Effective Convolutional Residual Networks

    Convolutional Neural Networks (CNNs) filter the input data using spatial convolution operators with compact stencils. Commonly, the convolution operators couple features from all channels, which leads to immense computational cost in the training of and prediction with CNNs. To improve the efficiency of CNNs, we introduce lean convolution operators that reduce the number of parameters and computational complexity and can be used in a wide range of existing CNNs. Here, we exemplify their use in residual networks (ResNets), which have been widely used and intensively analyzed for several years. In our experiments on three image classification problems, the proposed LeanResNet yields results comparable to other recently proposed reduced architectures using a similar number of parameters.
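The parameter-count arithmetic behind such reduced operators is easy to make concrete (hedged: a depthwise-plus-pointwise split is used here for illustration; the paper's exact lean operators differ): decoupling the spatial stencil from the channel mixing replaces a multiplicative k*k factor with an additive one.

```python
def full_conv_params(c_in, c_out, k):
    # Standard conv couples every input channel to every output channel.
    return c_in * c_out * k * k

def lean_conv_params(c_in, c_out, k):
    # One k x k spatial stencil per channel, plus 1x1 channel mixing.
    return c_in * k * k + c_in * c_out

c_in = c_out = 256
k = 3
full = full_conv_params(c_in, c_out, k)
lean = lean_conv_params(c_in, c_out, k)

assert full == 256 * 256 * 9      # 589,824 parameters
assert lean == 256 * 9 + 256 * 256  # 67,840 parameters, roughly 8.7x fewer
```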