
    Combining global optimization and boundary integral methods to robustly estimate subsurface velocity models

    In this paper, we combine a fast wave equation solver using boundary integral methods with a global optimization method, namely Particle Swarm Optimization (PSO), to estimate an initial velocity model. Unlike finite difference methods that discretize the model space into pixels or voxels, our forward solver achieves significant computational savings by constraining the model space to a layered model with perturbations. The speed and reduced model space of the forward solver allow us to use global optimization methods, which typically require numerous function evaluations and are practical only with few unknown variables. Our technique does not require an initial guess of a velocity model and is robust to local minima, unlike the gradient descent methods frequently used both for initial velocity model estimation and for full waveform inversion. We apply our inversion algorithm to several synthetic data sets and demonstrate how prior information can be used to greatly improve the inversion.
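    As a reading aid, here is a minimal sketch of the PSO loop the abstract relies on. Everything below is an assumption for illustration: the per-layer parameterization, the bounds, and the hypothetical misfit(velocities) callback standing in for the boundary integral forward solver.

```python
# Minimal PSO sketch over per-layer velocities. `misfit` is a hypothetical
# callback that runs the forward solver and returns a data-misfit scalar.
import numpy as np

def pso(misfit, lo, hi, n_particles=30, n_iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = lo.size                                      # number of layers
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # per-particle best
    pbest_f = np.array([misfit(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                 # swarm-wide best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # keep velocities physical
        f = np.array([misfit(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

    Note that no initial model guess is needed beyond the bounds, which is the property the abstract emphasizes.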

    Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications, where perturbations appear both in the data vector and in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-)TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least absolute shrinkage and selection operator (Lasso), and endow TLS approaches with the ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing, and robust direction-of-arrival estimation using antenna arrays.
    Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing
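    One common way to write a sparsity-regularized TLS problem of the kind the abstract describes is shown below; the exact objective and weighting used in the paper may differ.

```latex
% Joint estimation of the sparse coefficients x and the regression-matrix
% perturbation E (reduced form after eliminating the data perturbation):
\min_{x,\,E}\;\; \| y - (A + E)\,x \|_2^2 \;+\; \| E \|_F^2 \;+\; \lambda \| x \|_1
```

    Setting E = 0 recovers the Lasso, while dropping the ℓ1 term recovers classical TLS, consistent with the abstract's claim that S-TLS bridges the two frameworks.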

    A Kernel Perspective for Regularizing Deep Neural Networks

    We propose a new point of view for regularizing deep neural networks by using the norm of a reproducing kernel Hilbert space (RKHS). Even though this norm cannot be computed, it admits upper and lower approximations leading to various practical strategies. Specifically, this perspective (i) provides a common umbrella for many existing regularization principles, including spectral norm and gradient penalties, or adversarial training, (ii) leads to new effective regularization penalties, and (iii) suggests hybrid strategies combining lower and upper bounds to get better approximations of the RKHS norm. We experimentally show that this approach is effective when learning on small datasets and for obtaining adversarially robust models.
    Comment: ICML
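    The gradient penalty mentioned in item (i) can be sketched in a few lines. The PyTorch code below is an illustrative variant, not the authors' implementation; the function name and the choice to penalize squared input-gradient norms are assumptions.

```python
# Sketch of a gradient penalty: penalize the input-gradient norms of the
# network output, one of the lower-bound surrogates for the RKHS norm that
# the abstract mentions. Illustrative only.
import torch

def gradient_penalty(model, x):
    x = x.clone().requires_grad_(True)
    out = model(x).sum()           # scalar so autograd.grad returns one tensor
    grads, = torch.autograd.grad(out, x, create_graph=True)
    return grads.flatten(1).norm(dim=1).pow(2).mean()

# usage sketch: loss = task_loss + lam * gradient_penalty(model, inputs)
```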

    Global stabilization of feedforward systems under perturbations in sampling schedule

    For nonlinear systems that are known to be globally asymptotically stabilizable, control over networks introduces a major challenge because of the asynchrony in the transmission schedule. Maintaining global asymptotic stabilization in sampled-data implementations with zero-order hold and with perturbations in the sampling schedule is not achievable in general, but we show in this paper that it is achievable for the class of feedforward systems. We develop sampled-data feedback stabilizers which are not approximations of continuous-time designs but are discontinuous feedback laws specifically developed to maintain global asymptotic stabilizability under any sequence of sampling periods that is uniformly bounded by a certain "maximum allowable sampling period".
    Comment: 27 pages, 5 figures, submitted for possible publication to the SIAM Journal on Control and Optimization. Second version with added remark
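    To make "zero-order hold under a perturbed sampling schedule" concrete, here is a toy simulation. The scalar integrator plant and the linear feedback are stand-ins chosen for brevity, not the paper's feedforward class or its discontinuous feedback law; T_max plays the role of the maximum allowable sampling period.

```python
# Toy zero-order-hold loop: the control is computed only at sampling
# instants and held constant in between; the sampling periods vary
# randomly but stay below an assumed maximum allowable sampling period.
import numpy as np

def simulate_zoh(x0=1.0, k=-1.0, T_max=0.5, t_end=10.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x, t, u, next_sample = x0, 0.0, 0.0, 0.0
    traj = []
    while t < t_end:
        if t >= next_sample:                  # a (perturbed) sampling instant
            u = k * x                         # feedback from the sampled state
            next_sample = t + rng.uniform(0.1 * T_max, T_max)
        x += dt * u                           # plant dx/dt = u, Euler step
        t += dt
        traj.append((t, x))
    return traj
```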