
    Inverse Problems and Data Assimilation

    These notes are designed with the aim of providing a clear and concise introduction to the subjects of Inverse Problems and Data Assimilation, and their inter-relations, together with citations to some relevant literature in this area. The first half of the notes is dedicated to studying the Bayesian framework for inverse problems. Techniques such as importance sampling and Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the desirable property that in the limit of an infinite number of samples they reproduce the full posterior distribution. Since it is often computationally intensive to implement these methods, especially in high dimensional problems, approximate techniques such as approximating the posterior by a Dirac or a Gaussian distribution are discussed. The second half of the notes covers data assimilation. This refers to a particular class of inverse problems in which the unknown parameter is the initial condition of a dynamical system (and, in the stochastic dynamics case, the subsequent states of the system), and the data comprise partial and noisy observations of that (possibly stochastic) dynamical system. We also demonstrate that methods developed in data assimilation may be employed to study generic inverse problems, by introducing an artificial time to generate a sequence of probability measures interpolating from the prior to the posterior.
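
    As a quick illustration of the first of these techniques, here is a minimal sketch of self-normalized importance sampling with the prior as proposal, for a toy scalar inverse problem; the forward map G, the Gaussian prior, and the noise level are illustrative assumptions, not taken from the notes.

```python
# Minimal sketch: self-normalized importance sampling for a Bayesian
# inverse problem, using the prior as the proposal distribution.
# All model choices (Gaussian prior, forward map G, noise level) are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def G(u):
    return u**3  # hypothetical nonlinear forward map

u_true = 0.7
sigma = 0.1                               # observation noise std (assumed)
y = G(u_true) + sigma * rng.normal()      # synthetic observation

N = 100_000
u = rng.normal(0.0, 1.0, size=N)          # samples from a N(0, 1) prior
log_w = -0.5 * ((y - G(u)) / sigma) ** 2  # log-likelihood of each sample
w = np.exp(log_w - log_w.max())
w /= w.sum()                              # self-normalized weights

posterior_mean = np.sum(w * u)
ess = 1.0 / np.sum(w**2)                  # effective sample size
print(f"posterior mean ~ {posterior_mean:.3f}, ESS ~ {ess:.0f}")
```

    As the abstract notes, estimates of this kind converge to the full posterior as the number of samples grows, but the effective sample size degrades quickly in high dimensions, which motivates the approximate techniques discussed in the notes.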

    A Practical Method to Estimate Information Content in the Context of 4D-Var Data Assimilation. II: Application to Global Ozone Assimilation

    Data assimilation obtains improved estimates of the state of a physical system by combining imperfect model results with sparse and noisy observations of reality. Not all observations used in data assimilation are equally valuable. The ability to characterize the usefulness of different data points is important for analyzing the effectiveness of the assimilation system, for data pruning, and for the design of future sensor systems. In the companion paper (Sandu et al., 2012) we derive an ensemble-based computational procedure to estimate the information content of various observations in the context of 4D-Var. Here we apply this methodology to quantify the signal and degrees of freedom for signal information metrics of satellite observations used in a global chemical data assimilation problem with the GEOS-Chem chemical transport model. The assimilation of a subset of data points characterized by the highest information content yields an analysis comparable in quality with the one obtained using the entire data set.
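
    For intuition about the information metric involved, the following sketch computes the degrees of freedom for signal (DOFS) of a linear-Gaussian analysis as trace(KH), with K the Kalman gain; the toy operator and covariances are assumptions, and this is not the ensemble-based 4D-Var procedure of the companion paper.

```python
# Minimal sketch: degrees of freedom for signal (DOFS) in a
# linear-Gaussian analysis, DOFS = trace(K H). A textbook illustration
# of the information metric named in the abstract, with all dimensions
# and covariances assumed for the example.
import numpy as np

n, m = 5, 3                       # state and observation dimensions (toy)
rng = np.random.default_rng(1)
H = rng.normal(size=(m, n))       # assumed linear observation operator
B = np.eye(n)                     # background error covariance (assumed)
R = 0.5 * np.eye(m)               # observation error covariance (assumed)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman gain
dofs = np.trace(K @ H)            # degrees of freedom for signal
print(f"DOFS = {dofs:.2f} (bounded above by m = {m})")
```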

    Estimating model evidence using data assimilation

    We review the field of data assimilation (DA) from a Bayesian perspective and show that, in addition to its by now common application to state estimation, DA may be used for model selection. An important special case of the latter is the discrimination between a factual model (which corresponds, to the best of the modeller's knowledge, to the situation in the actual world in which a sequence of events has occurred) and a counterfactual model, in which a particular forcing or process might be absent or just quantitatively different from the actual world. Three different ensemble-DA methods are reviewed for this purpose: the ensemble Kalman filter (EnKF), the ensemble four-dimensional variational smoother (En-4D-Var), and the iterative ensemble Kalman smoother (IEnKS). An original contextual formulation of model evidence (CME) is introduced. It is shown how to apply these three methods to compute CME, using the approximated time-dependent probability distribution functions (pdfs) each of them provides in the process of state estimation. The theoretical formulae so derived are applied to two simplified nonlinear and chaotic models: (i) the Lorenz three-variable convection model (L63), and (ii) the Lorenz 40-variable midlatitude atmospheric dynamics model (L95). The numerical results of these three DA-based methods and those of an integration based on importance sampling are compared. It is found that better CME estimates are obtained by using DA, and the IEnKS method appears to be the best among the DA methods. Differences among the performance of the three DA-based methods are discussed as a function of model properties. Finally, the methodology is implemented for parameter estimation and for event attribution.
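
    A minimal sketch of the simplest of the compared approaches, a Monte Carlo (importance-sampling-style) evidence estimate from a forecast ensemble, applied to a factual and a counterfactual model; all distributions and values are illustrative assumptions, and the EnKF/En-4D-Var/IEnKS estimators are not reproduced here.

```python
# Minimal sketch: comparing the evidence of two candidate models via a
# Monte Carlo estimate p(y | M) ~ mean_i p(y | x_i), with x_i drawn from
# each model's forecast ensemble. This mirrors the importance-sampling
# baseline the paper compares against, not its DA-based estimators.
import numpy as np

rng = np.random.default_rng(2)

def evidence(forecast_ensemble, y, sigma_obs):
    """Monte Carlo estimate of p(y | model) from a forecast ensemble."""
    log_lik = (-0.5 * ((y - forecast_ensemble) / sigma_obs) ** 2
               - 0.5 * np.log(2 * np.pi * sigma_obs**2))
    return np.exp(log_lik).mean()

y = 1.0                    # observed value (toy)
sigma_obs = 0.3            # observation error std (assumed)
factual = rng.normal(1.1, 0.4, size=5000)         # factual-model forecast
counterfactual = rng.normal(0.2, 0.4, size=5000)  # counterfactual forecast

e_f = evidence(factual, y, sigma_obs)
e_c = evidence(counterfactual, y, sigma_obs)
print(f"log evidence ratio (factual vs counterfactual): {np.log(e_f / e_c):.2f}")
```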

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Data analysis and data assimilation of Arctic Ocean observations

    Thesis (Ph.D.) University of Alaska Fairbanks, 2019. Arctic-region observations are sparse and represent only a small portion of the physical state of nature. It is therefore essential to maximize the information content of observations and observation-conditioned analyses whenever possible, including the quantification of their accuracy. The four largely disparate works presented here emphasize observation analysis and assimilation in the context of the Arctic Ocean (AO). These studies focus on the relationship between observational data/products, numerical models based on physical processes, and the use of such data to constrain and inform those products/models to different ends. The first part comprises Chapters 1 and 2, which revolve around oceanographic observations collected during the International Polar Year (IPY) program of 2007-2009. Chapter 1 validates pan-Arctic satellite-based sea surface temperature and salinity products against these data to establish important estimates of product reliability in terms of bias and bias-adjusted standard errors. It establishes practical regional reliability for these products, which are often used in modeling and climatological applications, and provides some guidance for improving them. Chapter 2 constructs a gridded full-depth snapshot of the AO during the IPY to visually outline recent, previously documented AO watermass distribution changes by comparing it to a historical climatology of the latter 20th century derived from private Russian data. It provides an expository review of literature documenting major AO climate changes and augments them with additional changes in freshwater distribution and sea surface height in the Chukchi and Bering Seas. The last two chapters present work focused on the application of data assimilation (DA) methodologies, and constitute the second part of this thesis, focused on the synthesis of numerical modeling and observational data. Chapter 3 presents a novel approach to sea ice model trajectory optimization whereby spatially-variable sea ice rheology parameter distributions provide the additional model flexibility needed to assimilate observable components of the sea ice state. The study employs a toy 1D model to demonstrate the practical benefits of the approach and serves as a proof-of-concept to justify the considerable effort needed to extend the approach to 2D. Chapter 4 combines an ice-free model of the Chukchi Sea with a modified ensemble filter to develop a DA system suitable for operational forecasting and for monitoring the region in support of oil spill mitigation. The method improves the assimilation of non-Gaussian asynchronous surface current observations beyond the traditional approach.

    Chapter 1: Sea-surface temperature and salinity product comparison against external in situ data in the Arctic Ocean -- Chapter 2: Changes in Arctic Ocean climate evinced through analysis of IPY 2007-2008 oceanographic observations -- Chapter 3: Toward optimization of rheology in sea ice models through data assimilation -- Chapter 4: Ensemble-transform filtering of HFR & ADCP velocities in the Chukchi Sea -- General conclusion
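
    For context on the machinery behind Chapter 4, here is a minimal sketch of a standard ensemble-transform Kalman filter analysis step (in the Hunt et al. 2007 formulation); the thesis modifies such a filter for non-Gaussian, asynchronous data, whereas this sketch is only the plain Gaussian update with assumed toy dimensions.

```python
# Minimal sketch: one ensemble-transform Kalman filter (ETKF) analysis
# step, following the Hunt et al. (2007) formulation. Dimensions,
# operator, and covariances are toy assumptions, and none of the thesis's
# non-Gaussian or asynchronous modifications are included.
import numpy as np

def etkf_analysis(X, y, H, R):
    """X: n x N ensemble, y: length-m obs, H: m x n operator, R: m x m."""
    n, N = X.shape
    xb = X.mean(axis=1, keepdims=True)
    Xp = X - xb                              # background perturbations
    Yp = H @ Xp                              # obs-space perturbations
    Rinv = np.linalg.inv(R)
    Pa = np.linalg.inv((N - 1) * np.eye(N) + Yp.T @ Rinv @ Yp)
    wbar = Pa @ Yp.T @ Rinv @ (y[:, None] - H @ xb)  # mean-update weights
    vals, vecs = np.linalg.eigh((N - 1) * Pa)
    W = vecs @ np.diag(np.sqrt(vals)) @ vecs.T       # symmetric sqrt transform
    return xb + Xp @ (wbar + W)              # analysis ensemble

rng = np.random.default_rng(3)
X = rng.normal(size=(4, 20))                 # toy 4-variable, 20-member ensemble
H = np.eye(2, 4)                             # observe the first two variables
y = np.array([0.5, -0.2])
Xa = etkf_analysis(X, y, H, 0.1 * np.eye(2))
print(Xa.mean(axis=1))
```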

    Interpretable Transformations with Encoder-Decoder Networks

    Deep feature spaces have the capacity to encode complex transformations of their input data. However, understanding the relative feature-space relationship between two transformed encoded images is difficult. For instance, what is the relative feature-space relationship between two rotated images? What is decoded when we interpolate in feature space? Ideally, we want to disentangle confounding factors, such as pose, appearance, and illumination, from object identity. Disentangling these is difficult because they interact in very nonlinear ways. We propose a simple method to construct a deep feature space with explicitly disentangled representations of several known transformations. A person or algorithm can then manipulate the disentangled representation, for example, to re-render an image with explicit control over parameterized degrees of freedom. The feature space is constructed using a transforming encoder-decoder network with a custom feature transform layer, acting on the hidden representations. We demonstrate the advantages of explicit disentangling on a variety of datasets and transformations, and as an aid for traditional tasks, such as classification.
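
    A schematic sketch of what such a feature transform layer might look like for rotations: the known image-space transformation acts as block-diagonal 2x2 rotations on pairs of hidden units. The pairing of features into 2-D planes is an assumption made for illustration; the paper's exact layer may differ.

```python
# Schematic sketch of a "feature transform layer": a known image-space
# transformation (here, rotation by theta) is applied as block-diagonal
# 2x2 rotations acting on pairs of hidden units. The 2-D grouping of
# features is an illustrative assumption, not the paper's exact design.
import numpy as np

def feature_transform(z, theta):
    """Rotate consecutive pairs of features (z of even length) by theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    z = z.reshape(-1, 2)                    # group features into 2-D planes
    return (z @ R.T).reshape(-1)

z = np.arange(8, dtype=float)               # toy encoder output
z_rot = feature_transform(z, np.pi / 2)     # target: 90-degree rotation
# a decoder applied to z_rot would then render the transformed image
print(z_rot)
```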

    Parameter Estimation via Conditional Expectation: A Bayesian Inversion

    When a mathematical or computational model is used to analyse some system, it is usual that some parameters (or functions or fields) in the model are not known, and hence uncertain. These parametric quantities are then identified by actual observations of the response of the real system. In a probabilistic setting, Bayes's theory is the proper mathematical background for this identification process. The possibility of being able to compute a conditional expectation turns out to be crucial for this purpose. We show how this theoretical background can be used in an actual numerical procedure, and briefly discuss various numerical approximations.
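
    A minimal sketch of the underlying idea, assuming the conditional expectation E[u | y] is approximated by the best linear map from simulated observations to parameters, estimated from prior samples; the scalar forward map, noise level, and measurement are illustrative assumptions.

```python
# Minimal sketch: approximating the conditional expectation E[u | y] by
# the best linear map from observations to parameters, estimated from
# prior samples (a linear, Kalman-like update). The forward map,
# dimensions, and noise are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

N = 50_000
u = rng.normal(1.0, 0.5, size=N)             # prior parameter samples
y = u**2 + 0.1 * rng.normal(size=N)          # simulated observations (toy map)

cov = np.cov(u, y)
K = cov[0, 1] / cov[1, 1]                    # scalar "Kalman gain"
y_obs = 1.44                                 # actual measurement (toy)
u_update = u + K * (y_obs - y)               # linearly updated samples
print(f"posterior-mean estimate ~ {u_update.mean():.3f}")
```

    The linear map is only the simplest member of the family of numerical approximations the paper discusses; richer function classes for the conditional expectation lead to correspondingly richer updates.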

    Sigma-point particle filter for parameter estimation in a multiplicative noise environment

    A prerequisite for the "optimal estimate" by the ensemble-based Kalman filter (EnKF) is the Gaussian assumption for background and observation errors, which is often violated when the errors are multiplicative, even for a linear system. This study first explores the challenge that multiplicative noise poses to current EnKF schemes. Then, a Sigma-Point Kalman Filter based Particle Filter (SPPF) is presented as an alternative that resolves the issues associated with multiplicative noise. The classic Lorenz '63 model and a higher-dimensional Lorenz '96 model are used as test beds for the data assimilation experiments. Performance of the SPPF algorithm is compared against a standard EnKF as well as an advanced square-root Sigma-Point Kalman Filter (SPKF). The results show that the SPPF outperforms the EnKF and the square-root SPKF in the presence of multiplicative noise. The super-ensemble structure of the SPPF makes it computationally attractive compared to the standard Particle Filter (PF).
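
    To make the setting concrete, the following sketch runs one bootstrap particle-filter step under multiplicative model noise, the regime in which the Gaussian error assumption behind the EnKF breaks down; this is a plain particle filter, not the paper's SPPF, and the scalar model and noise levels are assumptions.

```python
# Minimal sketch: one bootstrap particle-filter step under multiplicative
# model noise, where forecast errors scale with the state and are thus
# non-Gaussian in general. A plain PF for illustration, not the SPPF.
import numpy as np

rng = np.random.default_rng(5)

N = 2000
x = rng.normal(1.0, 0.2, size=N)             # particle ensemble

# Propagation with multiplicative noise: error scales with the state
x = x * (1.0 + 0.1 * rng.normal(size=N))

# Weight by a Gaussian observation likelihood, then resample
y, sigma_obs = 1.05, 0.05
w = np.exp(-0.5 * ((y - x) / sigma_obs) ** 2)
w /= w.sum()
x = x[rng.choice(N, size=N, p=w)]            # multinomial resampling
print(f"filtered mean ~ {x.mean():.3f}")
```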