Efficient Localization of Discontinuities in Complex Computational Simulations
Surrogate models for computational simulations are input-output
approximations that allow computationally intensive analyses, such as
uncertainty propagation and inference, to be performed efficiently. When a
simulation output does not depend smoothly on its inputs, the error and
convergence rate of many approximation methods deteriorate substantially. This
paper details a method for efficiently localizing discontinuities in the input
parameter domain, so that the model output can be approximated as a piecewise
smooth function. The approach comprises an initialization phase, which uses
polynomial annihilation to assign function values to different regions and thus
seed an automated labeling procedure, followed by a refinement phase that
adaptively updates a kernel support vector machine representation of the
separating surface via active learning. The overall approach avoids structured
grids and exploits any available simplicity in the geometry of the separating
surface, thus reducing the number of model evaluations required to localize the
discontinuity. The method is illustrated on examples of up to eleven
dimensions, including algebraic models and ODE/PDE systems, and demonstrates
improved scaling and efficiency over other discontinuity localization
approaches.
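The two-phase idea can be sketched in one dimension, where the separating surface degenerates to a single point and the refinement phase reduces to bisection on the labels assigned to each side of the jump. The function `locate_jump` and the test model below are illustrative stand-ins, not the paper's polynomial-annihilation/SVM machinery:

```python
import numpy as np

def locate_jump(f, a, b, tol=1e-6):
    """Shrink an interval known to bracket a discontinuity.

    A toy 1-D analogue of the paper's refinement phase: instead of a
    kernel SVM separating surface, we track a single bracketing interval
    and query the model at its midpoint (the simplest possible
    'active learning' query), halving the interval each time.
    """
    fa, fb = f(a), f(b)
    assert np.sign(fa) != np.sign(fb), "interval must bracket the jump"
    while b - a > tol:
        m = 0.5 * (a + b)
        if np.sign(f(m)) == np.sign(fa):
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# Piecewise-smooth model with a jump at x = 0.3 (illustrative)
f = lambda x: np.sin(x) - 2.0 if x < 0.3 else np.cos(x) + 1.0
x_star = locate_jump(f, 0.0, 1.0)
print(round(x_star, 4))  # ≈ 0.3
```

Each query costs one model evaluation, which is why exploiting geometric simplicity of the separating surface (here, a point) keeps the evaluation count low.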
A continuous analogue of the tensor-train decomposition
We develop new approximation algorithms and data structures for representing
and computing with multivariate functions using the functional tensor-train
(FT), a continuous extension of the tensor-train (TT) decomposition. The FT
represents functions using a tensor-train ansatz by replacing the
three-dimensional TT cores with univariate matrix-valued functions. The main
contribution of this paper is a framework to compute the FT that employs
adaptive approximations of univariate fibers, and that is not tied to any
tensorized discretization. The algorithm can be coupled with any univariate
linear or nonlinear approximation procedure. We demonstrate that this approach
can generate multivariate function approximations that are several orders of
magnitude more accurate, for the same cost, than those based on the
conventional approach of compressing the coefficient tensor of a tensor-product
basis. Our approach is in the spirit of other continuous computation packages
such as Chebfun, and yields an algorithm which requires the computation of
"continuous" matrix factorizations such as the LU and QR decompositions of
vector-valued functions. To support these developments, we describe continuous
versions of an approximate maximum-volume cross approximation algorithm and of
a rounding algorithm that re-approximates an FT by one of lower ranks. We
demonstrate that our technique improves accuracy and robustness, compared to TT
and quantics-TT approaches with fixed parameterizations, of high-dimensional
integration, differentiation, and approximation of functions with local
features such as discontinuities and other nonlinearities.
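For a separable (rank-1) function, the FT idea reduces to approximating each univariate core independently by any one-dimensional procedure. A minimal numpy sketch, using Chebyshev least-squares fits as the univariate approximation (the target function and degrees are illustrative, not from the paper):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# A rank-1 'functional tensor-train': each core is a univariate function
# stored as Chebyshev coefficients, fitted independently of the others.
# f(x, y) = exp(x) * sin(y) is exactly rank 1, so two cores suffice.
xs = np.cos(np.linspace(0, np.pi, 33))   # Chebyshev points on [-1, 1]
core_x = C.chebfit(xs, np.exp(xs), 12)   # univariate core for x
core_y = C.chebfit(xs, np.sin(xs), 12)   # univariate core for y

def ft_eval(x, y):
    # Evaluate the continuous representation at an arbitrary point
    return C.chebval(x, core_x) * C.chebval(y, core_y)

err = abs(ft_eval(0.3, -0.5) - np.exp(0.3) * np.sin(-0.5))
print(err < 1e-10)  # True
```

Higher FT ranks replace each scalar core by a matrix-valued univariate function; the cross-approximation and rounding algorithms in the paper decide which fibers to sample and how to truncate the ranks.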
Bayesian inference with optimal maps
We present a new approach to Bayesian inference that entirely avoids Markov chain simulation, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. We discuss various means of explicitly parameterizing the map and computing it efficiently through solution of an optimization problem, exploiting gradient information from the forward model when possible. The resulting algorithm overcomes many of the computational bottlenecks associated with Markov chain Monte Carlo. Advantages of a map-based representation of the posterior include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent posterior samples without additional likelihood evaluations or forward solves. The optimization approach also provides clear convergence criteria for posterior approximation and facilitates model selection through automatic evaluation of the marginal likelihood. We demonstrate the accuracy and efficiency of the approach on nonlinear inverse problems of varying dimension, involving the inference of parameters appearing in ordinary and partial differential equations.
United States. Dept. of Energy. Office of Advanced Scientific Computing Research (Grant DE-SC0002517)
United States. Dept. of Energy. Office of Advanced Scientific Computing Research (Grant DE-SC0003908)
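In the linear-Gaussian special case the prior-to-posterior map is affine and available in closed form, which makes the advertised benefits concrete: posterior moments are read off the map, and fresh posterior samples cost nothing beyond sampling the prior. The scalar model and numbers below are illustrative:

```python
import numpy as np

# Illustrative model: x ~ N(0, 1), y = x + eps, eps ~ N(0, s^2).
# The map T(z) = m + sqrt(v) * z pushes the N(0,1) prior to the
# N(m, v) posterior, so moments come directly from the map's coefficients.
s2, y = 0.5**2, 1.2
v = 1.0 / (1.0 + 1.0 / s2)        # posterior variance (conjugate update)
m = v * y / s2                    # posterior mean

T = lambda z: m + np.sqrt(v) * z  # transport map: prior -> posterior

rng = np.random.default_rng(0)
samples = T(rng.standard_normal(100_000))  # no extra likelihood evaluations
print(abs(samples.mean() - m) < 0.01, abs(samples.var() - v) < 0.01)
```

For nonlinear problems the map is no longer affine; the paper parameterizes it (e.g., polynomially) and finds it by optimization, but the sampling mechanism stays exactly this cheap.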
Inhibitory Activity of Leaves Extracts of Citrullus colocynthis Schrad. on HT29 Human Colon Cancer Cells
Aims: Citrullus colocynthis is a plant endemic to Asia, Africa and the Mediterranean basin. It is
used in folk medicine against infections, inflammation and cardiovascular and immune-related
diseases. There is further evidence of the use of Citrullus colocynthis Schrad. in the treatment of
cancer in traditional practice. The present study aimed to determine the potential antiproliferative
effects of different Citrullus colocynthis leaf extracts on human cancer cells.
Methodology: Antiproliferative and antioxidant effects on HT-29 human colon cancer cells were
detected by MTS assay and a modified protocol of the alkaline Comet assay. In vitro antioxidant
activities of different leaf extracts were evaluated through DPPH, β-carotene/linoleic acid and
reducing power assays.
Results: The leaf chloroform extract exhibited the highest cell growth inhibitory activity without
induction of DNA damage; it also significantly decreased the DNA damage induced by
H2O2 (100 µM). This antioxidant activity appears comparable to that of vitamin C (1 mM). The ethyl
acetate, acetone and methanol leaf extracts were the most effective in scavenging the stable
free DPPH radical (IC50 = 113 µg/ml), in reducing Fe3+ to Fe2+ (IC50 = 134 µg/ml) and in
inhibiting linoleic acid oxidation (31.9% inhibition).
Conclusion: Our results confirm the antiproliferative potential of Citrullus colocynthis Schrad. on
human cancer cells.
Bayesian reconstruction of binary media with unresolved fine-scale spatial structures
We present a Bayesian technique to estimate the fine-scale properties of a binary medium from multiscale observations. The binary medium of interest consists of spatially varying proportions of low and high permeability material with an isotropic structure. Inclusions of one material within the other are far smaller than the domain sizes of interest, and thus are never explicitly resolved. We consider the problem of estimating the spatial distribution of the inclusion proportion, F(x), and a characteristic length-scale of the inclusions, δ, from sparse multiscale measurements. The observations consist of coarse-scale (of the order of the domain size) measurements of the effective permeability of the medium (i.e., static data) and tracer breakthrough times (i.e., dynamic data), which interrogate the fine scale, at a sparsely distributed set of locations. This ill-posed problem is regularized by specifying a Gaussian process model for the unknown field F(x) and expressing it as a superposition of Karhunen–Loève modes. The effect of the fine-scale structures on the coarse-scale effective permeability (i.e., upscaling) is computed using a subgrid model that includes δ as one of its parameters. A statistical inverse problem is posed to infer the weights of the Karhunen–Loève modes and δ, and is then solved using an adaptive Markov chain Monte Carlo method. The solution yields non-parametric distributions for the objects of interest, thus providing most probable estimates and uncertainty bounds on latent structures at coarse and fine scales. The technique is tested using synthetic data. The individual contributions of the static and dynamic data to the inference are also analyzed.
United States. Dept. of Energy. National Nuclear Security Administration (Contract DE-AC04-94AL85000)
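The truncated Karhunen–Loève parameterization at the heart of this regularization can be sketched in numpy; the squared-exponential covariance, grid, and truncation level below are illustrative choices, not the paper's:

```python
import numpy as np

# Truncated Karhunen-Loeve representation of a 1-D Gaussian random field,
# the device used to regularize the unknown proportion field F(x).
n, ell = 200, 0.2
x = np.linspace(0, 1, n)
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)  # covariance

lam, phi = np.linalg.eigh(K)                # eigenvalues ascending
lam, phi = lam[::-1], phi[:, ::-1]          # reorder to descending
k = 10                                      # keep the k leading modes
rng = np.random.default_rng(1)
xi = rng.standard_normal(k)                 # mode weights: the MCMC unknowns
F = phi[:, :k] @ (np.sqrt(np.maximum(lam[:k], 0)) * xi)

energy = lam[:k].sum() / lam.sum()
print(energy > 0.99)  # a few modes capture nearly all the variance
```

Inference then targets the low-dimensional weight vector `xi` (plus δ) rather than the full field, which is what makes the adaptive MCMC tractable.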
On dimension reduction in Gaussian filters
A priori dimension reduction is a widely adopted technique for reducing the
computational complexity of stationary inverse problems. In this setting, the
solution of an inverse problem is parameterized by a low-dimensional basis that
is often obtained from the truncated Karhunen-Loeve expansion of the prior
distribution. For high-dimensional inverse problems equipped with smoothing
priors, this technique can lead to drastic reductions in parameter dimension
and significant computational savings.
In this paper, we extend the concept of a priori dimension reduction to
non-stationary inverse problems, in which the goal is to sequentially infer the
state of a dynamical system. Our approach proceeds in an offline-online
fashion. We first identify a low-dimensional subspace in the state space before
solving the inverse problem (the offline phase), using either the method of
"snapshots" or regularized covariance estimation. Then this subspace is used to
reduce the computational complexity of various filtering algorithms - including
the Kalman filter, extended Kalman filter, and ensemble Kalman filter - within
a novel subspace-constrained Bayesian prediction-and-update procedure (the
online phase). We demonstrate the performance of our new dimension reduction
approach on various numerical examples. In some test cases, our approach
reduces the dimensionality of the original problem by orders of magnitude and
yields up to two orders of magnitude in computational savings.
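A minimal offline-online sketch: the method of snapshots supplies the offline basis, and a single Kalman update is then performed entirely in the reduced coordinates. All dimensions, operators, and data below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 100, 5                         # full state dim, reduced dim

# Offline: method of snapshots -> r-dimensional subspace basis U
snapshots = rng.standard_normal((n, 3)) @ rng.standard_normal((3, 50))
U = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]

# Online: one Kalman update of the reduced state a = U.T @ x
a = np.zeros(r)                       # reduced prior mean
P = np.eye(r)                         # reduced prior covariance
H = rng.standard_normal((4, n)) @ U   # observation operator, reduced
R = 0.1 * np.eye(4)                   # observation noise covariance
y = rng.standard_normal(4)            # synthetic observation

S = H @ P @ H.T + R                   # innovation covariance (4x4)
K = P @ H.T @ np.linalg.solve(S, np.eye(4))
a_post = a + K @ (y - H @ a)
P_post = (np.eye(r) - K @ H) @ P
x_post = U @ a_post                   # lift back to the full state space
print(P_post.shape)  # (5, 5): every matrix operation stayed r-dimensional
```

The covariance algebra is r x r instead of n x n, which is the source of the reported computational savings when r is orders of magnitude smaller than n.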
Inverse Problems in a Bayesian Setting
In a Bayesian setting, inverse problems and uncertainty quantification (UQ)
--- the propagation of uncertainty through a computational (forward) model ---
are strongly connected. Expressed as a conditional expectation, the Bayesian
update becomes computationally attractive. We give a detailed account of this
approach via conditional expectation, its various approximations, and the
construction of filters. Together with a functional or spectral approach for
the forward UQ there is no need for time-consuming and slowly convergent Monte
Carlo sampling. The developed sampling-free non-linear Bayesian update, in the
form of a filter, is derived from the variational problem associated with
conditional
expectation. This formulation in general calls for further discretisation to
make the computation possible, and we choose a polynomial approximation. After
giving details on the actual computation in the framework of functional or
spectral approximations, we demonstrate the workings of the algorithm on a
number of examples of increasing complexity. Finally, we compare the linear and
nonlinear Bayesian updates in the form of a filter on some examples.
Comment: arXiv admin note: substantial text overlap with arXiv:1312.504
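The simplest instance of such a sampling-free update is the linear (Gauss-Markov-Kalman) filter, whose gain is built from covariances of the parameter and the predicted observation. Here those covariances are estimated from an ensemble as a stand-in for the functional/spectral (polynomial) representation used in the paper; the forward model is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)                 # prior ensemble of the parameter
y = x + 0.2 * x**3 + 0.1 * rng.standard_normal(5000)  # predicted observation

Cov = np.cov(x, y)
C_xy, C_yy = Cov[0, 1], Cov[1, 1]
K = C_xy / C_yy                               # scalar linear filter gain

y_obs = 1.0
x_updated = x + K * (y_obs - y)               # updated ensemble, no MCMC
print(x_updated.shape)  # (5000,)
```

The update is a single algebraic map applied to every ensemble (or spectral) representative at once; no Markov chain and no repeated likelihood evaluations are needed, which is exactly the attraction of the conditional-expectation form.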
Variability monitoring of the hydroxyl maser emission in G12.889+0.489
Through a series of observations with the Australia Telescope Compact Array
we have monitored the variability of ground-state hydroxyl maser emission from
G12.889+0.489 in all four Stokes polarisation products. These observations were
motivated by the known periodicity in the associated 6.7-GHz methanol maser
emission. A total of 27 epochs of observations were made over 16 months. No
emission was seen from either the 1612 or 1720 MHz satellite line transitions
(to a typical five sigma upper limit of 0.2 Jy). The peak flux densities of the
1665 and 1667 MHz emission were observed to vary at a level of ~20% (with the
exception of one epoch which dropped by <40%). There was no distinct flaring
activity at any epoch, but there was a weak indication of periodic variability,
with a period and phase of minimum emission similar to that of methanol. There
is no significant variation in the polarised properties of the hydroxyl, with
Stokes Q and U flux densities varying in accord with the Stokes I intensity
(linear polarisation, P, varying by <20%) and the right and left circularly
polarised components varying by <33% at 1665 MHz and <38% at 1667 MHz. These
observations are the first monitoring observations of the hydroxyl maser
emission from G12.889+0.489.
Comment: 7 pages, 6 figures, accepted for publication in MNRA
An approximate empirical Bayesian method for large-scale linear-Gaussian inverse problems
We study Bayesian inference methods for solving linear inverse problems,
focusing on hierarchical formulations where the prior or the likelihood
function depend on unspecified hyperparameters. In practice, these
hyperparameters are often determined via an empirical Bayesian method that
maximizes the marginal likelihood function, i.e., the probability density of
the data conditional on the hyperparameters. Evaluating the marginal
likelihood, however, is computationally challenging for large-scale problems.
In this work, we present a method to approximately evaluate marginal likelihood
functions, based on a low-rank approximation of the update from the prior
covariance to the posterior covariance. We show that this approximation is
optimal in a minimax sense. Moreover, we provide an efficient algorithm to
implement the proposed method, based on a combination of the randomized SVD and
a spectral approximation method to compute square roots of the prior covariance
matrix. Several numerical examples demonstrate good performance of the proposed
method.