A Multi-Grid Iterative Method for Photoacoustic Tomography
Inspired by the recent advances on minimizing nonsmooth or bound-constrained
convex functions on models using varying degrees of fidelity, we propose a line
search multigrid (MG) method for full-wave iterative image reconstruction in
photoacoustic tomography (PAT) in heterogeneous media. To compute the search
direction at each iteration, we choose between the gradient at the target
level and an approximate error correction computed at a coarser level,
according to predefined criteria. To incorporate absorption and dispersion,
we derive the analytical adjoint directly from the first-order acoustic wave
system. The effectiveness of the proposed method is tested on a total-variation
penalized Iterative Shrinkage Thresholding algorithm (ISTA) and its accelerated
variant (FISTA), which have been used in many studies of image reconstruction
in PAT. The results show the great potential of the proposed method for
improving the speed of iterative image reconstruction.
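As an illustration of the reconstruction algorithms the method is tested on, here is a minimal ISTA/FISTA sketch for a generic linear inverse problem. An l1 soft-thresholding proximal step stands in for the total-variation penalty, and a plain matrix A stands in for the paper's wave-equation forward operator; this is a sketch of the iteration structure, not the paper's implementation.

```python
import numpy as np

def ista(A, b, lam, step, iters=200):
    """Plain ISTA: gradient step on the data term, then soft-thresholding.

    An l1 penalty stands in for the total-variation term; the structure
    of the iteration (gradient step + proximal step) is the same.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

def fista(A, b, lam, step, iters=200):
    """FISTA: the same proximal step, applied at an extrapolated point."""
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(iters):
        z = y - step * (A.T @ (A @ y - b))
        x_new = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

A multigrid variant, as proposed in the paper, would replace some of these fine-level gradient steps with coarse-level error corrections.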
A Second Order Godunov Method for Multidimensional Relativistic Magnetohydrodynamics
We describe a new Godunov algorithm for relativistic magnetohydrodynamics
(RMHD) that combines a simple, unsplit second order accurate integrator with
the constrained transport (CT) method for enforcing the solenoidal constraint
on the magnetic field. A variety of approximate Riemann solvers are implemented
to compute the fluxes of the conserved variables. The methods are tested with a
comprehensive suite of multidimensional problems. These tests have helped us
develop a hierarchy of correction steps that are applied when the integration
algorithm predicts unphysical states due to errors in the fluxes, or errors in
the inversion between conserved and primitive variables. Although used
exceedingly rarely, these corrections dramatically improve the stability of the
algorithm. We present preliminary results from the application of these
algorithms to two problems in RMHD: the propagation of supersonic magnetized
jets, and the amplification of magnetic field by turbulence driven by the
relativistic Kelvin-Helmholtz instability (KHI). Both of these applications
reveal important differences between the results computed with Riemann solvers
that adopt different approximations for the fluxes. For example, we show that
use of Riemann solvers which include both contact and rotational
discontinuities can increase the strength of the magnetic field within the
cocoon by a factor of ten in simulations of RMHD jets, and can increase the
spectral resolution of three-dimensional RMHD turbulence driven by the KHI by a
factor of 2. This increase in accuracy far outweighs the associated increase in
computational cost. Our RMHD scheme is publicly available as part of the Athena
code.

Comment: 75 pages, 28 figures, accepted for publication in ApJS. Version with high-resolution figures available from http://jila.colorado.edu/~krb3u/Athena_SR/rmhd_method_paper.pd
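The divergence-preserving property of constrained transport is easy to demonstrate in isolation. The sketch below is a generic 2D CT update on a staggered mesh (not the paper's full second-order integrator): face-centred field components are evolved from a corner-centred EMF, and the cell-centred divergence is left unchanged to machine precision.

```python
import numpy as np

def ct_update(bx, by, ez, dt, dx, dy):
    """One constrained-transport update of a 2D staggered magnetic field.

    bx lives on x-faces with shape (nx+1, ny), by on y-faces with shape
    (nx, ny+1), and the z-component of the EMF ez on cell corners with
    shape (nx+1, ny+1). Faraday's law is discretised so that the update
    to the cell-centred divergence cancels identically.
    """
    bx_new = bx - dt / dy * (ez[:, 1:] - ez[:, :-1])
    by_new = by + dt / dx * (ez[1:, :] - ez[:-1, :])
    return bx_new, by_new

def div_b(bx, by, dx, dy):
    """Cell-centred divergence of the staggered field, shape (nx, ny)."""
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy
```

Whatever approximate Riemann solver supplies the EMF, this update keeps an initially divergence-free field divergence-free.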
Status of the differential transformation method
Further to a recent controversy over whether the differential transformation
method (DTM) for solving a differential equation is purely and simply the
traditional Taylor series method, it is emphasized that the DTM is currently
used, often exclusively, as a technique for (analytically) calculating the
power series of the solution (in terms of the initial value parameters).
Sometimes a piecewise analytic continuation process is implemented, either in
a numerical routine (e.g., within a shooting method) or in a semi-analytical
procedure (e.g., to solve a boundary value problem). It is also emphasized
that, at the time of its invention, the basic ingredients of the DTM as
currently used (which transform a differential equation into a difference
equation of the same order that can be solved iteratively) had long been known
to "traditional"-Taylor-method users (notably in the development of software
packages, i.e. numerical routines, for automatically solving ordinary
differential equations). To this day, proponents of the DTM still ignore the
much better developed studies of the "traditional"-Taylor-method users, who in
turn seem equally unaware of the DTM. The DTM has been given an apparently
strong formalization (set on the same footing as the Fourier, Laplace, or
Mellin transformations). Though often used trivially, it is easily grasped and
easily adapted to different kinds of differentiation procedures, which has
made it very attractive. Hence applications of the Taylor method, and more
generally of the power series method (including noninteger powers), to various
problems have been sketched. It seems that its potential has not been
exploited as fully as it could be. After a discussion of the reasons for the
"misunderstandings" that caused the controversy, the preceding topics are
concretely illustrated.

Comment: To appear in Applied Mathematics and Computation, 29 pages,
references and further considerations added
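For the simplest possible example of the recurrence the abstract describes, take y' = y with y(0) = 1: the differential transform turns the differential equation into the one-term difference relation (k+1) Y(k+1) = Y(k), which is exactly the Taylor recursion Y(k) = 1/k!, solved iteratively and then summed.

```python
from math import exp

def dtm_exp(order, x):
    """Differential transform of y' = y, y(0) = 1.

    The DTM recurrence (k+1) Y(k+1) = Y(k) is iterated to the given
    order; summing Y(k) * x**k recovers the Taylor series of e**x.
    """
    Y = [1.0]                       # Y(0) = y(0)
    for k in range(order):
        Y.append(Y[k] / (k + 1))    # (k+1) Y(k+1) = Y(k)
    return sum(Y[k] * x**k for k in range(order + 1))
```

The identity of this recursion with the classical Taylor coefficients is precisely the point of the controversy discussed above.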
Assessment and calibration of the γ equation transition model for a wide range of Reynolds numbers at low Mach
The numerical simulation of flows over large-scale wind turbine blades without considering the transition
from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance.
Thanks to its relative simplicity and promising results, the Local-Correlation based Transition Modelling
concept represents a valid way to include transitional effects into practical CFD simulations. However, the
model involves coefficients that must be tuned to the intended application. In this paper, the γ-equation transition model is assessed and calibrated, for a wide range of Reynolds numbers at low Mach numbers, as needed for wind turbine applications. Different airfoils are used to evaluate the original model and calibrate it, whereas a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.
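The kind of coefficient tuning described here can be illustrated with a toy fit. The sketch below uses a Mayle-type onset correlation Re_θt = C · Tu^(-5/8), which is a hypothetical stand-in for the γ-model's correlations (the real model ties onset to local flow quantities); the coefficient C is calibrated against transition data by least squares.

```python
import numpy as np

def calibrate_coefficient(tu, re_theta_t, exponent=-0.625):
    """Least-squares fit of C in Re_theta_t = C * Tu**exponent.

    tu: freestream turbulence intensities (percent).
    re_theta_t: measured transition-onset momentum-thickness Reynolds
    numbers at those intensities. Returns the calibrated coefficient.
    """
    basis = tu ** exponent
    return float(np.dot(basis, re_theta_t) / np.dot(basis, basis))
```

Calibrating the actual γ-equation coefficients proceeds in the same spirit, but against CFD-computed transition locations on the reference airfoils rather than a closed-form correlation.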
Numerical Homogenization of Heterogeneous Fractional Laplacians
In this paper, we develop a numerical multiscale method to solve the
fractional Laplacian with a heterogeneous diffusion coefficient. When the
coefficient is heterogeneous, this adds to the computational cost. Moreover,
the fractional Laplacian is a nonlocal operator in its standard form; however,
the Caffarelli-Silvestre extension allows for a localization of the equations.
This adds the complexity of an extra spatial dimension and a
singular/degenerate coefficient depending on the fractional order. Using a
sub-grid correction method, we correct the basis functions in a natural
weighted Sobolev space and show that these corrections can be truncated to
design a computationally efficient scheme with optimal convergence rates. A
key ingredient of this method is the use of quasi-interpolation operators to
construct the fine-scale spaces. Since the solution of the extended problem on
the critical boundary is of main interest, we construct a projective
quasi-interpolation that has both d- and (d+1)-dimensional averages over
subsets, in the spirit of the Scott-Zhang operator. We show that this operator
satisfies local stability and local approximation properties in weighted
Sobolev spaces. We further show that we can obtain a greater rate of
convergence for sufficiently smooth forces by utilizing a global projection on
the critical boundary. We present some numerical examples, utilizing our
projective quasi-interpolation, for analytic and heterogeneous cases to
demonstrate the rates and effectiveness of the method.
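In the constant-coefficient case on an interval, the fractional Laplacian can be written directly in the eigenbasis, which makes the nonlocality concrete; the heterogeneous-coefficient setting treated in the paper admits no such closed form, hence the multiscale construction. Below is a minimal spectral sketch under an assumed setup of zero Dirichlet data on (0, 1), unrelated to the paper's sub-grid method.

```python
import numpy as np

def fractional_laplacian_1d(u, s):
    """Spectral fractional Laplacian on (0, 1), zero Dirichlet data.

    u samples the function at interior points x_j = j/(n+1). The function
    is expanded in the sine eigenbasis of -d^2/dx^2, and mode k is scaled
    by ((k*pi)**2)**s, the s-th power of the k-th eigenvalue.
    """
    n = len(u)
    x = np.arange(1, n + 1) / (n + 1)
    k = np.arange(1, n + 1)
    phi = np.sin(np.pi * np.outer(k, x))    # eigenfunctions sin(k*pi*x)
    coeffs = 2.0 / (n + 1) * phi @ u        # discrete sine coefficients
    lam = (np.pi * k) ** 2                  # eigenvalues of -Laplacian
    return phi.T @ (lam ** s * coeffs)
```

For u = sin(pi*x) and s = 1/2 this returns pi * sin(pi*x), since the first eigenvalue is pi**2.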
Efficient cosmological parameter sampling using sparse grids
We present a novel method to significantly speed up cosmological parameter
sampling. The method relies on constructing an interpolation of the
CMB-log-likelihood based on sparse grids, which is used as a shortcut for the
likelihood evaluation. We obtain excellent results over a large region of
parameter space, spanning about 25 units of log-likelihood around the peak,
and we reproduce the one-dimensional projections of the likelihood almost
perfectly.
In speed and accuracy, our technique is competitive to existing approaches to
accelerate parameter estimation based on polynomial interpolation or neural
networks, while having some advantages over them. With our method, there is no
danger of creating unphysical wiggles, as can be the case for high-degree
polynomial fits. Furthermore, we do not require a long training time as for
neural networks, but the construction of the interpolation is determined by the
time it takes to evaluate the likelihood at the sampling points, which can be
parallelised to an arbitrary degree. Our approach is completely general, and it
can adaptively exploit the properties of the underlying function. We can thus
apply it to any problem where an accurate interpolation of a function is
needed.

Comment: Submitted to MNRAS, 13 pages, 13 figures
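The interpolation idea is easiest to see in one dimension, where the hierarchical hat basis underlying sparse grids can be written out in a few lines: each level adds the midpoints of the previous level, and each node stores only the "surplus" over the coarser interpolant. This is a toy stand-in for the adaptive sparse-grid surrogate of the CMB log-likelihood, not the paper's code.

```python
def hierarchical_interpolant(f, max_level):
    """1D hierarchical hat-basis interpolant of f on [0, 1].

    Level l contributes the nodes x = i * 2**-l for odd i (the midpoints
    of the level l-1 grid). Each node stores the hierarchical surplus:
    f(x) minus the interpolant built from all coarser levels, which
    vanish there is exactly why only new information is stored.
    """
    nodes, levels, surpluses = [], [], []

    def evaluate(x):
        val = 0.0
        for xn, l, w in zip(nodes, levels, surpluses):
            h = 2.0 ** -l                       # support half-width
            val += w * max(0.0, 1.0 - abs(x - xn) / h)
        return val

    for l in range(1, max_level + 1):
        for i in range(1, 2 ** l, 2):           # odd indices: new nodes
            x = i * 2.0 ** -l
            w = f(x) - evaluate(x)              # surplus over coarser levels
            nodes.append(x); levels.append(l); surpluses.append(w)
    return evaluate
```

In d dimensions, a sparse grid keeps only tensor products of these levels whose total level is small, which is what makes the likelihood surrogate affordable; small surpluses also signal where adaptive refinement can stop.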