A Primal-Dual Proximal Algorithm for Sparse Template-Based Adaptive Filtering: Application to Seismic Multiple Removal
Unveiling meaningful geophysical information from seismic data requires
dealing with both random and structured "noise". As its amplitude may exceed
that of the signals of interest (primaries), additional prior information is
especially important for efficient signal separation. We address here
the problem of multiple reflections, caused by wave-field bouncing between
layers. Since only approximate models of these phenomena are available, we
propose a flexible framework for time-varying adaptive filtering of seismic
signals, using sparse representations, based on inaccurate templates. We recast
the joint estimation of adaptive filters and primaries in a new convex
variational formulation. This approach allows us to incorporate plausible
knowledge about noise statistics, data sparsity and slow filter variation in
parsimony-promoting wavelet frames. The designed primal-dual algorithm solves a
constrained minimization problem that alleviates standard regularization issues
in tuning hyperparameters. The approach achieves good performance under low
signal-to-noise-ratio conditions, on both simulated and real field seismic
data.
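The abstract describes a primal-dual algorithm for a constrained, sparsity-promoting problem but does not spell out its iterations. Below is a minimal illustrative sketch of one standard primal-dual proximal scheme (Chambolle-Pock) applied to a generic problem of that shape, min ||x||_1 subject to ||Kx - b|| <= eps; the operator K, the noise bound eps, and all step-size choices are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximity operator of t*||.||_1 (promotes sparsity)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def project_l2_ball(z, center, radius):
    """Projection onto the l2 ball {z : ||z - center|| <= radius}."""
    d = z - center
    n = np.linalg.norm(d)
    return center + d * min(1.0, radius / n) if n > 0 else z.copy()

def primal_dual_l1_constrained(K, b, eps, n_iter=500):
    """Chambolle-Pock sketch for  min ||x||_1  s.t.  ||K x - b||_2 <= eps.

    Illustrates the constrained-formulation idea in the abstract: a
    physically meaningful noise level eps replaces a hard-to-tune
    regularization weight.
    """
    L = np.linalg.norm(K, 2)          # operator norm of K
    tau = sigma = 0.9 / L             # step sizes with tau*sigma*L^2 < 1
    x = np.zeros(K.shape[1])
    x_bar = x.copy()
    y = np.zeros(K.shape[0])
    for _ in range(n_iter):
        # Dual step: prox of the conjugate of the ball indicator,
        # obtained via Moreau's identity from the projection.
        z = y + sigma * (K @ x_bar)
        y = z - sigma * project_l2_ball(z / sigma, b, eps)
        # Primal step: soft-thresholding enforces sparsity.
        x_new = soft_threshold(x - tau * (K.T @ y), tau)
        x_bar = 2.0 * x_new - x       # over-relaxation (theta = 1)
        x = x_new
    return x
```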
A novel prestack sparse azimuthal AVO inversion
In this paper we demonstrate a new algorithm for sparse prestack azimuthal
AVO inversion. A novel Euclidean prior model is developed that simultaneously
respects sparseness in the layered earth and smoothness in the model of
reflectivity. Recognizing that methods of artificial intelligence and Bayesian
computation are playing an ever-increasing role in augmenting the
interpretation and analysis of geophysical data, we derive a generalized
matrix-variate model of reflectivity in terms of orthogonal basis functions,
subject to sparse constraints. This supports a direct application of machine
learning methods, in a way that can be mapped back onto the physical principles
known to govern reflection seismology. As a demonstration we present an
application of these methods to the Marcellus shale. Attributes extracted using
the azimuthal inversion are clustered using an unsupervised learning algorithm.
Interpretation of the clusters is performed in the context of the Rüger model
of azimuthal AVO.
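To make the pipeline the abstract outlines concrete, here is a hedged sketch: per-sample sparse fitting of azimuthal amplitudes on an orthogonal basis, followed by unsupervised clustering of the fitted coefficients. The Fourier-type basis, the LASSO penalty alpha, and the cluster count are illustrative stand-ins, not the paper's matrix-variate model.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import KMeans

def azimuthal_basis(phi, n_terms=3):
    """Orthogonal (Fourier-type) basis in azimuth; a stand-in for the
    paper's basis functions, chosen here purely for illustration."""
    cols = [np.ones_like(phi)]
    for k in range(1, n_terms):
        cols += [np.cos(2 * k * phi), np.sin(2 * k * phi)]
    return np.column_stack(cols)

def sparse_azimuthal_attributes(amplitudes, phi, alpha=0.01):
    """Fit sparse basis coefficients per sample, then cluster them.

    amplitudes: (n_samples, n_azimuths) prestack AVO amplitudes
    phi:        (n_azimuths,) acquisition azimuths in radians
    Returns per-sample coefficient 'attributes' and cluster labels.
    """
    B = azimuthal_basis(phi)
    # Sparse inversion: one small LASSO problem per depth/time sample.
    coeffs = np.array([
        Lasso(alpha=alpha, fit_intercept=False).fit(B, a).coef_
        for a in amplitudes
    ])
    # Unsupervised interpretation step, as in the abstract: group
    # samples with similar azimuthal signatures (4 clusters assumed).
    labels = KMeans(n_clusters=4, n_init=10).fit_predict(coeffs)
    return coeffs, labels
```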
Compressive Wave Computation
This paper considers large-scale simulations of wave propagation phenomena.
We argue that it is possible to accurately compute a wavefield by decomposing
it onto a largely incomplete set of eigenfunctions of the Helmholtz operator,
chosen at random, and that this provides a natural way of parallelizing wave
simulations for memory-intensive applications.
This paper shows that L1-Helmholtz recovery makes sense for wave computation,
and identifies a regime in which it is provably effective: the one-dimensional
wave equation with coefficients of small bounded variation. Under suitable
assumptions we show that the number of eigenfunctions needed to evolve a sparse
wavefield defined on N points, accurately with very high probability, is
bounded by C log(N) log(log(N)), where C is related to the desired accuracy and
can be made to grow at a much slower rate than N when the solution is sparse.
The PDE estimates that underlie this result are new to the authors' knowledge
and may be of independent mathematical interest; they include an L1 estimate
for the wave equation, an extension estimate for eigenfunctions, and a bound
for eigenvalue gaps in Sturm-Liouville problems.
Numerical examples are presented in one spatial dimension and show that as
few as 10 percent of all eigenfunctions can suffice for accurate results.
Finally, we argue that the compressive viewpoint suggests a competitive
parallel algorithm for an adjoint-state inversion method in reflection
seismology.
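As a toy, one-dimensional illustration of the compressive viewpoint: take the discrete sine basis as the eigenfunctions of the Dirichlet Laplacian, evolve only a random subset of eigenmode coefficients exactly in time, and recover the wavefield by l1 minimization. The mode count, the penalty lam, and the ISTA solver are assumptions for the sketch, not the paper's recovery scheme.

```python
import numpy as np

def sine_eigenfunctions(N):
    """Dirichlet Laplacian eigenvectors on N interior grid points
    (discrete sine basis); rows are orthonormal eigenfunctions."""
    j = np.arange(1, N + 1)
    Phi = np.sin(np.pi * np.outer(j, j) / (N + 1))
    return Phi * np.sqrt(2.0 / (N + 1))

def compressive_wave_step(u0, t, n_modes, lam=1e-3, n_iter=2000):
    """Evolve a sparse 1-D wavefield (zero initial velocity) using only
    a random subset of eigenmodes, then recover it by l1 minimization.

    Each retained mode k evolves exactly via cos(omega_k t); the
    incomplete eigen-expansion is inverted with a sparsity prior
    instead of using all N modes.
    """
    N = len(u0)
    Phi = sine_eigenfunctions(N)
    k = np.sort(np.random.choice(N, n_modes, replace=False))
    # Exact frequencies of the discrete 1-D wave equation, h = 1/(N+1).
    omega = 2 * (N + 1) * np.sin(np.pi * (k + 1) / (2 * (N + 1)))
    A = Phi[k]                          # measurements: selected modes
    c = (A @ u0) * np.cos(omega * t)    # evolved coefficients
    # ISTA for  min 0.5*||A u - c||^2 + lam*||u||_1
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    u = np.zeros(N)
    for _ in range(n_iter):
        g = u - step * A.T @ (A @ u - c)
        u = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return u
```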
Prediction of Large Events on a Dynamical Model of a Fault
We present results for long-term and intermediate-term prediction algorithms
applied to a simple mechanical model of a fault. We use long-term prediction
methods based, for example, on the distribution of repeat times between large
events to establish a benchmark for predictability in the model. In comparison,
intermediate term prediction techniques, analogous to the pattern recognition
algorithms CN and M8 introduced and studied by Keilis-Borok et al., are more
effective at predicting coming large events. We consider the implications of
several different quality functions Q which can be used to optimize the
algorithms with respect to features such as space, time, and magnitude windows,
and find that our results are not overly sensitive to variations in these
algorithm parameters. We also study the intrinsic uncertainties associated with
seismicity catalogs of restricted lengths.
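To illustrate the kind of evaluation the abstract describes, here is a sketch of one plausible quality function Q (hit rate minus fraction of time in alarm, the form used in Molchan-style error diagrams) together with a repeat-time-based long-term predictor; the paper's exact Q and window choices may differ, and all parameters here are illustrative.

```python
import numpy as np

def alarm_quality(event_times, alarm_intervals, total_time):
    """One plausible quality function Q for an alarm-based predictor:
    fraction of events caught minus fraction of time spent in alarm."""
    in_alarm = lambda t: any(a <= t < b for a, b in alarm_intervals)
    hits = sum(in_alarm(t) for t in event_times)
    alarm_time = sum(b - a for a, b in alarm_intervals)
    return hits / len(event_times) - alarm_time / total_time

def repeat_time_alarms(event_times, horizon, window):
    """Long-term predictor sketch: after each large event, declare an
    alarm of half-width `window` around the mean repeat time learned
    from the catalog of past events."""
    t_mean = np.diff(event_times).mean()
    return [(t + t_mean - window, t + t_mean + window)
            for t in event_times if t + t_mean + window <= horizon]
```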