Multi-scale uncertainty quantification in geostatistical seismic inversion
Geostatistical seismic inversion is commonly used to infer the spatial
distribution of the subsurface petro-elastic properties by perturbing the model
parameter space through iterative stochastic sequential
simulations/co-simulations. The spatial uncertainty of the inferred
petro-elastic properties is represented with the updated a posteriori variance
from an ensemble of the simulated realizations. Within this setting, the
large-scale geological parameters (metaparameters) used to generate the petro-elastic
realizations, such as the spatial correlation model and the global a priori
distribution of the properties of interest, are assumed to be known and
stationary for the entire inversion domain. This assumption leads to
underestimation of the uncertainty associated with the inverted models. We
propose a practical framework to quantify uncertainty of the large-scale
geological parameters in seismic inversion. The framework couples
geostatistical seismic inversion with a stochastic adaptive sampling and
Bayesian inference of the metaparameters to provide a more accurate and
realistic prediction of uncertainty not restricted by heavy assumptions on
large-scale geological parameters. The proposed framework is illustrated with
both synthetic and real case studies. The results show the ability to retrieve
more reliable acoustic impedance models with a more adequate uncertainty spread
than conventional geostatistical seismic inversion techniques. The proposed
approach separately accounts for geological uncertainty at the large scale
(metaparameters) and the local scale (trace-by-trace inversion).
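The local-scale uncertainty described above, a posteriori variance computed cell by cell over an ensemble of simulated realizations, can be sketched as follows. This is a minimal illustration only: the ensemble here is synthetic random data standing in for geostatistical co-simulations, and the array shapes are hypothetical, not the authors' implementation.

```python
# Sketch: pointwise a posteriori statistics from an ensemble of
# simulated realizations (illustrative stand-in for geostatistical
# co-simulation output; not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 100 acoustic-impedance realizations on a
# 50 x 50 grid, with the realization index on the first axis.
ensemble = rng.normal(loc=6000.0, scale=500.0, size=(100, 50, 50))

# Spatial uncertainty is summarized per grid cell across realizations:
posterior_mean = ensemble.mean(axis=0)        # best estimate per cell
posterior_var = ensemble.var(axis=0, ddof=1)  # a posteriori variance per cell

print(posterior_mean.shape, posterior_var.shape)
```

If the metaparameters (spatial correlation model, global a priori distribution) are held fixed, this variance only reflects local, trace-by-trace uncertainty, which is precisely why the paper argues it understates the total model uncertainty.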
Many-Task Computing and Blue Waters
This report discusses many-task computing (MTC) generically and in the
context of the proposed Blue Waters system, which is planned to be the largest
NSF-funded supercomputer when it begins production use in 2012. The aim of this
report is to inform the BW project about MTC, including understanding aspects
of MTC applications that can be used to characterize the domain and
understanding the implications of these aspects to middleware and policies.
Many MTC applications do not neatly fit the stereotypes of high-performance
computing (HPC) or high-throughput computing (HTC) applications. Like HTC
applications, by definition MTC applications are structured as graphs of
discrete tasks, with explicit input and output dependencies forming the graph
edges. However, MTC applications have significant features that distinguish
them from typical HTC applications. In particular, different engineering
constraints for hardware and software must be met in order to support these
applications. HTC applications have traditionally run on platforms such as
grids and clusters, through either workflow systems or parallel programming
systems. MTC applications, in contrast, will often demand a short time to
solution, may be communication intensive or data intensive, and may comprise
very short tasks. Therefore, hardware and software for MTC must be engineered
to support the additional communication and I/O and must minimize task dispatch
overheads. The hardware of large-scale HPC systems, with its high degree of
parallelism and support for intensive communication, is well suited for MTC
applications. However, HPC systems often lack a dynamic resource-provisioning
feature, are not ideal for task communication via the file system, and have an
I/O system that is not optimized for MTC-style applications. Hence, additional
software support is likely to be required to gain full benefit from the HPC
hardware.
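The structure described above, an application composed of discrete tasks whose explicit input/output dependencies form the graph edges, can be sketched in a few lines. This is a toy illustration of the MTC task-graph model only; the task names are hypothetical and no Blue Waters middleware is implied.

```python
# Minimal sketch of an MTC-style application: discrete tasks wired
# into a graph by explicit input/output dependencies, executed in
# dependency order. (Illustrative only.)
from graphlib import TopologicalSorter

# Each task maps to the set of tasks whose outputs it consumes;
# these pairs are the edges of the task graph.
dependencies = {
    "preprocess": set(),
    "simulate_a": {"preprocess"},
    "simulate_b": {"preprocess"},
    "aggregate": {"simulate_a", "simulate_b"},
}

def run(task, results):
    # Stand-in for real work; an MTC dispatcher would launch many
    # such short, possibly data-intensive tasks with low overhead.
    results[task] = f"output of {task}"

results = {}
for task in TopologicalSorter(dependencies).static_order():
    run(task, results)

print(list(results))
```

A real MTC runtime differs mainly in scale and engineering: it must dispatch very large numbers of such tasks quickly and move their intermediate data without funneling everything through the file system, which is the gap in HPC software support the report identifies.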