
    Snow stratigraphic heterogeneity within ground-based passive microwave radiometer footprints: implications for emission modeling

    Two-dimensional measurements of snowpack properties (stratigraphic layering, density, grain size and temperature) were used as inputs to the multi-layer Helsinki University of Technology (HUT) microwave emission model at a centimeter-scale horizontal resolution, across a 4.5 m transect of ground-based passive microwave radiometer footprints near Churchill, Manitoba, Canada. Snowpack stratigraphy was complex (between six and eight layers), with only three layers extending continuously throughout the length of the transect. Distributions of one-dimensional simulations, accurately representing the complex stratigraphic layering, were evaluated against measured brightness temperatures. Large biases (36 to 68 K) between simulated and measured brightness temperatures were reduced to within measurement accuracy (-0.5 to 0.6 K) through the application of grain scaling factors (2.6 to 5.3) at different combinations of frequencies, polarizations and model extinction coefficients. The grain scaling factors compensated for uncertainty in relating optical specific surface area (SSA) to HUT effective grain size inputs and quantified relative differences in the scattering and absorption properties of the various extinction coefficients. The HUT model required accurate representation of ice lenses, particularly at horizontal polarization, and the large grain scaling factors highlighted the need to consider microstructure beyond the size of individual grains. As the variability of extinction coefficients was strongly influenced by the proportion of large (hoar) grains in a vertical profile, it is important to consider simulations from distributions of one-dimensional profiles rather than single profiles, especially in sub-Arctic snowpacks where stratigraphic variability can be high. Model sensitivity experiments suggested that the level of error in the field measurements, and the new methodological framework used to apply them in a snow emission model, were satisfactory. Layer amalgamation showed that a three-layer representation of snowpack stratigraphy reduced the bias of a one-layer representation by about 50%.
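The role of a grain scaling factor can be sketched numerically. The toy snippet below is not the HUT model: it converts an optical SSA to an effective grain diameter via the standard relation d = 6/(rho_ice * SSA), applies an assumed scaling factor, and uses a cubic Rayleigh-type proxy for scattering; the SSA value, the factor of 3 and the proxy itself are illustrative assumptions.

```python
# Toy sketch (not the HUT model): effect of a grain scaling factor on a
# Rayleigh-type scattering proxy proportional to d**3.
RHO_ICE = 917.0                        # kg/m^3
ssa = 20.0                             # m^2/kg, assumed optical SSA
d_opt = 6.0 / (RHO_ICE * ssa) * 1e3    # effective optical diameter, mm
phi = 3.0                              # assumed grain scaling factor (paper range 2.6-5.3)
d_eff = phi * d_opt

def extinction(d_mm):
    """Toy scattering proxy: cubic in grain diameter, as in the Rayleigh regime."""
    return 0.5 * d_mm ** 3

print(round(extinction(d_eff) / extinction(d_opt)))  # 27
```

Because the proxy scales as the cube of grain size, a scaling factor of 3 boosts the modelled scattering by a factor of 27, which illustrates why modest rescaling of effective grain size can remove large brightness-temperature biases in this kind of model.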

    A deterministic algorithm for experimental design applied to tomographic and microseismic monitoring surveys

    Most general experimental design algorithms are either: (i) stochastic, and hence give different designs each time they are run with finite computing power, or (ii) deterministic but converge to results that depend on an initial or reference design, taking little or no account of the range of all other possible designs. In this paper we introduce an approximation to standard measures of experimental design quality that enables a new algorithm to be used. The algorithm is simple and deterministic, and the resulting experimental design is influenced by the full range of possible designs, thus addressing problems (i) and (ii) above. Although the designs produced are not guaranteed to be globally optimal, they significantly increase the magnitude of small eigenvalues in the model–data relationship (without requiring that these eigenvalues be calculated). This reduces the model uncertainties expected post-experiment. We illustrate the method on simple tomographic and microseismic location examples with varying degrees of seismic attenuation.
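A minimal numerical illustration of the design criterion (not the paper's algorithm): rank two candidate survey geometries by the eigenvalue spectrum of G^T G, where G is the linearised model-data sensitivity matrix. Designs with near-parallel rays leave small eigenvalues and hence large post-experiment model uncertainty. Both sensitivity matrices below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def design_quality(G, eps=1e-6):
    """Regularised D-optimality proxy: log-determinant of G^T G + eps*I.
    Raising small eigenvalues of G^T G raises this score."""
    eigvals = np.linalg.eigvalsh(G.T @ G + eps * np.eye(G.shape[1]))
    return float(np.sum(np.log(eigvals)))

# Candidate A: 10 rays crossing a 4-cell model at varied angles.
G_a = rng.normal(size=(10, 4))
# Candidate B: 10 near-parallel rays, so G is nearly rank-one.
G_b = np.outer(rng.normal(size=10), np.ones(4)) + 0.01 * rng.normal(size=(10, 4))

print(design_quality(G_a) > design_quality(G_b))  # varied geometry wins
```

The near-parallel design scores far lower because three of its four eigenvalues are tiny, exactly the situation the paper's algorithm is built to avoid.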

    Characterisation of the transmissivity field of a fractured and karstic aquifer, Southern France

    Geological and hydrological data collected at the Terrieu experimental site north of Montpellier, in a confined carbonate aquifer, indicate that both fracture clusters and a major bedding plane form the main flow paths of this highly heterogeneous karst aquifer. However, characterising the geometry and spatial location of the main flow channels and estimating their flow properties remain difficult. These challenges can be addressed by solving an inverse problem using the available hydraulic head data recorded during a set of interference pumping tests. We first constructed a 2D equivalent porous medium model to represent the test site domain and then employed a regular zoning parameterisation, on which the inverse modelling was performed. Because we aim to resolve the fine-scale characteristics of the transmissivity field, the problem undertaken is essentially a large-scale inverse problem, i.e. the dimension of the unknown parameter space is high. To deal with the high computational demands of such a large-scale inverse problem, a gradient-based, non-linear algorithm (SNOPT) was used to estimate the transmissivity field at the experimental site scale through inversion of steady-state hydraulic head measurements recorded at 22 boreholes during 8 sequential cross-hole pumping tests. We used data from outcrops, borehole fracture measurements and interpretations of inter-well connectivities from interference test responses as initial models to trigger the inversion. Constraints on hydraulic conductivities, based on analytical interpretations of pumping tests, were also added to the inversion models.
In addition, the efficiency of the adopted inverse algorithm enabled us to increase dramatically the number of unknown parameters, in order to investigate the influence of elementary discretisation on the reconstruction of the transmissivity fields in both synthetic and field studies. By following the above approach, transmissivity fields that reproduce hydrodynamic behaviour similar to the real head measurements were obtained. The inverted transmissivity fields show complex spatial heterogeneities, with highly conductive channels embedded in a low-transmissivity matrix region. The spatial trend of the main flow channels is in good agreement with that of the main fracture sets mapped on outcrops in the vicinity of the Terrieu site, suggesting that the hydraulic anisotropy is consistent with the structural anisotropy. These results from the inverse modelling enable the main flow paths to be located and their hydrodynamic properties to be estimated.
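The inversion workflow can be caricatured in one dimension. The sketch below is a stand-in, not the Terrieu model or SNOPT: it inverts a three-cell log-transmissivity profile from two steady-state interior heads with a generic gradient-based optimiser and a small regularisation term, mirroring the log-parameterised, head-fitting setup described above.

```python
import numpy as np
from scipy.optimize import minimize

def heads(logT, h0=10.0, hL=0.0, dx=1.0):
    """Interior heads for 1-D steady flow through cells in series."""
    T = np.exp(logT)
    resist = dx / T                       # hydraulic resistance per cell
    q = (h0 - hL) / resist.sum()          # constant Darcy flux
    return h0 - q * np.cumsum(resist)[:-1]

# Synthetic truth: a low-transmissivity cell between two conductive channels.
true_logT = np.log(np.array([5.0, 0.5, 5.0]))
data = heads(true_logT)                   # "measured" interior heads

def misfit(logT):
    # Data misfit plus a small Tikhonov term to tame the null space
    # (uniformly rescaling T leaves the heads unchanged).
    return np.sum((heads(logT) - data) ** 2) + 1e-4 * np.sum(logT ** 2)

start = np.zeros(3)                       # homogeneous initial model
res = minimize(misfit, start, method="L-BFGS-B")
print(np.allclose(heads(res.x), data, atol=1e-2))  # heads reproduced
```

As in the field study, the heads alone cannot fix the absolute transmissivity level, which is why the abstract stresses initial models and conductivity constraints from independent data.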

    Finite-frequency tomography with complex body waves

    Seismic tomography is the most striking and intuitive method of inferring a picture of the Earth's deep interior, from the lower crust to the core-mantle boundary. Recordings of ground motion caused by distant earthquakes are compared with those predicted for a simple reference earth model in order to obtain an improved model. The resulting three-dimensional models can then be interpreted in terms of tectonics and large-scale geodynamics. The increase in computing power over the last decade has led to enormous progress in tomographic methods, which can now simulate, and therefore exploit, the whole frequency range of seismographic measurements. This thesis refines waveform tomography in its finite-frequency flavour. It first demonstrates the usability of a complex wave type, triplicated waves scattered in the mantle transition zone between 410 and 660 km depth, for waveform tomography. These waves promise considerably better resolution of the geodynamically important discontinuities between the upper and lower mantle than the teleseismic waves used so far. A second part investigates the nonlinear influence of the earthquake source model on waveform tomography. Using Bayesian inference, probability density functions are determined for the source parameters depth, moment tensor and source time function. For this, a model of the measurement and modelling uncertainties in source inversion is derived, which was hitherto unavailable. The results show that uncertainty in the source model is a nonlinear and so far largely ignored error source in seismic tomography, and they allow the variance of seismic travel-time and waveform measurements, as well as the covariance between individual seismographic stations, to be quantified. The results of this work could substantially improve uncertainty estimation in seismic tomography, reveal potential artifacts in tomographic results, and thereby help prevent geological misinterpretation of tomographic images.
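The probabilistic source inversion can be illustrated schematically for a single parameter. The grid-based sketch below is not the thesis's method: the straight-ray forward model, noise level and prior are invented, but it shows how a likelihood built from a data-uncertainty model yields a posterior density over source depth.

```python
import numpy as np

rng = np.random.default_rng(1)

def travel_time(depth_km, dist_km=100.0, v=6.0):
    """Straight-ray travel time (s) from a source at depth to a surface station."""
    return np.hypot(dist_km, depth_km) / v

true_depth = 15.0
sigma = 0.1                                   # assumed data uncertainty, s
obs = travel_time(true_depth) + rng.normal(0, sigma)

depths = np.linspace(0, 50, 501)              # uniform prior on a depth grid
log_like = -0.5 * ((travel_time(depths) - obs) / sigma) ** 2
post = np.exp(log_like - log_like.max())      # subtract max for stability
post /= post.sum()                            # normalised posterior on the grid

map_depth = depths[np.argmax(post)]
print(f"MAP depth: {map_depth:.1f} km")
```

With one station the posterior is broad, which is the schematic analogue of the thesis's point: source uncertainty is substantial and must be propagated into the tomography rather than fixed to a single best solution.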

    Source and dynamics of a volcanic caldera unrest : Campi Flegrei, 1983–84

    Acknowledgements We thank Tiziana Vanorio, Antonella Amoruso, Luca Crescentini, Nicholas Rawlinson, Yasuko Takei, and David Cornwell for the valuable suggestions regarding the methodology and interpretation. Reviews from Tim Greenfield and two anonymous reviewers helped improve both the clarity of the manuscript and the interpretation. The Royal Society of Edinburgh - Accademia dei Lincei Bilateral Agreement, the Santander Mobility Award of the College of Physical Sciences, University of Aberdeen, and the TIDES EU COST action provided L.D.S. with travel grants for the realisation of this study. E.D.P. has been supported by the EPHESTO and KNOWAVES projects, funded by the Spanish Ministry of Education and Science. Peer reviewed. Publisher PDF.

    Regional study of the Archean to Proterozoic crust at the Sudbury Neutrino Observatory (SNO+), Ontario: Predicting the geoneutrino flux

    The SNO+ detector, a new kiloton-scale liquid scintillator detector capable of recording geoneutrino events, will constrain the strength of the Earth's radiogenic heat. A detailed 3-D model of the regional crust, centered at SNO+ and based on compiled geological, geophysical and geochemical information, was used to characterize the physical and chemical attributes of the crust and to assign uncertainties to its structure. Monte Carlo simulations were used to predict the U and Th abundances and their uncertainties in crustal lithologies, and to model the regional crustal geoneutrino signal originating from the crust surrounding SNO+.
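The Monte Carlo step can be sketched as forward uncertainty propagation. The snippet below is illustrative only: the lognormal abundance parameters and the linear U:Th weighting are placeholders, not the study's values or its flux calculation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Assumed lognormal crustal abundances (ppm): medians and spreads are placeholders.
u = rng.lognormal(mean=np.log(1.3), sigma=0.4, size=n)
th = rng.lognormal(mean=np.log(5.6), sigma=0.4, size=n)

# Toy linear combination standing in for the geoneutrino signal; the 0.27
# weighting of Th relative to U is an assumption for illustration.
signal = u + 0.27 * th

lo, med, hi = np.percentile(signal, [16, 50, 84])
print(f"signal: {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")
```

Drawing abundances from distributions rather than using point values is what lets the study attach asymmetric uncertainties to the predicted signal.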

    Seismic scattering and absorption mapping from intermediate-depth earthquakes reveals complex tectonic interactions acting in the Vrancea region and surroundings (Romania)

    The present study was performed during a stay at the University of Münster financed by a grant awarded by the German Academic Exchange Service (DAAD) in 2014. Data used in the present study were provided by the National Institute for Earth Physics (Romania) and processed at the National Data Centre in Magurele. The Seismic Analysis Code (SAC) (Goldstein and Snoke, 2005) and GMT (Wessel et al., 2013) codes were used. We thank the College of Physical Sciences (University of Aberdeen) and the Santander Mobility Award for providing a travel grant to LDS to complete this manuscript. We are also grateful to the anonymous reviewer for their useful remarks, which helped us to improve the paper. Peer reviewed. Postprint.

    The 11 March 2011 Tohoku tsunami wavefront mapping across offshore Southern California

    The 11 March 2011 (M_w = 9.0) Tohoku tsunami was recorded by a temporary array of seafloor pressure gauges deployed off the coast of Southern California, demonstrating how dense array data can illustrate and empirically validate predictions of linear tsunami wave propagation characteristics. A noise cross-correlation method was first used to correct for the pressure gauge instrument phase response. Phase and group travel times were then measured for the first arrival in the pressure gauge tsunami waveforms, filtered in narrow bands around 30 periods between 200 and 3000 s. For each period, phase velocities were estimated across the pressure gauge array from the phase travel time gradient using eikonal tomography. A clear correlation was observed between phase velocity and long-wavelength bathymetry variations, with fast and slow velocities occurring over deep and shallow water regions, respectively. In particular, velocity gradients are pronounced at the Patton Escarpment and near island plateaus due to the abrupt bathymetry change. In the deep open-ocean area, clear phase velocity dispersion is observed. Comparison with numerically calculated tsunami waveforms validates the approach and provides an independent measure of the finite-frequency effect on phase velocities at long periods.
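The finite-frequency (dispersion) effect mentioned above follows from the linear gravity-wave dispersion relation omega^2 = g*k*tanh(k*h). A short sketch, solving for phase velocity by fixed-point iteration, shows that a 3000 s wave travels at essentially the shallow-water speed sqrt(g*h) while a 200 s wave in 4000 m of water is measurably slower. The depth value is an assumption for illustration.

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def phase_velocity(period, depth):
    """Phase velocity (m/s) from omega^2 = g*k*tanh(k*h), solved by iteration."""
    omega = 2 * np.pi / period
    k = omega / np.sqrt(g * depth)          # start from the long-wave limit
    for _ in range(100):
        k = omega**2 / (g * np.tanh(k * depth))
    return omega / k

deep = 4000.0
# At 3000 s the wave is effectively non-dispersive: c is close to sqrt(g*h).
print(abs(phase_velocity(3000, deep) - np.sqrt(g * deep)) < 1.0)
# At 200 s the same water column is dispersive: c falls below sqrt(g*h).
print(phase_velocity(200, deep) < np.sqrt(g * deep))
```

This is exactly the period-dependent slowdown the array measurements resolve between 200 and 3000 s.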

    Enhancing the information content of geophysical data for nuclear site characterisation

    Our knowledge and understanding of the heterogeneous structure and processes occurring in the Earth's subsurface are limited and uncertain. This is true even for the upper 100 m of the subsurface, yet many processes that occur within it (e.g. migration of solutes, landslides, crop water uptake) are important to human activities. Geophysical methods such as electrical resistivity tomography (ERT) greatly improve our ability to observe the subsurface thanks to their higher sampling frequency (especially with autonomous time-lapse systems), larger spatial coverage and less invasive operation, in addition to being more cost-effective than traditional point-based sampling. However, the process of using geophysical data for inference is prone to uncertainty. There is a need to better understand the uncertainties embedded in geophysical data and how they propagate when the data are subsequently used, for example, for hydrological or site management interpretations and decisions. This understanding is critical to maximising the extraction of information from geophysical data. To this end, in this thesis I examine various aspects of uncertainty in ERT and develop new methods to use geophysical data more quantitatively. The core of the thesis is based on two literature reviews and three papers. In the first review, I provide a comprehensive overview of the use of geophysical data for nuclear site characterisation, especially in the context of site clean-up and leak detection. In the second review, I survey the various sources of uncertainty in ERT studies and the existing work to quantify or reduce them. I propose that the various steps in the general workflow of an ERT study can be viewed as a pipeline for information and uncertainty propagation, and suggest that some areas have been understudied. One of these areas is measurement errors.
In paper 1, I compare various methods to estimate and model ERT measurement errors using two long-term ERT monitoring datasets. I also develop a new error model that accounts for the fact that each electrode is used to make multiple measurements. In paper 2, I discuss the development and implementation of a new method for geoelectrical leak detection. While existing methods rely on first obtaining resistivity images through inversion of ERT data, the approach described here estimates leak parameters directly from raw ERT data. This is achieved by constructing hydrological models from prior site information, coupling them with an ERT forward model, and then updating the leak (and other hydrological) parameters through data assimilation. The approach shows promising results and is applied to data from a controlled injection experiment in Yorkshire, UK. It complements ERT imaging and provides a new way to use ERT data to inform site characterisation. In addition to leak detection, ERT is also commonly used for monitoring soil moisture in the vadose zone, and increasingly so in a quantitative manner. Although both the petrophysical relationships (i.e. choices of appropriate model and parameterisation) and the derived moisture content are known to be subject to uncertainty, they are commonly treated as exact and error-free. In paper 3, I examine the impact of uncertain petrophysical relationships on the moisture content estimates derived from electrical geophysics. Data from a collection of core samples show that the variability in such relationships can be large; this variability can in turn lead to high uncertainty in moisture content estimates, and in many cases it appears to be the dominant source of uncertainty. In the closing chapters, I discuss and synthesise the findings of the thesis within the larger context of enhancing the information content of geophysical data, and provide an outlook on further research on this topic.
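The central point of paper 3, that petrophysical uncertainty propagates into moisture estimates, can be sketched with Archie's law. All parameter values below are illustrative assumptions, not the core-sample results.

```python
import numpy as np

rng = np.random.default_rng(7)

# Archie's law: rho = a * rho_w * phi**(-m) * S**(-n).
rho_meas = 300.0     # measured bulk resistivity, ohm-m (assumed)
rho_w = 25.0         # pore-water resistivity, ohm-m (assumed)
phi = 0.35           # porosity (assumed)
a = 1.0

# Treat the petrophysical exponents as uncertain rather than exact.
m = rng.normal(1.8, 0.2, size=50_000)   # cementation exponent
n = rng.normal(2.0, 0.3, size=50_000)   # saturation exponent

# Invert Archie's law for saturation, then convert to volumetric moisture.
S = (a * rho_w * phi ** (-m) / rho_meas) ** (1.0 / n)
theta = phi * np.clip(S, 0, 1)

lo, med, hi = np.percentile(theta, [16, 50, 84])
print(f"moisture: {med:.3f} (16th-84th percentile: {lo:.3f}-{hi:.3f})")
```

Even with the resistivity measurement held fixed, plausible spread in the exponents alone produces a wide band of moisture estimates, which is the thesis's argument against treating petrophysical relationships as error-free.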