    Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    Velocity model building is a critical step in seismic reflection data processing. An optimum velocity field can lead to well-focused images in the time or depth domain. Given the noisy and band-limited nature of seismic data, the computed velocity field can be considered our best estimate among a set of possible velocity fields. Hence, all the calculated depths and the images produced are only our best approximation of the true subsurface. This study examines the quantification of uncertainty in the depths to drilling targets from two-dimensional (2D) seismic reflection data using Bayesian statistics. The approach was tested in the Mentelle Basin (south-west of Australia), aiming to make depth predictions for stratigraphic targets of interest related to the International Ocean Discovery Program (IODP), Leg 369. For the purposes of the project, Geoscience Australia 2D seismic profiles were reprocessed. In order to achieve robust predictions, the seismic reflection processing sequence focused on improving the temporal resolution of the data by using deterministic deghosting filters in the pre-stack and post-stack domains. The filters, combined with isotropic/anisotropic pre-stack time and depth migration algorithms, produced very good results in terms of seismic resolution and focusing of subsurface features. The application of the deghosting filters was the critical step for the subsequent probabilistic depth estimation of drilling targets. The best estimate of the velocity field, along with the migrated seismic data, was used as input to the Bayesian algorithm. The analysis, performed on one seismic profile intersecting site location MBAS-4A, produced robust depth predictions for the lithological boundaries of interest compared to the observed depths reported in the IODP expedition. The significance of this result is even more pronounced given the complete lack of independent velocity information. Petrophysical information collected during the expedition was used to perform a well-seismic tie, mapping the lithological boundaries to the reflectivity in the seismic profile. A very good match between observed and modelled traces was achieved, and a new interpretation of the Mentelle Basin lithological boundaries in the seismic image was provided. Velocity information from sonic logs was also used to perform anisotropic pre-stack depth migration. The migrated image successfully mapped the subsurface targets to their correct depth locations while preserving the focus of the image. The pre-drilling depth estimation of subsurface targets using Bayesian statistics is a strong example of successfully quantifying uncertainty in depth and effectively merging seismic reflection data processing with statistical analysis. The derived well-seismic tie at MBAS-4A will be a valuable tool towards a more complete regional interpretation of the Mentelle Basin.
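
    The abstract does not reproduce the Bayesian algorithm itself, so the following is only a minimal illustrative sketch of the general idea of propagating velocity uncertainty into depth uncertainty, here by Monte Carlo sampling rather than the study's own method. All travel-time picks, interval velocities, and uncertainties below are hypothetical and are not taken from the Mentelle Basin data.

        # Minimal sketch (assumed values): propagating interval-velocity uncertainty
        # to the depth of picked horizons by Monte Carlo sampling.
        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical two-way travel-time picks (s) to the tops of three layers,
        # with best-estimate interval velocities (m/s) and assumed 1-sigma spreads.
        twt_picks = np.array([0.8, 1.4, 2.1])
        v_int_mean = np.array([1700.0, 2100.0, 2600.0])
        v_int_std = np.array([60.0, 90.0, 130.0])

        n_samples = 10_000
        dt = np.diff(np.concatenate(([0.0], twt_picks)))   # two-way time per interval

        # Draw velocity realisations and convert each to depth (one-way: dt/2 * v).
        v_samples = rng.normal(v_int_mean, v_int_std, size=(n_samples, 3))
        depth_samples = np.cumsum(v_samples * dt / 2.0, axis=1)

        for i, horizon in enumerate(["horizon A", "horizon B", "target"]):
            lo, hi = np.percentile(depth_samples[:, i], [5, 95])
            print(f"{horizon}: mean {depth_samples[:, i].mean():.0f} m, "
                  f"90% interval [{lo:.0f}, {hi:.0f}] m")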

    Scaling Multidimensional Inference for Big Structured Data

    In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computation using traditional statistical analysis methods. Methods that often show great results on simple data have difficulty processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity. Improving the scaling properties of these methods for multidimensional data is the only way to keep them relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we focus on the structure of the data to lower the complexity of a specific family of problems. The two types of structure considered in this work are distributed optimization with separable constraints (Chapters 2-3) and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of applications that were previously intractable and open the door to new research questions.
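
    One standard way to scale Gaussian processes for lattice input, which may or may not match the approach taken in the chapters cited above, is to exploit the Kronecker structure of the kernel matrix when the inputs form a Cartesian grid. The sketch below assumes a separable RBF kernel on a 2-D grid and solves the GP training system without ever forming the full kernel matrix; the grid sizes, lengthscales, and noise level are arbitrary illustrative choices.

        # Minimal sketch: Kronecker-structured GP solve on a 2-D lattice.
        import numpy as np

        def rbf(x, lengthscale=1.0):
            d = x[:, None] - x[None, :]
            return np.exp(-0.5 * (d / lengthscale) ** 2)

        # Assumed grid: inputs are the Cartesian product of two axes, so the full
        # kernel factorises as K = kron(K1, K2) (2000 x 2000 here, never formed).
        x1 = np.linspace(0, 10, 50)
        x2 = np.linspace(0, 5, 40)
        K1, K2 = rbf(x1), rbf(x2)

        # Per-axis eigendecompositions give the eigensystem of the full K.
        w1, Q1 = np.linalg.eigh(K1)
        w2, Q2 = np.linalg.eigh(K2)
        w = np.kron(w1, w2)                    # eigenvalues of K, row-major order

        y = np.random.default_rng(0).normal(size=50 * 40)  # toy grid observations
        noise = 0.1

        # Solve (K + noise*I)^{-1} y using only the small factor matrices.
        A = Q1.T @ y.reshape(50, 40) @ Q2      # rotate into the eigenbasis
        A = (A.ravel() / (w + noise)).reshape(50, 40)      # scale by eigenvalues
        alpha = (Q1 @ A @ Q2.T).ravel()        # rotate back

        print("solved a linear system of size", alpha.size)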

    Analysis of and techniques for adaptive equalization for underwater acoustic communication

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, September 2011. Underwater wireless communication is quickly becoming a necessity for applications in ocean science, defense, and homeland security. Acoustics remains the only practical means of accomplishing long-range communication in the ocean. The acoustic communication channel is fraught with difficulties, including limited available bandwidth, long delay spread, time variability, and Doppler spreading. These difficulties reduce the reliability of the communication system and make high-data-rate communication challenging. Adaptive decision feedback equalization is a common method for compensating for the distortions introduced by the underwater acoustic channel. Limited work has been done thus far to bring the physics of the underwater channel to bear on improving and better understanding the operation of a decision feedback equalizer. This thesis examines how to use physical models to improve the reliability and reduce the computational complexity of the decision feedback equalizer. The specific topics covered by this work are: how to handle channel estimation errors for the time-varying channel, how to incorporate angular constraints imposed by the environment into an array receiver, what happens when there is a mismatch between the true channel order and the estimated channel order, and why there is a performance difference between direct-adaptation and channel-estimation-based methods for computing the equalizer coefficients. For each of these topics, algorithms are provided that help create a more robust equalizer with lower computational complexity for the underwater channel. This work would not have been possible without support from the Office of Naval Research, through a Special Research Award in Acoustics Graduate Fellowship (ONR Grant #N00014-09-1-0540), with additional support from ONR Grant #N00014-05-10085 and ONR Grant #N00014-07-10184.
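
    As background for readers unfamiliar with the structure of a decision feedback equalizer, the sketch below shows a textbook LMS-adapted DFE for BPSK symbols over a toy multipath channel. It is not the thesis's physics-informed algorithm; the channel taps, filter lengths, and step size are all assumed values chosen only to make the example run.

        # Minimal sketch: LMS-adapted decision feedback equalizer, BPSK, toy channel.
        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed transmit symbols and a short delay-spread channel with noise.
        symbols = rng.choice([-1.0, 1.0], size=2000)
        channel = np.array([1.0, 0.5, -0.3])
        received = np.convolve(symbols, channel)[:len(symbols)]
        received += 0.05 * rng.normal(size=len(symbols))

        n_ff, n_fb, mu = 8, 3, 0.01       # feedforward taps, feedback taps, LMS step
        ff = np.zeros(n_ff)               # feedforward filter coefficients
        fb = np.zeros(n_fb)               # feedback filter coefficients
        past_decisions = np.zeros(n_fb)

        errors = []
        for n in range(n_ff, len(symbols)):
            x = received[n - n_ff + 1 : n + 1][::-1]   # most recent samples first
            y = ff @ x - fb @ past_decisions           # equalizer output
            decision = 1.0 if y >= 0 else -1.0         # hard decision (BPSK)
            e = symbols[n] - y                         # training-mode error
            ff += mu * e * x                           # LMS update, feedforward part
            fb -= mu * e * past_decisions              # LMS update, feedback part
            past_decisions = np.concatenate(([decision], past_decisions[:-1]))
            errors.append(e ** 2)

        print("mean squared error over last 200 symbols:", np.mean(errors[-200:]))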

    Split-domain calibration of an ecosystem model using satellite ocean colour data

    The application of satellite ocean colour data to the calibration of plankton ecosystem models for large geographic domains, over which their ideal parameters cannot be assumed to be invariant, is investigated. A method is presented for seeking the number and geographic scope of parameter sets that allows the best fit to validation data to be achieved. These are independent data not used in the parameter estimation process. The goodness-of-fit of the optimally calibrated model to the validation data is an objective measure of merit for the model, together with its external forcing data. Importantly, this is a statistic which can be used for comparative evaluation of different models. The method makes use of observations from multiple locations, referred to as stations, distributed across the geographic domain. It relies on a technique for finding groups of stations which can be aggregated for parameter estimation purposes with minimal increase in the resulting misfit between model and observations. The results of testing this split-domain calibration method for a simple zero-dimensional model, using observations from 30 stations in the North Atlantic, are presented. The stations are divided into separate calibration and validation sets. One year of ocean colour data from each station was used in conjunction with a climatological estimate of the station's annual nitrate maximum. The results demonstrate the practical utility of the method and imply that an optimal fit of the model to the validation data would be given by two parameter sets. The corresponding division of the North Atlantic domain into two provinces allows a misfit-based cost to be achieved which is 25% lower than that for the single parameter set obtained using all of the calibration stations. In general, parameters are poorly constrained, contributing to a high degree of uncertainty in model output for unobserved variables. This suggests that limited progress towards a definitive model calibration can be made without including other types of observations.
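
    To make the idea of split-domain calibration concrete, the toy sketch below compares the misfit obtained with a single parameter set for all stations against the misfit obtained with one parameter set per station group. The one-parameter model, the station data, and the grouping are entirely invented for illustration and bear no relation to the ecosystem model or the North Atlantic stations used in the study (validation-set hold-out is also omitted for brevity).

        # Minimal sketch: one parameter set for the whole domain vs. one per province.
        import numpy as np

        rng = np.random.default_rng(3)

        # Toy "stations": observations from a one-parameter model y = p * x + noise,
        # with the true parameter differing between two assumed regions.
        def make_station(p_true, n=20):
            x = np.linspace(0, 1, n)
            return x, p_true * x + 0.05 * rng.normal(size=n)

        calib = [make_station(p) for p in (1.0, 1.1, 0.9, 2.0, 2.1, 1.9)]

        def misfit(p, stations):
            return sum(np.mean((y - p * x) ** 2) for x, y in stations)

        def fit(stations, grid=np.linspace(0, 3, 301)):
            costs = [misfit(p, stations) for p in grid]
            return grid[int(np.argmin(costs))]

        p_all = fit(calib)                          # single domain-wide parameter
        groups = [calib[:3], calib[3:]]             # assumed grouping of stations
        p_groups = [fit(g) for g in groups]         # one parameter per group

        print("single-set cost :", misfit(p_all, calib))
        print("split-set cost  :", sum(misfit(p, g) for p, g in zip(p_groups, groups)))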

    Advanced multilateration theory, software development, and data processing: The MICRODOT system

    The process of geometric parameter estimation to accuracies of one centimeter, i.e., multilateration, is defined and applications are listed. A brief functional explanation of the theory is presented. Next, various multilateration systems are described in order of increasing system complexity. Expected system accuracy is discussed from a general point of view and a summary of the errors is given. An outline of the design of a software processing system for multilateration, called MICRODOT, is presented next. The links of this software, which can be used for multilateration data simulations or operational data reduction, are examined on an individual basis. Functional flow diagrams are presented to aid in understanding the software capability. MICRODOT capability is described with respect to vehicle configurations, interstation coordinate reduction, geophysical parameter estimation, and orbit determination. Numerical results obtained from MICRODOT via data simulations are displayed both for hypothetical and real-world vehicle/station configurations such as those used in the GEOS-3 Project. These simulations show the inherent power of the multilateration procedure.
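
    For readers who want a concrete picture of the underlying geometric estimation step, the sketch below shows a standard Gauss-Newton multilateration solve from range measurements to known stations. It is a generic illustration, not the MICRODOT processing chain; the station coordinates, target position, and measurement noise are assumed values.

        # Minimal sketch: Gauss-Newton position estimation from station ranges.
        import numpy as np

        rng = np.random.default_rng(7)

        # Assumed station coordinates (m) and a true target position to recover.
        stations = np.array([[0.0, 0.0, 0.0],
                             [5000.0, 0.0, 0.0],
                             [0.0, 5000.0, 0.0],
                             [2500.0, 2500.0, 4000.0]])
        target_true = np.array([1200.0, 3400.0, 800.0])

        # Simulated range measurements with a small assumed error.
        ranges = np.linalg.norm(stations - target_true, axis=1)
        ranges += 0.01 * rng.normal(size=len(ranges))

        # Gauss-Newton: linearise the range equations about the current estimate.
        estimate = stations.mean(axis=0)
        for _ in range(10):
            diff = estimate - stations
            predicted = np.linalg.norm(diff, axis=1)
            jacobian = diff / predicted[:, None]        # d(range)/d(position)
            residual = ranges - predicted
            step, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
            estimate = estimate + step

        print("estimated position:", np.round(estimate, 2))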