
    Wave-equation based seismic multiple attenuation

    Reflection seismology is widely used to map the subsurface geological structure of the Earth. Seismic multiples contaminate seismic data and therefore need to be removed. For seismic multiple attenuation, wave-equation based methods have proved effective in most cases; they involve two aspects: multiple prediction and multiple subtraction. The goals for these two aspects are to develop and apply a fully data-driven algorithm for multiple prediction and a robust technique for multiple subtraction. Building on many schemes developed by others towards these goals, this thesis addresses the problems of wave-equation based seismic multiple attenuation through several approaches. First, the issue of multiple attenuation in land seismic data is discussed. The Multiple Prediction through Inversion (MPTI) method is extended to the post-stack and CMP domains to handle land data with a low S/N ratio, irregular geometry and missing traces. A running smooth filter and an adaptive-threshold K-NN (nearest neighbours) filter are proposed to help apply MPTI to land data in the shot domain. Secondly, the result of multiple attenuation depends heavily on the effectiveness of the adaptive subtraction. The expanded multi-channel matching (EMCM) filter has proved effective, and several strategies are discussed in this thesis to improve its results. Among them, modelling and subtracting the multiples order by order proves practical in enhancing the effect of EMCM, and a masking filter is adopted to preserve the energy of primaries. Moreover, an iterative application of EMCM is proposed to give an optimized result. Thirdly, because of the limitations of current 3D seismic acquisition geometries, sampling in the crossline direction is sparse, which seriously affects the application of 3D multiple attenuation. To tackle this problem, a new approach is proposed that applies a trajectory-stacking Radon transform together with the energy spectrum; it can replace the time-consuming time-domain sparse inversion with similar effectiveness and much higher efficiency. Parallel computing is also discussed in the thesis to enhance the efficiency of these strategies. A Message-Passing Interface (MPI) environment is implemented for most of the algorithms mentioned above and greatly improves their efficiency.
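    As a rough illustration of the adaptive-subtraction step discussed above, the sketch below implements a single-channel least-squares matching filter: a short filter is estimated that best shapes the predicted multiples to the recorded trace, and the filtered prediction is then subtracted. The EMCM filter of the thesis is a windowed, multi-channel extension of this idea; the filter length, damping and toy traces here are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def matching_filter_subtraction(data, predicted, nf=21):
    """Estimate primaries by subtracting a least-squares-filtered version of
    the predicted multiples from the recorded trace."""
    n = len(data)
    # Convolution matrix of the predicted multiples, so that M @ f is the
    # prediction filtered by a length-nf filter f.
    M = np.zeros((n, nf))
    for k in range(nf):
        M[k:, k] = predicted[:n - k]
    # Damped least-squares filter minimising || data - M f ||^2.
    damp = 1e-3 * np.trace(M.T @ M) / nf
    f = np.linalg.solve(M.T @ M + damp * np.eye(nf), M.T @ data)
    multiples = M @ f
    return data - multiples, multiples

# Toy usage: one "primary" event plus a delayed, mis-scaled "multiple".
t = np.arange(500, dtype=float)
primary = np.exp(-0.5 * ((t - 100.0) / 3.0) ** 2)
multiple = 0.6 * np.exp(-0.5 * ((t - 300.0) / 3.0) ** 2)
prediction = np.exp(-0.5 * ((t - 300.0) / 3.0) ** 2)   # right time, wrong amplitude
est_primary, est_multiple = matching_filter_subtraction(primary + multiple, prediction)
```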

    Andean Hydrothermal and Structural System Dynamics: Insights from 3D Magnetotelluric Inverse Modelling

    In an active volcanic arc, magmatically sourced fluids are channelled through the brittle crust by structural features inherent in the lithological setting. This interaction is observed in the Andean volcanic mountain chain, where volcanoes, geothermal springs and major mineral deposits are spatially coherent with first-order NNE oriented thrust fault systems and convergent-margin oblique WNW striking Andean Transverse Faults (ATF). The volcanic and hydrothermal activity at Tinguiririca and Planchón-Peteroa volcanoes demonstrates this relationship, as both volcanic complexes and their spatially associated thermal springs align along strike with the outcropping NNE oriented El Fierro thrust fault system. This study aims to constrain the 3D architecture of this fault system in the proximity of the volcanoes, and its interaction with volcanically sourced hydrothermal fluids, from a combined magnetotelluric (MT) and seismic field study. Data from a 24 station broadband magnetotelluric survey were interpreted using 3D inversion. Over 700 seismic hypocentres from a 12 station coeval seismic survey are also presented in support of the final 3D conductivity model. The combined results show a correlation of conductivity anomalies with seismic clusters in the top 10 km of the crust, including a distinct seismogenic WNW oriented feature that occurs at an abrupt electrical conductivity contrast, most apparent at 6 km depth. It is concluded that this discrete feature is an Andean Transverse Fault, and that the conductors are signatures of either geothermal fluid reservoirs or fluid-saturated lithologies at depth. The associated fluids are channelled parallel to the margin-oblique ATF plane and cause fault reactivation through increased pore fluid pressure acting on the fault plane. Seismicity induced by this mechanism is limited to the east of the El Fierro fault system, as fluids are compartmentalized along the footwall by the low-permeability fault core, which prevents cross-fault fluid migration. This study thus contributes novel insight into how WNW oriented ATF systems interact with local volcanic, structural and hydrothermal systems.
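    As background to the magnetotelluric inversion mentioned above, the sketch below applies the standard textbook relations that turn an impedance tensor element into apparent resistivity and phase, the quantities a 3D conductivity model is ultimately fit against. It is an illustration only, not code or data from this study.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def apparent_resistivity_and_phase(Z_ij, freq_hz):
    """Z_ij is a (complex) impedance element in SI units (ohm); freq_hz in Hz.
    rho_a = |Z|^2 / (omega * mu0) in ohm m, phi = arg(Z) in degrees."""
    omega = 2.0 * np.pi * np.asarray(freq_hz)
    rho_a = np.abs(Z_ij) ** 2 / (omega * MU0)
    phi = np.degrees(np.angle(Z_ij))
    return rho_a, phi

# Consistency check: a uniform half-space of 100 ohm m has Z = sqrt(i*omega*mu0*rho),
# so the relations should return rho_a = 100 ohm m and phi = 45 degrees.
f = np.array([0.1, 1.0, 10.0])
Z = np.sqrt(1j * 2.0 * np.pi * f * MU0 * 100.0)
print(apparent_resistivity_and_phase(Z, f))
```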

    The IPAC Image Subtraction and Discovery Pipeline for the intermediate Palomar Transient Factory

    We describe the near real-time transient-source discovery engine for the intermediate Palomar Transient Factory (iPTF), currently in operation at the Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for PSF-matching, image subtraction, detection, photometry, and machine-learned (ML) vetting of extracted transient candidates. We also review the performance of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively unconfused regions, "bogus" candidates from processing artifacts and imperfect image subtractions outnumber real transients by ~ 10:1. This ratio can be considerably higher for image data with inaccurate astrometric and/or PSF-matching solutions. Despite this occasionally high contamination rate, the ML classifier is able to identify real transients with an efficiency (or completeness) of ~ 97% for a maximum tolerable false-positive rate of 1% when classifying raw candidates. All subtraction-image metrics, source features, ML probability-based real-bogus scores, contextual metadata from other surveys, and possible associations with known Solar System objects are stored in a relational database for retrieval by the various science working groups. We review our efforts in mitigating false positives and our experience in optimizing the overall system in response to the multitude of science projects underway with iPTF.
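    A figure like "~97% efficiency at a 1% false-positive rate" corresponds to choosing a score threshold on a labelled validation set. The sketch below shows one way such a working point can be read off real-bogus scores; the synthetic score distributions are stand-ins, not iPTF data.

```python
import numpy as np

def efficiency_at_fpr(scores_real, scores_bogus, max_fpr=0.01):
    """Pick the lowest score threshold whose false-positive rate on the bogus
    sample stays at or below max_fpr, and report the corresponding
    completeness (efficiency) on the real sample."""
    thresholds = np.sort(np.unique(scores_bogus))[::-1]  # highest first
    best = np.inf
    for thr in thresholds:
        fpr = np.mean(scores_bogus >= thr)
        if fpr > max_fpr:
            break
        best = thr
    efficiency = np.mean(scores_real >= best)
    return best, efficiency

# Toy usage: real candidates tend to score high, bogus candidates low.
rng = np.random.default_rng(0)
real = rng.beta(8, 2, size=5000)
bogus = rng.beta(2, 8, size=50000)
thr, eff = efficiency_at_fpr(real, bogus, max_fpr=0.01)
print(f"threshold={thr:.3f}  efficiency={eff:.3f}")
```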

    Seismic surface wave focal spot imaging: numerical resolution experiments

    Numerical experiments of seismic wave propagation in a laterally homogeneous layered medium explore subsurface imaging at subwavelength distances for dense seismic arrays. We choose a time-reversal approach to simulate fundamental-mode Rayleigh surface wavefields that are equivalent to the cross-correlation results of three-component ambient seismic field records. We demonstrate that the synthesized 2-D spatial autocorrelation fields in the time domain support local, so-called focal spot imaging. Systematic tests involving clean isotropic surface wavefields, but also interfering body wave components and anisotropic incidence, assess the accuracy of the phase velocity and dispersion estimates obtained from focal spot properties. The results suggest that data collected within half a wavelength around the origin are usually sufficient to constrain the Bessel function models used. Generally, the cleaner the surface wavefield, the smaller the fitting distances that can be used to accurately estimate the local Rayleigh wave speed. Using models based on isotropic surface wave propagation, we find that phase velocity estimates from vertical-radial component data are less biased by P-wave energy than estimates obtained from vertical-vertical component data, that even strongly anisotropic surface wave incidence yields phase velocity estimates with an accuracy of 1 per cent or better, and that dispersion can be studied in the presence of noise. Estimates using a model that resolves potential medium anisotropy are significantly biased by anisotropic surface wave incidence. The overall accurate results obtained from near-field measurements using isotropic medium assumptions imply that dense-array seismic Rayleigh wave focal spot imaging can increase the depth sensitivity compared to ambient noise surface wave tomography. The analogy to focal spot imaging in medical elastography implies that a high station density and clean surface wavefields support subwavelength resolution of lateral medium variations.
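    As a concrete illustration of the isotropic Bessel-function fitting mentioned above, the sketch below fits a J0 model to synthetic near-field focal-spot amplitudes to recover a local Rayleigh-wave phase velocity. The frequency, target velocity and noise level are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import curve_fit

freq = 0.5  # Hz, centre frequency of the narrow-band autocorrelation field

def focal_spot_model(r, amplitude, phase_velocity):
    """Vertical-vertical focal spot amplitude vs. distance r (km) for an
    isotropic fundamental-mode Rayleigh wavefield."""
    return amplitude * j0(2.0 * np.pi * freq * r / phase_velocity)

# Synthetic "observations": stations within about half a wavelength of the origin.
c_true = 2.8                                         # km/s
r_obs = np.linspace(0.0, 0.5 * c_true / freq, 40)    # 0 to lambda/2
rng = np.random.default_rng(1)
a_obs = focal_spot_model(r_obs, 1.0, c_true) + 0.02 * rng.standard_normal(r_obs.size)

popt, _ = curve_fit(focal_spot_model, r_obs, a_obs, p0=(1.0, 2.0))
print(f"estimated phase velocity: {popt[1]:.2f} km/s")
```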

    Audio Mastering as a Musical Competency

    In this dissertation, I demonstrate that audio mastering is a musical competency by elucidating the most significant, and clearly audible, facets of this competence. In fact, the mastering process impacts traditionally valued musical aspects of records, such as timbre and dynamics. By applying the emerging creative scholarship method used within the field of music production studies, this dissertation will aid scholars seeking to hear and understand audio mastering by elucidating its core practices as musical endeavours. In so doing, I hope to enable increased clarity and accuracy in future scholarly discussions of audio mastering, as well as of the end product of the mastering process: records. Audio mastering produces a so-called master of a record, that is, a finished version of a record optimized for duplication and distribution via available formats (i.e., vinyl LP, audio cassette, compact disc, mp3, wav, and so on). This musical process plays a crucial role in determining how records finally sound, and it is not, as is so often inferred in research, the sole concern of a few technicians working in isolated rooms at a record label's corporate headquarters. In fact, as Mark Cousins and Russ Hepworth-Sawyer (2013: 2) explain, nowadays “all musicians and engineers, to a lesser or greater extent, have to actively engage in the mastering process.” Thus, this dissertation clarifies the creative nature of audio mastering through an investigation of how mastering engineers hear records, and how they use technology to achieve the sonic goals they conceptualize.

    On the Effectiveness of Video Recolouring as an Uplink-model Video Coding Technique

    For decades, conventional video compression formats have advanced via incremental improvements, with each subsequent standard achieving better rate-distortion (RD) efficiency at the cost of increased encoder complexity compared to its predecessors. Design efforts have been driven by common multimedia use cases such as video-on-demand, teleconferencing, and video streaming, where the most important requirements are low bandwidth and low video playback latency. Meeting these requirements involves the use of computationally expensive block-matching algorithms, which produce excellent compression rates and quick decoding times. However, emerging use cases such as Wireless Video Sensor Networks, remote surveillance, and mobile video present new technical challenges in video compression. In these scenarios, the video capture and encoding devices are often power-constrained and have limited computational resources available, while the decoder devices have abundant resources and access to a dedicated power source. To address these use cases, codecs must be power-aware and offer a reasonable trade-off between video quality, bitrate, and encoder complexity. Balancing these constraints requires a complete rethinking of video compression technology. The uplink video-coding model represents a new paradigm to address these low-power use cases, providing the ability to redistribute computational complexity by offloading the motion estimation and compensation steps from encoder to decoder. Distributed Video Coding (DVC) follows this uplink model of video codec design, and maintains high-quality video reconstruction through innovative channel coding techniques. The field of DVC is still early in its development, with many open problems waiting to be solved and no defined video compression or distribution standards. Due to the experimental nature of the field, most DVC codecs to date have focused on encoding and decoding the Luma plane only, producing grayscale reconstructed videos. In this thesis, a technique called “video recolouring” is examined as an alternative to DVC. Video recolouring exploits the temporal redundancy between colour planes, reducing video bitrate by removing Chroma information from specific frames and then recolouring them at the decoder. A novel video recolouring algorithm called Motion-Compensated Recolouring (MCR) is proposed, which uses block motion estimation and bi-directional weighted motion compensation to reconstruct Chroma planes at the decoder. MCR is used to enhance a conventional base-layer codec, and is shown to reduce bitrate by up to 16% with only a slight decrease in objective quality. MCR also outperforms other video recolouring algorithms in terms of objective video quality, demonstrating up to 2 dB PSNR improvement in some cases.
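    A much-simplified sketch of the recolouring idea is given below: each block of the Chroma-less frame is matched on luma (by SAD search) to a past and a future keyframe, and its chroma is rebuilt as a temporally weighted blend of the matched chroma blocks. Block size, search range, full-resolution chroma and the linear weighting are assumptions for illustration, not the MCR design from the thesis.

```python
import numpy as np

def best_match(block, ref_luma, y, x, search=8):
    """Full-search block matching on luma using the sum of absolute differences."""
    h, w = block.shape
    best_sad, best_yx = np.inf, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + h > ref_luma.shape[0] or xx + w > ref_luma.shape[1]:
                continue
            sad = np.abs(block - ref_luma[yy:yy + h, xx:xx + w]).sum()
            if sad < best_sad:
                best_sad, best_yx = sad, (yy, xx)
    return best_yx

def recolour_frame(luma, past, future, t=0.5, bs=16):
    """Rebuild a chroma plane for `luma` from a past and a future keyframe,
    each a dict with 'luma' and 'chroma' planes (chroma kept at luma
    resolution here for simplicity); t is the frame's temporal position
    between the keyframes (0 = past, 1 = future)."""
    H, W = luma.shape
    chroma = np.zeros((H, W))
    for y in range(0, H - bs + 1, bs):
        for x in range(0, W - bs + 1, bs):
            blk = luma[y:y + bs, x:x + bs]
            py, px = best_match(blk, past['luma'], y, x)
            fy, fx = best_match(blk, future['luma'], y, x)
            # Bi-directional, temporally weighted blend of the matched chroma blocks.
            chroma[y:y + bs, x:x + bs] = ((1.0 - t) * past['chroma'][py:py + bs, px:px + bs]
                                          + t * future['chroma'][fy:fy + bs, fx:fx + bs])
    return chroma

# Toy usage with random planes.
rng = np.random.default_rng(0)
past = {'luma': rng.random((64, 64)), 'chroma': rng.random((64, 64))}
future = {'luma': rng.random((64, 64)), 'chroma': rng.random((64, 64))}
rebuilt = recolour_frame(0.5 * (past['luma'] + future['luma']), past, future)
```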

    A probabilistic model of chronological errors in layer-counted climate proxies: applications to annually banded coral archives

    The ability to precisely date climate proxies is central to the reconstruction of past climate variations. To a degree, all climate proxies are affected by age uncertainties, which are seldom quantified. This article proposes a probabilistic age model for proxies based on layer-counted chronologies, and explores its use for annually banded coral archives. The model considers both missing and doubly counted growth increments (represented as independent processes), accommodates various assumptions about error rates, and allows one to quantify the impact of chronological uncertainties on different diagnostics of variability. In the case of a single coral record, we find that time uncertainties primarily affect high-frequency signals but also significantly bias the estimate of decadal signals. We further explore tuning to an independent, tree-ring-based chronology as a way to identify an optimal age model. A synthetic pseudocoral network is used as a testing ground to quantify uncertainties in the estimation of spatiotemporal patterns of variability. Even for small error rates, the amplitude of multidecadal variability is systematically overestimated at the expense of interannual variability (El Niño–Southern Oscillation, or ENSO, in this case), artificially flattening its spectrum at periods longer than 10 years. An optimization approach to correct chronological errors in coherent multivariate records is presented and validated in idealized cases, though it is found difficult to apply in practice due to the large number of solutions. We close with a discussion of possible extensions of this model and connections to existing strategies for modeling age uncertainties.
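    The core counting-error mechanism described above can be illustrated in a few lines of code: each true annual band is independently missed or doubly counted, and the resulting age offsets accumulate along the counted chronology. The error rates below are illustrative, not the calibrated values from the paper.

```python
import numpy as np

def perturb_chronology(n_years, p_miss=0.02, p_double=0.02, seed=0):
    """Return, for each counted layer, the true calendar year it samples."""
    rng = np.random.default_rng(seed)
    counted = []
    for year in range(n_years):
        if rng.random() < p_miss:          # band not visible: year skipped
            continue
        counted.append(year)
        if rng.random() < p_double:        # band counted twice
            counted.append(year)
    return np.asarray(counted)

# The counted age of a layer is its index in the perturbed chronology; the age
# error is the difference between that counted age and the true year sampled.
true_of_counted = perturb_chronology(300)
age_error = np.arange(true_of_counted.size) - true_of_counted
print("final age offset after 300 counted years:", age_error[-1])
```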

    Decimation Testing of High Density Seismic Data for the Wichita Mountain Front Area

    Several decimation tests were conducted on an originally high trace density survey. This was done to assess the possibility of lowering acquisition costs by reducing the field effort. Each volume was subjected to a series of tests to show the amount of degradation of the data. In the area of processing, which included velocity analysis and residual statics, the decimated volumes performed very well. Empirical comparisons of seismic cross sections and time slices showed that the clarity of certain reflectors was considerably compromised in all decimated volumes. Through the use of volume difference calculations, the decimated-receiver volume proved to be the most similar to the reference volume, while the decimated shot-and-receiver volume showed the most contrast. Overall, the decimated-receiver volume came closest to replicating the reference volume in every test, but signs of degradation were still evident.
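    One common way to express a volume difference calculation of this kind is a normalised RMS difference between the decimated and reference volumes, sketched below. This is an illustrative metric only; the exact formulation used in the study is not specified here.

```python
import numpy as np

def _rms(x):
    return np.sqrt(np.mean(np.square(x)))

def nrms_difference(reference, test):
    """Normalised RMS difference (percent) between two volumes of identical
    shape; 0% means identical, 200% means equal amplitude but opposite sign."""
    return 200.0 * _rms(reference - test) / (_rms(reference) + _rms(test))

# Usage: a lower value means the decimated volume better replicates the
# reference, e.g. nrms_difference(reference_volume, decimated_receiver_volume).
```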

    Systematic errors in cosmic microwave background polarization measurements

    We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted Mueller matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects such as optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors through to power spectra and cosmological parameters. The method extends previous studies to an arbitrary scan strategy, and eliminates the need for time-consuming Monte Carlo simulations in the early phases of instrument and survey design. We illustrate the method with both simple parametrized forms for the systematics and with beams based on physical-optics simulations. Example results are given in the context of next-generation experiments targeting tensor-to-scalar ratios r ~ 0.01.
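    As a minimal example of a simple parametrized systematic of the kind mentioned above, a constant polarization-angle miscalibration delta_psi rotates E-mode power into B-modes as C_ell^BB ≈ sin^2(2 delta_psi) C_ell^EE, which can then be compared with the primordial signal targeted at r ~ 0.01. The sketch below evaluates this leakage; it is a standalone illustration, not the paper's tolerancing pipeline.

```python
import numpy as np

def spurious_bb_from_rotation(cl_ee, delta_psi_deg):
    """B-mode power spectrum leaked from EE by a global polarization-angle
    miscalibration of delta_psi_deg degrees."""
    delta_psi = np.radians(delta_psi_deg)
    return np.sin(2.0 * delta_psi) ** 2 * np.asarray(cl_ee)

# Example: a 0.5 degree angle error leaks ~3e-4 of the EE power into BB,
# independently of multipole for this simple systematic.
cl_ee = np.ones(2000)                               # placeholder EE spectrum
print(spurious_bb_from_rotation(cl_ee, 0.5)[0])     # ~3.0e-4
```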