
    Seismic characterisation based on time-frequency spectral analysis

    We present high-resolution time-frequency spectral analysis schemes to better resolve seismic images for the purpose of seismic and petroleum reservoir characterisation. Seismic characterisation is based on the physical properties of the Earth's subsurface media, and these properties are represented implicitly by seismic attributes. Because seismic traces originally presented in the time domain are non-stationary signals, whose properties vary with time, we characterise those signals by obtaining seismic attributes which also vary with time. Among the widely used attributes are spectral attributes calculated through time-frequency decomposition. Time-frequency spectral decomposition methods are employed to capture variations of a signal within the time-frequency domain. These decomposition methods generate a frequency vector at each time sample, referred to as the spectral component. The computed spectral component enables us to explore the additional frequency dimension which exists jointly with the original time dimension, enabling localisation and characterisation of patterns within the seismic section. Conventional time-frequency decomposition methods include the continuous wavelet transform and the Wigner-Ville distribution. Both methods suffer from limitations that hinder accurate seismic interpretation. The continuous wavelet transform decomposes signals on a basis of elementary signals which must be localised in time and frequency, but it suffers from limited resolution and localisation in the time-frequency spectrum, which manifests as smearing and ill-localised energy. The Wigner-Ville distribution distributes the energy of the signal over the two variables time and frequency and results in highly localised signal components. However, the method suffers from spurious cross-term interference due to its quadratic nature.
This interference is misleading when the spectrum is used for interpretation purposes. In the specific application to seismic data, the interference obscures geological features and distorts geophysical details. This thesis focuses on developing high-fidelity and high-resolution time-frequency spectral decomposition methods as an extension of the existing conventional methods. These methods are then adopted as a means to resolve seismic images for petroleum reservoirs. They are validated in terms of physics, robustness, and accurate energy localisation, using an extensive set of synthetic and real data sets including both carbonate and clastic reservoir settings. The novel contributions of this thesis include time-frequency analysis algorithms for seismic data that allow improved interpretation and accurate characterisation of petroleum reservoirs. The first algorithm established in this thesis is the Wigner-Ville distribution (WVD) with an additional masking filter. The standard WVD spectrum has high resolution but suffers from cross-term interference caused by multiple components in the signal. To suppress the cross-term interference, I designed a masking filter based on the spectrum of the smoothed-pseudo WVD (SP-WVD). The original SP-WVD incorporates smoothing filters in both the time and frequency directions to suppress the cross-term interference, which reduces the resolution of the time-frequency spectrum. To overcome this side-effect, I used the SP-WVD spectrum as a reference to design a masking filter and applied it to the standard WVD spectrum. Therefore, the mask-filtered WVD (MF-WVD) can preserve the high resolution of the standard WVD while suppressing the cross-term interference as effectively as the SP-WVD. The second algorithm developed in this thesis is the synchrosqueezing wavelet transform (SWT) equipped with a directional filter.
A transformation algorithm such as the continuous wavelet transform (CWT) might cause smearing in the time-frequency spectrum, i.e. a lack of localisation. The SWT attempts to improve the localisation of the time-frequency spectrum generated by the CWT. The real part of the complex SWT spectrum, after directional filtering, is capable of resolving the stratigraphic boundaries of thin layers within target reservoirs. In terms of seismic characterisation, I tested the high-resolution spectral results on a complex clastic reservoir interbedded with coal seams from the Ordos basin, northern China. I used the spectral results generated using the MF-WVD method to facilitate the interpretation of the sand distribution within the dataset. In another implementation, I used the SWT spectral results together with the original seismic data as the input to a deep convolutional neural network (dCNN) to track the horizons within a 3D volume. Using these application-based procedures, I effectively extracted the spatial variation and the thickness of thinly layered sandstone in a coal-bearing reservoir. I also tested the algorithm on a carbonate reservoir from the Tarim basin, western China. I used the spectrum generated by the synchrosqueezing wavelet transform equipped with directional filtering to characterise faults, karsts, and direct hydrocarbon indicators within the reservoir. Finally, I investigated pore-pressure prediction in carbonate layers. Pore-pressure variation generates subtle changes in the P-wave velocity of carbonate rocks. This suggests that existing empirical relations capable of predicting pore-pressure in clastic rocks are unsuitable for the prediction in carbonate rocks. I implemented the prediction based on the P-wave velocity and the wavelet transform multi-resolution analysis (WT-MRA). The WT-MRA method can unfold information within the frequency domain by decomposing the P-wave velocity.
This enables us to extract and amplify hidden information embedded in the signal. Using Biot's theory, the WT-MRA decomposition results can be divided into contributions from the pore fluid and the rock framework. Therefore, I proposed a pore-pressure prediction model based on the pore-fluid contribution to the P-wave velocity, calculated through WT-MRA.
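The masking-filter idea described in this abstract can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the thesis code: the SP-WVD reference spectrum is approximated here by simply Gaussian-smoothing the WVD (the true SP-WVD applies separable windows in the ambiguity domain), and the signal frequencies, smoothing widths, and mask threshold are all arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import hilbert

def wvd(x):
    """Discrete Wigner-Ville distribution of an analytic signal.
    Returns an n x n array: frequency bins (rows) x time samples (columns)."""
    n = len(x)
    W = np.zeros((n, n))
    for ti in range(n):
        taumax = min(ti, n - 1 - ti)
        tau = np.arange(-taumax, taumax + 1)
        acf = np.zeros(n, dtype=complex)
        # instantaneous autocorrelation x(t+tau) * conj(x(t-tau))
        acf[tau % n] = x[ti + tau] * np.conj(x[ti - tau])
        W[:, ti] = np.fft.fft(acf).real
    return W

# two-tone test signal: the standard WVD shows a spurious cross-term
# midway between the two auto-terms (bins 80 and 200 -> cross at 140)
n = 256
t = np.arange(n) / n
x = hilbert(np.cos(2 * np.pi * 40 * t) + np.cos(2 * np.pi * 100 * t))
W = wvd(x)

# stand-in for the SP-WVD reference: time-frequency smoothing averages out
# the oscillating cross-term while only mildly blurring the auto-terms
ref = np.abs(gaussian_filter(W, sigma=(2, 6)))   # (freq, time) smoothing
mask = ref > 0.3 * ref.max()                     # masking filter
W_mf = W * mask                                  # mask-filtered WVD
```

The mask keeps the sharp auto-terms of the raw WVD while zeroing the regions where only cross-term energy lives, which is the resolution-preserving trade-off the abstract describes.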

    Synchro-Transient-Extracting Transform for the Analysis of Signals with Both Harmonic and Impulsive Components

    Time-frequency analysis (TFA) techniques play an increasingly important role in the field of machine fault diagnosis owing to their superiority in dealing with nonstationary signals. The synchroextracting transform (SET) and the transient-extracting transform (TET) are two newly emerging techniques that can produce energy-concentrated representations of nonstationary signals. However, SET and TET are only suitable for processing harmonic signals and impulsive signals, respectively. This poses a challenge for each of these two techniques when a signal contains both harmonic and impulsive components. In this paper, we propose a new TFA technique to solve this problem. The technique combines the advantages of SET and TET to generate energy-concentrated representations for both harmonic and impulsive components of the signal. Furthermore, we theoretically demonstrate that the proposed technique retains the signal reconstruction capability. The effectiveness of the proposed technique is verified using numerical and real-world signals.

    High clarity speech separation using synchro extracting transform

    The degenerate unmixing estimation technique (DUET) is the most suitable blind source separation (BSS) method for underdetermined conditions, where the number of sources exceeds the number of mixtures. Estimation of the mixing parameters, the most critical step in the DUET algorithm, builds on the characteristic sparseness of speech signals in the time-frequency (TF) domain. Hence, DUET relies on the clarity of the time-frequency representation (TFR), and even the slightest interference in the TF plane is detrimental to the unmixing performance. In the conventional DUET algorithm, the short-time Fourier transform (STFT) is utilised for extracting the TFR of speech signals. However, the STFT provides only limited sharpness to the TFR due to its inherent conceptual limitations, which worsens under noise contamination. This paper presents the application of post-processing techniques such as the synchrosqueezed transform (SST) and the synchroextracting transform (SET) to the DUET algorithm, to improve the TF resolution. The performance enhancement is evaluated both qualitatively and quantitatively by visual inspection, the Renyi entropy of the TFR, and objective measures of speech signals. The results show enhancement in TF resolution and high-clarity signal reconstruction. The method also provides adequate robustness to noise contamination.
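DUET's mixing-parameter estimation step, as described above, can be sketched as follows. This is a minimal illustration, not the paper's code: two synthetic tones stand in for sparse speech sources, the mixing parameters (attenuation 0.6, three-sample delay) are arbitrary, and a plain STFT is used rather than the SST/SET-enhanced TFRs the paper proposes.

```python
import numpy as np
from scipy.signal import stft

fs, n = 8000, 8000
t = np.arange(n) / fs
# two synthetic "sources": distinct tones standing in for sparse speech
s1 = np.sin(2 * np.pi * 440 * t)
s2 = np.sin(2 * np.pi * 1000 * t)
# anechoic mixing model: source 2 reaches mic 2 attenuated (0.6) and
# delayed by 3 samples; source 1 arrives identically at both mics
x1 = s1 + s2
x2 = 1.0 * s1 + 0.6 * np.roll(s2, 3)

f, frames, X1 = stft(x1, fs, nperseg=512)
_, _, X2 = stft(x2, fs, nperseg=512)

# DUET sparsity assumption: at high-energy TF points only one source is
# active, so the mixture ratio reveals that source's mixing parameters
pts = np.abs(X1) > 0.2 * np.abs(X1).max()
R = X2[pts] / X1[pts]
alpha = np.abs(R)                                   # attenuation estimates
omega = 2 * np.pi * np.broadcast_to(f[:, None], X1.shape)[pts]
delta = -np.angle(R) / np.maximum(omega, 1e-9)      # delay estimates (s)

g = alpha > 0.8                      # crude two-way clustering on attenuation
a_hat = (alpha[g].mean(), alpha[~g].mean())
d_hat = (delta[g].mean() * fs, delta[~g].mean() * fs)   # delays in samples
```

The recovered (attenuation, delay) pairs cluster around the true mixing parameters, which is exactly the histogram-peak localisation that a sharper TFR makes more reliable.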

    The future of Bitcoin: a Synchrosqueezing Wavelet Transform to predict search engine query trends Contributions to KDWEB Conference, a.d. 2016

    In recent years, search engines have become the go-to method for acquiring many types of knowledge, spanning from detailed descriptions to general information of interest to the user. Likewise, several reassignment techniques are capturing the attention of researchers in the field of signal analysis. In particular, the synchrosqueezing wavelet transform (SST) allows signal decomposition and instantaneous frequency extraction, while promising consistent reconstruction capabilities, hence the possibility to build an SST-assisted inference engine. We test it using datasets extracted from search engine trends, using a cloud of keywords related to the Bitcoin topic. This could be useful to study the evolution of the cryptocurrency in both temporal and geographical terms, and to estimate the future number of queries. The importance of Bitcoin query prediction goes beyond the academic and research environments and, as such, it could lead to valuable commercial applications, such as financial recommender systems or the development of blockchain-based transaction managers.

    Data-driven Signal Decomposition Approaches: A Comparative Analysis

    Signal decomposition (SD) approaches aim to decompose non-stationary signals into their constituent amplitude- and frequency-modulated components. This represents an important preprocessing step in many practical signal processing pipelines, providing useful knowledge and insight into the data and relevant underlying system(s) while also facilitating tasks such as noise or artefact removal and feature extraction. The popular SD methods are mostly data-driven, striving to obtain inherent well-behaved signal components without making many prior assumptions on the input data. These methods include empirical mode decomposition (EMD) and its variants, variational mode decomposition (VMD) and its variants, the synchrosqueezed transform (SST) and its variants, and sliding singular spectrum analysis (SSA). With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a better understanding of the operation of these algorithms, evaluate their accuracy with and without noise in the input data, and gauge their sensitivity to algorithmic parameter changes. In this work, we achieve those tasks through extensive experiments involving carefully designed synthetic and real-life signals. Based on our experimental observations, we comment on the pros and cons of the considered SD algorithms and highlight best practices, in terms of parameter selection, for their successful operation. SD algorithms for both single- and multi-channel (multivariate) data fall within the scope of our work. For multivariate signals, we evaluate the performance of the popular algorithms in terms of fulfilling the mode-alignment property, especially in the presence of noise.
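Of the methods compared above, singular spectrum analysis is the most compact to illustrate. The following is a generic SSA sketch, not the authors' implementation; the window length and test signal are arbitrary choices.

```python
import numpy as np

def ssa(x, L):
    """Basic singular spectrum analysis: embed the series in a Hankel
    trajectory matrix, take its SVD, and map each rank-one term back to
    a series by anti-diagonal averaging."""
    n = len(x)
    K = n - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # entries with equal row+column index share one series position:
        # average each anti-diagonal to get the elementary component
        comps.append(np.array([Xi[::-1].diagonal(k).mean()
                               for k in range(-(L - 1), K)]))
    return comps

t = np.arange(200)
x = np.sin(2 * np.pi * t / 20) + 0.01 * t   # oscillation plus a slow trend
comps = ssa(x, L=40)
```

Because the SVD terms sum exactly to the trajectory matrix and diagonal averaging is linear, the elementary components sum back to the original series; grouping a few leading components then separates trend from oscillation, which is the behaviour the comparison study probes under noise and parameter changes.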

    Study on the seismic damage and dynamic support of roadway surrounding rock based on reconstructive transverse and longitudinal waves

    The magnitude and frequency of induced seismicity increase as mining excavation reaches greater depth, leading to increasingly severe damage to roadways caused by high-energy seismic waves. To comprehensively simulate the damage caused by dynamic loads, a synchrosqueezing transform and empirical mode decomposition method was developed, which effectively decomposed raw seismic wave signals into transverse and longitudinal components. This novel method produced more accurate results in terms of velocity, displacement, and rock yielding patterns, and reflected the theoretically orthogonal oscillation directions of transverse and longitudinal waves, compared to using raw mixed waves at the seismic source. Under the disturbance of transverse and longitudinal waves, the vertical displacement was much higher than the horizontal displacement at the top of the roadway, while the horizontal displacement was greater at the sidewalls. The particle vibration velocity, displacement, and yielding zone of the surrounding rock of the roadway were proportional to the energy level of the seismic event, and inversely proportional to the source-roadway distance. The proportion of damage attributed to transverse waves increased with the energy level, ranging from 75.8% to 85.8%. Finally, a roadway dynamic support design was optimized based on the proposed seismic wave processing and modeling methodology. The methodology offers guidance for roadway dynamic support design, with the goal of averting excessive or insufficient support strength.
    Document Type: Original article
    Cited as: He, S., Shen, F., Chen, T., Mitri, H., Ren, T., Song, D. Study on the seismic damage and dynamic support of roadway surrounding rock based on reconstructive transverse and longitudinal waves. Advances in Geo-Energy Research, 2023, 9(3): 156-171. https://doi.org/10.46690/ager.2023.09.0
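The empirical-mode-decomposition half of the hybrid method named above can be illustrated with a bare-bones sifting loop. This is a generic EMD sketch, not the paper's transverse/longitudinal separation: the two sines below merely stand in for fast and slow wave components, and the stopping criterion is simplified to a fixed iteration count.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_iter=8):
    """Extract one intrinsic mode function (IMF) by EMD sifting:
    repeatedly subtract the mean of the cubic-spline envelopes drawn
    through the local maxima and minima."""
    h = x.copy()
    for _ in range(n_iter):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 4 or len(mn) < 4:   # too few extrema to envelope
            break
        upper = CubicSpline(t[mx], h[mx])(t)
        lower = CubicSpline(t[mn], h[mn])(t)
        h = h - 0.5 * (upper + lower)
    return h

t = np.linspace(0, 1, 2000)
fast = np.sin(2 * np.pi * 60 * t)   # stand-in for the fast wave component
slow = np.sin(2 * np.pi * 5 * t)    # stand-in for the slow component
imf1 = sift(fast + slow, t)         # first IMF should isolate the fast mode
residual = fast + slow - imf1       # what remains approximates the slow mode
```

In the paper's pipeline, a decomposition of this kind (combined with the synchrosqueezing transform) is what lets the mixed source signal be split into components with distinct oscillation characters before the dynamic-load simulation.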

    Radon spectrogram-based approach for automatic IFs separation

    The separation of overlapping components is a well-known and difficult problem in multicomponent signal analysis, shared by applications dealing with radar, biosonar, seismic, and audio signals. In order to estimate the instantaneous frequencies (IFs) of a multicomponent signal, it is necessary to disentangle the signal modes in a proper domain. Unfortunately, if the supports of the signal modes overlap in both time and frequency, separation is only possible through a parametric approach whenever the signal class is fixed a priori. In this work, time-frequency analysis and the Radon transform are jointly used for the unsupervised separation of the modes of a generic frequency-modulated signal in a noisy environment. The proposed method takes advantage of the ability of the Radon transform of a proper time-frequency distribution to separate overlapping modes. It consists of a blind segmentation of signal components in the Radon domain by means of a near-to-optimal threshold operation. The inversion of the Radon transform on each detected region allows us to isolate the instantaneous frequency curves of each single mode in the time-frequency domain. Experimental results on constant-amplitude chirp signals confirm the effectiveness of the proposed method, opening the way to its extension to more complex frequency-modulated signals.
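The core idea, projecting a time-frequency image along straight lines so that a linear IF ridge collapses to a sharp peak, can be sketched with a crude rotate-and-sum Radon transform of a spectrogram. This is an illustration only: the paper works with a proper time-frequency distribution and a threshold-based segmentation in the Radon domain, while the chirp parameters and angle grid below are arbitrary.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import rotate

fs = 1024
t = np.arange(fs) / fs
x = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))   # linear chirp, 50 -> 250 Hz
f, tt, S = spectrogram(x, fs, nperseg=128, noverlap=112)

# crude Radon transform of the TF image: rotate and sum along columns;
# a straight IF ridge collapses to a sharp peak at the projection angle
# matching its orientation, which is what makes overlapping linear modes
# separable in the Radon domain
angles = np.arange(0.0, 180.0, 2.0)
sinogram = np.stack([rotate(S, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])
peak_angle = angles[np.unravel_index(np.argmax(sinogram), sinogram.shape)[0]]
```

Thresholding the sinogram around such peaks and inverting the transform on each detected region is the segmentation-and-inversion step the abstract describes.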

    Learning Patterns with Kernels and Learning Kernels from Patterns

    A major technique in learning involves the identification of patterns and their use to make predictions. In this work, we examine the symbiotic relationship between patterns and Gaussian process regression (GPR), which is mathematically equivalent to kernel interpolation. We introduce techniques where GPR can be used to learn patterns in denoising and mode (signal) decomposition. Additionally, we present the kernel flow (KF) algorithm, which learns a kernel from patterns in the data with a methodology inspired by cross-validation. We further show how the KF algorithm can be applied to artificial neural networks (ANNs) to improve the learning of patterns in images. In our denoising and mode decomposition examples, we show how kernels can be constructed to estimate patterns that may be hidden due to data corruption. In other words, we demonstrate how to learn patterns with kernels. Donoho and Johnstone proposed a near-minimax method for reconstructing an unknown smooth function u from noisy data u + ζ by translating the empirical wavelet coefficients of u + ζ towards zero. We consider the situation where the prior information on the unknown function u may not be the regularity of u, but that of ℒu where ℒ is a linear operator, such as a partial differential equation (PDE) or a graph Laplacian. We show that a near-minimax approximation of u can be obtained by truncating the ℒ-gamblet (operator-adapted wavelet) coefficients of u + ζ. The recovery of u can be seen to be precisely a Gaussian conditioning of u + ζ on measurement functions with length scale dependent on the signal-to-noise ratio. We next introduce kernel mode decomposition (KMD), which has been designed to learn the modes vi = ai(t)yi(θi(t)) of a (possibly noisy) signal Σivi when the amplitudes ai, instantaneous phases θi, and periodic waveforms yi may all be unknown. GPR with Gabor wavelet-inspired kernels is used to estimate ai, θi, and yi.
We show near machine-precision recovery under regularity and separation assumptions on the instantaneous amplitudes ai and frequencies θ̇i. GPR and kernel interpolation require the selection of an appropriate kernel modeling the data. We present the KF algorithm, which is a numerical-approximation approach to this selection. The main principle the method utilizes is that a "good" kernel is able to make accurate predictions with small subsets of a training set. In this way, we learn a kernel from patterns. In image classification, we show that the learned kernels are able to classify accurately using only one training image per class and show signs of unsupervised learning. Furthermore, we introduce the combination of the KF algorithm with conventional neural-network training. This combination is able to train the intermediate-layer outputs of the network simultaneously with the final-layer output. We test the proposed method on Convolutional Neural Networks (CNNs) and Wide Residual Networks (WRNs) without alteration of their structure or their output classifier. We report reduced test errors, decreased generalization gaps, and increased robustness to distribution shift without significant increase in computational complexity relative to standard CNN and WRN training (with Drop Out and Batch Normalization). As a whole, this work highlights the interplay of kernel techniques with pattern recognition and numerical approximation.
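The denoising-by-Gaussian-conditioning viewpoint described above can be illustrated with plain GPR on a noisy 1-D signal. This is a generic sketch, not the thesis' gamblet or operator-adapted construction: the squared-exponential kernel, its length scale, and the noise level are arbitrary illustrative choices.

```python
import numpy as np

# noisy observations u + zeta of a smooth function u
rng = np.random.default_rng(0)
xg = np.linspace(0.0, 1.0, 80)
u = np.sin(2 * np.pi * 3 * xg)
y = u + 0.2 * rng.standard_normal(xg.size)

def rbf(a, b, ell=0.05):
    """Squared-exponential (RBF) kernel matrix between point sets a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

# GPR posterior mean = kernel ridge interpolant of the noisy data:
# conditioning the GP prior on observations with noise variance sigma^2
sigma2 = 0.2 ** 2
K = rbf(xg, xg)
u_hat = K @ np.linalg.solve(K + sigma2 * np.eye(xg.size), y)
```

The recovered u_hat is the Gaussian conditional mean of the prior given the noisy data, with the kernel playing the role the thesis assigns to learned or operator-adapted kernels: a poor kernel choice here is exactly what the KF algorithm is designed to correct.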