
    Seismic Ray Impedance Inversion

    This thesis investigates a prestack seismic inversion scheme implemented in the ray-parameter domain. Conventionally, most prestack seismic inversion methods are performed in the incidence-angle domain. However, inversion based on the concept of ray impedance, because it honours the ray-path variation that follows elastic-parameter variation according to Snell's law, shows a greater capacity to discriminate lithologies than conventional elastic impedance inversion. The procedure starts by transforming the data into the ray-parameter domain and then implements ray impedance inversion along constant-ray-parameter profiles. For each constant-ray-parameter profile, mixed-phase wavelets are initially estimated from the high-order statistics of the data and further refined after a proper well-to-seismic tie. With the estimated wavelets, a Cauchy inversion method is used to recover seismic reflectivity sequences for blocky impedance inversion. The impedance inversion from reflectivity sequences adopts a standard generalised linear inversion scheme, whose results are used to identify rock properties and facilitate quantitative interpretation. It is also demonstrated that elastic parameters can be further inverted from ray impedance values, without eliminating an extra density term or introducing Gardner's relation to absorb it. Ray impedance inversion is extended to P-S converted waves by introducing the definition of converted-wave ray impedance. This quantity shows advantages in connecting prestack converted-wave data with well logs, compared with the shear-wave elastic impedance derived from the Aki and Richards approximation to the Zoeppritz equations. An analysis of P-P and P-S wave data under the ray impedance framework is conducted on a real multicomponent dataset, which reduces the uncertainty in lithology identification.
    Inversion is the key method in generating the examples throughout the thesis, as it can render robust solutions to geophysical problems. Apart from the reflectivity-sequence, ray impedance and elastic-parameter inversions mentioned above, inversion methods are also adopted in transforming the prestack data from the offset domain to the ray-parameter domain, in mixed-phase wavelet estimation, and in the registration of P-P and P-S waves for joint analysis. The ray impedance inversion methods are successfully applied to different types of datasets. At each step towards achieving ray impedance inversion, the advantages, disadvantages and limitations of the algorithms adopted are detailed. In conclusion, the ray-impedance-based analyses demonstrated in this thesis compare favourably with classical elastic impedance methods, and the author recommends them for wider application.
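    As a rough illustration of the reflectivity step described above, the sketch below runs a Cauchy-regularised sparse-spike inversion of a single trace by iteratively reweighted least squares. The Ricker wavelet, noise level and regularisation parameters are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch: Cauchy-regularised reflectivity inversion via IRLS.
import numpy as np

def ricker(f, dt, n):
    """Zero-phase Ricker wavelet (a stand-in for the estimated mixed-phase wavelet)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def cauchy_inversion(d, w, lam=0.1, sigma=0.05, n_iter=20):
    """Invert trace d for reflectivity r by minimising
    ||W r - d||^2 + lam * sum(log(1 + r^2 / sigma^2)) with IRLS."""
    n = len(d)
    # Convolution matrix: column i is the wavelet centred at sample i.
    W = np.array([np.convolve(np.eye(n)[i], w, mode="same") for i in range(n)]).T
    r = np.zeros(n)
    for _ in range(n_iter):
        Q = np.diag(1.0 / (sigma ** 2 + r ** 2))   # Cauchy prior as a reweighting term
        r = np.linalg.solve(W.T @ W + lam * Q, W.T @ d)
    return r

# Synthetic example: sparse reflectivity -> seismic trace -> recovered spikes.
rng = np.random.default_rng(0)
r_true = np.zeros(200); r_true[[50, 90, 140]] = [0.08, -0.05, 0.06]
w = ricker(f=30.0, dt=0.002, n=61)
d = np.convolve(r_true, w, mode="same") + 0.002 * rng.standard_normal(200)
r_est = cauchy_inversion(d, w)
```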

    Separation of Gravity Anomaly Data considering Statistical Independence among Signals: Application to Severely Contaminated Data Obtained by Prototype Mobile Gravimeter

    Ground motion (GM) characteristics are affected by the local subsurface structure, and the gravity method is one useful way to obtain information about that structure. The gravity anomaly data obtained by a gravity survey can be correlated with the lateral variation of subsurface rock densities. For gravity surveys, spring-type gravimeters have been used so far; they give accurate resolution but are very expensive and difficult to handle. Recently, Team Morikawa has developed a prototype mobile gravimeter that uses a force-balanced (FB) accelerometer. This prototype is lightweight, compact, easy to handle and inexpensive, and it offers resolution good enough for preparing gravity maps for subsurface modelling. However, unlike the conventional spring-type gravimeter, the newly developed FB gravimeter is highly sensitive to high-frequency noise. The data observed by this gravimeter in a small carrier are easily contaminated by various disturbances such as engine vibration, carrier acceleration, wind and carrier tilting, accompanied by sensor drift, electrical noise, etc. The amplitudes of such noise can be up to 100,000 times larger than the gravity anomaly, so data processing is essential to extract the anomaly from such observations.
    Conventionally, data were observed in a large carrier (ship) in a more stable environment with a sensor that was not sensitive to high-frequency noise, so noise contamination was not severe and processing techniques such as low-pass filtering and second-order statistics methods (such as SOBI) were used. For severely contaminated data, however, low-pass filtering alone might not be enough. SOBI is an advanced blind source separation (BSS) technique that separates source and noise by exploiting the statistical properties of the data; it separates the target source by assuming that the source and the unwanted data are uncorrelated at various time lags. The gravity anomaly and the other noises are generated by independent physical sources, so it can safely be assumed that they are independent, whereas uncorrelatedness is only a weaker, second-order property; an improvement over second-order statistics methods is therefore desired. As a scheme that exploits the statistical independence of signals, Independent Component Analysis (ICA) has been used in the field of BSS since the 1990s. It separates the sources by maximising the independence of linearly transformed observed signals, identifying both the mixing matrix and the source signals when only the mixed data are available. Furthermore, independence between signals does not depend on their amplitudes, so the huge amplitude difference between the gravity anomaly and the noise does not affect their independence, which makes ICA suitable for our purpose. ICA leaves an ambiguity in the amplitude of each separated signal, but this is of little significance in our case, since an appropriate scalar multiple can be estimated from gravity information at a few known points. Thus, ICA is proposed for separating the gravity anomaly from its mixture with several noise sources.
    The survey data were observed at Toyama Bay, Japan. The National Institute of Advanced Industrial Science and Technology (AIST), Japan, has provided a gravity map for the same area, which is used to calculate reference data against which the performance of the proposed scheme is verified. The prototype gravimeter consists of a group of sensors; since ICA requires at least two sets of data, the main data obtained by the analog servo sensor (VSE) were combined with data from other sensors as supplementary data. Following Team Morikawa's approach, the performance of the various sensors is compared. Applying low-pass filtering (LPF) as a pre-processing step before ICA is found to be important: the presence of high-frequency noise in the data is unfavourable for the separation of the gravity anomaly, and both SOBI and ICA work only after the application of LPF. The choice of an appropriate cut-off frequency was also observed to affect the results. The combination of the VSE data and the vertical component of the Titan accelerometer (Taurus-Z) as input to ICA gives good results, whereas using the horizontal components with the VSE data does not. Further, ICA is found to perform better under certain data-acquisition conditions. In sections where the ship motion is unidirectional, the trend of the ICA-separated data agrees with the reference data; when the ship velocity was lower while heading out to sea, the ICA result matches the reference data very well; when the ship was highly unstable while stopping, the ICA results deviate from the reference data; and in other, relatively stable sections, the ICA-separated data follow the trend of the reference data well. The separation of the input data by ICA into different output components verifies that the gravity anomaly and the other data are independent, satisfying our assumption, and the agreement between the ICA-separated data and the reference data over most sections verifies the applicability of ICA under certain data-acquisition environments. The accuracy of the properly separated data is good enough for preparing gravity maps for subsurface modelling. However, there is still room for improvement: an effort has been made to study the time-frequency characteristics of the data without observing any clear benefit so far, and further methodological improvement is left for future work. Based on these results and the applicability of ICA so far, it can be concluded that this is a positive sign for improving the mobility of the gravity method.
    Report number: ; Date of degree conferral: 2012-09-27; Degree category: Master's; Degree type: Master of Engineering; Degree registration number: ; Graduate school / department: Graduate School of Engineering, Department of Civil Engineering (社会基盤学)
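    As a loose illustration of the pre-processing and separation steps described above, the sketch below low-pass filters two synchronised channels and then applies FastICA from scikit-learn. The sampling rate, cut-off frequency, channel names and synthetic signals are assumptions for illustration only, not the survey's actual acquisition parameters or sensor responses.

```python
# Minimal sketch: LPF pre-processing followed by ICA separation of two channels.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

fs = 100.0        # assumed sampling rate [Hz]
cutoff = 0.05     # assumed low-pass cut-off [Hz]; the anomaly varies very slowly

def lowpass(x, fs, cutoff, order=4):
    """Zero-phase low-pass filter used as the LPF pre-processing step."""
    sos = butter(order, cutoff, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Synthetic stand-ins for two synchronised channels (e.g. the analog servo
# sensor and a vertical accelerometer component), each a mixture of a slowly
# varying "gravity anomaly" and much larger carrier-motion noise.
rng = np.random.default_rng(1)
t = np.arange(0.0, 3600.0, 1.0 / fs)
gravity = 1e-3 * np.sin(2 * np.pi * 2e-4 * t)
noise = 10.0 * rng.standard_normal(t.size)
chan_a = 0.9 * gravity + 1.1 * noise
chan_b = 1.0 * gravity + 0.7 * noise

X = np.column_stack([lowpass(chan_a, fs, cutoff), lowpass(chan_b, fs, cutoff)])
S = FastICA(n_components=2, whiten="unit-variance", random_state=0).fit_transform(X)
# The separated component that tracks the anomaly is then rescaled using
# gravity values at a few known reference points, as noted in the abstract.
```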

    Joint inversion of seismic PP- and PS-waves in the ray parameter domain

    Seismic inversion is a quantitative analysis technique in reservoir geophysics that reveals subsurface physical properties from surface-recorded seismic data, but the inversions most widely used in oil and gas exploration over the past decades have been PP-wave based. The P-to-S converted wave, which has shown great success in imaging beneath gas clouds, responds differently to rocks and pore fluids than the PP-wave, so using the PS-wave jointly with the PP-wave in the inversion can reduce the ill-posedness of the inverse problem and, in particular, enables simultaneous inversion for three independent elastic parameters. Conventionally, prestack seismic inversion is based on incidence-angle-dependent reflection coefficients. In my research, I define seismic reflections and impedances along the ray paths of wave propagation, which obey Snell's law, and I adopt the ray-impedance concept, a frequency-dependent parameter that is sensitive to fluid content. Joint interpretation of PP- and PS-wave ray impedances can identify reservoirs and also has potential for fluid discrimination. Joint inversion of PP- and PS-waves is performed on constant ray parameter (CRP) profiles. For a constant ray parameter, a pair of PP- and PS-wave traces has exactly the same ray path between the source and the reflection point, which means the PP- and PS-wave reflection events represent exactly the same reflection point in the horizontal direction. Therefore, PP- and PS-wave calibration transforms PS-wave reflection events from PS-wave time to the corresponding PP-wave time, and reflection events in a pair of PP- and calibrated PS-wave traces with a constant ray parameter should correspond to each other, sample by sample, both horizontally and vertically. I also present a procedure that preserves the original wavelets in the transformed PS-wave trace. I use a bending ray-tracing method to construct the common image point (CIP) gathers in the ray-parameter domain. I estimate mixed-phase wavelets for each CRP profile through a frequency-domain high-order statistical method, and then invert for the reflectivity series using weighted constraints. From the reflectivity sections, I estimate PP- and PS-wave ray impedances separately and also estimate three elastic parameters simultaneously in a joint inversion. I have applied the entire procedure to a couple of field data sets to verify the robustness and effectiveness of the method and to demonstrate the great potential of joint inversion in the ray-parameter domain.
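    To make the constant-ray-parameter geometry concrete, the sketch below computes PP and PS travel times and offsets for a single ray parameter in a horizontally layered model, applying Snell's law in each layer. The layer model and the ray-parameter value are illustrative assumptions, not the thesis' field model, and the bending ray tracer itself is not reproduced.

```python
# Minimal sketch: PP/PS travel times for one fixed ray parameter p in a layered model.
import numpy as np

def leg(h, v, p):
    """One-way time and horizontal offset of a ray leg through one layer."""
    cos_theta = np.sqrt(1.0 - (p * v) ** 2)   # Snell's law: sin(theta) = p * v
    return h / (v * cos_theta), h * p * v / cos_theta

def pp_ps_times(h, vp, vs, p):
    """Two-way PP time/offset and PS (down-going P, up-going S) time/offset."""
    t_p, x_p = leg(h, vp, p)      # per-layer P legs
    t_s, x_s = leg(h, vs, p)      # per-layer S legs
    return 2 * t_p.sum(), 2 * x_p.sum(), (t_p + t_s).sum(), (x_p + x_s).sum()

h  = np.array([500.0, 800.0, 600.0])     # layer thicknesses [m] (assumed)
vp = np.array([2000.0, 2500.0, 3000.0])  # P velocities [m/s] (assumed)
vs = np.array([ 900.0, 1200.0, 1600.0])  # S velocities [m/s] (assumed)
p  = 1.5e-4                              # ray parameter [s/m] (assumed)

t_pp, x_pp, t_ps, x_ps = pp_ps_times(h, vp, vs, p)
# The shared ray parameter ties the PP and PS events to the same down-going P leg,
# which is what makes sample-by-sample calibration on CRP profiles possible.
```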

    An Examination of Some Significant Approaches to Statistical Deconvolution

    We examine statistical approaches to two significant areas of deconvolution: Blind Deconvolution (BD) and Robust Deconvolution (RD) for stochastic stationary signals. For BD, we review some major classical and new methods in a unified framework for non-Gaussian signals. The first class of algorithms we examine is the family of Minimum Entropy Deconvolution (MED) algorithms. We discuss the similarities between them despite their different origins and motivations, give new theoretical results concerning the behaviour and generality of these algorithms, and give evidence of scenarios where they may fail; in some cases, we present new modifications to the algorithms to overcome these shortfalls. Following the discussion of the MED algorithms, we look at a recently proposed BD algorithm based on the correntropy function, a function defined as a combination of the autocorrelation and entropy functions, and examine its BD performance compared with the MED algorithms. We find that BD carried out via correntropy matching cannot be straightforwardly interpreted as simultaneous moment matching, because the expansion of correntropy in terms of moments breaks down. Other issues, such as the maximum/minimum-phase ambiguity and computational complexity, suggest that careful attention is required before establishing the correntropy algorithm as a superior alternative to existing BD techniques. For the problem of RD, we give a categorisation of the different kinds of uncertainty encountered in estimation and discuss the techniques required to solve each individual case. Primarily, we tackle the overlooked cases of robustifying deconvolution filters based on an estimated blurring response or an estimated signal spectrum. We do this by utilising existing methods derived from criteria such as minimax MSE with imposed uncertainty bands and penalised MSE. In particular, we revisit the Modified Wiener Filter (MWF), which offers simplicity and flexibility in giving improved RD relative to the standard plug-in Wiener Filter (WF).
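    As a concrete example of the MED family discussed above, the sketch below implements a Wiggins-style fixed-point iteration that designs an FIR filter maximising the normalised kurtosis of its output. The filter length, synthetic data and iteration count are illustrative assumptions rather than settings from the thesis.

```python
# Minimal sketch: Minimum Entropy Deconvolution via the classical kurtosis fixed point.
import numpy as np

def med_filter(x, L=31, n_iter=30, eps=1e-8):
    """Return an FIR filter f (length L) and its output y with maximised kurtosis."""
    N = len(x)
    # Convolution (design) matrix: y = X @ f, causal output trimmed to N samples.
    X = np.zeros((N, L))
    for j in range(L):
        X[j:, j] = x[:N - j]
    R = X.T @ X + eps * np.eye(L)          # input autocorrelation matrix
    f = np.zeros(L); f[L // 2] = 1.0       # start from a pass-through spike
    for _ in range(n_iter):
        y = X @ f
        f = np.linalg.solve(R, X.T @ y ** 3)   # kurtosis fixed-point update
        f /= np.linalg.norm(f)
    return f, X @ f

# Synthetic non-Gaussian (sparse-spike) source blurred by an unknown wavelet.
rng = np.random.default_rng(2)
s = rng.standard_normal(2000) * (rng.random(2000) < 0.02)        # sparse source
w = np.exp(-np.arange(40) / 8.0) * np.cos(0.6 * np.arange(40))   # unknown blur
x = np.convolve(s, w, mode="same")
f, y = med_filter(x)
```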

    Signal processing techniques for the enhancement of marine seismic data

    This thesis presents several signal processing techniques applied to the enhancement of marine seismic data. Marine seismic exploration provides an image of the Earth's subsurface from reflected seismic waves, but because the recorded signals are contaminated by various sources of noise, minimizing their effects with new attenuation techniques is necessary. A statistical analysis of the background noise is conducted using Thomson's multitaper spectral estimator and Parzen's amplitude density estimator; the results provide a statistical characterization of the noise, which we use to derive signal enhancement algorithms. Firstly, we focus on single-azimuth stacking methodologies and propose novel stacking schemes using either enhanced weighted sums or a Kalman filter. The enhanced methods are demonstrated to yield superior results, exhibiting cleaner and better-defined reflected events as well as a larger number of reflections in deep water; a comparison of the proposed stacking methods with existing ones is also discussed. We then address the problem of random noise attenuation and present an innovative application of sparse code shrinkage and independent component analysis. Sparse code shrinkage is valuable when a noise-free realization of the data can be generated to provide data-driven shrinkages; several distribution models are investigated, with the normal inverse Gaussian density yielding the best results, and other acceptable choices of density are discussed as well. Finally, we consider the attenuation of flow-generated nonstationary coherent noise and seismic interference noise. We suggest a multiple-input adaptive noise canceller that utilizes a normalized least-mean-squares (NLMS) algorithm with a variable normalized step size derived as a function of instantaneous frequency. This filter attenuates the coherent noise successfully when used either by itself or in combination with a time-frequency median filter, depending on the noise spectrum and its distribution along the data. Its application to seismic interference attenuation is also discussed.
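    As a rough sketch of the adaptive-cancellation idea described above, the code below applies a single-reference NLMS noise canceller with a fixed normalized step size; the variable, instantaneous-frequency-dependent step size and the time-frequency median filter of the thesis are not reproduced, and the signals are synthetic stand-ins.

```python
# Minimal sketch: NLMS adaptive noise canceller with one reference channel.
import numpy as np

def nlms_cancel(primary, reference, L=32, mu=0.5, eps=1e-6):
    """Subtract from `primary` the part predictable from `reference`.
    Returns the error (enhanced) signal and the final filter weights."""
    N = len(primary)
    w = np.zeros(L)
    out = np.zeros(N)
    for n in range(L, N):
        u = reference[n - L:n][::-1]      # most recent L reference samples
        y = w @ u                         # estimate of the coherent noise
        e = primary[n] - y                # noise-cancelled output sample
        w += mu * e * u / (u @ u + eps)   # NLMS weight update
        out[n] = e
    return out, w

# Synthetic example: weak reflections plus coherent noise that is also seen,
# through a short unknown filter, on a reference channel.
rng = np.random.default_rng(3)
n = 5000
reflections = np.zeros(n); reflections[[1200, 2600, 4100]] = [1.0, -0.7, 0.5]
coherent = np.cos(2 * np.pi * 0.02 * np.arange(n)) + 0.3 * rng.standard_normal(n)
primary = reflections + np.convolve(coherent, [0.8, -0.3, 0.1], mode="same")
enhanced, w = nlms_cancel(primary, coherent)
```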

    Computer-aided detection and diagnosis of breast cancer in 2D and 3D medical imaging through multifractal analysis

    This Thesis describes the research work performed in the scope of a doctoral research program and presents its conclusions and contributions. The research activities were carried out in industry with Siemens S.A. Healthcare Sector, in integration with a research team. Siemens S.A. Healthcare Sector is one of the world's biggest suppliers of products, services and complete solutions in the medical sector. The company offers a wide selection of diagnostic and therapeutic equipment and information systems; its products for medical imaging and in vivo diagnostics include ultrasound, computed tomography, mammography, digital breast tomosynthesis, magnetic resonance, equipment for angiography and coronary angiography, nuclear imaging, and many others. Siemens has vast experience in healthcare, and at the beginning of this project it was strategically interested in solutions to improve the detection of breast cancer in order to increase its competitiveness in the sector. The company owns several patents related to self-similarity analysis, which formed the background of this Thesis. Furthermore, Siemens intended to exploit commercially the computer-aided automatic detection and diagnosis field for portfolio integration. The strong expertise acquired by the University of Beira Interior in this area, together with this Thesis, will therefore allow Siemens to apply the most recent scientific progress to the detection of breast cancer, and it is foreseeable that together they can develop a new technology with high potential. The project resulted in the submission of two invention disclosures for evaluation by Siemens A.G., two articles published in peer-reviewed journals indexed in the ISI Science Citation Index, two further articles submitted to peer-reviewed journals, and several international conference papers. This work on computer-aided diagnosis of the breast led to innovative software and novel research and development processes, for which the project received the Siemens Innovation Award in 2012. It was very rewarding to carry out such a technological and innovative project in a socially sensitive area such as breast cancer.
    In breast cancer, early detection and correct diagnosis are extremely important for effective and efficient therapeutic prescription that can increase the survival rate. Multifractal theory was initially introduced in the context of signal analysis, and its usefulness has been demonstrated in describing the physiological behaviour of bio-signals and even in detecting and predicting pathologies. In this Thesis, three multifractal methods were extended to two-dimensional (2D) images and compared for the detection of microcalcifications in mammograms. One of these methods was also adapted to classify breast masses in 2D cross-sections obtained by breast magnetic resonance (MR) imaging into groups of probably benign masses and masses suspicious of malignancy. A new multifractal analysis method using three-dimensional (3D) lacunarity was proposed for the classification of breast masses in 3D volumetric breast MR images. Multifractal analysis revealed differences in the underlying complexity at microcalcification locations relative to normal tissue, allowing good detection accuracy in mammograms. In addition, tissue features extracted by multifractal analysis made it possible to identify the cases typically recommended for biopsy in 2D breast MR images. The 3D multifractal analysis was effective in classifying benign and malignant breast lesions in 3D breast MR images, and was more accurate for this classification than the 2D method or the standard analysis of tumour contrast kinetics. In conclusion, multifractal analysis provides useful information for computer-aided detection in mammography and computer-aided diagnosis in 2D and 3D breast MR images, with the potential to complement radiologists' interpretation.
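    As a generic illustration of the 3D lacunarity measure mentioned above (not Siemens' patented methods or the exact algorithm of the thesis), the sketch below computes gliding-box lacunarity of a 3D volume for a few assumed box sizes.

```python
# Minimal sketch: gliding-box lacunarity of a 3D volume.
import numpy as np
from scipy.ndimage import uniform_filter

def lacunarity_3d(volume, box_sizes=(3, 5, 9, 17)):
    """Return {box size: lacunarity}, Lambda(r) = E[M^2] / E[M]^2,
    where M is the mass inside each r x r x r gliding box."""
    volume = volume.astype(float)
    out = {}
    for r in box_sizes:
        # Mean mass per box via a uniform filter, rescaled to total box mass.
        m = uniform_filter(volume, size=r, mode="constant") * r ** 3
        half = r // 2
        interior = m[half:-half, half:-half, half:-half]   # fully interior boxes only
        out[r] = np.mean(interior ** 2) / (np.mean(interior) ** 2 + 1e-12)
    return out

# Example on a synthetic binary "texture" volume.
rng = np.random.default_rng(4)
vol = (rng.random((64, 64, 64)) < 0.1).astype(float)
print(lacunarity_3d(vol))
```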

    A Novel Pixel-based Multiple-Point Geostatistical Simulation Method for Stochastic Modeling of Earth Resources

    Uncertainty is an integral part of modeling Earth's resources and environmental processes. Geostatistical simulation is a well-established tool for quantifying uncertainty in earth-systems modeling, and multiple-point statistical (MPS) algorithms are specifically advantageous when dealing with the complexity and heterogeneity of geological data, as they use training images to mimic physical reality. This research presents a novel and efficient pixel-based multiple-point geostatistical simulation method for mineral resource modeling. Pixel-based simulation implies the sequential modeling of individual points on the simulation grid by borrowing spatial information from the training image while honoring conditioning data points. The developed method borrows this information by integrating multiple machine learning algorithms, including Principal Component Analysis (PCA), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). For automation and to ensure high-quality realizations, multiple optimization and parameter-tuning strategies were introduced. The proposed methodology proved its applicability by accurately reproducing complex geological features and honoring conditioning data while maintaining reasonable computational time. The model is validated by simulating a variety of categorical and continuous variables for both two- and three-dimensional cases, in both conditional and unconditional simulations. As a three-dimensional case study of categorical stochastic modeling, the proposed method is applied to orebody modeling of a gold deposit. The proposed algorithm can be applied in a variety of contexts, including but not limited to petroleum reservoir characterization, seismic inversion, mineral resource modeling, gap-filling in remote sensing, and climate modeling, and the developed model can be extended to spatio-temporal modeling, multivariate simulation, non-stationary modeling, and super-resolution realizations.
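    As a loose sketch of the pattern-database idea suggested by the abstract, the code below extracts overlapping patterns from a small synthetic training image, compresses them with PCA and groups them with DBSCAN. The template size, clustering parameters and training image are illustrative assumptions, and the sequential simulation loop and the t-SNE step are omitted.

```python
# Minimal sketch: training-image patterns -> PCA scores -> DBSCAN clusters.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

def extract_patterns(ti, t=9):
    """All overlapping t x t patterns of the training image, flattened to rows."""
    ny, nx = ti.shape
    return np.array([ti[i:i + t, j:j + t].ravel()
                     for i in range(ny - t + 1)
                     for j in range(nx - t + 1)])

# Synthetic categorical training image: horizontal "channel" bands in a background.
ti = np.zeros((100, 100))
for row in range(5, 100, 12):
    ti[row:row + 3, :] = 1.0

patterns = extract_patterns(ti, t=9)
scores = PCA(n_components=10).fit_transform(patterns)       # compact pattern description
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(scores)
# During simulation, the data event at an unsampled node would be projected into
# the same PCA space and matched to a compatible cluster before drawing a value.
```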

    Condition Monitoring and Fault Diagnosis of Roller Element Bearing

    Rolling element bearings play a crucial role in determining the overall health condition of a rotating machine. An effective condition-monitoring program for bearing operation can improve a machine's operating efficiency, reduce maintenance and replacement costs, and prolong the useful lifespan of a machine. This chapter presents a general overview of various condition-monitoring and fault diagnosis techniques for rolling element bearings in current practice and discusses the pros and cons of each technique. The techniques introduced include data acquisition techniques, the major parameters used for bearing condition monitoring, signal analysis techniques, and bearing fault diagnosis techniques using either statistical features or artificial intelligence tools. Several case studies are also presented to exemplify the application of these techniques in data analysis as well as in bearing fault diagnosis and pattern recognition.
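    As a small illustration of two techniques of the kind surveyed in the chapter, the sketch below computes common time-domain condition indicators and an envelope spectrum via the Hilbert transform on a synthetic fault signal; the sampling rate, fault frequency and signal model are assumptions for the example only.

```python
# Minimal sketch: statistical condition indicators and an envelope spectrum.
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

fs = 12_000                       # assumed sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(6)

# Synthetic outer-race-style fault: repetitive impulses exciting a resonance.
fault_freq = 107.0                # assumed ball-pass frequency [Hz]
impulses = (np.sin(2 * np.pi * fault_freq * t) > 0.999).astype(float)
ringdown = np.exp(-t[:200] * 800) * np.sin(2 * np.pi * 3000 * t[:200])
signal = np.convolve(impulses, ringdown, mode="same") + 0.1 * rng.standard_normal(t.size)

# Time-domain condition indicators commonly tracked in monitoring programs.
rms = np.sqrt(np.mean(signal ** 2))
crest_factor = np.max(np.abs(signal)) / rms
kurt = kurtosis(signal, fisher=False)

# Envelope spectrum: fault frequencies appear as peaks in the envelope's FFT.
envelope = np.abs(hilbert(signal))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, d=1 / fs)
```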

    Enhancing the performance of spread spectrum techniques in different applications

    Spread spectrum, Automotive Radar, Indoor Positioning Systems, Ultrasonic and Microwave Imaging, super-resolution technique and wavelet transform. Magdeburg, Univ., Fak. für Elektrotechnik und Informationstechnik, Diss., 2006, by Omar Abdel-Gaber Mohamed Al