APPLICATION OF SPARSE DICTIONARY LEARNING TO SEISMIC DATA RECONSTRUCTION
According to the principle of compressed sensing (CS), under-sampled seismic data can be interpolated when the data admit a sparse representation in a transform domain. To sparsify the data, dictionary learning offers a data-driven approach whose transform is optimized for each target dataset. This study presents an interpolation method for seismic data in which dictionary learning, via an improved K-Singular Value Decomposition (K-SVD), is employed to improve the sparsity of the data representation. In this way, the transform is highly compatible with the input data, and the data in the transformed domain are sparser. In addition, the sampling matrix is constructed to satisfy the restricted isometry property (RIP). To reduce the sensitivity to outliers, we use the smooth L1 minimizer as a regularization term in regularized orthogonal matching pursuit (ROMP). We apply the proposed method to both synthetic and real seismic data. The results show that it successfully reconstructs the missing seismic traces.
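The workflow the abstract describes (learn a dictionary from the data, then sparse-code the under-sampled signal against it) can be sketched roughly as follows. This is a minimal illustration using scikit-learn's generic dictionary learner and plain orthogonal matching pursuit, not the paper's improved K-SVD or ROMP; the sinusoidal "traces" and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)

# Synthetic stand-ins for seismic traces: sums of random-frequency sinusoids.
n, dim = 200, 64
t = np.arange(dim)
train = np.array([np.sin(2 * np.pi * rng.integers(1, 8) * t / dim
                         + rng.uniform(0, np.pi)) for _ in range(n)])

# Learn an overcomplete dictionary from fully sampled signals
# (generic dictionary learning here, in place of the paper's improved K-SVD).
dl = DictionaryLearning(n_components=96, transform_algorithm='omp',
                        transform_n_nonzero_coefs=5, max_iter=20,
                        random_state=0)
dl.fit(train)
D = dl.components_.T                       # shape (dim, n_atoms)

# Under-sample a new signal: drop roughly half the samples (missing traces).
y = np.sin(2 * np.pi * 3 * t / dim)
mask = rng.random(dim) > 0.5

# Sparse-code against only the observed rows of D, then synthesize everything.
coef = orthogonal_mp(D[mask], y[mask], n_nonzero_coefs=5)
y_hat = D @ coef

err = np.linalg.norm(y_hat - y) / np.linalg.norm(y)
print(f"relative reconstruction error: {err:.3f}")
```

Restricting the dictionary to the observed rows is what turns gap-filling into an ordinary sparse-coding problem: the recovered coefficients, applied to the full dictionary, predict the missing samples.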
Machine Learning and Bayesian Statistics for Seismic Compressive Sensing
Seismic surveys involve an artificial source of waves and a grid of receivers at the surface. Often, receivers may be missing either because they malfunctioned or could not be placed in certain locations. It may also be that a local source of noise renders a receiver's output unusable. These gaps in the data cause problems in later stages of the seismic signal-processing workflow via aliasing or incoherent noise, and thus signal reconstruction is necessary. Modern algorithms utilise the principle of Compressive Sensing (CS) for reconstruction, which relies on the assumption that the signal of interest is sparse either in its natural domain or in some other basis. Most algorithms are designed with the sole aim of filling gaps in the data, without any consideration of learning bases or quantifying uncertainty in their predictions.
In this thesis, we approach the seismic CS problem using probabilistic data-driven models that are adaptable to seismic data. We propose to use algorithms from Bayesian statistics and machine learning that allow models to be constructed using probability distributions over random variables. This allows the modelling of sparsity and provides flexibility by adding or removing basis functions from the model. It also provides the framework for learning new dictionaries of bases, associating uncertainty with each prediction, and denoising seismic signals. More specifically, we utilise two Bayesian algorithms for seismic CS: the Relevance Vector Machine (RVM) and the Beta Process Factor Analysis (BPFA).
The RVM uses a sparsity-promoting distribution over the coefficients of a linear combination of basis functions. By learning the appropriate parameters, the algorithm infers a predictive mean and a predictive variance that are used for predicting receivers' values and quantifying uncertainty. Experiments and comparisons on various seismic data show the effectiveness of the RVM, with state-of-the-art reconstruction accuracy. Furthermore, its predictive variance is used, along with modifications, to create uncertainty maps with varying levels of correlation between uncertainty and the respective reconstruction error of receivers.
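The sparse-Bayesian idea behind the RVM (a sparsity-promoting prior over basis coefficients, plus a predictive variance for uncertainty maps) can be sketched with scikit-learn's `ARDRegression`, a closely related automatic-relevance-determination model rather than the thesis's own implementation; the DCT basis and the synthetic signal are illustrative assumptions.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(1)

dim = 128
t = np.arange(dim)
# A signal that is sparse in the DCT domain: two cosines plus mild noise.
y = (np.cos(2 * np.pi * 4 * t / dim)
     + 0.5 * np.cos(2 * np.pi * 11 * t / dim)
     + 0.05 * rng.standard_normal(dim))

# Basis matrix: columns are DCT atoms (inverse DCT of the identity).
Phi = idct(np.eye(dim), axis=0, norm='ortho')

# Keep about 60% of the "receivers"; the rest are treated as missing.
mask = rng.random(dim) < 0.6

# ARD places an individual precision on each coefficient, pruning most of
# them to (near) zero -- the same mechanism the RVM relies on.
model = ARDRegression()
model.fit(Phi[mask], y[mask])

# Predictive mean reconstructs the signal at every sample; the predictive
# standard deviation plays the role of an uncertainty map.
y_hat, y_std = model.predict(Phi, return_std=True)

err = np.linalg.norm(y_hat - y) / np.linalg.norm(y)
print(f"relative error {err:.3f}; mean predictive std {y_std.mean():.3f}")
```

The predictive standard deviation is larger at samples whose basis responses are poorly constrained by the observed subset, which is exactly the behaviour the uncertainty maps in the thesis exploit.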
The BPFA, on the other hand, uses an alternative approach to enforce sparsity, yielding exactly zero coefficients as opposed to the RVM. It also learns the bases from the available data and provides denoising of seismic signals. Experiments and comparisons on seismic data show that the BPFA obtains state-of-the-art reconstruction accuracy across various domains. In addition, the learned bases are used by other algorithms to improve their performance. An analysis of the BPFA's inference procedure is given, along with insights for reducing its computational cost. We also utilise the probabilistic nature of the BPFA and calculate the variance of the receivers' predictions obtained during inference. Using this, we create uncertainty maps that are highly correlated with the reconstruction error, obtaining better results than the RVM's predictive variance. Finally, an analysis of seismic signals with different levels of variance is undertaken to provide guidance on the best choice of algorithm per region.
The amount of seismic data available is growing; nevertheless, quantity does not directly translate into quality. This creates the challenge of analysing and extracting as much information and insight as possible. Using probabilistic data-driven models, we show how to achieve this by reconstructing seismic signals from under-sampled data, learning features from training data, denoising, and creating uncertainty maps for predictions in seismic surveys.
Machine learning for seismic data analysis and processing
Machine learning is setting the pace in the advancement of data analysis in many fields of science, technology, and industry. In this context, seismic data processing and inversion are approached with strategies that extract the relevant information from the data almost automatically. Dictionary learning and neural networks are two common examples of algorithms capable of capturing the complex structures and patterns embedded in data and inferring or predicting certain information of interest from them. We use a residual dictionary denoising technique to attenuate the acquisition footprint in 3D seismic data. In addition, we demonstrate some progress in using a deep neural network to invert the seismic moment tensor in well-monitoring scenarios.
Machine learning also includes global optimization techniques, such as simulated annealing and differential evolution. We explore how these two algorithms can automate processes in seismic exploration, such as velocity analysis and well-tying, which are conventionally done by hand and are thus susceptible to user subjectivity and experience.
Asociación Argentina de Geofísicos y Geodesta
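As a toy illustration of replacing a hand-picked step with global optimization, the sketch below fits a normal-moveout travel-time hyperbola to noisy picks with SciPy's `differential_evolution`. This is not the authors' workflow; the travel-time model, the synthetic picks, and all numeric values are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)

# Toy velocity analysis: fit the NMO hyperbola t(x) = sqrt(t0^2 + x^2 / v^2)
# to noisy travel-time picks, instead of picking the velocity by eye.
true_t0, true_v = 0.8, 2500.0             # zero-offset time (s), velocity (m/s)
offsets = np.linspace(100.0, 3000.0, 30)  # receiver offsets in metres
picks = (np.sqrt(true_t0**2 + (offsets / true_v)**2)
         + 0.005 * rng.standard_normal(30))

def misfit(params):
    """Sum of squared residuals between modelled and picked travel times."""
    t0, v = params
    pred = np.sqrt(t0**2 + (offsets / v)**2)
    return np.sum((pred - picks)**2)

# Differential evolution searches the (t0, v) box globally, with no need
# for a good starting guess or gradients.
result = differential_evolution(misfit,
                                bounds=[(0.1, 2.0), (1000.0, 6000.0)],
                                seed=2, tol=1e-10)
t0_est, v_est = result.x
print(f"t0 ~ {t0_est:.3f} s, v ~ {v_est:.0f} m/s")
```

Because the search is bound-constrained and derivative-free, the same pattern extends to rougher objectives such as semblance panels or well-tie correlation, where gradient methods tend to stall in local optima.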
Statistical and Graph-Based Signal Processing: Fundamental Results and Application to Cardiac Electrophysiology
The goal of cardiac electrophysiology is to obtain information about the mechanism, function, and performance of the electrical activities of the heart, to identify deviations from the normal pattern, and to design treatments. By offering better insight into the comprehension and management of cardiac arrhythmias, signal processing can help the physician enhance treatment strategies, in particular in the case of atrial fibrillation (AF), a very common atrial arrhythmia associated with significant morbidities, such as an increased risk of mortality, heart failure, and thromboembolic events. Catheter ablation of AF is a therapeutic technique that uses radiofrequency energy to destroy the atrial tissue involved in sustaining the arrhythmia, typically aiming at the electrical disconnection of the pulmonary vein triggers. However, the recurrence rate is still very high, showing that the very complex and heterogeneous nature of AF still represents a challenging problem.
Leveraging the tools of non-stationary and statistical signal processing, the first part of our work has a twofold focus. Firstly, we compare the performance of two different ablation technologies, based on contact-force sensing or remote magnetic control, using signal-based criteria as surrogates for lesion assessment; furthermore, we investigate the role of ablation parameters in lesion formation using late gadolinium-enhanced magnetic resonance imaging. Secondly, we hypothesized that in human atria the frequency content of the bipolar signal is directly related to the local conduction velocity (CV), a key parameter characterizing substrate abnormality and influencing atrial arrhythmias. Comparing the degree of spectral compression among signals recorded at different points of the endocardial surface in response to a decreasing pacing rate, our experimental data demonstrate a significant correlation between CV and the corresponding spectral centroids.
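The spectral centroid used above as a summary of spectral compression is simply the amplitude-weighted mean frequency of a signal's spectrum. A minimal sketch, on synthetic sinusoids rather than bipolar electrograms:

```python
import numpy as np
from scipy.fft import rfft, rfftfreq

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency (Hz) of a signal's spectrum."""
    spectrum = np.abs(rfft(signal))
    freqs = rfftfreq(len(signal), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

fs = 1000.0                         # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
slow = np.sin(2 * np.pi * 20 * t)   # lower-frequency content
fast = np.sin(2 * np.pi * 80 * t)   # higher-frequency content

c_slow = spectral_centroid(slow, fs)
c_fast = spectral_centroid(fast, fs)
print(f"centroids: {c_slow:.1f} Hz vs {c_fast:.1f} Hz")
```

A shift of the centroid toward lower frequencies as the pacing rate increases is one way to quantify the spectral compression the study correlates with conduction velocity.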
However, the complex spatio-temporal propagation patterns characterizing AF spurred the need for new signal acquisition and processing methods. Multi-electrode catheters allow whole-chamber panoramic mapping of electrical activity but produce an amount of data that needs to be preprocessed and analyzed to provide clinically relevant support to the physician. Graph signal processing (GSP) has shown its potential in a variety of applications involving high-dimensional data on irregular domains and complex networks. Nevertheless, though state-of-the-art graph-based methods have been successful for many tasks, so far they have predominantly ignored the time dimension of data.
To address this shortcoming, in the second part of this dissertation we put forth a Time-Vertex Signal Processing Framework as a particular case of multi-dimensional graph signal processing. Linking time-domain signal processing techniques with the tools of GSP, Time-Vertex Signal Processing facilitates the analysis of graph-structured data that also evolve in time. We motivate our framework by leveraging the notion of partial differential equations on graphs. We introduce joint operators, such as time-vertex localization, and we present a novel approach to significantly improve the accuracy of fast joint filtering. We also illustrate how to build time-vertex dictionaries, providing conditions for efficient invertibility and examples of constructions.
The experimental results on a variety of datasets suggest that the proposed tools can bring significant benefits to various signal processing and learning tasks involving time series on graphs. We close the gap between the two parts by illustrating the application of graph and time-vertex signal processing to the challenging case of multi-channel intracardiac signals.
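The core object of such a framework, the joint (time-vertex) Fourier transform, can be sketched numerically: a graph Fourier transform across vertices (eigenvectors of the graph Laplacian) composed with a DFT along time. The ring graph and the random signal below are toy assumptions, not the dissertation's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

n_vertices, n_time = 8, 16

# Laplacian of a ring graph: each vertex connected to its two neighbours.
A = np.zeros((n_vertices, n_vertices))
for i in range(n_vertices):
    A[i, (i + 1) % n_vertices] = A[(i + 1) % n_vertices, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: orthonormal eigenvectors of the symmetric Laplacian.
eigvals, U = np.linalg.eigh(L)

# A time-varying graph signal: rows index vertices, columns index time.
X = rng.standard_normal((n_vertices, n_time))

# Joint transform = GFT over vertices followed by a DFT over time
# (the two one-dimensional transforms commute).
X_hat = np.fft.fft(U.T @ X, axis=1)

# The transform is invertible: undo the DFT, then the GFT.
X_rec = U @ np.real(np.fft.ifft(X_hat, axis=1))
print(np.allclose(X_rec, X))   # True
```

Joint filters then act by reweighting `X_hat` as a function of both the graph frequency (the Laplacian eigenvalue index) and the temporal frequency, which is what couples the two domains.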
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international - Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and the town center. iTWIST'14 gathered about
70 international participants and featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts, iTWIST'14 website:
http://sites.google.com/site/itwist1