60 research outputs found

    Advanced BEM-based methodologies to identify and simulate wave fields in complex geostructures

    To enhance the applicability of the Boundary Element Method (BEM) for geomechanical modeling, numerically optimized BEM models, hybrid FEM-BEM models, and GPU-parallel computation of seismic Full Waveform Inversion (FWI) are implemented. Inverse modeling of seismic wave propagation in an inhomogeneous and heterogeneous half-plane is implemented in BEM using Particle Swarm Optimization (PSO). The Boundary Integral Equations (BIE), based on the fundamental solutions for a homogeneous elastic isotropic continuum, are modified by introducing mesh-dependent variables, which are optimized to obtain site-specific impedance functions. The PSO-optimized BEM models significantly improve the efficiency of BEM for seismic wave propagation in arbitrarily inhomogeneous and heterogeneous media. Similarly, a hybrid BEM-FEM approach is developed to evaluate the seismic response of a complex poroelastic soil region containing underground structures. The far-field semi-infinite geological region is modeled via BEM, while the near-field finite geological region is modeled via FEM; the BEM region is integrated into the global FEM system as an equivalent macro-finite-element. The model describes the entire wave path from the seismic source to the local site in a single hybrid model. Additionally, the computational efficiency of the time-domain FWI algorithm is enhanced by parallel computation on CPUs and GPUs.
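    The PSO step described above can be sketched as a standard particle swarm loop. Everything below (the toy quadratic objective, swarm size, inertia and acceleration coefficients) is an illustrative assumption, not the paper's actual BEM misfit or settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(x):
    # Hypothetical stand-in for the BEM/observation mismatch;
    # this toy objective has its optimum at x = (1.0, -2.0).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

n_particles, n_iter, dim = 20, 100, 2
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([misfit(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    # Pull each particle toward its personal best and the global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)  # converges toward the optimum of the toy objective
```

    In the paper's setting, the particle coordinates would be the mesh-dependent BIE variables and the objective would compare simulated and observed wave fields.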

    Advances in Grid Computing

    This book approaches grid computing with a perspective on the latest achievements in the field, providing insight into current research trends and advances, and presenting a broad range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence, genetic algorithms, and quantum encryption are considered to address two main aspects of grid computing: resource management and data management. The book also covers aspects of grid computing concerning architecture and development, and includes a diverse range of applications, including a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning, and complex water systems.

    Health monitoring of tree-trunks using ground penetrating radar

    Ground penetrating radar (GPR) is traditionally applied to smooth surfaces, for which the half-space assumption is an adequate approximation that does not deviate much from reality. However, using GPR to characterize the internal structure of tree trunks requires measurements on an irregularly shaped closed curve. Typical hyperbola-fitting has no physical meaning in this context, since the reflection patterns are strongly associated with the shape of the tree trunk. Instead of a clean hyperbola, the reflections give rise to complex-shaped patterns that are difficult to analyze even in the absence of clutter. In the current paper, a novel processing scheme is described that can interpret complex reflection patterns by assuming a circular target beneath an arbitrarily shaped surface. The proposed methodology can be applied using commercial handheld antennas in real time, avoiding computationally costly tomographic approaches that require bespoke antenna arrays. The validity of the current approach is illustrated with both numerical and real experiments.
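    The travel-time geometry behind these non-hyperbolic patterns can be illustrated with a minimal sketch. The trunk outline, target position, and wave speed below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical geometry: antenna positions on an irregular closed curve
# (a wobbly circle standing in for a trunk cross-section), and a circular
# target (e.g. a decayed core) of radius r_t centred at c_t.
theta = np.linspace(0, 2 * np.pi, 180, endpoint=False)
radius = 0.30 + 0.03 * np.sin(5 * theta)        # trunk surface radius, metres
antennas = np.c_[radius * np.cos(theta), radius * np.sin(theta)]

c_t, r_t = np.array([0.05, -0.02]), 0.06        # target centre and radius
v = 0.1                                          # wave speed in wood, m/ns (assumed)

# Shortest two-way path: antenna -> nearest point on the circular target -> back.
d = np.linalg.norm(antennas - c_t, axis=1) - r_t
t_twoway = 2 * d / v                             # two-way travel time, ns

# The resulting curve follows the trunk shape rather than a hyperbola.
print(t_twoway.min(), t_twoway.max())
```

    Plotting `t_twoway` against `theta` shows how the reflection pattern is modulated by the surface shape, which is why hyperbola-fitting breaks down in this setting.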

    A Machine Learning Based Fast Forward Solver for Ground Penetrating Radar with Application to Full Waveform Inversion

    The simulation, or forward modeling, of ground penetrating radar (GPR) is becoming a more frequently used approach to facilitate the interpretation of complex real GPR data, and is an essential component of full-waveform inversion (FWI). However, general full-wave 3-D electromagnetic (EM) solvers, such as those based on the finite-difference time-domain (FDTD) method, are still computationally demanding for simulating realistic GPR problems. We have developed a novel near-real-time forward modeling approach for GPR that is based on a machine learning (ML) architecture. The ML framework uses an innovative training method that combines a predictive principal component analysis technique, a detailed model of the GPR transducer, and a large data set of modeled GPR responses from our FDTD simulation software. The ML-based forward solver is parameterized for a specific GPR application, but the framework can be applied to many different classes of GPR problems. To demonstrate the novelty and computational efficiency of our ML-based GPR forward solver, we used it to carry out FWI for a common infrastructure assessment application: determining the location and diameter of reinforcement bars in concrete. We tested our FWI with synthetic and real data and found a good level of accuracy in determining the rebar location, size, and surrounding material properties from both data sets. The combination of near-real-time computation, orders of magnitude faster than traditional full-wave 3-D EM solvers, and the accuracy of our ML-based forward model is a significant step toward commercially viable applications of FWI of GPR.
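    The predictive-PCA idea can be sketched as follows, with synthetic damped-sinusoid "traces" standing in for the FDTD training set. All shapes, parameters, and the trace model itself are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: n simulated GPR traces of m time samples each,
# generated here as damped sinusoids whose phase depends on one parameter
# (standing in for rebar depth); a real set would come from FDTD runs.
n, m = 200, 256
depth = rng.uniform(0.05, 0.20, n)
t = np.arange(m) / m
traces = np.exp(-40 * t)[None, :] * np.sin(
    2 * np.pi * 8 * (t[None, :] - depth[:, None]))

# PCA via SVD of the mean-centred data: a few components capture the set.
mean = traces.mean(axis=0)
U, S, Vt = np.linalg.svd(traces - mean, full_matrices=False)
k = 10
coeffs = U[:, :k] * S[:k]      # per-trace scores
basis = Vt[:k]                 # principal components

# A learned regressor would map model parameters -> scores; reconstructing
# from the k scores then yields a near-real-time surrogate trace.
recon = coeffs @ basis + mean
err = np.linalg.norm(recon - traces) / np.linalg.norm(traces)
print(err)  # small relative reconstruction error with only k components
```

    The surrogate is fast because prediction happens in the low-dimensional score space rather than the full time-sample space.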

    Numerical enhancements and parallel GPU implementation of the TRACEO3D model

    Underwater acoustic models provide a fundamental and efficient tool to parametrically investigate hypotheses and physical phenomena across varied environmental conditions of underwater sound propagation. In this sense, requirements for model predictions in a three-dimensional ocean waveguide are expected to become more relevant, and predictions are expected to become more accurate, as the amount of available environmental information (water temperature, bottom properties, etc.) grows. However, despite the increasing performance of modern processors, models that account for 3D propagation still have a high computational cost, which often hampers their usage. The work presented in this thesis therefore investigates a solution to enhance the numerical and computational performance of the TRACEO3D Gaussian beam model, which is able to handle full three-dimensional propagation. In this context, a robust method for 3D eigenray search is developed, which is fundamental for the calculation of a channel impulse response. A remarkable aspect of the search strategy is its ability to provide accurate values of initial eigenray launching angles, even in the presence of the nonlinearity induced by the complex propagation regime of rays bouncing on the boundaries. Likewise, an optimized method for pressure field calculation that accounts for a large number of sensors is presented. These numerical enhancements and the optimization of the sequential version of TRACEO3D led to significant improvements in its performance and accuracy. Furthermore, the present work considered the development of parallel algorithms that take advantage of the GPU architecture, looking carefully at the inherent parallelism of ray tracing and the high workload of predictions for 3D propagation. The combination of numerical enhancements and parallelization aimed to achieve the highest performance of TRACEO3D.
An important aspect of this research is that validation and performance assessment were carried out not only for idealized waveguides, but also against the experimental results of a scaled tank experiment. The results demonstrate that remarkable performance was achieved without compromising accuracy. It is expected that the contributions and the remarkable reduction in runtime will help to overcome some of the reservations about employing a 3D model for predictions of acoustic fields.
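    In its simplest 2D form, the eigenray search reduces to root-finding on the launch angle. The toy sketch below uses a homogeneous straight-ray waveguide with invented geometry, far simpler than TRACEO3D's 3D bounce handling, but it shows the bracket-and-bisect idea:

```python
import numpy as np

# Toy eigenray search: in a homogeneous medium a ray launched at angle a
# travels in a straight line; the "miss function" is the depth error at
# the receiver range. Geometry values are assumptions for illustration.
z_src, z_rcv, r_rcv = 100.0, 60.0, 500.0   # metres

def miss(a):
    # Depth reached at the receiver range minus the receiver depth
    # (positive launch angle a bends the ray upward here).
    return (z_src - r_rcv * np.tan(a)) - z_rcv

# Bracket a sign change over a fan of launch angles, then bisect.
fan = np.linspace(-0.5, 0.5, 101)
vals = miss(fan)
i = np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
lo, hi = fan[i], fan[i + 1]
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if np.sign(miss(mid)) == np.sign(miss(lo)):
        lo = mid
    else:
        hi = mid

a_eig = 0.5 * (lo + hi)
print(np.degrees(a_eig))  # eigenray launch angle in degrees
```

    In the real model the miss function is evaluated by tracing a full ray through a 3D environment with boundary bounces, which is what makes it nonlinear and the robust search strategy necessary.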

    DMRF-UNet: A Two-Stage Deep Learning Scheme for GPR Data Inversion under Heterogeneous Soil Conditions

    Traditional ground-penetrating radar (GPR) data inversion leverages iterative algorithms that suffer from high computational cost and low accuracy when applied to complex subsurface scenarios. Existing deep learning-based methods focus on ideal homogeneous subsurface environments and ignore the interference due to clutter and noise in real-world heterogeneous environments. To address these issues, a two-stage deep neural network (DNN), called DMRF-UNet, is proposed to reconstruct the permittivity distributions of subsurface objects from GPR B-scans under heterogeneous soil conditions. In the first stage, a U-shaped DNN with multi-receptive-field convolutions (MRF-UNet1) is built to remove the clutter due to the inhomogeneity of the heterogeneous soil. The denoised B-scan from MRF-UNet1 is then combined with the noisy B-scan as input to the second-stage DNN (MRF-UNet2), which learns the inverse mapping relationship and reconstructs the permittivity distribution of subsurface objects. To avoid information loss, an end-to-end training method combining the loss functions of the two stages is introduced. A wide range of subsurface heterogeneous scenarios and B-scans are generated to evaluate the inversion performance. Test results on both numerical experiments and real measurements show that the proposed network reconstructs the permittivities, shapes, sizes, and locations of subsurface objects with high accuracy. Comparison with existing methods demonstrates the superiority of the proposed methodology for inversion under heterogeneous soil conditions.
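    A minimal sketch of the two-stage data flow and the combined end-to-end loss, with trivial stand-in functions in place of the actual networks; all shapes, weights, and the stand-in operations are assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

b_noisy = rng.random((64, 64))        # input B-scan with soil clutter
b_clean_true = rng.random((64, 64))   # clutter-free target B-scan
perm_true = rng.random((32, 32))      # target permittivity map

def mrf_unet1(b):                      # stand-in for the denoising DNN
    return b * 0.9

def mrf_unet2(b_denoised, b_noisy):    # stand-in for the inversion DNN
    x = np.stack([b_denoised, b_noisy])  # stage 2 sees both inputs
    return x.mean(axis=0)[::2, ::2]      # crude map to the permittivity grid

b_denoised = mrf_unet1(b_noisy)
perm_pred = mrf_unet2(b_denoised, b_noisy)

# End-to-end training minimises a weighted sum of both stage losses,
# so gradients from the inversion stage also shape the denoiser.
w1, w2 = 1.0, 1.0
loss = (w1 * np.mean((b_denoised - b_clean_true) ** 2)
        + w2 * np.mean((perm_pred - perm_true) ** 2))
print(loss)
```

    Feeding both the denoised and the original noisy B-scan to the second stage is what lets the inversion recover detail the denoiser may have smoothed away.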

    Deep learning processing and interpretation of ground penetrating radar data using a numerical equivalent of a real GPR transducer

    Ground-Penetrating Radar (GPR) is a popular non-destructive electromagnetic (EM) technique used in diverse applications across different fields, most commonly geophysics and civil engineering. One of the most common applications of GPR is concrete scanning, where it is used to detect structural elements and support condition assessment. However, in any GPR application, the raw data bear little resemblance to the targets of interest, and a means of extracting target information from the data is required. Interpreting GPR data to infer key properties of the subsurface and to locate targets is a difficult and challenging task, highly dependent on the processing of the data and the experience of the user. Traditional processing techniques have drawbacks that can lead to misinterpretation, and the interpretation itself is subjective. Machine learning (ML) has proven its ability to solve a variety of problems and map complex relationships, and in recent years has become an increasingly attractive option for GPR and other EM processing and interpretation problems. Numerical modelling has been extensively used to understand EM wave propagation and assist in the interpretation of GPR responses; if ML is combined with numerical modelling, efficient solutions to GPR problems can be obtained. This research focuses on developing a numerical equivalent of a commercial GPR transducer and utilising this model to produce realistic synthetic training data sets for deep learning applications. The numerical model is based on the high-frequency 2000 MHz "palm" antenna from Geophysical Survey Systems, Inc. (GSSI). This GPR system is mainly used for concrete scanning, where the targets are located close to the surface. Unknown antenna parameters were found using global optimisation, by minimising the mismatch between synthetic and real responses.
A very good match was achieved, demonstrating that the model can accurately replicate the behaviour of the real antenna; this was further validated in a number of laboratory experiments. Real data were acquired with the GSSI transducer over a sandbox and reinforced concrete slabs, and the same scenarios were replicated in simulations using the antenna model, showing excellent agreement. The developed antenna model was then used to generate synthetic data, similar to the true data, for two deep learning applications trained entirely on synthetic data. The first application suggested in the present thesis is background response and properties prediction. Two coupled neural networks are trained to predict the background response given total GPR responses as input, perform background removal, and subsequently use the predicted background response to predict its dielectric properties. The suggested scheme not only performs the background-removal processing step but also enables calculation of the velocity of the EM wave propagating in a medium from the predicted permittivity value. The ML algorithm is evaluated on a number of synthetic and measured data sets, demonstrating its efficiency and higher accuracy compared to traditional methods. Predicting a permittivity value per A-scan in a B-scan yields a permittivity distribution, which is used along with background removal to perform reverse-time migration (RTM); the proposed RTM scheme proved superior to commonly used RTM schemes. The second application is a deep learning-based forward solver used as part of a full-waveform inversion (FWI) framework. A neural network is trained to predict entire B-scans for reinforced concrete slab scenarios, given certain model parameters as input. The network makes predictions in real time, reducing by orders of magnitude the computational time of FWI, which is usually coupled with an FDTD forward solver, and therefore making FWI applicable on commercial computers without the need for high-performance computing (HPC). The results clearly illustrate that ML schemes can be implemented to solve GPR problems and highlight the importance of having a digital representation of a real transducer in the simulations.
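    The surrogate-driven FWI loop can be sketched as misfit minimisation over model parameters. Here an analytic toy pulse plays the role of the trained forward network, and every parameter value and unit is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of FWI driven by a fast surrogate forward solver. The "solver"
# below is an analytic toy mapping (depth, diameter) -> a trace; in the
# thesis this role is played by a trained neural network.
t = np.linspace(0, 1, 128)

def forward(depth, diameter):
    delay = 2 * depth / 0.1                    # two-way delay, toy units
    return diameter * np.exp(-((t - delay) ** 2) / 0.002)

# "Measured" trace: true parameters plus a little noise.
observed = forward(0.04, 0.012) + 0.001 * rng.standard_normal(t.size)

# Inversion: search the parameter grid for the least-squares misfit.
# A fast surrogate makes this many-evaluation search affordable.
depths = np.linspace(0.02, 0.08, 61)
diams = np.linspace(0.006, 0.020, 29)
mis = np.array([[np.sum((forward(d, w) - observed) ** 2) for w in diams]
                for d in depths])
i, j = np.unravel_index(mis.argmin(), mis.shape)
print(depths[i], diams[j])  # recovered rebar depth and diameter
```

    A real FWI would use a gradient-based or global optimiser rather than a grid, but the economics are the same: near-real-time forward evaluations make thousands of misfit calls practical on an ordinary computer.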

    Optimizing Image Reconstruction in Electrical Impedance Tomography

    The thesis presents, analyzes, and discusses the optimization of algorithms that reconstruct images of unknown specific conductivity from data acquired via electrical impedance tomography.
In this context, the author provides a brief mathematical description of the forward and inverse tasks solved using diverse approaches, characterizes relevant measurement techniques and data acquisition procedures, and discusses available numerical tools. Procedurally, the initial working stages involved analyzing methods for optimizing those parameters of the model that influence the reconstruction accuracy, demonstrating approaches to the parallel processing of the algorithms, and surveying available instruments for acquiring the tomographic data. The obtained knowledge then yielded a process for optimizing the parameters of the mathematical model, allowing the model to be designed very precisely from the measured data; this precondition reduced the uncertainty in reconstructing the specific conductivity distribution. To make data acquisition more efficient, an automated tomography unit was designed, with emphasis on affordability and reduced measurement uncertainty. When forming the numerical model, the author investigated the possibilities and overall impact of combining open and closed domains with various regularization methods and mesh element scales, considering both the conductivity reconstruction error and the computational intensity. A complementary task lay in parallelizing the reconstruction subalgorithms using a multi-core graphics card. The results of the thesis are directly reflected in reduced reconstruction uncertainty (through optimization of the initial conductivity value, electrode placement, domain shape deformation, regularization methods, and domain types) and in accelerated computation via parallelized algorithms. The research benefited from the in-house designed automated tomography unit.
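    A reconstruction step common to such EIT solvers is the Tikhonov-regularised Gauss-Newton update. This sketch uses a linear toy forward model, an identity regularisation matrix, and illustrative sizes, all of which are assumptions rather than the thesis's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(4)

# One Tikhonov-regularised Gauss-Newton update for a linearised inverse
# problem of the kind solved in EIT image reconstruction.
n_meas, n_elem = 104, 64
J = rng.standard_normal((n_meas, n_elem))   # Jacobian of the forward model
sigma = np.full(n_elem, 1.0)                # initial conductivity guess
sigma_true = sigma + 0.1 * rng.standard_normal(n_elem)

def forward(s):                             # linear toy forward model
    return J @ s

v_meas = forward(sigma_true)                # "measured" electrode voltages
lam = 1e-2                                  # regularisation parameter (assumed)
L = np.eye(n_elem)                          # identity regularisation matrix

# delta = (J^T J + lam L^T L)^{-1} J^T (v_meas - forward(sigma))
residual = v_meas - forward(sigma)
delta = np.linalg.solve(J.T @ J + lam * (L.T @ L), J.T @ residual)
sigma = sigma + delta

err = np.linalg.norm(sigma - sigma_true) / np.linalg.norm(sigma_true)
print(err)  # small relative error after one step on this linear toy
```

    The choices the thesis optimizes, initial conductivity, electrode placement, domain shape, and the regularization method, all enter this update through the starting `sigma`, the Jacobian `J`, and the matrix `L` and weight `lam`.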