Physics-driven Deep Learning Inversion for Direct Current Resistivity Survey Data
The direct-current (DC) resistivity method is a commonly used geophysical technique for surveying adverse geological conditions. Inversion reconstructs the resistivity model from the data and is an important step in a geophysical survey. However, the inverse problem is severely ill-posed, which makes it easy to obtain incorrect inversion results. Deep learning (DL) provides new avenues for solving inverse problems and has been widely studied. Currently, most DL inversion methods for resistivity are purely data-driven and depend heavily on labels (real resistivity models). However, real resistivity models are difficult to obtain through field surveys, and an inversion network may not be effectively trained without labels. In this study, we built an unsupervised resistivity inversion scheme based on the physical law of electric field propagation. First, a forward modeling process was embedded into the network training, which converted the predicted model into predicted data and formed a data misfit against the observed data. Unsupervised training independent of the real model was realized by using this data misfit as the loss function. Moreover, a dynamic smoothing constraint was imposed on the loss function to alleviate the ill-posedness of the inverse problem. Finally, a transfer learning scheme was applied to adapt the network trained on simulated data to field data. Numerical simulations and field tests showed that the proposed method can accurately locate and depict geological targets.
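The training scheme described above can be sketched as a loss function. This is a minimal sketch under stated assumptions: the paper embeds a PDE-based DC resistivity forward solver, whereas here a stand-in linear operator G takes its place, and the dynamic smoothing constraint is reduced to a fixed first-difference penalty with weight beta.

```python
import numpy as np

def forward_model(resistivity, G):
    # Stand-in linear forward operator; the paper embeds a full PDE-based
    # DC resistivity forward solver at this step.
    return G @ resistivity

def unsupervised_loss(pred_model, observed_data, G, beta=0.1):
    # Data misfit between predicted and observed data: no resistivity
    # labels (real models) are needed anywhere in this loss.
    pred_data = forward_model(pred_model, G)
    data_misfit = np.mean((pred_data - observed_data) ** 2)
    # Smoothness penalty on adjacent model cells alleviates ill-posedness.
    smoothness = np.mean(np.diff(pred_model) ** 2)
    return data_misfit + beta * smoothness
```

Training a network against this loss is what makes the scheme unsupervised: the gradient signal comes entirely from the data misfit and the regularizer, never from a real resistivity model.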
Seismic Inversion and Uncertainty Analysis using Transdimensional Markov Chain Monte Carlo Method
We use a transdimensional inversion algorithm, reversible jump MCMC (rjMCMC), in the seismic waveform inversion of post-stack and prestack data to characterize reservoir properties such as seismic wave velocity, density, and impedance, and then to estimate uncertainty. Each seismic trace is inverted independently based on a layered earth model. The model dimensionality is defined as the number of layers multiplied by the number of model parameters per layer. The rjMCMC is able to infer the number of model parameters from the data itself by allowing it to vary in the iterative inversion process, converging to a proper parameterization and preventing both underparameterization and overparameterization. We also use rjMCMC to enhance uncertainty estimation, since it can transdimensionally sample model spaces of different dimensionalities and thereby avoid biased sampling in a single space whose dimensionality may differ from that of the true model space. An ensemble of solutions from different spaces can statistically reduce the bias in parameter estimation and uncertainty quantification. Inversion uncertainty comprises property uncertainty and location uncertainty. Our study revealed that inversion uncertainty is correlated with the discontinuity of a property in such a way that 1) a smaller discontinuity induces a lower uncertainty in the property at the discontinuity but a higher uncertainty in the location of that discontinuity, and 2) a larger discontinuity induces a higher uncertainty in the property at the discontinuity but a higher "certainty" in its location. There is therefore a trade-off between property uncertainty and location uncertainty. This trade-off means the uncertainty result itself contains hidden information that can be exploited.
On the basis of our study using rjMCMC, we propose using inversion uncertainty as a novel attribute to characterize the magnitude and location of subsurface discontinuities and reflectors.
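The dimension-changing moves at the heart of rjMCMC can be sketched as below. This is illustrative only: a 1-D piecewise-constant model with equal-thickness layers, and a simplified Metropolis acceptance that omits the prior, proposal-ratio, and Jacobian terms a full rjMCMC acceptance requires.

```python
import numpy as np

def loglik(layers, data, sigma=0.1):
    # Evaluate a piecewise-constant, equal-thickness layered model at the
    # data samples under a Gaussian noise model.
    pred = np.repeat(layers, len(data) // len(layers) + 1)[: len(data)]
    return -0.5 * np.sum((pred - data) ** 2) / sigma**2

def rj_mcmc(data, n_iter=500, k_max=8, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    layers = np.array([data.mean()])      # start in the one-layer space
    dims = []                             # sampled dimensionalities
    for _ in range(n_iter):
        u = rng.random()
        if u < 0.4 and len(layers) < k_max:        # birth: insert a layer
            i = rng.integers(len(layers))
            prop = np.insert(layers, i, layers[i] + rng.normal(0, 0.5))
        elif u < 0.8 and len(layers) > 1:          # death: remove a layer
            prop = np.delete(layers, rng.integers(len(layers)))
        else:                                      # within-dimension move
            prop = layers + rng.normal(0, 0.1, len(layers))
        # Simplified Metropolis acceptance; full rjMCMC also includes
        # prior, proposal-ratio, and Jacobian terms for detailed balance.
        if np.log(rng.random()) < loglik(prop, data) - loglik(layers, data):
            layers = prop
        dims.append(len(layers))
    return layers, dims
```

The birth/death moves are what let the sampler visit model spaces of different dimensionality within a single chain, so the number of layers is inferred from the data rather than fixed in advance.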
Accelerating Materials Discovery with Machine Learning
As we enter the data age, ever-increasing amounts of human knowledge are being recorded in machine-readable formats.
This has opened up new opportunities to leverage data to accelerate scientific discovery.
This thesis focuses on how we can use historical and computational data to aid the discovery and development of new materials.
We begin by looking at a traditional materials informatics task -- elucidating the structure-function relationships of high-temperature cuprate superconductors.
One of the most significant challenges for materials informatics is the limited availability of relevant data.
We propose a simple calibration-based approach to estimate the apical and in-plane copper-oxygen distances from more readily available lattice parameter data to address this challenge for cuprate superconductors.
Our investigation uncovers a large, unexplored region of materials space that may yield cuprates with higher critical temperatures.
We propose two experimental avenues that may enable this region to be accessed.
Computational materials exploration is bottlenecked by our ability to provide input structures to feed our workflows.
Whilst ab-initio structure identification is possible, it is computationally burdensome, and we lack design rules for deciding where to target searches in high-throughput setups.
To address this, there is a need to develop tools that suggest promising candidates, enabling automated deployment and increased efficiency.
Machine learning models are well suited to this task; however, current approaches typically use hand-engineered inputs.
This means that their performance is circumscribed by the intuitions reflected in the chosen inputs.
We propose a novel way to formulate the machine learning task as a set regression problem over the elements in a material.
We show that our approach leads to higher sample efficiency than other well-established composition-based approaches.
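A minimal sketch of this set-regression formulation (not the thesis's actual architecture): each element gets an embedding, the (element, fraction) pairs are sum-pooled into a permutation-invariant representation, and a readout maps that to the target property. The embeddings and readout weights below are random placeholders for quantities that would be learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder element embeddings; in practice these are learned parameters.
EMBED = {el: rng.normal(size=8) for el in ["Cu", "O", "Ba", "Y", "Fe"]}
W = rng.normal(size=8)  # placeholder readout weights

def composition_representation(composition):
    # Permutation-invariant sum pooling over (element, fraction) pairs,
    # so the composition is treated as a set, not a fixed-length vector.
    return sum(frac * EMBED[el] for el, frac in composition.items())

def predict(composition):
    # Linear readout standing in for a learned regression head.
    return float(composition_representation(composition) @ W)
```

The key property is permutation invariance: reordering the elements of a composition cannot change the prediction, which is what makes the formulation a regression over sets.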
Having demonstrated the ability of machine learning to aid in the selection of promising compound compositions, we next explore how useful machine learning might be for identifying fabrication routes.
Using a recently released data-mined data set of solid-state synthesis reactions, we design a two-stage model to predict the products of inorganic reactions.
We critically explore the performance of this model, showing that whilst the predictions fall short of the accuracy required to be chemically discriminative, the model provides valuable insights into understanding inorganic reactions.
Through careful investigation of the model's failure modes, we explore the challenges that remain in the construction of forward inorganic reaction prediction models and suggest some pathways to tackle the identified issues.
One of the principal ways that material scientists understand and categorise materials is in terms of their symmetries.
Crystal structure prototypes are assigned based on the presence of symmetrically equivalent sites known as Wyckoff positions.
We show that a powerful coarse-grained representation of materials structures can be constructed from the Wyckoff positions by discarding information about their coordinates within crystal structures.
One of the strengths of this representation is that it maintains the ability of structure-based methods to distinguish polymorphs whilst also allowing combinatorial enumeration akin to composition-based approaches.
We construct an end-to-end differentiable model that takes our proposed Wyckoff representation as input.
The performance of this approach is examined on a suite of materials discovery experiments showing that it leads to strong levels of enrichment in materials discovery tasks.
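The representation itself can be sketched in a few lines: keep only the space group number and the multiset of (element, Wyckoff letter) assignments, discarding the coordinates. The MgO-based assignments below are illustrative examples, not data from the thesis.

```python
def wyckoff_representation(spacegroup, site_assignments):
    # Coarse-grained structure descriptor: the space group number plus the
    # sorted multiset of (element, Wyckoff letter) pairs. Free coordinates
    # are never part of the input, so structures that differ only in
    # relaxed atomic positions map to the same representation.
    return (spacegroup, tuple(sorted(site_assignments)))
```

Because coordinates are discarded, two relaxations of the same prototype collapse to one representation (enabling combinatorial enumeration, as with compositions), while polymorphs occupying different space groups or Wyckoff sites remain distinct.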
The research presented in this thesis highlights the promise of applying data-driven workflows and machine learning in materials discovery and development.
This thesis concludes by speculating about promising research directions for applying machine learning within materials discovery
A Review of Geophysical Modeling Based on Particle Swarm Optimization
This paper reviews the application of the particle swarm optimization (PSO) algorithm to stochastic inverse modeling of geophysical data. The main features of PSO are summarized, and the most important contributions in several geophysical fields are analyzed. The aim is to indicate the fundamental steps in the evolution of the PSO methodologies that have been adopted to model the Earth's subsurface, and then to undertake a critical evaluation of their benefits and limitations. Original works have been selected from the existing geophysical literature to illustrate successful PSO applications to the interpretation of electromagnetic (magnetotelluric and time-domain) data, gravimetric and magnetic data, self-potential, direct current, and seismic data. These case studies are critically described and compared. In addition, joint optimization of multiple geophysical data sets by means of multi-objective PSO is presented to highlight the advantage of using a single solver that deploys Pareto optimality to handle different data sets without conflicting solutions. Finally, we propose best practices for implementing a customized algorithm from scratch to perform stochastic inverse modeling of any kind of geophysical data set, for the benefit of PSO practitioners and inexperienced researchers.
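As a starting point for such a from-scratch implementation, a minimal global-best PSO for a generic misfit function might look as follows. The inertia and acceleration coefficients are common textbook defaults, not values prescribed by the review.

```python
import numpy as np

def pso(misfit, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        rng=None):
    if rng is None:
        rng = np.random.default_rng(1)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal best positions
    pbest_f = np.array([misfit(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()           # swarm (global) best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([misfit(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())
```

In a geophysical setting, `misfit` would measure the mismatch between observed data and the forward response of a candidate subsurface model; here any objective function works.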
Multi-agent control in local power supply systems with flexible generation and active electricity consumers
Relevance and description of the problem
An important problem is the unpredictability of dispersed generation units, whose operating modes are determined by meteorological conditions, together with the complexity of managing the interaction of active consumers with the elements of the power system. This creates new problems, above all a deterioration of conditions in the power system and negative consequences such as degraded electricity supply, imbalance across power system cross-sections, and significant reverse overloads in local networks, which can become unmanageable.
Thus, there is the problem of integrating a new "player" in the local electricity market, the active consumer, into the management of the power system. To do this, it is necessary to formulate and solve optimization problems for the interaction of electricity consumers with the power system, preserving the active consumer's total benefit from optimizing its own electricity consumption schedule and from possible income from electricity sales, while maintaining normal operation of the system.
Changing the traditional behavior of consumers to active behavior in Ukraine also requires solving a number of problems. Ukraine's energy system as a whole is centralized, built around high-capacity power plants and oriented toward classic, traditional electricity consumers. It is necessary to reconsider the participation of active consumers in providing services to the energy system, and to integrate into the network both active consumers and sources of dispersed generation, or combinations of the two, as part of a MAS, with the subsequent formation of units that integrate individual functions.
These problems can be solved by creating a problem-oriented multi-agent management system for local power supply systems, which allows responsibilities among consumers in the electricity market to be distributed at the agent level and their interaction with bidirectional electricity flows to be optimized, improving electricity quality, power supply quality and reliability, and the stable operation of the power system.
Developing and improving methods for integrating distributed generation sources into the power supply network, increasing the efficiency of flexible and distributed generation, and creating a multi-agent control system that uses the potential of active consumers to optimize energy consumption and energy efficiency in local power systems is, in general, an urgent scientific and technical task.
Purpose of research
Creation of scientific and applied foundations for organizing a local electricity market by developing theoretical bases, models, methods, and tools that ensure the effective organization and interaction of market elements, and review of new, additional issues that involve the use of prosumers at the local Smart Grid level.
Research method
The master's thesis is based on scientific research as its method. Data and formulas were collected for further processing into a more specific answer to the question of a multi-agent system. The neural network method, which is a component of the multi-agent system, is considered. Data on optimal energy consumption were collected and analyzed.
Object of research
The process of functioning of local power supply systems with complex hierarchical management; development and definition of algorithms for the optimal functioning of local systems with flexible generation and active consumers; development of visual graphs of electric load consumption, a mathematical model, and algorithms of optimal actions; construction of a MAS model based on neural networks and its operation in local Smart Grid systems.
Subject of research
The operating modes of local energy systems within liberalized local energy markets, approached according to the Smart Grid concept, with optimal consumption selected through an individual consumption algorithm for each technological object.
Scientific novelty
Based on my publications, some of the shortcomings of the modern power system have been identified. The scientific novelty lies in optimizing the modern energy system based on multi-agent management (Smart Grid): applying a modern scientific and technical neural network method to obtain an optimal energy consumption algorithm; using generation as the best way to adjust power load schedules; and giving examples of solving the problem of optimal energy consumption for technological processes and for the system itself.
Development of a visual model of the interaction between the main equipment of the active consumer and flexible generation, as well as the interaction between the active consumer and the power supply network.
Creation of a methodology for assessing the functioning of individual active consumer equipment in local power supply systems, including in the presence of flexible generation sources.
Improvement of methods for optimizing operating modes for specific equipment types to obtain the maximum benefit for participants and to choose their most effective modes of operation.
Research tasks
1. Conduct a literature review of scientific publications in the defined scientific direction using a library with open access to modern scientific databases.
2. Form a model of a local system that captures the interaction between agents, and solve the problem of building an aggregator.
3. Model the interaction of several sources of flexible generation and assess the possible operating modes of several installations operating simultaneously.
4. Analyze electromagnetic compatibility issues in order to eliminate power surges and possible overloads of power lines, and to evaluate the stability and quality of electricity and power supply as well as the processes between agents of local networks.
5. Develop adapted methods and algorithms for the MAS to be implemented in systems with flexible generation and active consumers, together with organizational and technical methods for managing local systems on the basis of the MAS, and develop algorithms for assessing the system's prospective development from the point of view of multiple variants and multiple criteria.
6. Optimize organizational and technical processes in systems with complex hierarchical management under modern technical and economic conditions.
7. Consider the optimization of a multi-agent network: the relationship between the neural network method and multi-agent control, and an algorithm for selecting optimal consumption using the neural network method. Give an example of a multi-agent system and a generation system for the optimal choice of electricity consumption, with illustrative diagrams, graphs, and drawings.
Expected results of the study
Development of methods for flexible generation sources in local power systems and methods for determining the potential of the active consumer (AC), based on algorithms for selecting the mode of operation.
Solution of the optimization problem for the functioning of local power supply systems, and consequently more efficient selection of optimal placement sites for flexible generation sources.
Improved decision-making algorithms in multi-agent systems for managing the behavior of active consumers, choosing the optimal mode of interaction based on the formulation and solution of a complex optimization problem, which allows problem-oriented algorithms for the MAS to be developed.
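As a concrete, simplified illustration of the consumption-schedule optimization discussed above (the function and parameter names are hypothetical; the thesis itself proposes neural-network-based MAS methods), a flexible load can be shifted into the cheapest hours of a known day-ahead price profile:

```python
import numpy as np

def schedule_flexible_load(prices, energy_kwh, max_kw):
    # Greedy fill: allocate the required energy to the cheapest hours first,
    # respecting the per-hour power limit of the flexible load. For this
    # linear cost structure the greedy allocation is cost-optimal.
    order = np.argsort(prices)
    schedule = np.zeros(len(prices))
    remaining = energy_kwh
    for h in order:
        take = min(max_kw, remaining)
        schedule[h] = take
        remaining -= take
        if remaining <= 0:
            break
    return schedule
```

An agent in a MAS could run such a routine locally against a broadcast price signal, with the aggregator coordinating so that the combined shifted load does not itself overload the local network.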
Integrated characterisation of mud-rich overburden sediment sequences using limited log and seismic data: Application to seal risk
Muds and mudstones are the most abundant sediments in sedimentary basins and can control fluid migration and pressure. In petroleum systems, they can also act as source, reservoir, or seal rocks. More recently, the sealing properties of mudstones have been used for nuclear waste storage and geological CO2 sequestration. Despite the growing importance of mudstones, their geological modelling is poorly understood, and clear quantitative studies are needed to address the 3D distribution of lithology and flow properties within these sediments. The key issues in this respect are the high degree of heterogeneity in mudstones and the alteration of lithology and flow properties with time and depth. In addition, field data (log and seismic) are often very limited and of lower quality within these sediments, which makes common geostatistical modelling practices ineffective.
In this study we quantitatively capture the flow-relevant characteristics of heterogeneous mud-rich sequences based on limited conventional log and post-stack seismic data in a deep offshore West African case study. Additionally, we develop a practical technique of log-seismic integration at the cross-well scale to translate 3D seismic attributes into lithology probabilities. The final products are probabilistic multiattribute transforms at different resolutions, which allow prediction of lithologies away from wells while preserving the important sub-seismic stratigraphic and structural flow features. As a key result, we introduce a seismically driven risk attribute, the so-called Seal Risk Factor (SRF), which showed robust correspondence to the lithologies within the seismic volume. High seismic SRFs were often a good approximation for volumes containing a higher percentage of coarser-grained and distorted sediments, and vice versa.
We believe that this is the first attempt at quantitative, integrated characterisation of mud-rich overburden sediment sequences using log and seismic data. Its application to modern seismic surveys can save days of processing and mapping time and can reduce exploration risk by basing decisions on seal texture and lithology probabilities.
Modeling and Simulation in Engineering
This book presents two types of chapters: chapters that focus on modeling with the necessary simulation details, and chapters that focus less on modeling but provide more simulation detail. It contains eleven chapters divided into two sections: Modeling in Continuum Mechanics and Modeling in Electronics and Engineering. We hope our book, "Modeling and Simulation in Engineering - Selected Problems", will serve as a useful reference for students, scientists, and engineers.
Stochastic mapping for chemical plume source localization with application to autonomous hydrothermal vent discovery
Thesis (Ph.D.), Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2007, by Michael V. Jakuba. Includes bibliographical references (p. 313-325).
This thesis presents a stochastic mapping framework for autonomous robotic chemical plume source localization in environments with multiple sources. Potential applications for robotic chemical plume source localization include pollution and environmental monitoring, chemical plant safety, search and rescue, anti-terrorism, narcotics control, explosive ordnance removal, and hydrothermal vent prospecting. Turbulent flows make the spatial relationship between the detectable manifestation of a chemical plume source, the plume itself, and the location of its source inherently uncertain. Search domains with multiple sources compound this uncertainty because neither the number of sources nor their locations is known a priori. Our framework for stochastic mapping is an adaptation of occupancy grid mapping in which the binary state of each map node is redefined to denote either the presence (occupancy) or absence of an active plume source. A key characteristic of the chemical plume source localization problem is that only a few sources are expected in the search domain. The occupancy grid framework allows both plume detections and non-detections to inform the estimated state of grid nodes in the map, thereby explicitly representing explored but empty portions of the domain as well as probable source locations. However, sparsity in the expected number of occupied grid nodes strongly violates a critical conditional independence assumption required by the standard Bayesian recursive map update rule. While that assumption makes for a computationally attractive algorithm, in our application it results in occupancy grid maps that are grossly inconsistent with the assumption of a small number of occupied cells.
To overcome this limitation, several alternative occupancy grid update algorithms are presented, including an exact solution that is computationally tractable for small numbers of detections and an approximate recursive algorithm with improved performance relative to the standard algorithm at equivalent computational cost. Application to hydrothermal plume data collected by the autonomous underwater vehicle ABE during vent prospecting operations in both the Pacific and Atlantic oceans verifies the utility of the approach. The resulting maps enable nested surveys for homing in on seafloor vent sites to be carried out autonomously, eliminating the inter-dive processing, battery recharging, and vehicle deployment and recovery time that would otherwise be necessary with survey design directed by human operators.
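For reference, the standard per-cell recursive Bayes update that the thesis critiques can be sketched as follows. The detection probabilities are illustrative, and it is exactly the cell-independence assumption baked into this update that breaks down when only a few occupied cells (sources) are expected.

```python
import numpy as np

def update_log_odds(log_odds, cell, detection, p_det_occ=0.7, p_det_emp=0.1):
    # Standard recursive Bayes update, one cell at a time in log-odds form:
    # a detection raises, and a non-detection lowers, the odds that the
    # cell contains an active plume source. Each cell is updated
    # independently of all others (the assumption at issue).
    if detection:
        ratio = p_det_occ / p_det_emp
    else:
        ratio = (1.0 - p_det_occ) / (1.0 - p_det_emp)
    log_odds[cell] += np.log(ratio)
    return log_odds

def occupancy_probability(log_odds):
    # Convert log-odds back to occupancy probability.
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Because non-detections also carry information, surveyed-but-empty regions are pushed below the 0.5 prior rather than left unknown, which is what lets the map distinguish explored empty space from probable source locations.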
Population-based algorithms for improved history matching and uncertainty quantification of Petroleum reservoirs
In modern field management practices, there are two important steps that shed light on a multimillion-dollar investment. The first step is history matching, where the simulation model is calibrated to reproduce the historical observations from the field. In this inverse problem, different geological and petrophysical properties may provide equally good history matches, and such diverse models are likely to show different production behaviors in the future. This ties history matching to the second step, uncertainty quantification of predictions. Multiple history-matched models are essential for a realistic uncertainty estimate of future field behavior. These two steps facilitate decision making and have a direct impact on the technical and financial performance of oil and gas companies.
Population-based optimization algorithms have recently enjoyed growing popularity for solving engineering problems. Population-based systems work with a group of individuals that cooperate and communicate to accomplish a task that is normally beyond the capabilities of each individual. These individuals are deployed with the aim of solving the problem with maximum efficiency.
This thesis introduces the application of two novel population-based algorithms to history matching and uncertainty quantification of petroleum reservoir models. Ant colony optimization and differential evolution algorithms are used to search the parameter space for multiple history-matched models and, using a Bayesian framework, the posterior probability of the models is evaluated for prediction of reservoir performance.
It is demonstrated that by bringing in the latest developments in computer science, such as ant colony optimization, differential evolution, and multiobjective optimization, we can improve history matching and uncertainty quantification frameworks. This thesis provides insights into the performance of these algorithms in history matching and prediction and develops an understanding of their tuning parameters. The research also includes a comparative study of these methods with a benchmark technique, the Neighbourhood Algorithm. This comparison reveals the superiority of the proposed methodologies in areas such as computational efficiency and match quality.
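A minimal sketch of one of the two algorithms, differential evolution, applied to a generic misfit function. The control parameters F and CR are common defaults, not the tuned values studied in the thesis, and a real history-matching misfit would wrap a reservoir simulator rather than an analytic function.

```python
import numpy as np

def differential_evolution(misfit, bounds, n_pop=20, iters=100, F=0.8, CR=0.9,
                           rng=None):
    if rng is None:
        rng = np.random.default_rng(7)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (n_pop, dim))          # candidate models
    fit = np.array([misfit(p) for p in pop])
    for _ in range(iters):
        for i in range(n_pop):
            # Mutation: combine three distinct members (DE/rand/1 scheme).
            others = [j for j in range(n_pop) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover between mutant and current member.
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it matches better.
            f = misfit(trial)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()], float(fit.min())
```

For uncertainty quantification, one would retain the whole final population (and intermediate good matches) rather than only the best member, then weight them in a Bayesian framework as the thesis describes.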
Modeling complex systems using clustering methods and hybridization of intelligent techniques
This research work addresses the study and development of a hybrid modeling system that combines standard clustering methods with regression algorithms. The proposal divides the problem of modeling a system into a set of local models, so that zones with similar behavior can be defined more precisely. During the regression stage, several techniques are applied to each of the groups in order to achieve the best approximation in the resulting local models. The hybrid model is therefore formed by the set of all these local models. This novel proposal yields highly satisfactory results in all the real processes to which it has been applied.
The developed system has been validated on three different real cases. In the first, the hybrid model is used to obtain or predict the value that a sensor should measure in order to perform fault detection; the real application uses the BIS signal, which indicates the degree of hypnosis of a sedated patient. In the second, the proposed model is used to create a virtual sensor, obtaining the value of one variable from others; the real application, in this case, is a sensor for monitoring the state of charge of a battery. In the last case, the hybrid model is used to predict the values of variables at a future time, after the moment at which the model is executed; the real application here is predicting the value of the ANI signal used in surgical operations, which is an indicator of the pain patients experience during an intervention.
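The hybrid scheme described above can be sketched as follows, with k-means standing in for the clustering stage and ordinary least squares for the regression stage (a minimal illustration, not the system developed in the thesis):

```python
import numpy as np

def fit_local_models(X, y, n_clusters=2, iters=20):
    # Clustering stage: a small Lloyd's-algorithm k-means, with initial
    # centers spread across the data, defines zones of similar behavior.
    idx = np.linspace(0, len(X) - 1, n_clusters).astype(int)
    centers = X[idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    # Regression stage: one linear local model per cluster.
    models = []
    for k in range(n_clusters):
        mask = labels == k
        Xk = np.c_[np.ones(mask.sum()), X[mask]]       # bias column
        coef, *_ = np.linalg.lstsq(Xk, y[mask], rcond=None)
        models.append(coef)
    return centers, models

def predict(x, centers, models):
    # Route the query to the local model of its nearest cluster.
    k = int(np.argmin(((centers - x) ** 2).sum(-1)))
    return float(np.r_[1, x] @ models[k])
```

The hybrid model is the collection of local models plus the routing rule; in the thesis, several regression techniques compete per cluster and the best approximation is kept, rather than the single least-squares fit used here.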