
    Modelling discrepancy in Bayesian calibration of reservoir models

    Simulation models of physical systems such as oil field reservoirs are subject to numerous uncertainties, such as observation errors and inaccurate initial and boundary conditions. However, even after accounting for these uncertainties, a mismatch between the simulator output and the observations usually remains and the model is still inadequate. This inability of computer models to reproduce real-life processes is referred to as model inadequacy. This thesis presents a comprehensive framework for modelling discrepancy in the Bayesian calibration and probabilistic forecasting of reservoir models. The framework efficiently implements data-driven approaches to handle the uncertainty caused by ignoring modelling discrepancy in reservoir predictions, using two major hierarchical strategies: parametric and non-parametric hierarchical models. The central focus of this thesis is an appropriate way of modelling discrepancy and the importance of model selection in controlling overfitting, rather than different solutions to different noise models. The thesis employs a model selection code to obtain the best candidate solutions for the form of the non-parametric error models. This enables us to, first, interpolate the error in the history period and, second, propagate it towards unseen data (i.e. error generalisation). The error models constructed by inferring the parameters of the selected models can predict the response variable (e.g. oil rate) at any point in the input space (e.g. time) with a corresponding generalisation uncertainty. In real field applications, the error models reliably track the uncertainty regardless of the type of sampling method and achieve a better model prediction score than models that ignore the discrepancy. All the case studies confirm the improvement in the prediction of field variables when the discrepancy is modelled. As for the model parameters, hierarchical error models give less global bias with respect to the reference case. However, in the considered case studies, the evidence for better prediction of each of the model parameters by error modelling is inconclusive.
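    The non-parametric error modelling idea can be illustrated with a minimal sketch: a Gaussian process is fitted to the residuals between simulator output and observations over the history period and then extrapolates the discrepancy, with uncertainty, into the forecast period. The simulator stand-in, the data and the kernel choice below are hypothetical placeholders, not the models used in the thesis.

```python
# Hedged sketch: Gaussian-process discrepancy model for a reservoir simulator.
# The "simulator" and the observation data are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def simulator_oil_rate(t):
    """Stand-in for the calibrated reservoir simulator output (oil rate vs. time)."""
    return 100.0 * np.exp(-0.05 * t)

# History period: observed rates differ from the simulator by a structured discrepancy.
t_hist = np.linspace(0.0, 36.0, 37)                      # months of history
observed = simulator_oil_rate(t_hist) + 8.0 * np.sin(t_hist / 6.0) \
           + np.random.normal(0.0, 1.0, t_hist.size)     # observation noise
residual = observed - simulator_oil_rate(t_hist)          # discrepancy to be modelled

# Fit a GP to the residuals (the non-parametric error model).
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t_hist.reshape(-1, 1), residual)

# Forecast period: propagate the discrepancy to unseen times with uncertainty
# (error generalisation), and correct the simulator forecast accordingly.
t_fore = np.linspace(36.0, 60.0, 25).reshape(-1, 1)
mean_err, std_err = gp.predict(t_fore, return_std=True)
corrected_forecast = simulator_oil_rate(t_fore.ravel()) + mean_err
print(corrected_forecast[:3], std_err[:3])
```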

    autoAx: An Automatic Design Space Exploration and Circuit Building Methodology utilizing Libraries of Approximate Components

    Approximate computing is an emerging paradigm for developing highly energy-efficient computing systems such as various accelerators. In the literature, many libraries of elementary approximate circuits have already been proposed to simplify the design process of approximate accelerators. Because these libraries contain tens to thousands of approximate implementations of a single arithmetic operation, it is intractable to find an optimal combination of approximate circuits from a library, even for an application consisting of a few operations. An open problem is "how to effectively combine circuits from these libraries to construct complex approximate accelerators". This paper proposes a novel methodology for searching, selecting and combining the most suitable approximate circuits from a set of available libraries to generate an approximate accelerator for a given application. To enable fast design space generation and exploration, the methodology uses machine learning techniques to create computational models that estimate the overall quality of processing and the hardware cost without performing full synthesis at the accelerator level. Using the methodology, we construct hundreds of approximate accelerators (for a Sobel edge detector) showing different but relevant trade-offs between quality of processing and hardware cost, and identify the corresponding Pareto frontier. Furthermore, when searching for approximate implementations of a generic Gaussian filter consisting of 17 arithmetic operations, the proposed approach allows us to identify approximately 10^3 highly important implementations from 10^23 possible solutions in a few hours, whereas an exhaustive search would take four months on a high-end processor. Comment: Accepted for publication at the Design Automation Conference 2019 (DAC'19), Las Vegas, Nevada, US.
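    A minimal sketch of the surrogate-assisted exploration idea: regressors trained on a small subset of fully synthesised/simulated configurations predict quality and hardware cost for all remaining candidates, and a Pareto filter keeps the non-dominated ones. The candidate encoding, training data and proxy metrics below are synthetic placeholders, not the actual autoAx flow.

```python
# Hedged sketch of surrogate-assisted design space exploration (autoAx-style idea).
# Candidate encoding, labels and metrics are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each candidate accelerator = a choice of approximate circuit per operation,
# encoded here as a vector of per-operation library indices (placeholder encoding).
n_candidates, n_ops = 20000, 9
configs = rng.integers(0, 50, size=(n_candidates, n_ops)).astype(float)

# Pretend a small random subset was fully synthesised/simulated to obtain ground truth.
train_idx = rng.choice(n_candidates, size=500, replace=False)
true_quality = configs.sum(axis=1) + rng.normal(0, 5, n_candidates)       # e.g. PSNR proxy
true_cost = configs.max(axis=1) * 3.0 + rng.normal(0, 2, n_candidates)    # e.g. area proxy

# Train fast surrogate models for quality of processing and hardware cost.
q_model = RandomForestRegressor(n_estimators=100).fit(configs[train_idx], true_quality[train_idx])
c_model = RandomForestRegressor(n_estimators=100).fit(configs[train_idx], true_cost[train_idx])
pred_q = q_model.predict(configs)
pred_c = c_model.predict(configs)

def pareto_front(quality, cost):
    """Indices of non-dominated points (maximise quality, minimise cost)."""
    order = np.argsort(-quality)           # best quality first
    front, best_cost = [], np.inf
    for i in order:
        if cost[i] < best_cost:            # strictly cheaper than any better-quality point
            front.append(i)
            best_cost = cost[i]
    return np.array(front)

front = pareto_front(pred_q, pred_c)
print(f"{front.size} predicted Pareto-optimal candidates out of {n_candidates}")
```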

    Composite laminate tailoring with probabilistic constraints and loads

    A reliability-based structural synthesis procedure was developed to tailor laminates to meet reliability-based (ply) strength requirements and achieve desirable laminate responses. The main thrust is to demonstrate how to integrate the optimization technique into the composite laminate tailoring process to meet reliability design requirements. The question of reliability arises in fiber composite analysis and design because of the inherent scatter observed in the constituent (fiber and matrix) material properties during experimentation. Symmetric and asymmetric composite laminates subject to mechanical loadings are considered as application examples. These examples illustrate the effectiveness and ease with which reliability considerations can be integrated into the design optimization model for composite laminate tailoring.
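    The role of a reliability constraint in such a tailoring problem can be sketched with a simple Monte Carlo check: scatter in the constituent properties is sampled, the resulting ply stress margin is evaluated, and a candidate design is acceptable only if the estimated failure probability stays below a target. The property distributions and the strength model below are illustrative placeholders, not the procedure used in the paper.

```python
# Hedged sketch: Monte Carlo evaluation of a reliability-based ply-strength constraint.
# Distributions and the mechanics model are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

def ply_stress(thickness_mm, applied_load_kN):
    """Placeholder mechanics model: in-plane stress in a ply (MPa)."""
    return applied_load_kN * 1e3 / (thickness_mm * 100.0)  # load / (thickness * width)

def failure_probability(thickness_mm, applied_load_kN, n_samples=100_000):
    """Estimate P(stress > strength) given scatter in material properties and load."""
    strength = rng.normal(loc=600.0, scale=60.0, size=n_samples)                 # MPa
    load = rng.normal(loc=applied_load_kN, scale=0.05 * applied_load_kN, size=n_samples)
    stress = ply_stress(thickness_mm, load)
    return np.mean(stress > strength)

target_pf = 1e-3
for t in (1.0, 1.5, 2.0):                     # candidate ply thicknesses (mm)
    pf = failure_probability(t, applied_load_kN=80.0)
    status = "OK" if pf <= target_pf else "violates reliability constraint"
    print(f"thickness {t:.1f} mm: Pf ~ {pf:.1e} ({status})")
```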

    Cooperative Navigation for Low-bandwidth Mobile Acoustic Networks.

    This thesis reports on the design and validation of estimation and planning algorithms for underwater vehicle cooperative localization. While attitude and depth are easily instrumented with bounded error, autonomous underwater vehicles (AUVs) have no internal sensor that directly observes XY position. The global positioning system (GPS) and other radio-based navigation techniques are not available because of the strong attenuation of electromagnetic signals in seawater. The navigation algorithms presented herein fuse local body-frame rate and attitude measurements with range observations between vehicles within a decentralized architecture. The acoustic communication channel is both unreliable and low bandwidth, precluding many state-of-the-art terrestrial cooperative navigation algorithms. We exploit the underlying structure of a post-process centralized estimator in order to derive two real-time decentralized estimation frameworks. First, the origin state method enables a client vehicle to exactly reproduce the corresponding centralized estimate within a server-to-client vehicle network. Second, a graph-based navigation framework produces an approximate reconstruction of the centralized estimate onboard each vehicle. Finally, we present a method to plan a locally optimal server path to localize a client vehicle along a desired nominal trajectory. The planning algorithm introduces a probabilistic channel model into prior Gaussian belief space planning frameworks. In summary, cooperative localization reduces XY position error growth within underwater vehicle networks. Moreover, these methods remove the reliance on static beacon networks, which do not scale to large vehicle networks and limit the range of operations. Each proposed localization algorithm was validated in full-scale AUV field trials. The planning framework was evaluated through numerical simulation. PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113428/1/jmwalls_1.pd
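    A compact sketch of the core fusion step: a client vehicle dead-reckons its XY position from body-frame speed and heading, and an extended Kalman filter update with an acoustic range to a server vehicle at a known (e.g. GPS-derived) position bounds the position error growth. The motion model and noise values are illustrative placeholders, not the estimators developed in the thesis.

```python
# Hedged sketch: EKF range update for underwater cooperative localization.
# Motion model and noise levels are illustrative placeholders.
import numpy as np

# Client state: XY position, dead-reckoned from velocity/heading between range fixes.
x = np.array([0.0, 0.0])          # position estimate (m)
P = np.diag([1.0, 1.0])           # position covariance (m^2)
Q = np.diag([0.5, 0.5])           # process noise added per prediction step
R = np.array([[4.0]])             # acoustic range measurement variance (m^2)

def predict(x, P, velocity, heading, dt):
    """Dead-reckoning prediction from body-frame speed and heading."""
    x = x + dt * velocity * np.array([np.cos(heading), np.sin(heading)])
    P = P + Q                      # uncertainty grows without external fixes
    return x, P

def range_update(x, P, server_pos, measured_range):
    """EKF update with an acoustic range to a server vehicle at a known position."""
    dx = x - server_pos
    predicted_range = np.linalg.norm(dx)
    H = (dx / predicted_range).reshape(1, 2)          # Jacobian of range w.r.t. position
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ np.array([[measured_range - predicted_range]])).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# One prediction step followed by a range fix from the server vehicle.
x, P = predict(x, P, velocity=1.5, heading=np.deg2rad(30.0), dt=10.0)
x, P = range_update(x, P, server_pos=np.array([100.0, 20.0]), measured_range=95.0)
print(x, np.sqrt(np.diag(P)))
```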

    Artificial intelligence (AI) methods in optical networks: A comprehensive survey

    Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving the performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are reviewed, including topics such as optical network planning and operation in both transport and access networks. Finally, the paper presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future. Ministerio de Economía, Industria y Competitividad (Projects EC2014-53071-C3-2-P, TEC2015-71932-REDT).

    Automated Design of Approximate Accelerators

    Over the last decade, the need for computational efficiency has motivated the development of new devices, architectures and design techniques. Approximate computing has emerged as a modern, energy-efficient design paradigm for applications that exhibit inherent error tolerance. When the accuracy of results in current applications such as image processing, computer vision and machine learning is reduced to an acceptable level, savings in circuit area, circuit delay and power consumption can be achieved. With the advent of this approximate computing paradigm, many approximate functional units have been reported in the literature, in particular approximate adders and multipliers. Given the large number of such approximate circuits, and considering their use as building blocks for the design of approximate accelerators for error-tolerant applications, a challenge arises: selecting, for a given application, those approximate circuits that minimise the required resources while meeting a defined accuracy. This dissertation proposes automated methods for designing and implementing approximate accelerators built from approximate arithmetic circuits. To achieve this, the dissertation addresses the following challenges and provides the following novel contributions. Many approximate adders and multipliers have been presented in the literature, either as approximate designs derived from exact implementations such as the ripple-carry adder, or generated by approximate logic synthesis (ALS) methods. A representative set of these approximate components is required to build approximate accelerators. To this end, the dissertation presents two approaches for creating such approximate arithmetic circuits. First, AUGER is presented, a tool that generates register-transfer level (RTL) descriptions of a broad set of approximate adders and multipliers for different data bit-widths and accuracy configurations. With AUGER, a design space exploration (DSE) of approximate components can be performed to find those that are Pareto-optimal for a given bit-width, approximation range and circuit metric. Then, AxLS is presented, a framework for ALS that allows state-of-the-art methods to be implemented, and new ones to be proposed, for performing structural netlist transformations and generating approximate arithmetic circuits from exact ones. In addition, both tools provide an error characterisation, in the form of an error distribution, together with circuit properties (area, delay and power) for every approximate circuit they generate. This information is essential for the research goal of this dissertation. Despite their error tolerance, approximate accelerators must be designed to meet accuracy specifications.
    When designing such accelerators using approximate arithmetic circuits, it is therefore essential to evaluate how the errors introduced by approximate circuits propagate through other computations, whether exact or approximate, and finally accumulate at the output. This dissertation proposes analytical models to describe error propagation through exact and approximate computations. Based on these models, an automated, compiler-based methodology is proposed to estimate error propagation in approximate accelerator designs. This methodology is integrated into a tool, CEDA, to perform fast, simulation-free accuracy estimation of approximate accelerator models described in C code. When designing approximate accelerators, repeated gate-level simulations and circuit synthesis require considerable time to explore many, or even all, possible combinations of a given set of approximate arithmetic circuits. At the same time, current trends in accelerator design rely on high-level synthesis (HLS) tools. This dissertation presents analytical models to estimate the required hardware resources when approximate adders and multipliers are used in approximate accelerator designs. Furthermore, these models, together with the proposed analytical models for accuracy estimation, are integrated into a DSE methodology for error-tolerant applications, DSEwam, to identify Pareto-optimal or near Pareto-optimal approximate accelerator solutions. DSEwam is integrated into an HLS tool to automatically generate RTL descriptions of approximate accelerators from C-language descriptions for a given error threshold and minimisation target. The use of approximate accelerators must ensure that the errors produced by approximate computations remain within a defined maximum value for a given accuracy metric. However, the errors produced by approximate accelerators depend on the input data, which may differ from the data used during design. This dissertation therefore presents ECAx, an automated method for exploring and applying fine-grained, low-overhead error correction in approximate accelerators, in order to reduce the cost of error correction at the software level (as is done in the literature). This is achieved by selectively correcting the most significant errors (in terms of magnitude) produced by the approximate components, without losing the benefits of the approximations. The experimental evaluation shows application speed-up improvements in exchange for a slight increase in area and power consumption in the approximate accelerator design.
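    The flavour of the analytical error-propagation idea can be sketched as follows: each approximate operator contributes an error with a known mean and variance (taken from its characterised error distribution), these statistics are propagated analytically through a small dataflow under an independence assumption, and the result is checked against a Monte Carlo simulation. The component error statistics and the dataflow are illustrative placeholders, not the models used by CEDA.

```python
# Hedged sketch: propagating error statistics through a tiny approximate dataflow
# y = (a + b) * c, where "+" and "*" are approximate operators with characterised,
# additive error distributions. All values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)

# Characterised error statistics of the approximate components (placeholders).
add_err_mean, add_err_var = 0.6, 4.0      # approximate adder: E[e_add], Var[e_add]
mul_err_mean, mul_err_var = -1.2, 9.0     # approximate multiplier (additive error model)

# Input operands (roughly known ranges); c is treated as a constant coefficient.
a = rng.uniform(0, 255, 100_000)
b = rng.uniform(0, 255, 100_000)
c = 3.0

# Analytical propagation assuming independent, additive errors:
#   y_approx   = ((a + b) + e_add) * c + e_mul
#   E[y_err]   = c * E[e_add] + E[e_mul]
#   Var[y_err] = c^2 * Var[e_add] + Var[e_mul]
pred_mean = c * add_err_mean + mul_err_mean
pred_var = c**2 * add_err_var + mul_err_var

# Monte Carlo check with sampled component errors.
e_add = rng.normal(add_err_mean, np.sqrt(add_err_var), a.size)
e_mul = rng.normal(mul_err_mean, np.sqrt(mul_err_var), a.size)
err = (((a + b) + e_add) * c + e_mul) - (a + b) * c

print(f"analytical: mean={pred_mean:.2f}, var={pred_var:.2f}")
print(f"simulated : mean={err.mean():.2f}, var={err.var():.2f}")
```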

    Methodology to estimate the chance of success of a 4D seismic project from the reservoir engineering point of view

    Advisor: Denis José Schiozer. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências. Abstract: Production of hydrocarbons is a high-risk business. The uncertainties inherent to production are related to the uncertainties in the physical state of the reservoir and to external variables. Reservoir uncertainty can be reduced as new production and dynamic data become available. 4D seismic technology has been used in the petroleum industry because the integration of geophysical and engineering information increases the predictive capability of reservoir simulations. However, there are technical issues to be addressed before starting a 4D seismic project. Several geophysical studies use the chance-of-success concept to identify favorable cases, evaluating the seismic survey and the magnitude of seismic changes. From the engineering point of view, it is important to evaluate the impact of new information on field operations and the consequent monetary benefit. Estimating the chance of success of 4D seismic data before their acquisition is a challenge. Therefore, this thesis presents a methodology to estimate the chance of success of a 4D seismic project from the reservoir engineering perspective. The methodology was developed in three phases.
    The first phase shows that the water saturation error can measure the improvement in the understanding of fluid behavior due to 4D seismic data. Moreover, it shows that the time at which 4D seismic data are acquired affects their value. The second phase presents the methodology to estimate the best time to acquire 4D seismic data; the best time is determined by evaluating the time to water breakthrough and the water saturation error curves. Finally, the chance-of-success methodology is presented. It is a simple, iterative process divided into six steps, some of which are well established in the literature. The thesis incorporates the date of 4D seismic data acquisition into the process and assesses the chance of success through the variation in the economic benefit caused by the reservoir uncertainties. The methodology was applied to a synthetic reservoir model, illustrating the procedure to estimate the expected value of information and the probability of success. Doctorate, Reservoirs and Management (Reservatórios e Gestão), Doctor of Petroleum Sciences and Engineering.
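    A minimal illustration of the water-saturation error measure used in the first phase: the mismatch between each model realization's saturation map and a reference (e.g. seismic-derived) map is summarised as an RMS error, and the reduction of that error across the models retained after assimilating the 4D data indicates the improvement in understanding of fluid movement. The grids, values and selection rule below are synthetic placeholders, not the thesis's case study.

```python
# Hedged sketch: RMS water-saturation error of reservoir model realizations
# against a reference (e.g. 4D-seismic-derived) saturation map. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
nx, ny, n_models = 50, 50, 200

# Reference water-saturation map (placeholder for the seismic-derived estimate).
ref_sw = np.clip(rng.normal(0.55, 0.10, (nx, ny)), 0.0, 1.0)

# Prior ensemble of model realizations (placeholder perturbations of the reference).
prior_sw = np.clip(ref_sw + rng.normal(0.0, 0.15, (n_models, nx, ny)), 0.0, 1.0)

def rms_sw_error(sw_models, sw_ref):
    """RMS water-saturation error of each realization against the reference map."""
    return np.sqrt(np.mean((sw_models - sw_ref) ** 2, axis=(1, 2)))

prior_err = rms_sw_error(prior_sw, ref_sw)
# Stand-in for the subset of realizations retained after assimilating 4D data.
retained = prior_sw[np.argsort(prior_err)[: n_models // 5]]
posterior_err = rms_sw_error(retained, ref_sw)

print(f"mean Sw error before: {prior_err.mean():.3f}, after: {posterior_err.mean():.3f}")
```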

    Novel methods for active reservoir monitoring and flow rate allocation of intelligent wells

    The value added by intelligent wells (I-wells) derives from real-time reservoir and production performance monitoring together with zonal, downhole flow control. Unfortunately, downhole sensors that can directly measure the zonal flow rates and phase cuts required for optimal control of the well's producing zones are not normally installed. Instead, the zonal Multi-phase Flow Rates (MPFRs) are calculated from indirect measurements (e.g. zonal pressures, temperatures and the total well flow rate), an approach known as soft-sensing. To date, all published techniques for zonal flow rate allocation in multi-zone I-wells are "passive" in that they calculate the parameters required to estimate MPFRs for a fixed, given configuration of the completion. These techniques are subject to model error, but also to errors stemming from measurement noise when there is insufficient data duplication for accurate parameter estimation. This thesis describes an "active" soft-sensing technique consisting of two sequential optimisation steps. The first step calculates MPFRs, while the second uses a direct search method based on Deformed Configurations to optimise the sequence of Interval Control Valve positions during a routine multi-rate test in an I-well. This novel approach maximises the accuracy of the calculated reservoir properties and MPFRs. Four "active monitoring" levels are discussed, each using a particular combination of available indirect measurements from well performance monitoring systems. Level one is the simplest, requiring a minimal amount of well data. The higher levels require more data but provide, in return, a greater understanding of the produced fluid volumes and the reservoir's properties at both the well and zonal level. Such estimation of the reservoir parameters and MPFRs in I-wells is essential for effective well control strategies to optimise production volumes. An integrated control and monitoring (ICM) workflow is proposed which employs the active soft-sensing algorithm, modified to maximise I-well oil production via real-time zonal production control based on estimates of zonal reservoir properties and their updates. Analysis of the convergence rate of the ICM workflow for different objective functions shows that very accurate zonal properties are not required to optimise oil production. The proposed reservoir monitoring and MPFR allocation workflow may also be used for designing in-well monitoring systems, i.e. to predict which combination of sensors, along with their measurement quality, is required for effective well and reservoir monitoring.
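    The passive soft-sensing step can be sketched as a small inverse problem: given measured zonal pressure drops across interval control valves with a known valve characteristic, the zonal flow rates are recovered by inverting the valve model and then reconciled with the measured total well rate. The valve model, coefficients and measurements below are illustrative placeholders, not the algorithm developed in the thesis.

```python
# Hedged sketch: estimating zonal flow rates (soft-sensing) from ICV pressure drops
# and reconciling them with the measured total well rate. Placeholder valve model.
import numpy as np

rng = np.random.default_rng(4)

def icv_dp(q, cv):
    """Placeholder valve characteristic: pressure drop ~ (q / Cv)^2."""
    return (q / cv) ** 2

# "True" zonal rates and valve coefficients, used only to fabricate noisy measurements.
true_q = np.array([420.0, 310.0, 150.0])          # zonal rates (stb/d)
cv = np.array([30.0, 25.0, 18.0])                 # valve coefficients at current positions

dp_meas = icv_dp(true_q, cv) * (1 + rng.normal(0, 0.02, 3))   # noisy zonal pressure drops
q_total_meas = true_q.sum() * (1 + rng.normal(0, 0.01))        # noisy total well rate

# Step 1: invert the valve model zone by zone.
q_est = cv * np.sqrt(np.maximum(dp_meas, 0.0))

# Step 2: reconcile with the total-rate measurement (simple proportional closure).
q_est *= q_total_meas / q_est.sum()

print("estimated zonal rates:", np.round(q_est, 1))
print("true zonal rates     :", true_q)
```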