
    [Research activities in applied mathematics, fluid mechanics, and computer science]

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period April 1, 1995 through September 30, 1995.

    Novel sampling techniques for reservoir history matching optimisation and uncertainty quantification in flow prediction

    Modern reservoir management has an increasing focus on accurately predicting the likely range of field recoveries. A variety of assisted history matching techniques has been developed across the research community concerned with this topic. These techniques are based on obtaining multiple models that closely reproduce the historical flow behaviour of a reservoir. The resulting set of history-matched models is then used to quantify uncertainty in predicting the future performance of the reservoir and to provide economic evaluations for different field development strategies. The key step in this workflow is to employ algorithms that sample the parameter space in an efficient but appropriate manner. The algorithm choice has an impact on how fast a model is obtained and how well the model fits the production data. The sampling techniques that have been developed to date include, among others, gradient-based methods, evolutionary algorithms, and the ensemble Kalman filter (EnKF). This thesis has investigated and further developed the following sampling and inference techniques: Particle Swarm Optimisation (PSO), Hamiltonian Monte Carlo, and Population Markov Chain Monte Carlo. The inspected techniques have the capability of navigating the parameter space and producing history-matched models that can be used to quantify the uncertainty in the forecasts in a faster and more reliable way. The analysis of these techniques, compared with the Neighbourhood Algorithm (NA), has shown how the different techniques affect the predicted recovery from petroleum systems and the benefits of the developed methods over the NA. The history matching problem is multi-objective in nature, with the production data possibly consisting of multiple types, coming from different wells, and collected at different times. Multiple objectives can be constructed from these data and be explicitly optimised in the multi-objective scheme.
    The thesis has extended PSO to handle multi-objective history matching problems in which a number of possibly conflicting objectives must be satisfied simultaneously. The benefits and efficiency of the innovative multi-objective particle swarm scheme (MOPSO) are demonstrated for synthetic reservoirs. It is demonstrated that the MOPSO procedure can provide a substantial improvement in finding a diverse set of good-fitting models with fewer of the very costly forward simulation runs than the standard single-objective case, depending on how the objectives are constructed. The thesis has also shown how to tackle a large number of unknown parameters through the coupling of high-performance global optimisation algorithms, such as PSO, with model reduction techniques such as kernel principal component analysis (PCA) for parameterising spatially correlated random fields. The results of the PSO-PCA coupling applied to a recent SPE benchmark history matching problem have demonstrated that the approach is indeed applicable to practical problems. A comparison of PSO with the EnKF data assimilation method was carried out and concluded that both methods obtained comparable results on the example case. This point reinforces the need for using a range of assisted history matching algorithms for more confidence in predictions.
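As an illustrative sketch (not the thesis implementation), the core global-best PSO update that drives this kind of sampling can be written in a few lines. The objective below is a toy two-parameter misfit standing in for a real history-matching objective; all function names, parameter values, and bounds are assumptions for the demo.

```python
import random

def pso_minimise(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over a box with a basic global-best particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clamp to box
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy two-parameter "misfit" standing in for a history-match objective
best, best_val = pso_minimise(lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2,
                              bounds=[(-1.0, 1.0), (-1.0, 1.0)])
```

A multi-objective variant replaces the single `gbest` with an archive of non-dominated solutions, which is where MOPSO departs from this single-objective skeleton.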

    A Partitioning Approach for Parallel Simulation of AC-Radial Shipboard Power Systems

    An approach to parallelize the simulation of AC-Radial Shipboard Power Systems (SPSs) using multicore computers is presented. Time domain simulations of SPSs are notoriously slow, due principally to the number of components and the time-variance of the component models. A common approach to reduce the simulation run-time of power systems is to formulate the electrical network equations using modified nodal analysis, use Bergeron's travelling-wave transmission line model to create subsystems, and parallelize the simulation using a distributed computer. In this work, an SPS was formulated using loop analysis, the subsystems were defined using a diakoptics-based approach, and the simulation was parallelized using a multicore computer. A program was developed in C# to conduct multithreaded parallel-sequential simulations of an SPS. The program first represents an SPS as a graph, and then partitions the graph. Each graph partition represents an SPS subsystem and is computationally balanced using iterative refinement heuristics. Once balanced subsystems are obtained, each SPS subsystem's electrical network equations are formulated using loop analysis. Each SPS subsystem is solved using a unique thread, and each thread is manually assigned to a core of a multicore computer. To validate the partitioning approach, performance metrics were created to assess the speed gain and accuracy of the partitioned SPS simulations. The simulation parameters swept for the performance metrics were the number of partitions, the number of cores used, and the time step increment. The results of the performance metrics showed adequate speed gains with negligible error. An increasing simulation speed gain was observed when the number of partitions and cores were augmented, obtaining maximum speed gains of <30x when using a quadcore computer. Results show that the speed gain is more sensitive to the number of partitions than to the number of cores.
    While multicore computers are suitable for parallel-sequential SPS simulations, increasing the number of cores does not contribute to the gain in speed as much as partitioning does. The simulation error increased with the simulation time step but did not influence the partitioned simulation results. The number of operations caused by protective devices was used to determine whether the error introduced by partitioning SPS simulations produced inconsistent system behavior. It is shown, for the time step sizes used, that protective devices did not operate inadvertently, which indicates that the errors did not alter RMS measurements and, hence, were not influential.
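The partition-then-refine step described above can be illustrated with a minimal sketch. This is not the C# program from the work; the greedy move rule below is only a simplified stand-in for the iterative refinement heuristics mentioned (in the spirit of Kernighan-Lin), and the graph, function names, and balance rule are assumptions.

```python
from collections import defaultdict

def edge_cut(adj, part):
    """Number of edges whose endpoints lie in different parts."""
    return sum(1 for u in adj for v in adj[u] if u < v and part[u] != part[v])

def refine_partition(adj, part, k, passes=5):
    """Greedy refinement: move a node to a neighbouring part when that strictly
    reduces the edge cut and does not worsen the size balance."""
    sizes = defaultdict(int)
    for v in part:
        sizes[part[v]] += 1
    for _ in range(passes):
        moved = False
        for v in adj:
            src = part[v]
            neigh = defaultdict(int)         # how many of v's neighbours sit in each part
            for u in adj[v]:
                neigh[part[u]] += 1
            for dst in range(k):
                if dst == src:
                    continue
                gain = neigh[dst] - neigh[src]   # cut edges removed minus cut edges created
                if gain > 0 and sizes[dst] <= sizes[src]:
                    part[v] = dst
                    sizes[src] -= 1
                    sizes[dst] += 1
                    moved = True
                    break
        if not moved:
            break
    return part

# Two 4-node cliques joined by a single edge: the ideal 2-way cut is that edge.
edges = ([(u, v) for u in range(4) for v in range(u + 1, 4)]
         + [(u, v) for u in range(4, 8) for v in range(u + 1, 8)]
         + [(0, 4)])
adj = {v: [] for v in range(8)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
part = refine_partition(adj, {v: v % 2 for v in adj}, k=2)  # round-robin start
```

In the actual workflow each resulting part would then be handed to its own solver thread; here only the balancing heuristic is shown.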

    Time-domain Modeling of Light Matter Interactions in Active Plasmonic Metamaterials

    Metamaterials are artificially engineered to obtain unprecedented electromagnetic control, leading to new and exciting applications. In order to further the understanding of fundamental optical phenomena and explore the effects of dynamically changing media on light propagation, numerous modeling methods have been developed. Among them, owing to the transient, nonlinear, and impulsive nature of the behavior involved, the time-domain modeling approach is viewed as the most viable method. In this work, we develop a finite-difference time-domain (FDTD) model of light-matter interactions in active plasmonic metamaterials. In order to model the dispersion of plasmonic nanostructures in the time domain, we introduce a generalized dispersive material model built on Padé approximants. The developed 3D FDTD solver is then applied to study several plasmonic nanostructures and metamaterials, such as metal-dielectric composite films, random nano-nets for transparent conducting electrodes, and a graphene photodetector enhanced by a fractal plasmonic metasurface. In addition, we developed a multi-physics time-domain model to investigate the properties of a silver nanohole array coated with Rhodamine-101 dye. With accurate modeling of the retrieved kinetic parameters, the simulated emission intensity shows clear lasing, in good agreement with our experimental measurements. By tracing the population inversion and polarization dynamics, the amplification and lasing regimes inside the nanohole cavity can be clearly distinguished. With the help of our systematic approach, we further the understanding of time-resolved physics in active plasmonic nanostructures with gain.
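For readers unfamiliar with FDTD, a minimal one-dimensional vacuum Yee update illustrates the leapfrog time stepping on which such solvers are built. The solver described above is 3D and includes the Padé-based dispersive material model, both of which this sketch omits; grid size, source shape, and normalised units are arbitrary demo assumptions.

```python
import math

def fdtd_1d(nz=200, nt=150):
    """1D vacuum FDTD (Yee leapfrog) in normalised units with Courant number
    S = c*dt/dz = 1, for which the 1D scheme is exactly dispersion-free."""
    Ez = [0.0] * nz
    Hy = [0.0] * nz
    for n in range(nt):
        for k in range(nz - 1):                    # update H from the curl of E
            Hy[k] += Ez[k + 1] - Ez[k]
        Ez[0] = math.exp(-((n - 30) / 10.0) ** 2)  # hard Gaussian source at the left edge
        for k in range(1, nz):                     # update E from the curl of H
            Ez[k] += Hy[k] - Hy[k - 1]
    return Ez

Ez = fdtd_1d()  # the pulse, launched around step 30, travels one cell per step
```

Dispersive media enter this scheme as extra auxiliary update equations alongside the field updates, which is where the Padé-approximant material model of the work would plug in.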

    Semiannual progress report no. 1, 16 November 1964 - 30 June 1965

    Summary reports of research in bioelectronics, electron streams and interactions, plasmas, quantum and optical electronics, radiation and propagation, and solid-state electronics.

    Systematic Data Extraction in High-Frequency Electromagnetic Fields

    The focus of this work is the investigation of billiards and their statistical eigenvalue properties. Specifically, superconducting microwave resonators with chaotic characteristics are simulated, and the eigenfrequencies needed for the statistical analysis are computed. The eigenfrequency analysis requires many (on the order of thousands) eigenfrequencies to be calculated, and the accurate determination of the eigenfrequencies is of crucial significance. Consequently, the research interests cover all aspects from the accurate numerical calculation of many eigenvalues and eigenvectors up to application development in order to get good performance out of the programs on distributed-memory and shared-memory multiprocessors. Furthermore, this thesis provides an overview and detailed evaluation of the numerical approaches used for large-scale eigenvalue calculations with respect to accuracy, computational time, and memory consumption. The first approach to accurate eigenfrequency extraction is based on Time Domain (TD) computations of the electric field in a superconducting resonant structure. Upon excitation of the cavity, the electric field intensity is recorded at different detection probes inside the cavity. Thereafter, Fourier analysis of the recorded signals is performed, and by means of signal-processing and fitting techniques the desired eigenfrequencies are extracted by finding the optimal model parameters in the least-squares sense. The second numerical approach is based on a numerical computation of electromagnetic fields in the Frequency Domain (FD) and further employs the Lanczos method for the eigenvalue determination. Namely, when the Finite Integration Technique (FIT) is utilized to solve an electromagnetic problem for a superconducting cavity, which encloses the excited electromagnetic fields, the numerical solution of a standard large-scale eigenvalue problem is required.
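The TD extraction idea (record probe signals, Fourier-analyse, locate resonances) can be sketched as follows. Simple spectral peak picking stands in here for the least-squares model fitting actually described, and the synthetic signal, mode frequencies, and sampling step are made-up demo values, not data from the work.

```python
import numpy as np

def extract_eigenfrequencies(signal, dt, n_peaks):
    """Estimate resonance frequencies from a recorded probe signal: window,
    take the magnitude spectrum, and keep the strongest local maxima."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), dt)
    peaks = [i for i in range(1, len(spec) - 1)
             if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
    peaks.sort(key=lambda i: spec[i], reverse=True)   # strongest resonances first
    return sorted(freqs[i] for i in peaks[:n_peaks])

# Synthetic probe signal with two "eigenmodes" (demo values only)
dt = 1e-11                          # 10 ps sampling interval
t = np.arange(8192) * dt
sig = np.sin(2 * np.pi * 1.3e9 * t) + 0.6 * np.sin(2 * np.pi * 2.7e9 * t)
f1, f2 = extract_eigenfrequencies(sig, dt, n_peaks=2)
```

The frequency resolution of this raw approach is limited to one FFT bin (1/T for record length T); the model fitting described in the text refines the estimates below that limit.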
    Accordingly, if the same problem is treated by the Finite Element Method (FEM) based on curvilinear tetrahedra, it yields a generalized large-scale eigenvalue problem. Afterward, the desired eigenvalues are calculated by the direct solution of the large (generalized) eigenvalue formulations. For this purpose, the implemented Lanczos solvers combine two major ingredients: the Lanczos algorithm with polynomial filtering on the one hand, and its parallelization on the other.
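A minimal dense sketch of the Lanczos iteration (without the polynomial filtering or the parallelization developed in the work, and for the standard rather than generalized problem) shows the core idea: a small tridiagonal projection whose eigenvalues, the Ritz values, approximate the extremal eigenvalues of the operator. Matrix size, step count, and function names are arbitrary assumptions.

```python
import numpy as np

def lanczos_eigs(A, m, seed=0):
    """m Lanczos steps on symmetric A: build an orthonormal Krylov basis Q and
    a tridiagonal projection T = Q.T @ A @ Q; the eigenvalues of T (Ritz
    values) approximate the extremal eigenvalues of A.  Full
    reorthogonalisation keeps the basis numerically orthogonal."""
    n = A.shape[0]
    q = np.random.default_rng(seed).standard_normal(n)
    q /= np.linalg.norm(q)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    for j in range(m):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # orthogonalise against the basis
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# Demo: extremal Ritz values converge long before m reaches the matrix size.
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2                   # random symmetric test matrix
ritz = lanczos_eigs(A, m=40)
exact = np.linalg.eigvalsh(A)
```

Polynomial filtering, as used in the work, would replace `A @ q` with `p(A) @ q` for a filter polynomial p that amplifies the spectral window of interest, accelerating convergence to interior eigenvalues.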