
    Multiobjective Simulation Optimization Using Enhanced Evolutionary Algorithm Approaches

    In today's competitive business environment, a firm's ability to make the correct, critical decisions can be translated into a great competitive advantage. Most of these critical real-world decisions involve the optimization not only of multiple objectives simultaneously, but also of conflicting objectives, where improving one objective may degrade the performance of one or more of the other objectives. Traditional approaches for solving multiobjective optimization problems typically try to scalarize the multiple objectives into a single objective. This transforms the original multiobjective optimization problem into a single-objective optimization problem with a single solution. However, the drawbacks of these traditional approaches have motivated researchers and practitioners to seek alternative techniques that yield a set of Pareto optimal solutions rather than only a single solution. The problem becomes much more complicated in stochastic environments, where the objectives take on uncertain (or noisy) values due to random influences within the system being optimized, as is the case in real-world environments. Moreover, in stochastic environments, a solution approach should be sufficiently robust and/or capable of handling the uncertainty of the objective values. This makes the development of effective solution techniques that generate Pareto optimal solutions within these problem environments even more challenging than in their deterministic counterparts. Furthermore, many real-world problems involve complicated, black-box objective functions, making a large number of solution evaluations computationally and/or financially prohibitive. This is often the case when complex computer simulation models are used to repeatedly evaluate possible solutions in search of the best solution (or set of solutions). Therefore, multiobjective optimization approaches capable of rapidly finding a diverse set of Pareto optimal solutions would be greatly beneficial. This research proposes two new multiobjective evolutionary algorithms (MOEAs), called the fast Pareto genetic algorithm (FPGA) and the stochastic Pareto genetic algorithm (SPGA), for optimization problems with multiple deterministic objectives and stochastic objectives, respectively. New search operators are introduced and employed to enhance the algorithms' performance in terms of converging quickly to the true Pareto optimal frontier while maintaining a diverse set of nondominated solutions along the Pareto optimal front. New concepts of solution dominance are defined for better discrimination among competing solutions in stochastic environments. SPGA uses a solution ranking strategy based on these new concepts. Computational results for a suite of published test problems indicate that both FPGA and SPGA are promising approaches. The results show that both FPGA and SPGA outperform the improved nondominated sorting genetic algorithm (NSGA-II), a widely considered benchmark in the MOEA research community, in terms of fast convergence to the true Pareto optimal frontier and diversity among the solutions along the front. The results also show that FPGA and SPGA require far fewer solution evaluations than NSGA-II, which is crucial in computationally expensive simulation modeling applications.
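
    The abstract does not spell out FPGA's or SPGA's operators, but the Pareto-dominance test and nondominated filtering they build on are standard; the sketch below is a minimal Python illustration of that core step (minimization assumed), not a reproduction of either algorithm.

```python
# Minimal Pareto-dominance utilities of the kind MOEAs such as NSGA-II, FPGA,
# and SPGA rely on (minimization assumed; the algorithms' specific ranking and
# search operators are not reproduced here).
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated(points):
    """Return the nondominated subset of a set of objective vectors."""
    points = np.asarray(points, dtype=float)
    keep = [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
    return np.array(keep)

if __name__ == "__main__":
    # Three candidates evaluated on two conflicting objectives.
    print(nondominated([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0]]))
    # [[1. 4.]
    #  [2. 2.]]  -- [3, 3] is dominated by [2, 2]
```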

    Training deep neural density estimators to identify mechanistic models of neural dynamics

    Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool which uses deep neural density estimators, trained using model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
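
    The simulate-train-infer workflow described here is available through the open-source `sbi` toolbox; the sketch below follows sbi's documented interface as an assumption (class names and signatures vary across versions) and uses a toy two-parameter simulator in place of a real mechanistic neuron model.

```python
# Sketch of the simulate -> train density estimator -> amortized posterior
# workflow, using the `sbi` toolbox (API assumed as documented; a toy simulator
# stands in for a mechanistic model such as Hodgkin-Huxley).
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

def simulator(theta):
    # Toy "mechanistic model": parameters map to noisy summary features.
    return theta + 0.1 * torch.randn_like(theta)

prior = BoxUniform(low=-2.0 * torch.ones(2), high=2.0 * torch.ones(2))

# 1) Simulate a training set of (parameter, data) pairs drawn from the prior.
theta = prior.sample((2000,))
x = simulator(theta)

# 2) Train a neural density estimator q(theta | x) on the simulations.
inference = SNPE(prior=prior)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()

# 3) Amortized inference: condition on observed data and sample the posterior.
x_obs = torch.tensor([0.5, -0.3])
samples = posterior.sample((1000,), x=x_obs)
print(samples.mean(dim=0))  # posterior mean, near x_obs for this toy model
```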

    Normics: Proteomic Normalization by Variance and Data-Inherent Correlation Structure

    Several algorithms for the normalization of proteomic data are currently available, each based on a priori assumptions. Among these is the extent to which differential expression (DE) can be present in the dataset. This factor is usually unknown in explorative biomarker screens. Simultaneously, the increasing depth of proteomic analyses often requires the selection of subsets with a high probability of being DE to obtain meaningful results in downstream bioinformatical analyses. Based on the relationship between technical variation and (true) biological DE of an unknown share of proteins, we propose the “Normics” algorithm: proteins are ranked based on their expression level–corrected variance and their mean correlation with all other proteins. The latter serves as a novel indicator of the non-DE likelihood of a protein in a given dataset. Subsequent normalization is based on a subset of non-DE proteins only. No a priori information such as batch, clinical, or replicate group is necessary. Simulation data demonstrated robust and superior performance across a wide range of stochastically chosen parameters. Five publicly available spike-in and biologically variant datasets were reliably and quantitatively accurately normalized by Normics, with improved performance compared to standard variance stabilization as well as median, quantile, and LOESS normalizations. In complex biological datasets, Normics correctly identified as DE those proteins that had been cross-validated by an independent transcriptome analysis of the same samples. In both complex datasets Normics identified the most DE proteins. We demonstrate that combining variance analysis and data-inherent correlation structure to identify non-DE proteins improves data normalization. Standard normalization algorithms can thus be made robust against high shares of (one-sided) biological regulation, and the statistical power of downstream analyses can be increased by focusing on Normics-selected subsets of high DE likelihood.
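
    A minimal sketch of the selection-then-normalize idea described above, assuming log-scale intensities; the rank-sum scoring, fixed reference-set size, and median-based factors are illustrative choices standing in for the published Normics details, and the expression-level correction of the variance is omitted for brevity.

```python
# Score each protein by its variance and its mean correlation with all other
# proteins, keep the proteins that look least differentially expressed, and
# derive per-sample normalization factors from that reference subset only.
import numpy as np
import pandas as pd

def normics_like_normalize(log_intensities: pd.DataFrame, n_ref: int = 100) -> pd.DataFrame:
    """log_intensities: proteins (rows) x samples (columns), log2 scale."""
    variance = log_intensities.var(axis=1)
    # Mean absolute correlation with all other proteins: high values suggest a
    # protein follows the global (technical) trend, i.e. is likely non-DE.
    corr = log_intensities.T.corr().abs().to_numpy()
    np.fill_diagonal(corr, np.nan)
    mean_corr = pd.Series(np.nanmean(corr, axis=1), index=log_intensities.index)
    # Rank proteins: low variance and high mean correlation -> likely non-DE.
    score = variance.rank() + (-mean_corr).rank()
    reference = score.nsmallest(min(n_ref, len(score))).index
    # Per-sample factors estimated from the reference proteins only.
    factors = log_intensities.loc[reference].median(axis=0)
    return log_intensities - (factors - factors.mean())

# Usage: normalized = normics_like_normalize(log2_protein_table)
```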

    Evolution of sparsity and modularity in a model of protein allostery

    The sequence of a protein is constrained not only by its physical and biochemical properties under current selection, but also by features of its past evolutionary history. Understanding the extent and the form that these evolutionary constraints may take is important for interpreting the information in protein sequences. To study this problem, we introduce a simple but physical model of protein evolution in which selection targets allostery, the functional coupling of distal sites on protein surfaces. This model shows how the geometrical organization of couplings between amino acids within a protein structure can depend crucially on its evolutionary history. In particular, two scenarios are found to generate a spatial concentration of functional constraints: high mutation rates and fluctuating selective pressures. This second scenario offers a plausible explanation for the high tolerance of natural proteins to mutations and for the spatial organization of their least tolerant amino acids, as revealed by sequence analyses and mutagenesis experiments. It also implies a capacity to adapt to new selective pressures that is consistent with observations. Furthermore, the model illustrates how several independent functional modules may emerge within the same protein structure, depending on the nature of past environmental fluctuations. Our model thus relates the evolutionary history and evolutionary potential of proteins to the geometry of their functional constraints, with implications for decoding and engineering protein sequences.
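
    The paper's physical model of allostery is not reproduced here; as a hedged illustration of the fluctuating-selection scenario, the toy simulation below evolves a binary sequence while the selective target alternates between two overlapping "functions", so that only the shared sites remain strongly constrained. The fitness function and all parameters are illustrative assumptions.

```python
# Toy evolution under fluctuating selection: target_b agrees with target_a on
# sites 0-24 and is re-randomized on sites 25-49. Sites needed by both targets
# accept almost no mutations; the fluctuating sites stay mutation-tolerant.
import numpy as np

rng = np.random.default_rng(0)
L, steps, switch_every = 50, 5000, 200
target_a = rng.integers(0, 2, L)
target_b = target_a.copy()
target_b[25:] = rng.integers(0, 2, 25)

def fitness(seq, target):
    return -np.sum(seq != target)               # fewer mismatches = fitter

seq = rng.integers(0, 2, L)
accepted_flips = np.zeros(L, dtype=int)
for t in range(steps):
    target = target_a if (t // switch_every) % 2 == 0 else target_b
    mutant = seq.copy()
    mutant[rng.integers(L)] ^= 1                # flip one random site
    if fitness(mutant, target) >= fitness(seq, target):
        accepted_flips += (mutant != seq)
        seq = mutant

print("accepted flips on shared sites:     ", accepted_flips[:25].sum())
print("accepted flips on fluctuating sites:", accepted_flips[25:].sum())
```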

    ARTIFICIAL INTELLIGENCE IN PHARMACY DRUG DESIGN

    Drug discovery is a multidimensional problem in which several properties of drug candidates, including efficacy, pharmacokinetics, and safety, must be optimized to arrive at the final drug product. Recent advances in fields such as artificial intelligence (AI) systems that refine design hypotheses through data analysis, microfluidics-assisted chemical synthesis, and biological testing are laying the groundwork for greater automation of this process. AI has stimulated computer-aided drug discovery, which could shorten the time needed for compound discovery and optimization and enable more productive searches of related chemical space. However, such optimization also raises substantial theoretical, technical, and organizational questions, as well as skepticism about the current hype surrounding these methods. Machine learning, in particular deep learning, across multiple scientific disciplines, together with advances in computing hardware and software, among other factors, continues to drive this development worldwide.

    Reservoir characterization using seismic inversion data

    Reservoir architecture may be inferred from analogs and geologic concepts, seismic surveys, and well data. Stochastically inverted seismic data are uninformative about meter-scale features, but aid downscaling by constraining coarse-scale interval properties such as total thickness and average porosity. Well data reveal detailed facies and vertical trends (and may indicate lateral trends), but cannot specify intrawell stratal geometry. Consistent geomodels can be generated for flow simulation by systematically considering the precision and density of the different data. Because the seismic inversion, the conceptual stacking, and the lateral variability of the facies are all uncertain, stochastic ensembles of geomodels are needed to capture the variability. In this research, geomodels integrate stochastic seismic inversions. At each trace, the constraints can be represented as means and variances for inexact constraint algorithms, or posed as exact constraints. These models also include stratigraphy (a stacking framework from prior geomodels), well data (core and wireline logs to constrain meter-scale structure at the wells), and geostatistics (for correlated variability). These elements are combined in a Bayesian framework. This geomodeling process creates prior models with plausible bedding geometries and facies successions. These prior models of stacking are then updated, using well and seismic data, to generate the posterior model. Markov chain Monte Carlo methods are used to sample the posteriors. Plausible subseismic features are introduced into flow models, whilst avoiding overtuning to seismic data or conceptual geologic models. Fully integrated cornerpoint flow models are created, and methods for screening and simulation studies are discussed. The updating constraints on total thickness and average porosity need not come from a seismic survey: any spatially dense estimates of these properties may be used.
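
    As a hedged sketch of the Bayesian updating step, the example below accepts or rejects prior realizations of layer thicknesses with a Metropolis-Hastings rule whose likelihood treats a seismically inverted total thickness as an inexact mean/variance constraint; all numbers are illustrative assumptions, not values from the study.

```python
# Metropolis-Hastings sampling of layer thicknesses with a Gaussian prior per
# layer and an inexact (mean/variance) seismic constraint on total thickness.
import numpy as np

rng = np.random.default_rng(1)
n_layers = 5
prior_mean, prior_sd = 4.0, 1.0          # per-layer thickness prior (m)
seis_total, seis_sd = 18.0, 1.5          # seismic constraint on total thickness (m)

def log_post(h):
    if np.any(h <= 0):
        return -np.inf
    log_prior = -0.5 * np.sum(((h - prior_mean) / prior_sd) ** 2)
    log_like = -0.5 * ((h.sum() - seis_total) / seis_sd) ** 2
    return log_prior + log_like

h = np.full(n_layers, prior_mean)
totals = []
for _ in range(20000):
    prop = h + 0.3 * rng.standard_normal(n_layers)   # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(h):
        h = prop
    totals.append(h.sum())

# Drawn toward the seismic value of 18 m from the prior's 20 m.
print("posterior mean total thickness:", np.mean(totals[5000:]))
```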

    Tartarus: A Benchmarking Platform for Realistic And Practical Inverse Molecular Design

    The efficient exploration of chemical space to design molecules with intended properties enables the accelerated discovery of drugs, materials, and catalysts, and is one of the most important outstanding challenges in chemistry. Encouraged by the recent surge in computer power and artificial intelligence development, many algorithms have been developed to tackle this problem. However, despite the emergence of many new approaches in recent years, comparatively little progress has been made in developing realistic benchmarks that reflect the complexity of molecular design for real-world applications. In this work, we develop a set of practical benchmark tasks relying on physical simulation of molecular systems mimicking real-life molecular design problems for materials, drugs, and chemical reactions. Additionally, we demonstrate the utility and ease of use of our new benchmark set by showing how to compare the performance of several well-established families of algorithms. Surprisingly, we find that model performance can strongly depend on the benchmark domain. We believe that our benchmark suite will help move the field towards more realistic molecular design benchmarks, and move the development of inverse molecular design algorithms closer to designing molecules that solve existing problems in both academia and industry alike.
    Comment: 29+21 pages, 6+19 figures, 6+2 tables

    Numerical simulation and optimisation of IOR and EOR processes in high-resolution models for fractured carbonate reservoirs

    Carbonate reservoirs contain more than half of the world’s conventional hydrocarbon resources. Hydrocarbon recovery in carbonates, however, is typically low, due to multi-scale geological heterogeneities that are the result of complex diagenetic, reactive, depositional and deformational processes. Improved Oil Recovery (IOR) and Enhanced Oil Recovery (EOR) methods are increasingly considered to maximise oil recovery and minimise field development costs. This is particularly important for carbonate reservoirs containing fracture networks, which can act as high-permeability fluid flow pathways or impermeable barriers during interaction with the complex host rock matrix. In this thesis, three important contributions relating to EOR simulation and optimisation in fractured carbonate reservoirs are made using a high-resolution analogue reservoir model for the Arab D formation. First, a systematic approach is employed to investigate, analyse and increase understanding of the fundamental controls on fluid flow in heterogeneous carbonate systems using numerical well testing and secondary and tertiary recovery simulations. Secondly, the interplay between wettability, hysteresis and fracture-matrix exchange during combined CO2 EOR and sequestration is examined. Finally, data-driven surrogates, which construct an approximation of time-consuming numerical simulations, are used for rapid simulation and optimisation of EOR processes in fractured carbonate reservoirs while considering multiple geological uncertainty scenarios.
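
    A minimal sketch of the surrogate-assisted workflow described above, under stated assumptions: a quadratic placeholder stands in for the expensive reservoir simulator, a Gaussian process stands in for the data-driven surrogate, and averaging over a few scenario multipliers stands in for the thesis's treatment of geological uncertainty.

```python
# Run the "expensive" simulator at a few control settings, fit a cheap
# surrogate, then screen many candidate settings with the surrogate only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)
scenarios = [0.8, 1.0, 1.2]                      # geological uncertainty multipliers

def expensive_simulation(rate, scenario):
    """Placeholder for a full-physics EOR simulation: recovery vs injection rate."""
    return scenario * (1.0 - (rate - 0.6) ** 2) + 0.02 * rng.standard_normal()

# 1) Small training set from the expensive simulator, averaged over scenarios.
X = rng.uniform(0, 1, size=(15, 1))
y = np.array([np.mean([expensive_simulation(x[0], s) for s in scenarios]) for x in X])

# 2) Fit the data-driven surrogate.
surrogate = GaussianProcessRegressor(normalize_y=True).fit(X, y)

# 3) Cheap optimization over a dense grid of candidate injection rates.
candidates = np.linspace(0, 1, 501).reshape(-1, 1)
best = candidates[np.argmax(surrogate.predict(candidates))]
print("surrogate-optimal injection rate:", best[0])   # near 0.6 for this toy case
```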

    Power System State Estimation and Contingency Constrained Optimal Power Flow - A Numerically Robust Implementation

    The research conducted in this dissertation is divided into two main parts. The first part provides further improvements in power system state estimation, and the second part implements Contingency Constrained Optimal Power Flow (CCOPF) in a stochastic multiple-contingency framework. For real-time application in modern power systems, the existing Newton-QR state estimation algorithms are too slow and numerically too fragile. This dissertation presents a new and more robust method that is based on trust region techniques. A faster method was found among the class of Krylov subspace iterative methods: a robust implementation of the conjugate gradient approach known as the LSQR method. Both algorithms have been tested against the widely used Newton-QR state estimator on the standard IEEE test networks. The trust region method-based state estimator was found to be very reliable under severe conditions (bad data, topological and parameter errors). This enhanced reliability justifies the additional time and computational effort required for its execution. The numerical simulations indicate that the iterative Newton-LSQR method is competitive in robustness with the classical direct Newton-QR method, and that its gain in computational efficiency does not come at the cost of solution reliability. The second part of the dissertation combines Sequential Quadratic Programming (SQP)-based CCOPF with Monte Carlo importance sampling to estimate the operating cost of multiple contingencies. We also develop an LP-based formulation of the CCOPF that can efficiently calculate Locational Marginal Prices (LMPs) under multiple contingencies. Based on the Monte Carlo importance sampling idea, the proposed algorithm can stochastically assess the impact of multiple contingencies on LMP-congestion prices.
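
    As a hedged illustration of the Newton-LSQR idea, the sketch below solves one weighted least-squares state estimation step with SciPy's iterative lsqr solver instead of a direct QR factorization; the tiny linear DC measurement model and all numbers are illustrative assumptions, not the dissertation's test systems.

```python
# One weighted least-squares state estimation step, min || W^(1/2)(z - H x) ||_2,
# solved with LSQR. In practice H is the large, sparse measurement Jacobian.
import numpy as np
from scipy.sparse.linalg import lsqr

# Measurement model z = H x + noise (e.g., DC line flows plus one injection).
H = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0],
              [1.0,  0.0, -1.0],
              [1.0,  0.0,  0.0]])
x_true = np.array([0.00, -0.05, -0.12])
sigma = np.array([0.01, 0.01, 0.01, 0.005])     # measurement standard deviations
rng = np.random.default_rng(3)
z = H @ x_true + sigma * rng.standard_normal(len(sigma))

# Weight each row by 1/sigma and solve the least-squares problem iteratively.
w_sqrt = 1.0 / sigma
x_hat = lsqr(w_sqrt[:, None] * H, w_sqrt * z)[0]
print("estimated state:", x_hat)                # close to x_true
```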