    A steady-state genetic algorithm with resampling for noisy inventory control

    Noisy fitness functions occur in many practical applications of evolutionary computation. A standard technique for solving these problems is fitness resampling, but this may be inefficient or need a large population, and combined with elitism it may overvalue chromosomes or reduce genetic diversity. We describe a simple new resampling technique called Greedy Average Sampling for steady-state genetic algorithms such as GENITOR. It requires an extra runtime parameter to be tuned, but does not need a large population or assumptions on noise distributions. In experiments on a well-known Inventory Control problem it performed a large number of samples on the best chromosomes yet only a small number on average, and was more effective than four other tested techniques.
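
    As a rough illustration of the idea, the sketch below shows a steady-state GA in which each chromosome keeps a running average of noisy fitness samples and the current best chromosome greedily receives extra samples each step. The toy noisy objective, the mutation scheme, and the extra-samples parameter are illustrative assumptions; the abstract does not spell out the exact Greedy Average Sampling rule.

```python
import random

def noisy_fitness(x, sigma=1.0):
    """Toy noisy objective: true fitness plus Gaussian noise (illustrative only)."""
    true_value = -sum((xi - 0.5) ** 2 for xi in x)
    return true_value + random.gauss(0.0, sigma)

class Chromosome:
    def __init__(self, genes):
        self.genes = genes
        self.samples = []            # noisy fitness observations

    @property
    def mean_fitness(self):
        return sum(self.samples) / len(self.samples)

def resample(chrom, n=1):
    """Draw extra fitness samples; the estimate is the running average."""
    for _ in range(n):
        chrom.samples.append(noisy_fitness(chrom.genes))

def steady_state_step(population, extra_samples_for_best=5):
    """One GENITOR-style step: breed from the best, replace the worst.

    Giving the current best chromosome extra samples each step is an
    assumption standing in for the paper's Greedy Average Sampling rule.
    """
    population.sort(key=lambda c: c.mean_fitness, reverse=True)
    parent_a, parent_b = population[0], population[1]
    cut = random.randrange(1, len(parent_a.genes))
    child = Chromosome(parent_a.genes[:cut] + parent_b.genes[cut:])
    child.genes = [g + random.gauss(0, 0.05) for g in child.genes]   # mutation
    resample(child, n=1)
    resample(population[0], n=extra_samples_for_best)                 # greedy resampling
    population[-1] = child                                            # replace worst
    return population

# Usage: evolve a small real-coded population for a few steps.
pop = [Chromosome([random.random() for _ in range(4)]) for _ in range(20)]
for c in pop:
    resample(c, n=1)
for _ in range(100):
    steady_state_step(pop)
print(max(c.mean_fitness for c in pop))
```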

    Neuroevolutionary inventory control in multi-echelon systems


    Statistical Testing of Optimality Conditions in Multiresponse Simulation-Based Optimization (Replaced by Discussion Paper 2007-45)

    This paper derives a novel procedure for testing the Karush-Kuhn-Tucker (KKT) first-order optimality conditions in models with multiple random responses. Such models arise in simulation-based optimization with multivariate outputs. This paper focuses on expensive simulations, which have small sample sizes. The paper estimates the gradients (in the KKT conditions) through low-order polynomials, fitted locally. These polynomials are estimated using Ordinary Least Squares (OLS), which also enables estimation of the variability of the estimated gradients. Using these OLS results, the paper applies the bootstrap (resampling) method to test the KKT conditions. Furthermore, it applies the classic Student t test to check whether the simulation outputs are feasible, and whether any constraints are binding. The paper applies the new procedure to both a synthetic example and an inventory simulation; the empirical results are encouraging. Keywords: stopping rule; metaheuristics; RSM; design of experiments
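
    A much simplified sketch of the core mechanics, local OLS gradient estimation followed by a residual bootstrap of a KKT-style stationarity statistic, is given below. The single-constraint statistic, the fixed multiplier `lam`, and the synthetic data are assumptions for illustration; the paper's actual multiresponse test is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_gradient(X, y):
    """Fit a local first-order polynomial y ~ b0 + b'x by OLS; return gradient and residuals."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef[1:], resid, A, coef

def bootstrap_kkt_statistic(X, y_obj, y_con, lam=1.0, n_boot=500):
    """Residual bootstrap of a (much simplified) KKT stationarity statistic.

    Statistic: ||grad f + lam * grad g||, i.e. how far the estimated objective
    gradient is from being cancelled by a single binding-constraint gradient
    with multiplier `lam`.  The statistic and multiplier handling are
    illustrative assumptions, not the paper's procedure.
    """
    gf, rf, A, cf = ols_gradient(X, y_obj)
    gg, rg, _, cg = ols_gradient(X, y_con)
    stats = []
    for _ in range(n_boot):
        yf_star = A @ cf + rng.choice(rf, size=len(rf), replace=True)
        yg_star = A @ cg + rng.choice(rg, size=len(rg), replace=True)
        gf_star, *_ = ols_gradient(X, yf_star)
        gg_star, *_ = ols_gradient(X, yg_star)
        stats.append(np.linalg.norm(gf_star + lam * gg_star))
    observed = np.linalg.norm(gf + lam * gg)
    return observed, np.asarray(stats)

# Usage on synthetic data: a small local design around a candidate optimum.
X = rng.normal(size=(20, 2)) * 0.1
y_obj = X[:, 0] ** 2 + X[:, 1] ** 2 + rng.normal(scale=0.05, size=20)
y_con = X[:, 0] + X[:, 1] + rng.normal(scale=0.05, size=20)
obs, boot = bootstrap_kkt_statistic(X, y_obj, y_con)
print(f"observed KKT statistic {obs:.3f}, bootstrap 95th pct {np.percentile(boot, 95):.3f}")
```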

    Large scale optimization of a sour water stripping plant using surrogate models

    In this work, we propose a new methodology for the large scale optimization and process integration of complex chemical processes that have been simulated using modular chemical process simulators. Units with significant numerical noise or large CPU times are substituted by surrogate models based on Kriging interpolation. Using a degree of freedom analysis, some of those units can be aggregated into a single unit to reduce the complexity of the resulting model. As a result, we solve a hybrid simulation-optimization model formed by units in the original flowsheet, Kriging models, and explicit equations. We present a case study of the optimization of a sour water stripping plant in which we simultaneously consider economics, heat integration and environmental impact using the ReCiPe indicator, which incorporates the recent advances made in Life Cycle Assessment (LCA). The optimization strategy guarantees the convergence to a local optimum inside the tolerance of the numerical noise. The authors wish to acknowledge the financial support of the Ministry of Economy and Competitiveness of Spain under project CTQ2012-37039-C02-02.
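
    The sketch below illustrates the general pattern of replacing a noisy, expensive unit with a Kriging (Gaussian process) surrogate and optimizing the resulting hybrid model. The toy `expensive_unit`, the kernel choice and the cheap explicit term are placeholders; they do not represent the sour water stripping flowsheet.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Placeholder for an expensive / noisy simulator unit (e.g. one column of a flowsheet).
def expensive_unit(x):
    return np.sin(3 * x[0]) + x[1] ** 2 + rng.normal(scale=0.02)

# 1. Sample the unit at a small design of experiments.
X_train = rng.uniform(-1, 1, size=(30, 2))
y_train = np.array([expensive_unit(x) for x in X_train])

# 2. Fit a Kriging (Gaussian process) surrogate with a noise term,
#    standing in for the unit inside the hybrid model.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-3),
                              normalize_y=True).fit(X_train, y_train)

# 3. Optimize the hybrid model: surrogate prediction plus a cheap explicit term.
def hybrid_objective(x):
    return float(gp.predict(x.reshape(1, -1))[0]) + 0.1 * np.sum(x ** 2)

res = minimize(hybrid_objective, x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print(res.x, res.fun)
```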

    Developing models for the data-based mechanistic approach to systems analysis: Increasing objectivity and reducing assumptions

    Stochastic State-Space Time-Varying Random Walk (TVRW) models have been developed, allowing existing stochastic state-space models to operate directly on irregularly sampled time series. These TVRW models have been successfully applied to two different classes of models, benefiting each class in different ways. The first class, State Dependent Parameter (SDP) models, is used to investigate the dominant dynamic modes of nonlinear dynamic systems and the nonlinearities in these models driven by arbitrary state variables. In SDP locally linearised models it is assumed that the parameters describing changes in the system's behaviour depend on some aspect of the system (its 'state'). Each parameter can depend on one or more states. To estimate parameters that change at a rate related to that of their states, the estimation procedure is conducted in the state space along the potentially multivariate trajectory of the states which drive the parameters. The newly developed TVRW models significantly improve parameter estimation, particularly in data-rich neighbourhoods of the state space when a parameter depends on more than one state, and at the ends of the data series when a parameter depends on a single state with few data points. The second class of models, Dynamic Harmonic Regression (DHR) models, is used to identify the dominant cycles and trends of time series. In DHR models the assumption is that a signal (such as a time series) can be decomposed into four (unobserved) components occupying different parts of the spectrum: trend, seasonal cycle, other cycles, and a high-frequency irregular component. DHR was previously confined to uniformly sampled time series. The introduction of the TVRW models allows DHR to operate on irregularly sampled time series, with the added benefit that the forecasting origin is no longer confined to the end of the time series but can begin at any point in the future; additionally, the forecasting sampling rate is no longer limited to the sampling rate of the time series. Importantly, both classes of model were designed to follow the Data-Based Mechanistic (DBM) approach to modelling environmental systems, where the model structure and parameters are determined by the data (data-based) and the resulting models are validated on the basis of their physical interpretation (mechanistic). The aim is to remove the researcher's preconceptions from model development in order to eliminate any bias, and then use the researcher's knowledge to validate the models presented to them. Both classes of model lacked model structure identification procedures, so model structure was previously determined by the researcher, contrary to the DBM approach. Two model structure identification procedures, one for SDP and one for DHR, were therefore developed to bring both classes of models back within the DBM framework. These developments are presented and tested here on both simulated data and real environmental data, demonstrating their importance, benefits and role in environmental modelling and exploratory data analysis.
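
    The following minimal sketch shows the key device of a time-varying random-walk parameter in a state-space (Kalman filter) setting, where the random-walk variance is scaled by the irregular time gap between observations. The scalar model and noise variances are illustrative assumptions, not the thesis's TVRW formulation.

```python
import numpy as np

def tvrw_filter(times, y, x, q=1e-3, r=1e-1):
    """Kalman filter for y_k = p_k * x_k + noise, with p_k a random walk.

    The random-walk variance grows with the (possibly irregular) time gap,
    which is the essential idea behind letting a state-space model operate
    on irregularly sampled series.  q, r and the scalar model are
    illustrative choices only.
    """
    p_est, P = 0.0, 1.0              # parameter estimate and its variance
    estimates = []
    prev_t = times[0]
    for t, yk, xk in zip(times, y, x):
        dt = max(t - prev_t, 0.0)
        P = P + q * dt               # predict: random-walk drift scaled by the gap
        S = xk * P * xk + r          # innovation variance
        K = P * xk / S               # Kalman gain
        p_est = p_est + K * (yk - xk * p_est)
        P = (1 - K * xk) * P
        estimates.append(p_est)
        prev_t = t
    return np.array(estimates)

# Usage: a slowly drifting gain observed at irregular instants.
rng = np.random.default_rng(2)
times = np.sort(rng.uniform(0, 50, size=200))
true_p = 1.0 + 0.02 * times                     # slowly varying parameter
x = rng.normal(size=200)
y = true_p * x + rng.normal(scale=0.3, size=200)
print(tvrw_filter(times, y, x)[-5:])
```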

    Identification of Yeast Transcriptional Regulation Networks Using Multivariate Random Forests

    The recent availability of whole-genome scale data sets that investigate complementary and diverse aspects of transcriptional regulation has spawned an increased need for new and effective computational approaches to analyze and integrate these large scale assays. Here, we propose a novel algorithm, based on random forest methodology, to relate gene expression (as derived from expression microarrays) to sequence features residing in gene promoters (as derived from DNA motif data) and transcription factor binding to gene promoters (as derived from tiling microarrays). We extend the random forest approach to model a multivariate response as represented, for example, by time-course gene expression measures. An analysis of the multivariate random forest output reveals complex regulatory networks, which consist of cohesive, condition-dependent regulatory cliques. Each regulatory clique features homogeneous gene expression profiles and common motifs or synergistic motif groups. We apply our method to several yeast physiological processes: cell cycle, sporulation, and various stress conditions. Our technique displays excellent performance with regard to identifying known regulatory motifs, including high order interactions. In addition, we present evidence of the existence of an alternative MCB-binding pathway, which we confirm using data from two independent cell cycle studies and two other physiological processes. Finally, we have uncovered elaborate transcription regulation refinement mechanisms involving PAC and mRRPE motifs that govern essential rRNA processing. These include intriguing instances of differing motif dosages and differing combinatorial motif control that promote regulatory specificity in rRNA metabolism under differing physiological processes.
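
    A minimal sketch of a multivariate-response random forest is shown below, using scikit-learn's multi-output trees as a stand-in: promoter features predict a whole time-course expression profile at once, and feature importances flag candidate regulatory motifs. The synthetic data and model settings are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Synthetic stand-in: rows are genes, columns are promoter features
# (motif counts / TF-binding signals); the response is a time-course
# expression profile per gene (here 8 time points).
n_genes, n_features, n_timepoints = 500, 40, 8
X = rng.poisson(1.0, size=(n_genes, n_features)).astype(float)
W = rng.normal(size=(n_features, n_timepoints)) * (rng.random((n_features, 1)) < 0.2)
Y = X @ W + rng.normal(scale=0.5, size=(n_genes, n_timepoints))

# Multivariate-response random forest: each tree predicts the whole
# time-course vector at once, so splits favour motifs that shape the
# entire expression profile rather than a single time point.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, Y)

# Feature importances point to candidate regulatory motifs.
top = np.argsort(forest.feature_importances_)[::-1][:5]
print("top candidate motif features:", top)
```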

    Production Optimization Indexed to the Market Demand Through Neural Networks

    Connectivity, mobility and real-time data analytics are the prerequisites for a new model of intelligent production management that facilitates communication between machines, people and processes and uses technology as the main driver. Many works in the literature treat maintenance and production management in separate approaches, but there is a link between these areas, with maintenance and its actions aimed at ensuring the smooth operation of equipment to avoid unnecessary downtime in production. With the advent of technology, companies are rushing to solve their problems by resorting to technologies in order to fit into the most advanced technological concepts, such as Industries 4.0 and 5.0, which are based on the principle of process automation. This approach brings together database technologies, making it possible to monitor the operation of equipment and to study patterns of data behaviour that can alert us to possible failures. The present thesis intends to forecast pulp production indexed to the stock market value. The forecast is made from the pulp production variables of the presses and the stock exchange variables, supported by artificial intelligence (AI) technologies, with the aim of achieving effective planning. To support efficient production management decisions, algorithms were developed and validated with data from five pulp presses, as well as data from other sources, such as steel production and stock exchanges, which were relevant to validate the robustness of the models. The thesis demonstrates the importance of data processing methods and their great relevance to the model input, since they facilitate the process of training and testing the models. The chosen technologies showed good efficiency and versatility in predicting the values of the equipment variables, while also demonstrating robustness and optimization in computational processing. The thesis also presents proposals for future developments, namely further exploration of these technologies, so that market variables can calibrate production through forecasts supported on those same variables.
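
    As a rough sketch of the forecasting setup described above, the example below scales press and market variables and trains a small feed-forward network to predict production. The synthetic data, network size and library choice are illustrative assumptions and do not reproduce the thesis's models or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# Synthetic stand-in for press sensor variables plus market indicators
# (e.g. a pulp price index); the real data come from five presses and
# stock-exchange sources and are not reproduced here.
n = 1000
press_vars = rng.normal(size=(n, 6))
market_vars = rng.normal(size=(n, 2))
X = np.hstack([press_vars, market_vars])
production = press_vars[:, 0] * 2 + market_vars[:, 0] + rng.normal(scale=0.3, size=n)

# Data treatment (scaling) feeding a small feed-forward network, echoing
# the emphasis on pre-processing before model training and testing.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X[:800], production[:800])
print("held-out R^2:", model.score(X[800:], production[800:]))
```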

    Vehicle level health assessment through integrated operational scalable prognostic reasoners

    Today’s aircraft are very complex in design and need constant monitoring of their systems to establish overall health status. Integrated Vehicle Health Management (IVHM) is a major component in a new asset management paradigm in which a conscious effort is made to shift asset maintenance from a schedule-based approach to a more proactive and predictive approach. Its goal is to maximise asset operational availability while minimising downtime and the logistics footprint by monitoring the deterioration of component conditions. IVHM involves data processing which comprehensively consists of capturing data related to assets, monitoring parameters, assessing current or future health conditions through a prognostics and diagnostics engine, and providing recommended maintenance actions. Data-driven prognostics methods usually use a large amount of data to learn the degradation pattern (nominal model) and predict future health. Usually the run-to-failure data used are accelerated data produced in laboratory environments, which is hardly the case in real life. The nominal model is therefore far from the present condition of the vehicle, so the predictions will not be very accurate. The prediction model will try to follow the nominal model, which means more errors in the prediction; this is a major drawback of data-driven techniques. This research primarily presents two novel techniques of adaptive data-driven prognostics to capture vehicle operational scalability degradation. Secondly, the degradation information is used as a health index and in the Vehicle Level Reasoning System (VLRS). Novel VLRS are also presented in this research study. The research described here proposes condition-adaptive prognostics reasoning along with VLRS.
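
    The sketch below illustrates a generic data-driven baseline for the two quantities discussed above: a health index derived from a degradation signal, and a remaining-useful-life estimate obtained by extrapolating the degradation trend. The linear trend, thresholds and synthetic data are assumptions for illustration; this is not the adaptive prognostics technique proposed in the research.

```python
import numpy as np

def health_index(signal, healthy_level, failed_level):
    """Map a degradation signal onto [0, 1], where 1 = healthy and 0 = failed."""
    hi = (failed_level - signal) / (failed_level - healthy_level)
    return np.clip(hi, 0.0, 1.0)

def remaining_useful_life(t, signal, failure_threshold):
    """Extrapolate a linear degradation trend to the failure threshold.

    A generic data-driven baseline, not the adaptive method of the thesis:
    fit the observed trend and solve for the threshold-crossing time.
    """
    slope, intercept = np.polyfit(t, signal, deg=1)
    if slope <= 0:
        return np.inf                     # no degradation trend observed yet
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - t[-1], 0.0)

# Usage on a synthetic drifting sensor reading (rising wear indicator).
rng = np.random.default_rng(5)
t = np.arange(0, 100, 1.0)
signal = 0.02 * t + rng.normal(scale=0.05, size=t.size)
print("current health index:", health_index(signal[-1], 0.0, 3.0))
print("estimated RUL:", remaining_useful_life(t, signal, failure_threshold=3.0))
```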