
    Minimizing Energy Consumption and Carbon Emissions of Aging Buildings

    The building sector in the United States is responsible for 41% of energy consumption and 39% of the carbon footprint, and the majority of both is attributable to aging buildings, which represent 70% of existing buildings in the United States. The energy consumption of aging buildings can be significantly reduced by identifying and implementing green building upgrade measures within available budgets. Aging buildings are often in urgent need of upgrading to improve their operational, economic, and environmental performance. This paper presents the development of an optimization model capable of identifying the optimal selection of building upgrade measures to minimize the energy consumption of aging buildings while complying with limited upgrade budgets and building operational performance requirements. The model estimates building energy consumption using energy simulation software packages such as eQuest and is integrated with databases of building products. During the optimization computations it analyzes the replacement of existing building fixtures and equipment to identify the replacements that minimize building energy consumption and carbon emissions. The model provides detailed results for building owners and operators, including specifications for the recommended upgrade measures and their locations in the building; upgrade cost; expected energy, operational, and life-cycle cost savings; and expected payback period. This paper illustrates the new and unique capabilities of the developed optimization model.
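
    The budget-constrained selection problem described above can be illustrated as a 0/1 knapsack: each candidate upgrade measure has a cost and an expected annual energy saving, and the optimizer picks the subset that maximizes savings within the budget. The sketch below is a minimal dynamic-programming illustration of that idea, not the paper's simulation-coupled model; the measure names, costs, and savings are hypothetical.

        # Minimal knapsack sketch of budget-constrained upgrade selection.
        # All measures, costs, and savings below are hypothetical.

        def select_upgrades(measures, budget):
            """Maximize annual energy savings (kWh) subject to a total
            upgrade cost limit (budget in whole dollars)."""
            # dp[b] = (best savings achievable with budget b, chosen indices)
            dp = [(0.0, [])] * (budget + 1)
            for i, (name, cost, savings) in enumerate(measures):
                for b in range(budget, cost - 1, -1):   # descending: each measure used once
                    cand = dp[b - cost][0] + savings
                    if cand > dp[b][0]:
                        dp[b] = (cand, dp[b - cost][1] + [i])
            best_savings, chosen = dp[budget]
            return best_savings, [measures[i][0] for i in chosen]

        # (name, cost in $, annual savings in kWh)
        measures = [
            ("LED lighting retrofit", 12000, 35000.0),
            ("HVAC chiller replacement", 45000, 90000.0),
            ("Window glazing upgrade", 30000, 40000.0),
            ("Roof insulation", 18000, 25000.0),
        ]
        print(select_upgrades(measures, budget=60000))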

    Improved sampling of the Pareto-front in multiobjective genetic optimizations by steady-state evolution: a Pareto converging genetic algorithm

    Previous work on multiobjective genetic algorithms has focused on preventing genetic drift, and the issue of convergence has received little attention. In this paper we present a simple steady-state strategy, the Pareto Converging Genetic Algorithm (PCGA), which naturally samples the solution space and ensures population advancement towards the Pareto-front. PCGA eliminates the need for sharing/niching and thus minimizes heuristically chosen parameters and procedures. A systematic approach based on histograms of rank is introduced for assessing convergence to the Pareto-front, which, by definition, is unknown in most real search problems. We argue that there is always a certain inheritance of genetic material within a population and that there is unlikely to be any significant gain beyond some point; a stopping criterion for terminating the computation at that point is suggested. To further encourage diversity and competition, a non-migrating island model may optionally be used; this approach is particularly suited to many difficult (real-world) problems, which have a tendency to get stuck at (unknown) local minima. Results on three benchmark problems are presented and compared with those of earlier approaches. PCGA is found to produce diverse sampling of the Pareto-front without niching and with significantly less computational effort.
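
    A minimal sketch of the rank-histogram idea, assuming two-objective minimization; this illustrates the concept only and is not the authors' implementation. Each individual is assigned a Pareto rank, and comparing the histograms of ranks across generations indicates whether the population is still advancing towards the front.

        from collections import Counter

        def dominates(a, b):
            """True if objective vector a Pareto-dominates b (minimization)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_ranks(pop):
            """Rank 1 for the nondominated front, rank 2 for the next front, etc."""
            ranks, remaining, rank = {}, set(range(len(pop))), 1
            while remaining:
                front = {i for i in remaining
                         if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)}
                for i in front:
                    ranks[i] = rank
                remaining -= front
                rank += 1
            return ranks

        def rank_histogram(pop):
            return Counter(pareto_ranks(pop).values())

        # If successive generations' histograms stop shifting mass toward rank 1,
        # further computation is unlikely to pay off; that is the suggested stopping point.
        gen_t  = [(2.0, 2.0), (1.0, 4.0), (3.0, 3.0), (4.0, 1.0)]   # -> {1: 3, 2: 1}
        gen_t1 = [(2.0, 2.0), (1.0, 4.0), (3.0, 1.5), (4.0, 1.0)]   # -> {1: 4}
        print(rank_histogram(gen_t), rank_histogram(gen_t1))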

    On the Convergence of Immune Algorithms

    A stopping criterion for multi-objective optimization evolutionary algorithms

    This paper puts forward a comprehensive study of the design of global stopping criteria for multi-objective optimization. In this study we propose a global stopping criterion, termed MGBM after the authors' surnames. MGBM combines a novel progress indicator, called the mutual domination rate (MDR) indicator, with a simplified Kalman filter, which is used for evidence-gathering purposes. The MDR indicator, which is also introduced here, is a special-purpose progress indicator designed for the purpose of stopping a multi-objective optimization. As part of the paper we describe the criterion from a theoretical perspective and examine its performance on a number of test problems. We also compare this method with similar approaches to the issue. The results of these experiments suggest that MGBM is a valid and accurate approach. (C) 2016 Elsevier Inc. All rights reserved. This work was funded in part by CNPq BJT Project 407851/2012-7 and CNPq PVE Project 314017/2013-
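
    A hedged sketch of how the MDR indicator and the simplified Kalman filter could fit together, assuming minimization; the fronts, filter constants, and stopping threshold below are illustrative, not the authors' settings.

        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def mdr(front_prev, front_curr):
            """Mutual domination rate: near +1 means the new front strongly
            dominates the old one (good progress); near 0 suggests stagnation."""
            old_dom = sum(any(dominates(c, p) for c in front_curr) for p in front_prev)
            new_dom = sum(any(dominates(p, c) for p in front_prev) for c in front_curr)
            return old_dom / len(front_prev) - new_dom / len(front_curr)

        class ScalarKalman:
            """Simplified one-dimensional Kalman filter, used only to smooth
            the noisy MDR readings into an evidence-gathering estimate."""
            def __init__(self, q=1e-3, r=1e-2):
                self.x, self.p, self.q, self.r = 0.0, 1.0, q, r
            def update(self, z):
                self.p += self.q                  # predict
                k = self.p / (self.p + self.r)    # Kalman gain
                self.x += k * (z - self.x)        # correct
                self.p *= 1.0 - k
                return self.x

        fronts = [
            [(4.0, 4.0), (5.0, 3.0)],
            [(3.0, 3.5), (4.5, 2.5)],
            [(3.0, 3.5), (4.5, 2.5)],     # no improvement: MDR drops to 0
        ]
        kf = ScalarKalman()
        for prev, curr in zip(fronts, fronts[1:]):
            smoothed = kf.update(mdr(prev, curr))
            print(round(smoothed, 4))
            if smoothed < 0.01:           # illustrative stopping threshold
                break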

    Elucidation of chemical reaction networks through genetic algorithm

    PhD thesis. Obtaining a chemical reaction network experimentally is time consuming and expensive: it requires a great deal of specialised equipment and expertise to achieve concrete results. Applying data mining methods to available quantitative information, such as concentration data of chemical species, can help build the chemical reaction network faster, more cheaply, and with less expertise. The aim of this work is to design an automated system to determine the chemical reaction network (CRN) from the concentration data of participating chemical species in an isothermal chemical batch reactor. The ability of evolutionary algorithms to evolve optimal results for non-linear problems makes them the chosen method. The genetic algorithm, with its simplicity, is modified so that the CRN can be modelled with just integers. The developed automated system has shown it can elucidate the CRNs of two fictitious cases, requiring only a small amount of a priori information such as the initial concentration and molecular weight of each chemical species. The robustness of the automated system is tested multiple times with different levels of noise in the system and with the introduction of unmeasured and uninvolved chemical species. The automated system is also tested against experimental data from the reaction of trimethyl orthoacetate and allyl alcohol, which showed mixed results; this prompted the inclusion of the NSGA-II algorithm in the automated system to increase its ability to discover multiple reactions. At the end of the work, a final form of the automated system is presented that can process datasets from different initial conditions and different operating temperatures, and it shows good performance in elucidating the CRNs. It is concluded that the automated system is susceptible to 'overfitting', in that it designs its CRN structure to fit the measured chemical species, but with enough variation in the data it has shown it is capable of elucidating the true CRN even in the presence of unmeasured chemical species, noise, and unrelated chemical species.
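
    A loose sketch of the integer-encoding idea only, not the thesis's system: a chromosome is a vector of integer stoichiometric coefficients for a single candidate reaction, and fitness is the squared error between "measured" concentrations and a forward-Euler mass-action simulation. The rate constant, species, and data below are hypothetical.

        import random

        SPECIES = ["A", "B", "C"]
        K, DT, STEPS = 0.5, 0.1, 50          # hypothetical rate constant and time grid

        def simulate(stoich, c0):
            """Forward-Euler mass-action integration of one candidate reaction."""
            c, traj = list(c0), [tuple(c0)]
            for _ in range(STEPS):
                rate = K
                for s, conc in zip(stoich, c):
                    if s < 0:                          # reactants drive the rate
                        rate *= max(conc, 0.0) ** (-s)
                c = [conc + DT * s * rate for s, conc in zip(stoich, c)]
                traj.append(tuple(c))
            return traj

        def fitness(stoich, data, c0):
            sim = simulate(stoich, c0)
            return sum((a - b) ** 2 for row, meas in zip(sim, data) for a, b in zip(row, meas))

        random.seed(0)
        c0 = (1.0, 1.0, 0.0)
        data = simulate((-1, -1, 1), c0)     # "measured" data from A + B -> C

        pop = [tuple(random.randint(-2, 2) for _ in SPECIES) for _ in range(30)]
        for _ in range(40):                  # tiny GA: keep elites, mutate them
            pop.sort(key=lambda g: fitness(g, data, c0))
            children = []
            for _ in range(20):
                parent = random.choice(pop[:10])
                children.append(tuple(min(2, max(-2, g + random.choice((-1, 0, 1))))
                                      for g in parent))
            pop = pop[:10] + children
        pop.sort(key=lambda g: fitness(g, data, c0))
        print(pop[0])                        # often recovers (-1, -1, 1) in this toy setting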

    OPTIMIZATION OF TEST/DIAGNOSIS/REWORK LOCATION(S) AND CHARACTERISTICS IN ELECTRONIC SYSTEMS ASSEMBLY

    Zhen Shi, Doctor of Philosophy, 2004. Dissertation directed by Associate Professor Peter A. Sandborn, Department of Mechanical Engineering. For electronic systems, it is not uncommon for 60% or more of the recurring cost to be associated with testing. Performing tradeoffs associated with where in a process to test and what level of test, diagnosis, and rework to perform is key to optimizing the cost and yield of an electronic system's assembly. In this dissertation, a methodology that uses a real-coded genetic algorithm has been developed to minimize the yielded cost of electronic products by optimizing the locations of test, diagnosis, and rework operations and their characteristics. The dissertation presents a test, diagnosis, and rework analysis model for use in electronic systems assembly. The approach includes a model of functional test operations characterized by fault coverage, false positives, and defects introduced in test; in addition, rework and diagnosis operations (diagnostic test) have variable success rates and their own defect introduction mechanisms. The model accommodates multiple rework attempts on a product instance. For use in practical assembly processes, the model has been extended by defining a general form of the relationship between test cost and fault coverage. The model is applied within a framework for optimizing the location(s) and characteristics (fault coverage/test cost and rework attempts) of Test/Diagnosis/Rework (TDR) operations in a general assembly process. A new search algorithm called Waiting Sequence Search (WSS) is applied to traverse a general process flow and perform the cumulative calculation of a yielded cost objective function. Real-Coded Genetic Algorithms (RCGAs) are used to perform a multi-variable optimization that minimizes yielded cost. Several simple cases are analyzed for validation, and general complex process flows are used to demonstrate the applicability of the algorithm. A real multichip module (MCM) manufacturing and assembly process is used to demonstrate that the optimization methodology developed in this dissertation can find test and rework solutions with lower yielded cost than solutions calculated by manually choosing the test strategies and characteristics. The optimization methodology with Monte Carlo methods included, for process flows under uncertain inputs, is also addressed. It is anticipated that this research will improve the ability of manufacturing engineers to place TDR operations in a process flow. The ability to optimize TDR operations can also be used as feedback to a Design for Test (DFT) analysis of the electronic system, showing which portions of the system should be redesigned to accommodate testing for a higher level of fault coverage, and where there is less need for test.
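
    A toy, single-stage version of the test/rework bookkeeping described above, with all numbers hypothetical; the dissertation's model covers general process flows, diagnosis operations, and multiple rework attempts.

        def test_rework_stage(n_in, yield_in, cost_in,
                              fc, fp, c_test, c_rework, rework_success):
            """Propagate unit count, yield, and accumulated cost through one
            test-plus-rework operation.
            fc: fault coverage (probability a defective unit is caught)
            fp: false-positive rate on good units
            Simplification: good units that falsely fail are assumed to
            survive rework unharmed."""
            defective = n_in * (1.0 - yield_in)
            caught = defective * fc                  # true failures sent to rework
            false_fail = n_in * yield_in * fp        # good units wrongly failed
            fixed = caught * rework_success          # rework repairs some failures
            scrapped = caught - fixed                # failed rework attempts
            n_out = n_in - scrapped
            good_out = n_in * yield_in + fixed       # escapes remain in n_out as bad units
            cost = cost_in + n_in * c_test + (caught + false_fail) * c_rework
            return n_out, good_out / n_out, cost, cost / good_out

        # units out, outgoing yield, total cost, yielded cost ($ per good unit)
        print(test_rework_stage(1000, 0.85, 50000.0, fc=0.95, fp=0.02,
                                c_test=2.0, c_rework=15.0, rework_success=0.8))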

    Directed Intervention Crossover Approaches in Genetic Algorithms with Application to Optimal Control Problems

    Genetic Algorithms (GAs) are a search heuristic modelled on the processes of evolution. They have been used to solve optimisation problems in a wide variety of fields. When applied to the optimisation of intervention schedules for optimal control problems, such as cancer chemotherapy treatment scheduling, GAs have been shown to require more fitness function evaluations than other search heuristics to find fit solutions. This thesis presents extensions to the GA crossover process, termed directed intervention crossover techniques, that greatly reduce the number of fitness function evaluations required to find fit solutions, thus increasing the effectiveness of GAs for problems of this type. The directed intervention crossover techniques use intervention scheduling information from parent solutions to direct the offspring produced in the GA crossover process towards more promising areas of the search space. By counting the number of interventions present in the parents and adjusting the number of interventions in offspring schedules around it, highly fit solutions can be found in fewer fitness function evaluations. The validity of these novel approaches is illustrated through comparison with conventional GA crossover approaches on the optimisation of intervention schedules for a bio-control application in mushroom farming and for cancer chemotherapy treatment: optimally scheduling the application of a bio-control agent to combat pests in mushroom farming, and optimising the timing and dosage strength of cancer chemotherapy treatments to maximise their effectiveness. This work demonstrates that the proposed approaches gain significant advantages, in terms of both the fitness function evaluations required and the fitness scores found, over traditional GA crossover approaches for the production of optimal control schedules.
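
    A hedged sketch of the intervention-counting idea, one plausible reading of the abstract rather than the thesis's exact operators: schedules are binary strings where 1 means "intervene at this time step", and the offspring's intervention count is drawn from around the parents' counts.

        import random

        def directed_intervention_crossover(p1, p2):
            """Offspring intervention count is taken from around the parents'
            counts; interventions are placed at slots both parents favour."""
            target = random.choice([sum(p1), sum(p2)]) + random.choice((-1, 0, 1))
            target = max(0, min(len(p1), target))
            # score each slot by how many parents intervene there; random tie-break
            slots = sorted(range(len(p1)),
                           key=lambda i: (p1[i] + p2[i], random.random()),
                           reverse=True)
            child = [0] * len(p1)
            for i in slots[:target]:
                child[i] = 1
            return child

        p1 = [1, 0, 1, 1, 0, 0, 1, 0]       # 4 interventions
        p2 = [0, 0, 1, 1, 1, 0, 0, 0]       # 3 interventions
        print(directed_intervention_crossover(p1, p2))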

    Optimizing construction and utilization of wheat storage facilities to minimize post-harvest losses

    The use of inefficient wheat storage and transportation facilities in developing countries often causes significant quantity and quality losses. These post-harvest losses are estimated to be as much as 20% of harvested wheat, and a study by the Government of India puts the total preventable wheat losses at 10% of total production. These post-harvest wheat losses in developing countries can be minimized by (1) optimizing wheat storage and transportation throughout the entire supply chain network of existing facilities in villages, local markets, and regional locations; (2) constructing new public storage facilities that are funded and/or subsidized by government to expand and improve the existing storage facilities; and (3) building new private storage facilities that are funded by farmers to minimize post-harvest losses, maximize the profitability of farmers, and improve their food security. The main goal of this research study is to develop novel models for optimizing the storage and transportation of wheat to minimize post-harvest losses. To accomplish this, the research objectives of this study are to (1) conduct a comprehensive literature review to study local conditions, (2) develop a novel model for optimizing the storage and transportation of wheat using existing facilities in developing countries, (3) develop an innovative model for optimizing the construction of public wheat storage facilities that are funded and/or subsidized by government or other agencies, and (4) develop a novel model for optimizing the construction and utilization of private wheat storage facilities that are cooperatively funded by farmers. The performance of the developed optimization models is analyzed and verified using case studies. The results of these case studies illustrate the novel and unique capabilities of the developed models in searching for and identifying optimal storage and transportation decisions. These new and unique capabilities are expected to support decision makers such as governments and farmers in identifying (i) optimal wheat storage levels in each existing facility and optimal transportation routes among them to minimize post-harvest losses and minimize storage and transportation costs throughout the entire network; (ii) the optimal location, type, and capacity for the construction of new publicly funded storage facilities to minimize post-harvest losses during storage and transportation throughout the entire network; and (iii) optimal construction decisions for privately funded storage facilities and optimal wheat sales, purchases, and storage quantities to minimize post-harvest losses and maximize the profit of farmers. The expected impacts of the developed optimization models include (a) reduced post-harvest losses during wheat storage and transportation; (b) minimized storage and transportation costs throughout the entire network of existing and new storage facilities; (c) increased annual profits for farmers; (d) enhanced food security for local farmers through increased storage capacity in their villages; and (e) expanded storage capacity for grain reserves and for potential increases in wheat production.
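
    An illustrative transportation-style linear program in the spirit of objective (i): allocate harvested wheat from villages to storage facilities so as to minimize transport cost plus the monetary value of expected post-harvest loss. All facilities, capacities, rates, and prices are hypothetical, and scipy's general-purpose LP solver stands in for the models developed in the study.

        from scipy.optimize import linprog

        supply = [120.0, 80.0]            # tonnes at village 1, village 2
        capacity = [90.0, 70.0, 60.0]     # tonnes at facilities A, B, C
        transport = [[4.0, 6.0, 9.0],     # $/tonne from each village to each facility
                     [7.0, 3.0, 5.0]]
        loss_rate = [0.12, 0.06, 0.03]    # expected fraction lost in storage
        wheat_value = 250.0               # $/tonne

        # decision variables x[v][f], flattened row-major; the objective folds
        # the value of expected storage loss into the per-tonne cost
        c = [transport[v][f] + loss_rate[f] * wheat_value
             for v in range(2) for f in range(3)]
        A_eq = [[1 if i // 3 == v else 0 for i in range(6)] for v in range(2)]  # ship all supply
        A_ub = [[1 if i % 3 == f else 0 for i in range(6)] for f in range(3)]   # capacity limits
        res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply,
                      bounds=[(0, None)] * 6, method="highs")
        print(res.x.reshape(2, 3) if res.success else res.message)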

    New local search in the space of infeasible solutions framework for the routing of vehicles

    Combinatorial optimisation problems (COPs) have been at the origin of the design of many optimal and heuristic solution frameworks such as branch-and-bound algorithms, branch-and-cut algorithms, classical local search methods, metaheuristics, and hyperheuristics. This thesis proposes a refined generic and parameterised infeasible local search (GPILS) algorithm for solving COPs and customises it to solve the travelling salesman problem (TSP) for illustration purposes. In addition, a rule-based heuristic is proposed to initialise the infeasible local search, referred to as the parameterised infeasible heuristic (PIH), which allows the analyst some control over the features of the infeasible solution he/she might want to start the infeasible search with. A recursive infeasible neighbourhood search (RINS) as well as a generic patching procedure to search the infeasible space are also proposed. These procedures are designed in a generic manner so that they can be adapted to any choice of parameters of GPILS, where, for simplicity, the set of parameters refers to the set of parameters, components, criteria, and rules. Furthermore, a hyperheuristic framework is proposed for optimising the parameters of GPILS, referred to as HH-GPILS. Experiments have been run for both sequential (simulated annealing, variable neighbourhood search, and tabu search) and parallel (genetic algorithm / GA) hyperheuristics to empirically assess the performance of the proposed HH-GPILS in solving the TSP using instances from TSPLIB. Empirical results suggest that HH-GPILS delivers an outstanding performance. Finally, an offline learning mechanism is proposed as a seeding technique to improve the performance and speed of the proposed parallel HH-GPILS. The proposed offline learning mechanism makes use of a knowledge base to keep track of the best-performing chromosomes and their scores. Empirical results suggest that this learning mechanism is a promising technique for initialising the GA's population.
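
    A hedged, generic sketch of searching through infeasible solutions with a penalty, a simple reading of the idea rather than GPILS itself: candidate TSP "tours" may repeat or omit cities, and a penalty on infeasibility lets the search cut through the infeasible space while steering it back towards feasible tours. The coordinates and penalty weight are hypothetical.

        import math, random

        CITIES = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]   # hypothetical coordinates

        def length(tour):
            return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
                       for i in range(len(tour)))

        def penalty(tour, weight=20.0):
            missing = len(CITIES) - len(set(tour))   # omitted cities (== duplicates)
            return weight * missing

        def evaluate(tour):
            return length(tour) + penalty(tour)      # infeasible tours cost extra

        random.seed(1)
        tour = [random.randrange(len(CITIES)) for _ in range(len(CITIES))]  # likely infeasible
        for _ in range(2000):
            cand = tour[:]
            cand[random.randrange(len(cand))] = random.randrange(len(CITIES))
            if evaluate(cand) <= evaluate(tour):     # sideways moves keep the walk moving
                tour = cand
        print(tour, round(length(tour), 2), "feasible:", len(set(tour)) == len(CITIES))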

    Risk-Aware Neural Network Ensembles

    Autonomous systems with safety-critical concerns, such as self-driving vehicles, must be able to mitigate risk by dependably detecting entities that represent factors of risk in their environment (e.g., humans and obstacles). Nevertheless, the machine learning (ML) techniques that these systems use for image classification and real-time object detection disregard risk factors in their training and verification. As such, they produce ML models that place equal emphasis on the correct detection of all classes of objects of interest, including, for instance, buses, pedestrians, and birds in a self-driving scenario. To address this limitation of existing solutions, this thesis proposes an approach for the development of risk-aware ML ensembles applied to image classification. The new approach (i) allows the risk of misclassification between different pairs of classes to be quantified individually, (ii) guides the training of deep neural network classifiers towards mitigating the risks that require treatment, and (iii) synthesises risk-aware ensembles with the aid of multi-objective genetic algorithms that seek to optimise the ensemble performance metrics while also mitigating risks. Additionally, the thesis extends the applicability of this approach to real-time object detection (RTOD) deep neural networks. RTOD involves detecting objects of interest and their positions within an image using bounding boxes, and the RTOD extension of the approach employs a suite of new algorithms to combine the bounding box predictions of the models from the risk-aware RTOD ensemble. Last but not least, the thesis introduces a self-adaptation approach that leverages risk-aware RTOD ensembles to improve the safety of an autonomous system. To that end, the new approach switches dynamically between ensembles with different risk-aware profiles as the system moves between regions of its operational design domain. This dynamic RTOD selection approach is shown to reduce the number of crashes and to increase the number of correct actions for a simulated autonomous vehicle.
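
    A minimal sketch of the pairwise-risk idea with hypothetical classes and numbers, not the thesis's training or ensemble-synthesis machinery: a risk matrix prices each misclassification pair individually, so a model's confusion matrix can be scored by risk rather than by plain error rate.

        # risk[i][j] is the cost of predicting class j when the truth is class i.
        CLASSES = ["pedestrian", "bus", "bird"]
        risk = [[0.0, 5.0, 10.0],    # mistaking a pedestrian is priced highest
                [1.0, 0.0, 2.0],
                [0.5, 0.5, 0.0]]

        def risk_weighted_error(confusion):
            """Score a model's confusion matrix under the risk matrix; a plain
            error rate would weight every off-diagonal cell equally."""
            total = sum(map(sum, confusion))
            return sum(risk[i][j] * confusion[i][j]
                       for i in range(len(risk)) for j in range(len(risk))) / total

        # Two hypothetical models: A errs mostly on birds, B mostly on pedestrians.
        conf_a = [[95, 3, 2], [4, 90, 6], [10, 5, 85]]
        conf_b = [[88, 7, 5], [2, 95, 3], [3, 2, 95]]
        print(risk_weighted_error(conf_a), risk_weighted_error(conf_b))
        # B has the higher raw accuracy, but A scores better under this risk
        # matrix; a multi-objective GA could pick ensemble members by trading
        # accuracy against scores like these.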