
    Three essays on bilevel optimization algorithms and applications

    This thesis consists of three journal papers I worked on during the past three years of my PhD studies. In the first paper, we present a multi-objective integer programming model for the gene stacking problem. Although the gene stacking problem is proved to be NP-hard, we were able to obtain Pareto frontiers for smaller instances within one minute using state-of-the-art commercial solvers in our computational experiments. In the second paper, we present an exact algorithm for the bilevel mixed integer linear programming (BMILP) problem under three simplifying assumptions. Compared to existing algorithms, our new algorithm relies on weaker assumptions, explicitly considers finitely optimal, infeasible, and unbounded cases, and is proved to terminate finitely with the correct output. We report results of our computational experiments on a small library of BMILP test instances, which we created and made publicly available online. In the third paper, we present the watermelon algorithm for the bilevel integer linear programming (BILP) problem. To the best of our knowledge, it is the first exact algorithm that promises to solve all possible BILPs, including finitely optimal, infeasible, and unbounded cases. Moreover, our algorithm does not rely on any simplifying condition, allowing even the case of unboundedness for the high point problem. We prove that the watermelon algorithm must terminate finitely with the correct output. Computational experiments are also reported, demonstrating the efficiency of our algorithm.
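    For context, a generic bilevel (mixed) integer linear program has the nested form sketched below; this is the standard textbook formulation, not the specific model or notation used in the thesis. The "high point problem" mentioned in the abstract is the relaxation obtained by dropping the lower-level optimality requirement and keeping only the constraints.

```latex
% Generic bilevel MILP (illustrative notation, not the thesis's model):
% the upper level chooses x, anticipating an optimal lower-level response y.
\begin{align*}
  \min_{x,\, y}\quad & c_1^\top x + d_1^\top y
      && \text{(upper-level objective)}\\
  \text{s.t.}\quad   & A_1 x + B_1 y \le b_1, \quad x \in X,
      && \text{(upper-level constraints)}\\
                     & y \in \arg\min_{y'} \bigl\{\, d_2^\top y' : A_2 x + B_2 y' \le b_2,\ y' \in Y \,\bigr\}.
      && \text{(lower-level problem)}
\end{align*}
% X and Y encode the integrality restrictions on (some of) the variables.
```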

    Leo: Lagrange Elementary Optimization

    Global optimization problems are frequently solved with evolutionary algorithms because they are practical and efficient, but their efficacy and scalability degrade as the underlying problem grows more complex. The purpose of this research is therefore to introduce the Lagrange Elementary Optimization (Leo) algorithm, a self-adaptive evolutionary method inspired by the remarkable accuracy of vaccination as measured by the albumin quotient of human blood. The algorithm develops intelligent agents using their fitness function values after gene crossing, and these genes direct the search agents during both exploration and exploitation. This paper presents the main objective of the Leo algorithm along with the inspiration and motivation for the concept. To demonstrate its precision, the proposed algorithm is validated against a variety of test functions, including 19 classic benchmark functions and the CEC-C06 2019 test functions. The results of Leo on the 19 classic benchmark functions are first compared against DA, PSO, and GA, and then against two more recent algorithms, FDO and LPB. In addition, Leo is tested on ten CEC-C06 2019 functions against the DA, WOA, SSA, FDO, LPB, and FOX algorithms. The cumulative outcomes demonstrate Leo's capacity to improve the initial population and move toward the global optimum. Several standard measurements are used to verify the stability of Leo in both the exploration and exploitation phases, and statistical analysis supports the findings of the proposed research. Finally, novel real-world applications are introduced to demonstrate the practicality of Leo.
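    The abstract does not spell out Leo's update rules. As background only, here is a minimal sketch of the generic evolutionary loop it builds on (fitness-ranked agents, one-point gene crossing, random mutation for exploration); all names and parameters are illustrative and are not taken from the paper.

```python
import random

def evolve(fitness, dim, bounds, pop_size=30, generations=100, mutation_rate=0.1):
    """Generic evolutionary loop (illustrative sketch, not Leo's actual rules)."""
    lo, hi = bounds
    # Random initial population of candidate solutions ("agents").
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        ranked = sorted(pop, key=fitness)            # rank agents by fitness
        parents = ranked[: pop_size // 2]            # exploitation: keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)           # one-point gene crossing
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:      # exploration: random perturbation
                child[random.randrange(dim)] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
        best = min(pop + [best], key=fitness)
    return best

# Example: minimize the 5-dimensional sphere function, a standard benchmark.
print(evolve(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0)))
```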

    Continuous Biochemical Processing: Investigating Novel Strategies to Produce Sustainable Fuels and Pharmaceuticals

    Biochemical processing methods have been targeted as one of the potential renewable strategies for producing commodities currently dominated by the petrochemical industry. For biochemical systems to compete with petrochemical facilities, inroads are needed to transition from traditional batch methods to continuous methods. Recent advancements in process systems and biochemical engineering have provided the tools necessary to study and design these continuous biochemical systems to maximize productivity and substrate utilization while reducing capital and operating costs. The first goal of this thesis is to propose a novel strategy for the continuous biochemical production of pharmaceuticals. The structural complexity of most pharmaceutical compounds makes chemical synthesis a difficult option, necessitating their biological production. To this end, a continuous, multi-feed bioreactor system composed of multiple independently controlled feeds for substrate(s) and media is proposed to freely manipulate the bioreactor dilution rate and substrate concentrations. The optimal feed flow rates are determined by solving an optimal control problem in which the kinetic models describing the time-variant system states serve as constraints. This new bioreactor paradigm is exemplified through the batch and continuous cultivation of β-carotene, a representative product of the mevalonate pathway, using Saccharomyces cerevisiae mutant strain SM14. The second goal of this thesis is to design continuous biochemical processes capable of economically producing alternative liquid fuels. The large-scale, continuous production of ethanol via consolidated bioprocessing (CBP) is examined; optimal process topologies for the CBP technology are selected from a superstructure considering multiple biomass feeds, chosen from those available across the United States, and multiple prospective pretreatment technologies. Similarly, the production of butanol via acetone-butanol-ethanol (ABE) fermentation is explored using process intensification to improve process productivity and profitability. To overcome the inhibitory nature of the butanol product, the multi-feed bioreactor paradigm developed for pharmaceutical production is combined with in situ gas stripping to simultaneously provide dilution effects and selectively remove the volatile ABE components. Optimal control and process synthesis techniques are utilized to determine the benefits of gas stripping and to design a butanol production process guaranteed to be profitable.
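    As a rough illustration of the kind of optimal control problem described above (symbols are generic and do not correspond to the thesis's actual kinetic model), the feed-rate optimization can be stated as:

```latex
% Generic multi-feed bioreactor optimal control problem (illustrative form only):
% choose feed flow rates u(t) to maximize product formed by the final time t_f,
% subject to the kinetic model of the culture and bounds on the feeds.
\begin{align*}
  \max_{u(\cdot)}\quad & P(t_f)
      && \text{(product accumulated by } t_f\text{)}\\
  \text{s.t.}\quad & \dot{x}(t) = f\bigl(x(t), u(t)\bigr), \quad x(0) = x_0,
      && \text{(kinetic model: biomass, substrate, product states)}\\
  & u_{\min} \le u(t) \le u_{\max},
      && \text{(feed flow-rate bounds)}\\
  & D(t) = \frac{1}{V(t)} \sum_i u_i(t).
      && \text{(dilution rate set jointly by the independent feeds)}
\end{align*}
```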

    Optimizing an in Situ Bioremediation Technology to Manage Perchlorate-Contaminated Groundwater

    Combining horizontal flow treatment wells (HFTWs) with in situ biodegradation is an innovative approach with the potential to remediate perchlorate-contaminated groundwater. A technology model was recently developed that combines the groundwater flow induced by HFTWs with the in situ biodegradation processes that result from using the HFTWs to mix electron donor into perchlorate-contaminated groundwater. A field demonstration of this approach is planned to begin this year. In order to apply the technology in the field, project managers need to understand how contaminated site conditions and technology design parameters impact technology performance. One way to gain this understanding is to use the technology model to select engineering design parameters that optimize performance under given site conditions. In particular, a project manager wants to design a system that: 1) maximizes perchlorate destruction; 2) minimizes treatment expense; and 3) attains regulatory limits on downgradient contaminant concentrations. Unfortunately, for a relatively complex technology with a number of engineering design parameters to determine, as well as multiple objectives, system optimization is not straightforward. In this study, a multi-objective genetic algorithm (MOGA) is used to determine design parameter values (flow rate, well spacing, concentration of injected electron donor, and injection schedule) that optimize the first two objectives noted above: maximizing perchlorate destruction while minimizing cost. Four optimization runs are performed, using two different remediation time spans (300 and 600 days) for two different sets of site conditions. Results from all four optimization runs indicate that the relationship between perchlorate mass removal and operating cost is positively correlated and nonlinear.
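    A MOGA returns not a single design but a Pareto front of trade-offs between the two objectives. Below is a minimal sketch of the non-dominated filtering step at its core; the candidate designs and objective values are hypothetical and are not taken from the study.

```python
def dominates(a, b):
    """Design a dominates design b if it removes at least as much perchlorate mass
    at no greater cost, and is strictly better in at least one objective.
    Each design is a tuple: (mass_removed, cost)."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(designs):
    """Keep only the non-dominated (mass_removed, cost) trade-offs."""
    return [d for d in designs if not any(dominates(other, d) for other in designs)]

# Hypothetical (mass removed [kg], operating cost [$k]) outcomes from one MOGA generation.
candidates = [(120, 80), (150, 140), (100, 60), (150, 120), (90, 90)]
print(pareto_front(candidates))   # -> [(120, 80), (100, 60), (150, 120)]
```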

    Histopathological image analysis: a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging, which complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. The paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Co-evolutionary Hybrid Bi-level Optimization

    Multi-level optimization stems from the need to tackle complex problems involving multiple decision makers. Two-level optimization, referred to as "bi-level optimization", occurs when two decision makers each control only part of the decision variables but affect each other (e.g., objective value, feasibility). Bi-level problems are sequential by nature and can be represented as nested optimization problems in which one problem (the "upper level") is constrained by another (the "lower level"). The nested structure is a real obstacle and can be highly time consuming when the lower level is NP-hard, so classical nested optimization should be avoided. Surrogate-based approaches have been proposed to approximate the lower-level objective value function (or variables) and thereby reduce the number of times the lower level is globally optimized; unfortunately, such a methodology is not applicable to large-scale and combinatorial bi-level problems. After a deep study of theoretical properties and a survey of existing applications that are bi-level by nature, problems which can benefit from a bi-level reformulation are investigated. A first contribution of this work is a novel bi-level clustering approach. Extending the well-known uncapacitated k-median problem, it is shown that clustering can be modeled as a two-level optimization problem using decomposition techniques. The resulting two-level problem is then turned into a bi-level problem offering the possibility to combine distance metrics in a hierarchical manner. The novel bi-level clustering problem has a very interesting property that enables us to tackle it with classical nested approaches: its lower-level problem can be solved in polynomial time. In cooperation with the Luxembourg Centre for Systems Biomedicine (LCSB), this new clustering model has been applied to real datasets such as disease maps (e.g., Parkinson's, Alzheimer's). Using a novel hybrid and parallel genetic algorithm as the optimization approach, the results obtained after a campaign of experiments produce new knowledge compared to classical clustering techniques that combine distance metrics in the usual manner. This bi-level clustering model has the advantage that the lower level can be solved in polynomial time even though the global problem is by definition NP-hard. The next investigations therefore tackle more general bi-level problems in which the lower-level problem does not have any such advantageous properties. Since the lower-level problem can be very expensive to solve, the focus turns to surrogate-based approaches and hyper-parameter optimization techniques with the aim of approximating the lower-level problem and reducing the number of global lower-level optimizations. By adapting the well-known Bayesian optimization algorithm to solve general bi-level problems, the number of expensive lower-level optimizations is dramatically reduced while very accurate solutions are still obtained. The resulting solutions and the number of spared lower-level optimizations are compared against the bi-level evolutionary algorithm based on quadratic approximations (BLEAQ) after a campaign of experiments on official bi-level benchmarks. Although both approaches are very accurate, the bi-level Bayesian version requires fewer lower-level objective function calls.
    Surrogate-based approaches are restricted to small-scale, continuous bi-level problems, although many real applications are combinatorial by nature. As for continuous problems, a study has been performed to apply machine learning strategies; instead of approximating the lower-level solution value, new approximation algorithms are designed for the discrete/combinatorial case. Using the principle employed in GP hyper-heuristics, heuristics are trained to tackle the NP-hard lower level of bi-level problems efficiently. This automatic generation of heuristics makes it possible to break the nested structure into two separate phases: training lower-level heuristics, and solving the upper-level problem with the new heuristics. A second modeling contribution is introduced at this point: a novel large-scale, mixed-integer bi-level problem dealing with pricing in the cloud, the Bi-level Cloud Pricing Optimization Problem (BCPOP). After a series of experiments that consist in training heuristics on various lower-level instances of the BCPOP and using them to tackle the bi-level problem itself, the results are compared to the "cooperative coevolutionary algorithm for bi-level optimization" (COBRA). Although training heuristics breaks the nested structure, a two-phase optimization is still required. The emphasis is therefore put on training heuristics while optimizing the upper-level problem using competitive co-evolution. Instead of adopting the classical decomposition scheme used by COBRA, which suffers from the strong epistatic links between lower-level and upper-level variables, co-evolving the solution and the means to reach it copes with these epistatic issues. The "CARBON" algorithm developed in this thesis is a competitive, hybrid co-evolutionary algorithm designed for this purpose. To validate the potential of CARBON, numerical experiments are designed and the results are compared to state-of-the-art algorithms; these results demonstrate that CARBON makes it possible to address nested optimization efficiently.
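    To make the nested structure concrete, here is a toy sketch of classical nested bi-level optimization, in which every upper-level candidate triggers a full lower-level solve; this is exactly the cost that the surrogate-based and hyper-heuristic approaches above try to avoid. All problem data are invented for illustration.

```python
# Toy bi-level problem (all data invented for illustration):
# the leader picks x, the follower then picks the y that is best for the follower,
# and the leader is evaluated on the resulting (x, y) pair.

X_CHOICES = range(5)          # upper-level (leader) decisions
Y_CHOICES = range(5)          # lower-level (follower) decisions

def follower_objective(x, y):
    return (y - x) ** 2 + y   # the follower minimizes this for the given x

def leader_objective(x, y):
    return x + 2 * y          # the leader minimizes this, anticipating the follower

def solve_lower_level(x):
    """Full lower-level solve for one upper-level candidate (the expensive step)."""
    return min(Y_CHOICES, key=lambda y: follower_objective(x, y))

def nested_bilevel():
    """Classical nested scheme: one complete lower-level solve per upper-level candidate."""
    best = None
    for x in X_CHOICES:
        y = solve_lower_level(x)              # follower's rational reaction to x
        value = leader_objective(x, y)
        if best is None or value < best[0]:
            best = (value, x, y)
    return best

print(nested_bilevel())   # -> (leader value, leader decision x, follower response y)
```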

    Creation and Application of Various Tools for the Reconstruction, Curation, and Analysis of Genome-Scale Models of Metabolism

    Systems biology uses mathematical tools, modeling, and analysis for the holistic understanding and design of biological systems, allowing the investigation of metabolism and the generation of actionable hypotheses based on model analyses. Detailed here are several systems biology tools for model reconstruction, curation, analysis, and application through synthetic biology. The first, OptFill, is a holistic (whole-model) and conservative (minimizing change) tool to aid in genome-scale model (GSM) reconstructions by filling metabolic gaps caused by a lack of system knowledge. This is accomplished through Mixed Integer Linear Programming (MILP), one step of which may also be used independently as an additional curation tool. OptFill is applied to a GSM reconstruction of the melanized fungus Exophiala dermatitidis, which underwent various analyses investigating pigmentogenesis and similarity to human melanogenesis. These analyses suggest that carotenoids serve a currently unknown function in E. dermatitidis and that E. dermatitidis could serve as a model of human melanocytes for biomedical applications. Next, a new approach to dynamic Flux Balance Analysis (dFBA) is detailed, the Optimization- and Runge-Kutta-based Approach (ORKA). The ORKA is applied to the model plant Arabidopsis thaliana to show its ability to recreate in vivo observations. The analyzed model is more detailed than previous models, encompassing a larger time scale, modeling more tissues, and achieving higher accuracy. Finally, a pair of tools, the Eukaryotic Genetic Circuit Design (EuGeneCiD) and Modeling (EuGeneCiM) tools, is introduced to aid in the design and modeling of synthetic biology applications hypothesized using systems biology. These tools bring a computational approach to synthetic biology and are applied to Arabidopsis thaliana to design thousands of potential two-input genetic circuits that satisfy 27 different input and logic gate combinations; EuGeneCiM is further used to model a repressilator circuit. Efforts are ongoing to disseminate these tools to maximize their impact on the field of systems biology. Future research will include further investigation of E. dermatitidis through modeling and expanding my expertise to kinetic models of metabolism. Advisor: Rajib Sah
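    Flux balance analysis, the linear-programming core underlying genome-scale model tools such as OptFill and the dFBA approach above, maximizes a flux objective subject to steady-state mass balance and flux bounds. The toy network below is invented for illustration and is not the E. dermatitidis or A. thaliana model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA) problem; the network is invented for illustration.
# Reactions: R1: -> A,  R2: A -> B,  R3: B -> (biomass export)
# Steady state requires S @ v = 0 for the internal metabolites A and B.
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])

lower = [0, 0, 0]        # all reactions irreversible
upper = [10, 10, 1000]   # uptake (R1) and conversion (R2) capacities limit growth

# linprog minimizes, so negate the objective to maximize flux through R3 (biomass).
c = [0, 0, -1]
result = linprog(c, A_eq=S, b_eq=np.zeros(2),
                 bounds=list(zip(lower, upper)), method="highs")

print("optimal fluxes:", result.x)        # expected ~ [10, 10, 10]
print("max biomass flux:", -result.fun)   # expected ~ 10
```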

    Air Force Institute of Technology Research Report 2007

    This report summarizes the research activities of the Air Force Institute of Technology’s Graduate School of Engineering and Management. It describes research interests and faculty expertise; lists student theses/dissertations; identifies research sponsors and contributions; and outlines the procedures for contacting the school. Included in the report are: faculty publications, conference presentations, consultations, and funded research projects. Research was conducted in the areas of Aeronautical and Astronautical Engineering, Electrical Engineering and Electro-Optics, Computer Engineering and Computer Science, Systems and Engineering Management, Operational Sciences, Mathematics, Statistics and Engineering Physics