
    Multi-Objective and Multi-Attribute Optimisation for Sustainable Development Decision Aiding

    Optimization is a decision-making process for getting the most out of available resources and achieving the best attainable results. Many real-world problems are multi-objective or multi-attribute problems that naturally involve several competing objectives to be optimized simultaneously, while respecting constraints or selecting among feasible discrete alternatives. In this Reprint of the Special Issue, 19 research papers co-authored by 88 researchers from 14 different countries explore aspects of multi-objective or multi-attribute modeling and optimization in crisp or uncertain environments, proposing multiple-attribute decision-making (MADM) and multi-objective decision-making (MODM) approaches. The papers elaborate on state-of-the-art approaches and case studies in selected application areas related to sustainable development decision aiding in engineering and management, including construction, transportation, infrastructure development, production, and organization management

    Differential Evolution: A Survey and Analysis

    Differential evolution (DE) has been extensively used in optimization studies since its development in 1995 because of its reputation as an effective global optimizer. DE is a population-based metaheuristic technique that develops numerical vectors to solve optimization problems. DE strategies have a significant impact on DE performance and play a vital role in achieving stochastic global optimization. However, DE is highly dependent on the control parameters involved. In practice, the fine-tuning of these parameters is not always easy. Here, we discuss the improvements and developments that have been made to DE algorithms. In particular, we present a state-of-the-art survey of the literature on DE and its recent advances, such as the development of adaptive, self-adaptive and hybrid techniques. http://dx.doi.org/10.3390/app810194
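    The survey concerns the standard DE scheme, so a minimal sketch of the classic DE/rand/1/bin variant may be a useful reference point; it exposes the control parameters F (differential weight) and CR (crossover rate) whose tuning the abstract calls difficult. The sphere objective and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sphere(x):
    # Placeholder objective; any minimization problem over a box works here.
    return float(np.sum(x ** 2))

def de_rand_1_bin(obj, dim=10, pop_size=30, F=0.5, CR=0.9,
                  bounds=(-5.0, 5.0), generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([obj(ind) for ind in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # DE/rand/1 mutation: three mutually distinct vectors, none equal to i
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            # Binomial crossover, keeping at least one component from the mutant
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.clip(np.where(mask, mutant, pop[i]), lo, hi)
            # Greedy selection between parent and trial vector
            f_trial = obj(trial)
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

x_best, f_best = de_rand_1_bin(sphere)
print(f_best)  # should be close to 0 for the sphere function
```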

    Large-Scale Evolutionary Optimization Using Multi-Layer Strategy Differential Evolution

    Differential evolution (DE) has been extensively used in optimization studies since its development in 1995 because of its reputation as an effective global optimizer. DE is a population-based meta-heuristic technique that develops numerical vectors to solve optimization problems. DE strategies have a significant impact on DE performance and play a vital role in achieving stochastic global optimization. However, DE is highly dependent on the control parameters involved, and in practice the fine-tuning of these parameters is not always easy. Here, we discuss the improvements and developments that have been made to DE algorithms and present the Multi-Layer Strategies Differential Evolution (MLSDE) algorithm, which finds optimal solutions for large-scale problems. To solve large-scale problems, different strategies are grouped together and applied to the data set; furthermore, these strategies are applied to selected vectors to strengthen the exploration ability of the algorithm. Extensive computational analysis was carried out to evaluate the performance of the proposed algorithm on the set of well-known CEC 2015 benchmark functions, which were used for its assessment and performance evaluation
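    The abstract does not specify exactly how MLSDE groups and layers its strategies, so the fragment below only illustrates the underlying idea of applying different DE mutation strategies to selected vectors; the three strategies shown and the helper names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def mutate(pop, i, best_idx, F, rng, strategy):
    # Pick three distinct donor vectors, all different from the target vector i.
    idx = [j for j in range(len(pop)) if j != i]
    a, b, c = pop[rng.choice(idx, size=3, replace=False)]
    if strategy == "rand/1":
        return a + F * (b - c)
    if strategy == "best/1":
        return pop[best_idx] + F * (a - b)
    # "current-to-best/1": pull the target vector towards the current best one.
    return pop[i] + F * (pop[best_idx] - pop[i]) + F * (a - b)

# One possible way to assign strategies to subsets ("layers") of the population:
# the first third explores with rand/1, later layers exploit the best-so-far vector.
def strategy_for(i, pop_size):
    if i < pop_size // 3:
        return "rand/1"
    if i < 2 * pop_size // 3:
        return "best/1"
    return "current-to-best/1"
```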

    Evolutionary Computation

    This book presents several recent advances in Evolutionary Computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. The book also presents new algorithms based on several analogies and metaphors, one of which draws on philosophy, specifically the philosophy of praxis and dialectics. It also presents interesting applications in bioinformatics, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. The book therefore features representative work in the field of evolutionary computation and applied sciences. The intended audience includes graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research work in this field

    Automated calibration of a Carbon dynamic model for lakes and reservoirs : (calibração automática de um modelo de dinâmica de Carbono em lagos e reservatórios)

    Advisor: Michael Mannich. Co-advisor: Cristóvão Vicente Scapulatempo Fernandes. Master's dissertation - Universidade Federal do Paraná, Setor de Tecnologia, Graduate Program in Water Resources and Environmental Engineering. Defense: Curitiba, 16/03/2017. Includes references and appendices. Abstract: The low availability of measured greenhouse gas (GHG) fluxes for lakes and reservoirs, coupled with uncertainties regarding extrapolating total reservoir emissions from point measurements, results in inaccurate conclusions regarding the role of reservoirs in the global climate.
The Carbon Cycle in Lakes and Reservoirs (CICLAR) model is used to study the potential contributions, through carbon dioxide (CO2) and methane (CH4) emissions, of the Capivari reservoir, Brazil, since its construction in 1970. The model is structured in compartments for different carbon forms, such as dissolved inorganic carbon (DIC) and live particulate organic carbon (POCL), and models chemical mass-transfer processes between compartments as first-order and saturation reactions controlled by numerical parameters. The values of these parameters are calibrated by minimizing the differences between observed and modeled data through an optimization algorithm. The Combined Pareto Multi-objective Particle Swarm Optimization (CPMOPSO) metaheuristic algorithm, which combines leader selection, mutation and subswarm techniques, is developed and successfully used as the optimization technique. The automated calibration algorithm uses data originating from the manual calibration. Four calibration scenarios are used to analyze the impact of data distribution on the calibration results: the evaluative scenario, which uses the first 30 years of reservoir data to calibrate the model and the final 15 years to validate it; and the retrospective, prospective and ideal scenarios, which use 9 years of data distributed in different ways. The evaluative scenario is used to assess the quality of the calibration results, which successfully fit the validation data. The retrospective and prospective scenarios are used to analyze the performance of the calibration under unevenly spread data, and the results show that the model tends to overestimate methane emissions when poorly distributed data are used. The calibration under the ideal scenario obtains better results and shows that having evenly spread data has a bigger impact on calibration results than having larger amounts of data. All calibrated solutions for all scenarios present Nash-Sutcliffe coefficient values higher than 0.95 for the calibration period. The cumulative distributions of average Global Warming Potential (GWP) indexes show that most calibrated solutions classify the Capivari reservoir as a sink for equivalent carbon dioxide, absorbing up to 90 Gg of CO2 eq. Alternative carbon stock estimates are used to calibrate the model in a setting in which the results cannot be validated because no previous solutions are known. Further considerations are drawn regarding the application of uncertainty analysis and Bayesian aggregation methods to better assess multiple sets of parameters. Keywords: Mathematical modeling. Carbon dynamics. Greenhouse gases. Global warming potential. Particle swarm optimization. Pareto dominance
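    For reference, the Nash-Sutcliffe coefficient used above as the calibration quality measure has a standard definition; the sketch below follows that definition, and the sample arrays are placeholders rather than data from the thesis.

```python
import numpy as np

def nash_sutcliffe(observed, modeled):
    # NSE = 1 - sum((obs - mod)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit.
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return 1.0 - np.sum((observed - modeled) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = [1.2, 0.9, 1.5, 1.1]   # hypothetical observed values
mod = [1.1, 1.0, 1.4, 1.2]   # hypothetical modeled values
print(nash_sutcliffe(obs, mod))  # values above 0.95 indicate a very good fit
```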

    Integration of High Voltage AC/DC Grids into Modern Power Systems

    Electric power transmission relies on AC and DC grids. The extensive integration of conventional and nonconventional energy sources and power converters into power grids has resulted in a demand for high voltage (HV), extra-high voltage (EHV), and ultra-high voltage (UHV) AC/DC transmission grids in modern power systems. To ensure the security, adequacy, and reliable operation of power systems, the practical aspects of interconnecting HV, EHV, and UHV AC/DC grids with electric power systems, along with their economic and environmental impacts, should be considered. Stability analysis for the planning and operation of HV, EHV, and UHV AC/DC grids is another key issue in modern power systems. Moreover, interactions between power converters and other power electronics devices installed on the network (e.g., FACTS devices) are further aspects that must be addressed. This Special Issue investigates the integration of HV, EHV, and UHV AC/DC grids into modern power systems by analyzing their control, operation, protection, dynamics, planning, reliability, and security, while also considering power quality improvement, market operations, power conversion, cybersecurity, and supervisory, monitoring, diagnostic, and prognostic systems

    Ecodesign of large-scale photovoltaic (PV) systems with multi-objective optimization and Life-Cycle Assessment (LCA)

    Because of the increasing worldwide demand for energy and the extensive damage caused by heavy use of fossil sources, the contribution of renewable energies to the global energy mix has been increasing significantly, with the aim of moving towards more sustainable development. In this context, this work develops a general methodology for designing PV systems based on ecodesign principles, taking into account techno-economic and environmental considerations simultaneously. To evaluate the environmental performance of PV systems, an environmental assessment technique based on Life Cycle Assessment (LCA) was used. The environmental model was successfully coupled with the design-stage model of a PV grid-connected system (PVGCS). The PVGCS design model involves the estimation of the solar radiation received at a specific geographic location, the calculation of the annual energy generated from that radiation, the characteristics of the different components, and the evaluation of techno-economic criteria through Energy PayBack Time (EPBT) and PayBack Time (PBT). The performance model was then embedded in an outer multi-objective genetic algorithm optimization loop based on a variant of NSGA-II. A set of Pareto solutions was generated, representing the optimal trade-offs between the objectives considered in the analysis. A multi-variable statistical method (Principal Component Analysis, PCA) was then applied to detect and omit redundant objectives that could be left out of the analysis without disturbing the main features of the solution space. Finally, a decision-making tool based on M-TOPSIS was used to select the alternative that provides the best compromise among all the objective functions investigated. The results show that while PV modules based on c-Si have better performance in energy generation, their environmental impact pushes them to the last positions; TF PV modules present the best trade-off in all scenarios under consideration. Special attention was paid to the recycling process of PV modules, even though there is not yet enough information available for all the technologies evaluated, mainly because of the long lifetime of PV modules. Data on the recycling processes for m-Si and CdTe PV technologies were introduced into the optimization procedure for ecodesign. By considering energy production and EPBT as optimization criteria in bi-objective optimization cases, the importance of the benefits of end-of-life management of PV modules was confirmed. An economic study of the recycling strategy should be carried out in order to obtain a more comprehensive view for decision making
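    EPBT and PBT, the two techno-economic criteria named above, have standard definitions (embodied life-cycle energy divided by annual energy output, and investment cost divided by annual net revenue); the sketch below uses those textbook definitions with invented figures and is not the thesis's own model.

```python
def energy_payback_time(embodied_energy_kwh, annual_energy_output_kwh):
    # Years needed for the system to generate the energy spent to build it.
    return embodied_energy_kwh / annual_energy_output_kwh

def payback_time(investment_cost, annual_revenue, annual_op_cost=0.0):
    # Years needed for net revenue to recover the initial investment.
    return investment_cost / (annual_revenue - annual_op_cost)

# Hypothetical figures for a small grid-connected plant (illustration only).
print(energy_payback_time(embodied_energy_kwh=4.5e6, annual_energy_output_kwh=1.5e6))  # 3.0 years
print(payback_time(investment_cost=2.0e6, annual_revenue=3.0e5, annual_op_cost=5.0e4))  # 8.0 years
```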

    Selection of micromilling conditions for improved productivity and part quality

    The micromilling process has seen rising demand in recent years as the production industry becomes more competitive with advancing technology. In high-tech industries such as aerospace, biomedical, and electronics, the increasing demand for highly precise micro products with complex geometries creates the need to improve micro machining processes for more accurate, repeatable, and efficient production. The micro end milling process is a relatively new and developing research area. It differs from conventional milling in its unique cutting dynamics, which result from the geometrical size reduction. For both the micro tool and the workpiece, size reduction brings new difficulties to the process. The ratio of the hone radius to the uncut chip thickness creates a great difference between micro and conventional milling and precludes the application of conventional milling models to the micromilling process. Given the nature of micromilling and the industries in which it is used, high accuracy and surface quality are expected in micro products, which makes surface roughness and burr formation critical for the micromilling process. On the other hand, since micro tools with very small radii are more fragile than conventional milling tools, workpiece-tool contact and cutting forces are also of remarkable importance, as are tool costs and tool life. All of these conditions make pre-analysis and parameter selection essential for micro end milling. The main aim of this research is to determine micromilling parameters and conditions for improved productivity and part quality by considering multiple constraints and objectives at a time, unlike previous studies on micro end milling process optimization, which are limited and focus on optimizing one objective at a time. The objectives in this study are production time and production cost. Production time involves the actual cutting time, which requires the calculation of the material removal rate (MRR), as well as tool idling and changing time and time spent without any cutting. Production cost includes tool costs, which are related to tool life, machine idling costs, and labour and overhead costs. Production time and cost are minimized while respecting given limits on cutting forces, burr size and surface quality. The effects of the parameters cutting speed, feed rate, and depth of cut on these objectives and constraints were investigated. For the first time, parameter selection in the micro end milling process is carried out through multi-objective optimization using the Particle Swarm Optimization method. Optimal process parameters are proposed for minimum process cost and minimum process time
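    As a rough illustration of the production-time objective described above, the sketch below combines a textbook material removal rate relation with idle and tool-change time; the relations and all numeric values are assumptions for illustration, not the thesis's exact time and cost models.

```python
def material_removal_rate(feed_mm_per_min, width_of_cut_mm, depth_of_cut_mm):
    # MRR in mm^3/min for a slotting-type milling pass.
    return feed_mm_per_min * width_of_cut_mm * depth_of_cut_mm

def production_time(volume_mm3, mrr_mm3_per_min, idle_min, tool_change_min, tool_life_min):
    cutting = volume_mm3 / mrr_mm3_per_min          # actual cutting time
    tool_changes = cutting / tool_life_min          # expected number of tool changes
    return cutting + idle_min + tool_changes * tool_change_min

mrr = material_removal_rate(feed_mm_per_min=120.0, width_of_cut_mm=0.5, depth_of_cut_mm=0.05)
print(production_time(volume_mm3=30.0, mrr_mm3_per_min=mrr,
                      idle_min=2.0, tool_change_min=1.5, tool_life_min=20.0))
```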

    Adaptability of metabolic networks in evolution and disease

    There are 114,101 small-molecule metabolites currently annotated in the Human Metabolome Database. They are highly connected amongst each other, with a few metabolites exhibiting an estimated number of more than 10^3 connections. Redundancy and plasticity are essential features of metabolic networks, enabling cells to respond to fluctuating environments, the presence of toxic molecules, or genetic perturbations such as mutations. These system-level properties are inevitably linked to all aspects of biological systems, ensuring cell viability by enabling processes like adaptation and differentiation. To this end, the ability to interrogate molecular changes at the omics level has opened new opportunities to study the cell at its different layers, from the epigenome and transcriptome to its proteome and metabolome. In this thesis, I tackled the question of how redundancy and plasticity shape adaptation in metabolic networks in evolutionary and disease contexts. I utilize a multi-omics approach to comprehensively study the metabolic state of a cell and its regulation at the transcriptional and proteomic levels. One of the challenges with multi-omics approaches is the integration and interpretation of multi-layered data sets. To approach this challenge, I use genome-scale metabolic models as a knowledge-based scaffold on which to overlay omics data and thereby enable biological interpretation beyond statistical correlation. This integrative methodology has been applied to two different projects, namely the evolutionary adaptation towards a nutrient source in yeast and the metabolic adaptations following disease progression. For the latter, I also curated a current human genome-scale metabolic model and made it more suitable for flux predictions. In the yeast case study, I investigate the metabolic network adaptations enabling yeast to grow on an alternative carbon source, glycerol. I could show that network redundancy is one of the key features enabling fast adaptation of the yeast metabolic network to the new nutrient environment. Genomics, transcriptomics, proteomics, metabolomics and metabolic modeling together revealed a shift of the organism's redox balance under glycerol consumption as a driving force of adaptation, which can be linked to the causal mutation in the enzyme Kgd1. On the other hand, the limitations of metabolic network adaptation also became apparent, since all evolved and adapted strains exhibited metabolic trade-offs in environmental conditions other than the adaptation niche. Either an impaired diauxic shift (as in the case of the glycerol mutant) or an increased sensitivity towards osmotic stress (caused by mutations in the HOG pathway) was coupled with efficient use of glycerol. In the second project, the molecular phenotype of regressed breast cancer cells was studied to identify what differentiates these cells from healthy breast tissue and to characterize the potential source of tumor recurrence. Using a breast cancer mouse model with inducible oncogenes, transcriptomics together with an extensive set of different types of metabolomics (targeted and untargeted metabolomics, lipidomics and fluxomics) showed that regressed cancer cells, despite their apparently normal morphology, possess a highly altered molecular phenotype with an oncogenic memory.
While in cancer redundancy and plasticity enable adaptation towards a proliferative state, in regressed cells prolonged oncogenic signaling leads, on the contrary, to a loss of metabolic network regulation and entry into an irreversible metabolic state. This state appears to be insensitive to adaptation mechanisms, as transcripts and metabolites reciprocally enhance each other to maintain the tumor-like metabolic phenotype. In conclusion, this work demonstrates how genome-scale metabolic models can help identify functional mechanisms from complex and multi-layered omics data. Appropriate genome-scale metabolic models combined with metabolite measurements have proven particularly useful in this context. The comprehensive understanding of all integrated aspects of a cell's physiology is a challenging endeavor, and the results of this thesis might stimulate further research towards this goal
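    Since the thesis relies on genome-scale metabolic models for flux predictions, a toy flux balance analysis sketch may make the idea concrete: maximize a biomass flux subject to the steady-state mass balance S v = 0 and flux bounds. The three-reaction network below is invented for illustration and is not the curated human model mentioned above.

```python
import numpy as np
from scipy.optimize import linprog

# Reactions: R1 uptake (-> A), R2 conversion (A -> B), R3 biomass drain (B ->)
S = np.array([
    [1.0, -1.0,  0.0],   # mass balance for metabolite A
    [0.0,  1.0, -1.0],   # mass balance for metabolite B
])
c = np.array([0.0, 0.0, -1.0])           # maximize v3 (linprog minimizes, hence the sign flip)
flux_bounds = [(0, 10), (0, 10), (0, 10)]

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=flux_bounds)
print(res.x)  # optimal flux distribution; here all three fluxes hit the upper bound
```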