582 research outputs found

    Improved dynamical particle swarm optimization method for structural dynamics

    Get PDF
    A methodology for the multiobjective structural design of buildings based on an improved particle swarm optimization algorithm is presented, which has proved to be very efficient and robust in nonlinear problems and when the optimization objectives are in conflict. In particular, the behaviour of the classical particle swarm optimization (PSO) algorithm is improved by dynamically adding self-adaptive mechanisms that enhance the exploration/exploitation trade-off and the diversity of the proposed algorithm, avoiding entrapment in local minima. A novel integrated optimization system, called DI-PSO, was developed to solve this problem; it is able to control and even improve the structural behaviour under seismic excitations. To demonstrate the effectiveness of the proposed approach, the methodology is first tested against benchmark problems. A 3-story building model is then optimized under different objective cases, leading to the conclusion that the improved multiobjective optimization methodology using DI-PSO is more efficient than designs obtained using single-objective optimization.
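    The abstract does not reproduce the DI-PSO update rules, but the mechanism it builds on, a PSO whose inertia weight is adapted over the run to trade exploration for exploitation, can be sketched as below; all names and parameter values are illustrative, not taken from the paper.

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200):
    """Minimize f over a box with PSO; the inertia weight decays
    linearly so the swarm explores early and exploits late."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    c1 = c2 = 1.5                      # cognitive/social coefficients (illustrative)
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters      # dynamic inertia: 0.9 -> 0.4
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:     # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage on the sphere function:
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0))
print(best_val)
```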

    Genetic and Swarm Algorithms for Optimizing the Control of Building HVAC Systems Using Real Data: A Comparative Study.

    Get PDF
    Buildings consume a considerable amount of electrical energy, the Heating, Ventilation, and Air Conditioning (HVAC) system being the most demanding. Saving energy while maintaining comfort remains a challenge, as the two objectives conflict. The control of HVAC systems can be improved by modeling their behavior, which is nonlinear, complex, and dynamic and operates in uncertain contexts. The scientific literature shows that Soft Computing techniques require fewer computing resources, at the expense of some controlled loss of accuracy. Metaheuristic search algorithms show positive results, although further research will be necessary to resolve new challenging multi-objective optimization problems. This article compares the performance of selected genetic and swarm-intelligence-based algorithms with the aim of discerning their capabilities in the field of smart buildings. MOGA, NSGA-II/III, OMOPSO, and SMPSO, with Random Search as a benchmark, are compared on hypervolume, generational distance, ε-indicator, and execution time. Real data from the Building Management System of Teatro Real de Madrid have been used to train the data model used for the multi-objective calculations. Besides analyzing the proposed dynamic optimization algorithms during the transient time of an HVAC system, a further novelty is the addition of two objectives, the coefficient of performance and the rate of change in ambient temperature, to the conventional objectives of comfort and energy efficiency, aiming to extend the equipment lifecycle and minimize overshooting when passing to the steady state. The optimization works impressively well for energy savings, although the results must be balanced with other real considerations, such as realistic constraints on chillers' operational capacity. The intuitive visualization of the performance of the two families of algorithms in a real multi-HVAC system increases the novelty of this proposal.
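    The quality indicators used in the comparison are standard; as a concrete reference point, a minimal 2-D hypervolume computation for a minimization front can be written as below (an illustration, not the study's implementation).

```python
def hypervolume_2d(front, ref):
    """Hypervolume (area dominated with respect to a reference point) of
    a 2-D minimization front. Points are (f1, f2) pairs; ref must be
    worse than every point in both objectives."""
    # Keep only non-dominated points, sorted by f1 ascending.
    pts = sorted(front)
    nd = []
    best_f2 = float("inf")
    for f1, f2 in pts:
        if f2 < best_f2:          # strictly better in f2 => non-dominated
            nd.append((f1, f2))
            best_f2 = f2
    # Sum the rectangles between consecutive non-dominated points.
    area = 0.0
    prev_f2 = ref[1]
    for f1, f2 in nd:
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (2.5, 3.5)]  # last point is dominated
print(hypervolume_2d(front, ref=(5.0, 5.0)))              # -> 12.0
```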

    On the Complexities of the Design of Water Distribution Networks

    Full text link
    Water supply is one of the most recognizable and important public services contributing to quality of life. Water distribution networks (WDNs) are extremely complex assets, and a number of complex tasks, such as design, planning, operation, maintenance, and management, are inherently associated with them. In this paper, we focus on the design of a WDN, which is a wide and open problem in hydraulic engineering. This problem is a large-scale combinatorial, nonlinear, nonconvex, multiobjective optimization problem involving various types of decision variables and many complex implicit constraints. To handle it, we provide a synergetic association between swarm intelligence and multiagent systems in which human interaction is also enabled. This results in a powerful collaborative system for finding solutions to such a complex hydraulic engineering problem. All the ingredients have been integrated into a software tool that has also been shown to efficiently solve problems from other engineering fields.
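    As a hedged illustration of the kind of formulation the abstract describes, the sketch below scores a candidate WDN design: each pipe takes one of a few discrete diameters, cost grows with diameter, and pressure deficits at demand nodes are penalized. The hydraulic solver is stubbed out, and every name and number is hypothetical rather than taken from the paper.

```python
DIAMETERS_MM = [100, 150, 200, 250, 300]                     # discrete decision values
UNIT_COST = {100: 20, 150: 32, 200: 50, 250: 75, 300: 105}   # $/m, illustrative

def design_cost(diams, lengths):
    """Pipe cost of a candidate design: one diameter choice per pipe."""
    return sum(UNIT_COST[d] * l for d, l in zip(diams, lengths))

def pressure_penalty(pressures, p_min=20.0, weight=1e4):
    """Penalize nodes whose simulated pressure falls below the minimum."""
    return weight * sum(max(0.0, p_min - p) for p in pressures)

def fitness(diams, lengths, simulate_pressures):
    # simulate_pressures stands in for a hydraulic solver run returning
    # nodal pressures for this design (an assumption, not the paper's tool).
    pressures = simulate_pressures(diams)
    return design_cost(diams, lengths) + pressure_penalty(pressures)

# Toy usage with a fake solver that returns fixed pressures:
lengths = [300.0, 450.0, 250.0]                  # pipe lengths in metres
candidate = [150, 200, 100]                      # one diameter per pipe
fake_solver = lambda diams: [24.0, 21.5, 18.2]   # stub: pretend nodal pressures
print(fitness(candidate, lengths, fake_solver))
```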

    Bio-inspired optimization algorithms for multi-objective problems

    Get PDF
    Advisor: Aurora Trinidad Ramirez Pozo. Co-advisor: Roberto Santana Hermida. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 06/03/2017. Includes references: f. 161-72. Area of concentration: Computer Science. Abstract: Multi-Objective Problems (MOPs) are characterized by having two or more objective functions to be simultaneously optimized. In these problems, the goal is to find a set of non-dominated solutions, usually called the Pareto optimal set, whose image in the objective space is called the Pareto front.
MOPs presenting more than three objective functions to be optimized are known as Many-Objective Problems (MaOPs), and several studies indicate that the search ability of Pareto-based algorithms is severely deteriorated in such problems. The development of bio-inspired optimizers to tackle MOPs and MaOPs is a field that has been gaining attention in the community; however, there are many opportunities to innovate. Multi-objective Particle Swarm Optimization (MOPSO) is one of the bio-inspired algorithms suitable to be modified and improved, mostly due to its simplicity, flexibility, and good results. To enhance the search ability of MOPSOs, we followed two different research lines. The first focuses on leader and archiving methods. Previous works have pointed out that these components can influence the algorithm's performance; however, the selection of these components can be problem-dependent. An alternative is to select them dynamically by employing hyper-heuristics. By combining hyper-heuristics and MOPSO, we developed a new framework called H-MOPSO. The second research line is also based on previous work of the group on multi-swarm approaches. This is done by selecting as the base framework the iterated multi-swarm (I-Multi) algorithm, whose search procedure can be divided into diversity and multi-swarm searches, the latter employing clustering to split a swarm into several sub-swarms. In order to improve the performance of I-Multi, we explored two possibilities: the first was to further investigate the effect of different characteristics of the clustering mechanism of I-Multi; the second was to investigate alternatives to improve the convergence of each sub-swarm, such as hybridizing it with an Estimation of Distribution Algorithm (EDA). This work on EDAs increased our interest in the approach, hence we followed another research line, investigating alternatives to create multi-objective versions of one of the most powerful EDAs in the literature, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). To validate our work, several empirical studies were conducted to investigate the search ability of the proposed approaches. In all studies, the investigated algorithms reached results competitive with or better than well-established algorithms from the literature. Keywords: multi-objective, estimation of distribution algorithms, particle swarm optimization, multi-swarm, hyper-heuristics.
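    One building block common to the leader and archiving methods discussed here is the external archive of mutually non-dominated solutions maintained by Pareto-based MOPSOs. A generic sketch of the dominance test and archive update (not the thesis code):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert a candidate objective vector into an external archive,
    keeping only mutually non-dominated entries."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive                       # candidate is dominated: reject
    archive = [kept for kept in archive if not dominates(candidate, kept)]
    archive.append(candidate)
    return archive

archive = []
for point in [(3, 4), (2, 5), (4, 1), (2, 4), (5, 5)]:
    archive = update_archive(archive, point)
print(archive)   # -> [(4, 1), (2, 4)]
```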

    Multi-Criteria Performance Evaluation and Control in Power and Energy Systems

    Get PDF
    The role of intuition and human preferences is often overlooked in the autonomous control of power and energy systems. However, the growing operational diversity of many systems, such as microgrids, electric/hybrid-electric vehicles, and maritime vessels, has created a need for more flexible control and optimization methods. In order to develop such flexible control methods, the role of human decision makers and their desired performance metrics must be studied in power and energy systems. This dissertation investigates the concept of multi-criteria decision making as a gateway to integrate human decision makers and their opinions into complex mathematical control laws. This research takes two major steps to algorithmically integrate human preferences into control environments. MetaMetric (MM) performance benchmark: considering the interrelations of mathematical and psychological convergence, and the potential conflict of opinion between the control designer and end user, a novel holistic performance benchmark, denoted MM, is developed to evaluate control performance in real time. MM uses sensor measurements and implicit human opinions to construct a unique criterion that benchmarks the system's performance characteristics. MM decision support system (DSS): the concept of MM is incorporated into multi-objective evolutionary optimization algorithms as their DSS. The DSS's role is to guide and sort the optimization decisions such that they reflect the best outcome desired by the human decision maker and mathematical considerations. A diverse set of case studies, including a ship power system, a terrestrial power system, and a vehicular traction system, is used to validate the approaches proposed in this work. Additionally, the MM DSS is designed in a modular way such that it is not specific to any underlying evolutionary optimization algorithm.
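    The MM benchmark itself is not specified in the abstract; the sketch below shows only the generic multi-criteria ingredient such a DSS builds on, ranking candidate solutions by a preference-weighted sum of normalized metrics. All names, metrics, and weights are hypothetical.

```python
def normalize(values):
    """Rescale a list of criterion values to [0, 1] (lower is better)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def rank_by_preference(solutions, weights):
    """Rank candidates by a preference-weighted sum of their
    normalized criteria; smaller score ranks first."""
    criteria = list(zip(*[s["metrics"] for s in solutions]))      # per-criterion columns
    norm = list(zip(*[normalize(c) for c in criteria]))           # back to per-solution rows
    scored = [(sum(w * m for w, m in zip(weights, n)), s["name"])
              for n, s in zip(norm, solutions)]
    return sorted(scored)

solutions = [
    {"name": "controller A", "metrics": (0.12, 3.4)},  # (tracking error, settling time)
    {"name": "controller B", "metrics": (0.08, 5.1)},
    {"name": "controller C", "metrics": (0.20, 2.0)},
]
# A user who weights tracking error more heavily ranks B first:
print(rank_by_preference(solutions, weights=(0.7, 0.3)))
```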

    Automated calibration of a Carbon dynamic model for lakes and reservoirs : (calibração automática de um modelo de dinâmica de Carbono em lagos e reservatórios)

    Get PDF
    Advisor: Michael Mannich. Co-advisor: Cristóvão Vicente Scapulatempo Fernandes. Master's thesis, Universidade Federal do Paraná, Setor de Tecnologia, Programa de Pós-Graduação em Engenharia de Recursos Hídricos e Ambiental. Defense: Curitiba, 16/03/2017. Includes references and appendices. Abstract: The low availability of measured greenhouse gas (GHG) fluxes for lakes and reservoirs, coupled with uncertainties regarding extrapolating total reservoir emissions from point measurements, results in inaccurate conclusions regarding the role of reservoirs in the global climate.
The Carbon Cycle in Lakes and Reservoirs (CICLAR) model is used to study the potential contributions, through carbon dioxide (CO2) and methane (CH4) emissions, of the Capivari reservoir, Brazil, since its construction in 1970. The model is structured in compartments for different carbon forms, such as dissolved inorganic carbon (DIC) and live particulate organic carbon (POCL), and models the chemical mass-transfer processes between compartments as first-order and saturation reactions controlled by numerical parameters. The values of these parameters are calibrated by minimizing the differences between observed and modeled data through an optimization algorithm. The Combined Pareto Multi-objective Particle Swarm Optimization (CPMOPSO) metaheuristic algorithm, which combines leader selection, mutation, and subswarm techniques, is developed and successfully used as the optimization technique. The automated calibration algorithm uses data originating from the manual calibration. Four calibration scenarios are used to analyze the impact of data disposition on the calibration results: the evaluative scenario, which uses the initial 30 years of data to calibrate and the final 15 to validate the model; and the retrospective, prospective, and ideal scenarios, which use 9 years of data distributed in different ways. The evaluative scenario is used to assess the quality of the calibration results, which successfully fit the validation data. The retrospective and prospective scenarios are used to analyze the performance of the calibration under unevenly spread data, and the results show that the model had a bias to overestimate methane emissions. The calibration under the ideal scenario is used to show that having evenly spread data has a bigger impact on calibration results than having larger amounts of data. All calibrated solutions for all scenarios present Nash-Sutcliffe coefficient values higher than 0.95 for the calibration period. The cumulative distribution of average Global Warming Potential (GWP) indexes shows that most calibrated solutions estimate that the Capivari reservoir is a sink for equivalent carbon dioxide and that it can absorb up to 90 Gg of CO2 eq. Alternative carbon stock estimations are used to calibrate the model under a framework in which the results cannot be validated because no previous solutions are known. Further considerations are drawn regarding the application of uncertainty analysis and Bayesian aggregation methods to better assess the combination of multiple sets of parameters. Keywords: Mathematical modeling. Carbon dynamics. Greenhouse gases. Global warming potential. Particle swarm optimization. Pareto dominance.
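    The calibration quality reported above is measured with the Nash-Sutcliffe coefficient, which scores model error against the variance of the observations: 1 is a perfect fit, and 0 means the model is no better than the observed mean. A minimal implementation with made-up data:

```python
def nash_sutcliffe(observed, modeled):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - m) ** 2 for o, m in zip(observed, modeled))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

obs = [1.2, 1.9, 3.1, 4.2, 5.0]   # illustrative observations
mod = [1.0, 2.0, 3.0, 4.5, 4.9]   # illustrative model output
print(nash_sutcliffe(obs, mod))   # close to 1 for a good fit
```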

    Coverage Protocols for Wireless Sensor Networks: Review and Future Directions

    Full text link
    The coverage problem in wireless sensor networks (WSNs) can be generally defined as a measure of how effectively a network field is monitored by its sensor nodes. This problem has attracted a lot of interest over the years, and, as a result, many coverage protocols have been proposed. In this survey, we first propose a taxonomy for classifying coverage protocols in WSNs. Then, we classify the coverage protocols into three categories (i.e., coverage-aware deployment protocols, sleep scheduling protocols for flat networks, and cluster-based sleep scheduling protocols) based on the network stage where the coverage is optimized. For each category, relevant protocols are thoroughly reviewed and classified based on the adopted coverage techniques. Finally, we discuss open issues (and recommend future directions to resolve them) associated with the design of realistic coverage protocols. Issues such as realistic sensing models, realistic energy consumption models, realistic connectivity models, and sensor localization are covered.
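    The coverage measure defined above is often made concrete with a Boolean disk sensing model: a field point counts as covered if it lies within the sensing radius of at least one node. A grid-based estimate of the covered fraction (an illustration, not drawn from any surveyed protocol):

```python
import math

def coverage_fraction(sensors, radius, width, height, step=1.0):
    """Fraction of grid points in a width x height field covered by at
    least one sensor under the Boolean disk sensing model."""
    covered = total = 0
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            total += 1
            if any(math.hypot(x - sx, y - sy) <= radius for sx, sy in sensors):
                covered += 1
            x += step
        y += step
    return covered / total

sensors = [(10, 10), (25, 12), (40, 30)]   # illustrative node positions
print(coverage_fraction(sensors, radius=12.0, width=50, height=40))
```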

    Application of nature-inspired optimization algorithms to improve the production efficiency of small and medium-sized bakeries

    Get PDF
    Increasing production efficiency through schedule optimization is one of the most influential topics in operations research that contribute to the decision-making process. It is the concept of allocating tasks among available resources within the constraints of a manufacturing facility in order to minimize costs. It is carried out with a model that represents the real-world task distribution, with variables and relevant constraints, in order to complete a planned production. In addition to a model, an optimizer is required to assist in evaluating and improving the task allocation procedure in order to maximize overall production efficiency. The entire procedure is usually carried out on a computer, where these two distinct segments combine to form a solution framework for production planning and support decision-making in various manufacturing industries. Small and medium-sized bakeries lack access to cutting-edge tools, and most of their production schedules are based on personal experience. This makes a significant difference in production costs when compared to large bakeries, as evidenced by their market dominance. In this study, a hybrid no-wait flow shop model is proposed to produce a production schedule based on actual data, featuring the constraints of the production environment in small and medium-sized bakeries. Several single-objective and multi-objective nature-inspired optimization algorithms were implemented to find efficient production schedules. While makespan is the most widely used quality criterion of production efficiency because it dominates production costs, high oven idle time in bakeries also wastes energy. Combining these quality criteria allows for additional cost reduction through energy savings as well as shorter production times. Therefore, to obtain an efficient production plan, makespan and oven idle time were both included as optimization objectives. To find the optimal production planning for an existing production line, particle swarm optimization, simulated annealing, and the Nawaz-Enscore-Ham algorithm were used. The weighting factor method was used to combine the two objectives into a single objective. The classical optimization algorithms were found to be good enough at finding optimal schedules in a reasonable amount of time, reducing makespan by 29 % and oven idle time by 8 % for one of the analyzed production datasets. Nonetheless, the algorithms' convergence was found to be poor, with a lower probability of obtaining the best or nearly the best result. In contrast, a modified particle swarm optimization (MPSO) proposed in this study demonstrated a significant improvement in convergence, with a higher probability of obtaining better results. To obtain trade-offs between the two objectives, the state-of-the-art multi-objective optimization algorithms non-dominated sorting genetic algorithm (NSGA-II), strength Pareto evolutionary algorithm, generalized differential evolution, improved multi-objective particle swarm optimization (OMOPSO), and speed-constrained multi-objective particle swarm optimization (SMPSO) were implemented. The optimization algorithms provided efficient production planning with up to a 12 % reduction in makespan and a 26 % reduction in oven idle time based on data from different production days. The performance comparison revealed a significant difference between these multi-objective optimization algorithms, with NSGA-II performing best and OMOPSO and SMPSO performing worst.
Proofing is a key processing stage that contributes to the quality of the final product by developing flavor and a fluffy texture in bread. However, the duration of proofing is uncertain due to the complex interaction of multiple parameters: yeast condition, temperature in the proofing chamber, and the chemical composition of the flour. Because of this uncertainty, a production plan optimized for the shortest makespan can be significantly inefficient. The computational results show that the schedules with the shortest and nearly shortest makespan suffer a significant (up to 18 %) increase in makespan when the proofing time deviates from its expected duration. In this thesis, a method for developing resilient production planning that takes uncertain proofing times into account is proposed, so that even if the deviation in proofing time is extreme, the fluctuation in makespan is minimal. The experimental results with a production dataset revealed a proactive production plan whose makespan is only 5 minutes longer than the shortest one but fluctuates by only 21 minutes when the proofing time varies from -10 % to +10 % of the actual proofing time. This study proposes a common framework for small and medium-sized bakeries to improve their production efficiency in three steps: collecting production data, simulating production planning with the hybrid no-wait flow shop model, and running the optimization algorithm. The study suggests using MPSO for single-objective optimization problems and NSGA-II for multi-objective optimization problems. Based on real bakery production data, the results revealed that existing plans were significantly inefficient and could be optimized in a reasonable computational time using a robust optimization algorithm. Implementing such a framework in small and medium-sized bakery manufacturing operations could help to achieve an efficient and resilient production system.
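    Two of the ingredients named above, the no-wait flow-shop makespan and the weighting-factor combination of makespan with oven idle time, can be sketched as follows; the data, the weight, and the stubbed oven-idle computation are illustrative, and the thesis model includes further bakery-specific constraints.

```python
def no_wait_makespan(sequence, p):
    """Makespan of a no-wait permutation flow shop: p[j][k] is the
    processing time of job j on machine k, and a started job moves
    between machines with zero waiting time."""
    m = len(p[0])
    def head(job, k):   # time from a job's start until it reaches machine k
        return sum(p[job][h] for h in range(k))
    start = 0
    prev = sequence[0]
    for job in sequence[1:]:
        # Delay each job just enough that it never collides with its
        # predecessor on any machine.
        start += max(head(prev, k) + p[prev][k] - head(job, k) for k in range(m))
        prev = job
    return start + sum(p[prev])

def weighted_objective(sequence, p, oven_idle, w=0.7):
    """Weighting-factor combination of the two objectives; oven_idle is a
    stub for the bakery-specific oven-idle-time computation, and a real
    implementation would normalize the two terms to comparable scales."""
    return w * no_wait_makespan(sequence, p) + (1 - w) * oven_idle(sequence, p)

p = [[3, 2, 4], [2, 3, 1], [4, 1, 3]]   # 3 jobs x 3 machines, illustrative times
print(no_wait_makespan([0, 1, 2], p))   # -> 14
```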