Genetic and Swarm Algorithms for Optimizing the Control of Building HVAC Systems Using Real Data: A Comparative Study.
Buildings consume a considerable amount of electrical energy, with the Heating, Ventilation,
and Air Conditioning (HVAC) system being the most demanding. Saving energy while maintaining
comfort remains a challenge for scientists, since the two objectives conflict. The control of HVAC systems can be improved by
modeling their behavior, which is nonlinear, complex, and dynamic and operates in uncertain contexts.
The scientific literature shows that Soft Computing techniques require fewer computing resources,
at the expense of some controlled loss of accuracy. Metaheuristic search algorithms show
positive results, although further research is needed to solve new, challenging multi-objective
optimization problems. This article compares the performance of selected genetic and swarm-intelligence-based
algorithms with the aim of discerning their capabilities in the field of smart buildings.
MOGA, NSGA-II/III, OMOPSO, and SMPSO, with Random Search as a benchmark, are compared
on hypervolume, generational distance, Δ-indicator, and execution time. Real data from the Building
Management System of the Teatro Real de Madrid have been used to train a data model used for the
multi-objective calculations. Beyond the analysis of the proposed dynamic
optimization algorithms during the transient time of an HVAC system, the novelty includes the addition,
to the conventional optimization objectives of comfort and energy efficiency, of the coefficient of
performance and of the rate of change in ambient temperature, aiming to extend the equipment
lifecycle and minimize the overshooting effect when passing to the steady state. The optimization
performs remarkably well in energy savings, although the results must be balanced against other practical
considerations, such as realistic constraints on the chillers' operational capacity. The intuitive visualization
of the performance of the two families of algorithms on a real multi-HVAC system
adds to the novelty of this proposal.
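The hypervolume indicator used above measures the objective-space volume that a Pareto front dominates with respect to a reference point (larger is better). As a minimal two-objective illustration, not the implementation used in the article, it can be computed with a simple sweep over the sorted front:

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a 2-objective minimization front,
    relative to a reference point that all kept points must dominate."""
    # Keep only points strictly better than the reference in both objectives,
    # then sweep in ascending order of the first objective.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:  # skip dominated points on the sweep
            hv += (ref[0] - x) * (prev_y - y)  # add the new horizontal strip
            prev_y = y
    return hv

front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # -> 12.0
```

The front and reference point here are invented toy values; real comparisons like the article's would use the normalized objective vectors of each algorithm's final population.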
A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications
Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we cover advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony optimization, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial to researchers studying PSO algorithms.
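The canonical global-best PSO that the survey builds on can be sketched in a few lines. The parameter values below (inertia `w`, acceleration coefficients `c1` and `c2`, swarm size, box bounds) are common illustrative choices, not values prescribed by the survey:

```python
import random

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=1):
    """Minimal global-best PSO minimizing f over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]          # each particle's best position
    pval = [f(xi) for xi in x]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity: inertia + cognitive pull + social pull.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i][:], fx
                if fx < gval:
                    gbest, gval = x[i][:], fx
    return gbest, gval

# Usage: minimize the sphere function in 3 dimensions.
best, val = pso(lambda p: sum(t * t for t in p), dim=3)
```

On this convex toy problem the swarm contracts quickly toward the origin; the survey's modified variants (quantum-behaved, bare-bones, chaotic, fuzzy) replace or augment exactly this velocity/position update.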
Global Localization based on Evolutionary Optimization Algorithms for Indoor and Underground Environments
International Mention in the doctoral degree. A fully autonomous robot is defined by its capability to sense, understand and move
within the environment to perform a specific task. These qualities are included within
the concept of navigation. However, among them, a basic transcendent one is localization,
the capacity of the system to know its position regarding its surroundings.
Therefore, the localization issue could be defined as searching for the robot's coordinates
and rotation angles within a known environment. In this thesis, the particular case
of Global Localization is addressed, when no information about the initial position
is known, and the robot relies only on its sensors. This work aims to develop several
tools that allow the system to localize itself in the two most common geometric map representations:
occupancy maps and Point Clouds. The former divides the
space into equally-sized cells coded with a binary value distinguishing between free
and occupied space. Point Clouds define obstacles and environment features as a
sparse set of points in the space, commonly measured through a laser sensor.
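The occupancy-map representation described above can be shown with a toy grid; the map contents and the 0.5 m resolution are invented purely for illustration:

```python
# A toy occupancy grid: 0 = free, 1 = occupied; each cell spans RES x RES metres.
RES = 0.5
grid = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]

def is_occupied(grid, x, y):
    """Map world coordinates (metres) to a cell index and read its binary value."""
    col, row = int(x / RES), int(y / RES)
    return grid[row][col] == 1

print(is_occupied(grid, 1.2, 0.3))  # cell (row 0, col 2) is occupied -> True
```

A Point Cloud would instead store only the occupied locations as a sparse list of (x, y) or (x, y, z) points, trading constant-time cell lookup for memory proportional to the obstacles rather than the whole space.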
In this work, various algorithms are presented to search for that position through
laser measurements only, in contrast with more usual methods that combine external
information with the robot's own motion information (odometry). Therefore, the system
is capable of finding its own position in indoor environments, with no necessity of
external positioning and without the influence of the uncertainty that motion sensors
typically induce. Our solution is based on implementing various stochastic optimization
algorithms, or metaheuristics, specifically the bio-inspired ones commonly
known as Evolutionary Algorithms. Inspired by natural phenomena, these algorithms
are based on the evolution of a series of particles or population members towards a
solution through the optimization of a cost or fitness function that defines the problem.
The implemented algorithms are Differential Evolution, Particle Swarm Optimization,
and Invasive Weed Optimization, which try to mimic the behavior of evolution
through mutation, the movement of swarms or flocks of animals, and the colonizing
behavior of invasive species of plants, respectively. The different implementations
address the need to parameterize these algorithms for a search space as wide as
a complete three-dimensional map, requiring highly exploratory behavior, and to define the convergence
conditions that terminate the search; since the process is a recursive estimation, the optimum is not known beforehand. These implementations address the optimum
localization search procedure by comparing the laser measurements from the real position
with those expected from each candidate particle in the known map. The
cost function evaluates this similarity between real and estimated measurements and,
therefore, is the function that defines the problem to optimize.
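The cost-function idea above can be sketched roughly: each candidate pose is scored by comparing the real scan with the scan expected (ray-cast) from that pose. The function name and range values below are illustrative, not the thesis code:

```python
def mse_cost(real_ranges, expected_ranges):
    """Mean squared error between the real laser scan and the scan expected
    from a candidate particle's pose; a lower cost means a more likely pose."""
    pairs = list(zip(real_ranges, expected_ranges))
    return sum((r - e) ** 2 for r, e in pairs) / len(pairs)

# Illustrative beam ranges (metres): candidate A is near the true pose, B is not.
real = [2.0, 2.5, 3.0]
cand_a = [2.1, 2.4, 3.0]
cand_b = [1.0, 4.0, 2.0]
print(mse_cost(real, cand_a) < mse_cost(real, cand_b))  # True
```

The optimizer (DE, PSO, or IWO) then evolves its population of candidate poses to minimize exactly this kind of score.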
The common approach in localization or mapping using laser sensors is to establish
the mean square error or the absolute error between laser measurements as an
optimization function. In this work, a different perspective is introduced by benefiting
from statistical distance or divergences, utilized to describe the similarity between
probability distributions. By modeling the laser sensor as a probability distribution
over the measured distance, the algorithm can benefit from the asymmetries provided
by these divergences to favor or penalize different situations. Hence, it can evaluate how the laser
scans differ, and not only by how much. The results obtained in different
maps, simulated and real, prove that the Global Localization issue is successfully
solved through these methods, both in position and orientation. The implementation
of divergence-based weighted cost functions provides great robustness and accuracy
to the localization filters, and a robust response to different sources and levels of noise,
whether from the sensor measurements, the environment, or the presence of obstacles that are not
registered in the map.
Doctoral Program in Electrical, Electronic and Automation Engineering, Universidad Carlos III de Madrid. Committee — President: Fabio Bonsignorio; Secretary: María Dolores Blanco Rojas; Member: Alberto Brunete Gonzále
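The divergence-based idea in the abstract can be sketched by modeling each laser beam as a Gaussian over its measured distance and summing per-beam Kullback-Leibler divergences. The closed-form Gaussian KL below is standard; the beam model and the `sigma` value are illustrative assumptions, not the thesis's exact formulation:

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    """KL divergence D(N(mu1, s1^2) || N(mu2, s2^2)); note it is asymmetric."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

def scan_cost(real, expected, sigma=0.1):
    """Sum of per-beam divergences between Gaussian beam models (toy sketch)."""
    return sum(kl_gauss(r, sigma, e, sigma) for r, e in zip(real, expected))

# The asymmetry the thesis exploits: D(p||q) != D(q||p) when variances differ.
print(kl_gauss(2.0, 0.1, 2.0, 0.3) != kl_gauss(2.0, 0.3, 2.0, 0.1))  # True
```

Unlike a plain squared error, the direction of the divergence distinguishes, for example, a beam that is shorter than expected (possible unmapped obstacle) from one that is longer, which is what lets the weighted cost favor or penalize those situations differently.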
Current Studies and Applications of Krill Herd and Gravitational Search Algorithms in Healthcare
Nature-Inspired Computing (NIC) is a relatively young field that
tries to discover fresh methods of computing by studying how natural
phenomena function, in order to solve complicated problems in many contexts. As
a consequence, ground-breaking research has been conducted in a variety
of domains, including artificial immune systems, neural networks, swarm
intelligence, and evolutionary computation. NIC techniques are used in the domains of
biology, physics, engineering, economics, and management. Meta-heuristic
algorithms are successful, efficient, and resilient in real-world
classification, optimization, forecasting, and clustering, as well as in
engineering and science problems. Two active NIC paradigms are the
Gravitational Search Algorithm and the Krill Herd algorithm. This publication
gives a worldwide and historical review of the use of the Krill Herd Algorithm
(KH) and the Gravitational Search Algorithm (GSA) in medicine and healthcare.
Comprehensive surveys have been conducted on other nature-inspired algorithms,
including KH and GSA; nonetheless, no survey of KH and GSA in the healthcare
field had been undertaken. As a result, this work conducts a thorough review of
KH and GSA to assist researchers in using them in diverse domains or
hybridizing them with other popular algorithms. It also provides an in-depth
examination of KH and GSA in terms of application, modification, and
hybridization. It is important to note that the goal of the study is to offer a
viewpoint on GSA and KH, particularly for academics interested in investigating
the capabilities and performance of these algorithms in the healthcare and
medical domains.
Comment: 35 pages
Evolutionary Computation
This book presents several recent advances in Evolutionary Computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. The book also presents new algorithms based on several analogies and metaphors, one of which is based on philosophy, specifically the philosophy of praxis and dialectics. It also presents interesting applications in bioinformatics, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. The book therefore features representative work in the field of evolutionary computation and the applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.
Nature inspired computational intelligence for financial contagion modelling
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Financial contagion refers to a scenario in which small shocks, which initially affect only a few financial institutions or a particular region of the economy, spread to the rest of the financial sector and to other countries whose economies were previously healthy. This resembles the "transmission" of a medical disease. Financial contagion happens at both the domestic level and the international level. At the domestic level, the failure of a domestic bank or financial intermediary usually triggers transmission by defaulting on inter-bank liabilities, selling assets in a fire sale, and undermining confidence in similar banks. An example of this phenomenon is the failure of Lehman Brothers and the subsequent turmoil in the US financial markets. International financial contagion happens in both advanced and developing economies, and is the transmission of financial crises across financial markets. Within the current globalised financial system, with large volumes of cash flow and the cross-regional operations of large banks and hedge funds, financial contagion usually happens simultaneously among domestic institutions and across countries. There is no conclusive definition of financial contagion; most research papers study contagion by analyzing the change in the variance-covariance matrix during periods of market turmoil. King and Wadhwani (1990) first tested the correlations between the US, UK and Japan during the US stock market crash of 1987. Boyer (1997) finds significant increases in correlation during financial crises, reinforcing a definition of financial contagion as a change in correlation during the crash period. Forbes and Rigobon (2002) give a definition of financial contagion. In their work, the term interdependence is used as the alternative to contagion; they claim that for the period they study there is no contagion, only interdependence.
Interdependence leads to common price movements during periods of both stability and turmoil. In the past two decades, many studies (e.g. Kaminsky et al., 1998; Kaminsky, 1999) have developed early warning systems focused on the origins of financial crises rather than on financial contagion. Other authors (e.g. Forbes and Rigobon, 2002; Caporale et al., 2005) have instead focused on studying contagion or interdependence. In this thesis, an overall mechanism is proposed that simulates the characteristics of a crisis propagating through contagion. Within that scope, a new co-evolutionary market model is developed, in which some of the technical traders change their behaviour during a crisis and transform into herd traders, making their decisions based on market sentiment rather than on underlying strategies or factors. The thesis focuses on the transformation of market interdependence into contagion and on the contagion effects. The author first builds a multi-national platform that allows different types of players to trade, implementing their own rules and considering information from the domestic market and a foreign market. Traders' strategies and the performance of the simulated domestic market are trained using historical prices on both markets, optimizing the artificial market's parameters through immune particle swarm optimization (I-PSO). The author also introduces a mechanism contributing to the transformation of technical traders into herd traders. A generalized autoregressive conditional heteroscedasticity copula (GARCH-copula) model is further applied to calculate the tail dependence between the affected market and the origin of the crisis; that parameter is used in the fitness function for selecting the best solutions within the evolving population of possible model parameters, and therefore in the optimization criteria for the contagion simulation.
The overall model is also applied in predictive mode, where the author optimizes in the pre-crisis period using data from the domestic market and the crisis-origin foreign market, and predicts in the crisis period using data from the foreign market, predicting the affected domestic market.
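A crude stand-in for the tail-dependence quantity entering the fitness function is the empirical lower-tail dependence coefficient computed on rank pseudo-observations. This is a simplified sketch that assumes no tied values, not the GARCH-copula estimator used in the thesis:

```python
def lower_tail_dependence(x, y, q=0.1):
    """Empirical lower-tail dependence: P(U <= q, V <= q) / q on rank
    pseudo-observations. Near 1 -> the two series crash together
    (contagion-like co-movement); near q -> tail independence.
    Toy estimator: assumes no tied values in x or y."""
    n = len(x)
    def ranks(v):
        # Map each value to its normalized rank in (0, 1].
        return {val: (i + 1) / n for i, val in enumerate(sorted(v))}
    rx, ry = ranks(x), ranks(y)
    joint = sum(1 for a, b in zip(x, y) if rx[a] <= q and ry[b] <= q) / n
    return joint / q

# Perfectly co-moving series share their lower tail completely.
x = list(range(100))
print(lower_tail_dependence(x, [2 * v for v in x]))  # -> 1.0
```

In the thesis's setup, a dependence parameter of this kind, estimated between the affected market and the crisis-origin market, steers the evolutionary search toward model parameters that reproduce joint crashes.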
Evolutionary Computation 2020
Intelligent optimization is based on the mechanisms of computational intelligence: refining a suitable feature model, designing an effective optimization algorithm, and then obtaining an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the travelling salesman problem, and the logistics distribution-center siting problem. The editors are confident that this book can open a new avenue for further improvements and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.
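Among the algorithms named above, differential evolution admits a particularly compact statement. The following is a generic DE/rand/1/bin sketch with illustrative parameter values (`F`, `CR`, population size), not a specific variant from the book:

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=150, seed=3):
    """Classic DE/rand/1/bin minimizer over box constraints (generic sketch)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(np_):
            # Three distinct partners, none equal to the target individual.
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                    trial.append(min(hi, max(lo, v)))
                else:
                    trial.append(pop[i][j])  # inherit from the target
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

# Usage: minimize the sphere function in 3 dimensions.
sol, val = differential_evolution(lambda p: sum(t * t for t in p), [(-5, 5)] * 3)
```

The greedy one-to-one selection makes the population's fitness monotonically non-worsening, which is the property the improved variants discussed in the book typically build upon.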