
    Deep learning-driven particle swarm optimisation for additive manufacturing energy optimisation

    The additive manufacturing (AM) process is characterised as a high energy-consuming process, which has a significant impact on the environment and sustainability. AM energy consumption modelling, prediction, and optimisation have therefore become a research focus in both industry and academia. The problem involves many relevant features, such as material condition, process operation, part and process design, and working environment. Although existing studies reveal that AM energy consumption in practice largely depends on design-relevant features, these features have not been given sufficient attention. Therefore, in this study, design-relevant features are first examined with respect to energy modelling. These features are typically determined by part designers and process operators before production. The AM energy consumption knowledge hidden in the design-relevant features is exploited for prediction modelling through a design-relevant data analytics approach. Based on this modelling approach, a novel deep learning-driven particle swarm optimisation (DLD-PSO) method is proposed to optimise energy utility, where deep learning is introduced to increase the search speed and enhance the global best of PSO. Finally, using design-relevant data collected from a real-world AM system in production, a case study is presented to validate the proposed modelling approach, and the results reveal its merits. Optimisation has also been carried out to guide part designers and process operators in revising their designs and decisions in order to reduce the energy consumption of the AM system under study.
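    The abstract does not give DLD-PSO at code level; the sketch below only illustrates the general pattern it describes, namely a standard PSO searching a design-relevant feature space with a learned energy predictor as the fitness function. Everything here (the toy predict_energy surrogate, the normalised bounds, all parameter values) is a hypothetical stand-in, not the paper's implementation.

        import numpy as np

        # Hypothetical surrogate for a trained deep-learning energy predictor:
        # maps normalised design-relevant features (layer height, infill, ...)
        # to predicted energy. A real model would be a trained network.
        def predict_energy(x):
            return np.sum((x - 0.3) ** 2, axis=-1)

        def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            lo, hi = bounds[:, 0], bounds[:, 1]
            x = rng.uniform(lo, hi, (n_particles, len(bounds)))  # positions
            v = np.zeros_like(x)                                 # velocities
            pbest, pbest_f = x.copy(), objective(x)              # personal bests
            g = pbest[np.argmin(pbest_f)]                        # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, len(bounds)))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = objective(x)
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[np.argmin(pbest_f)]
            return g, pbest_f.min()

        bounds = np.array([[0.0, 1.0]] * 5)   # five normalised design features
        best_x, best_e = pso(predict_energy, bounds)
        print("lowest predicted energy:", best_e, "at", best_x)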

    Aerial base station placement in temporary-event scenarios

    Traffic demands have grown dramatically in recent years, and so has the interest in developing novel solutions that increase the capacity of cellular networks. The problem of capacity improvement is even more complex when applied to a dynamic environment during a disaster or temporary event. The use of aerial base stations has received much attention in the last ten years as a way to cope with the dynamics of the changing environment and to supplement the ground infrastructure with extra capacity. Due to their higher elevation and the possibility of placing them in close proximity to the user, aerial base stations benefit from line-of-sight communication: path loss is significantly smaller than for the ground infrastructure, which in turn enables higher channel capacity. We study the optimization problem of maximizing network capacity by proper placement of aerial base stations. To handle the changes in the dynamic radio environment, it is necessary to solve the optimization problem promptly. However, we show that the optimal placement of aerial base stations is an NP-hard problem and its solution is non-trivial; there is thus a need for fast and scalable optimization algorithms.
This dissertation investigates how to solve the placement problem efficiently and to support the dynamics of temporary events. First, we propose a novel hybrid algorithm (Projected Clustering), which calculates multiple solutions based on a fast distance-based capacity approximation and evaluates them on the accurate SINR-based capacity model, avoiding sub-optimal solutions. Second, we propose a novel distributed, self-organized framework (AIDA), which conducts the decision-making process using only local knowledge, decreasing the network overhead and relaxing the requirements for communication between aerial base stations. While formulating the placement problem, we found that there is still considerable uncertainty with regard to air-to-ground propagation modeling. Since this aspect plays an important role in our analysis, we validated state-of-the-art air-to-ground propagation models against real measurements and chose the most accurate model for the simulations.
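    The two-stage idea behind Projected Clustering (cheap distance-based candidates, exact SINR-based ranking) can be sketched roughly as follows. The k-means step, the path-loss exponent, the antenna height, and the power and noise values are illustrative assumptions, not the dissertation's channel model or algorithm.

        import numpy as np

        rng = np.random.default_rng(1)
        users = rng.uniform(0, 1000, (200, 2))   # user positions in a 1 km x 1 km venue

        def kmeans(points, k, iters=20):
            # fast distance-based step: user clusters give candidate ABS sites
            c = points[rng.choice(len(points), k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(np.linalg.norm(points[:, None] - c, axis=2), axis=1)
                c = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                              else c[j] for j in range(k)])
            return c

        def sinr_capacity(abs_xy, users, height=100.0, p_tx=1.0, noise=1e-9, alpha=3.0):
            # accurate step: sum-rate under a simple power-law path-loss model
            d2 = ((users[:, None, :] - abs_xy[None, :, :]) ** 2).sum(-1) + height ** 2
            rx = p_tx * d2 ** (-alpha / 2)          # received power from each ABS
            serving = rx.max(axis=1)                # users attach to strongest ABS
            interference = rx.sum(axis=1) - serving
            return np.log2(1.0 + serving / (interference + noise)).sum()

        # hybrid: many cheap candidate placements, keep the one the SINR model ranks best
        candidates = [kmeans(users, k=4) for _ in range(10)]
        best = max(candidates, key=lambda c: sinr_capacity(c, users))
        print("chosen ABS positions:\n", best)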

    A Cost-Effective Optimization for Scheduling of Household Appliances and Energy Resources

    In the literature, proposed approaches have mostly focused on household appliance scheduling to reduce consumers' electricity bills, the peak-to-average ratio, and electricity usage during peak load hours, and to enhance user comfort. The scheduling of smart-home energy resources has recently become a critical issue on the demand side due to the higher share of renewable energy sources. In this paper, a new hybrid genetic-based harmony search (HGHS) approach is proposed for modeling the home energy management system; it minimizes consumers' electricity bills and electricity usage during peak load hours by scheduling both household appliances and smart-home energy resources. We comparatively evaluated the optimization results obtained from the proposed HGHS and other approaches. The experimental results confirmed the superiority of HGHS over the genetic algorithm (GA) and the harmony search algorithm (HSA). Under the proposed scheduling approach, the electricity cost of completing one day of appliance operation was limited to 1305.7 cents, 953.65 cents, and 569.44 cents for case I, case II, and case III, respectively, lower than with the other approaches. Compared to the unscheduled load scenario, the electricity consumption cost was reduced by up to 23.125%, 43.87%, and 66.44% in case I, case II, and case III, respectively. Moreover, the electrical peak load was limited to 3.07 kW, 2.9478 kW, and 1.9 kW under the proposed HGHS approach, again lower than with the other approaches.
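    As a rough illustration of this kind of appliance scheduling (not the paper's HGHS), the sketch below builds a 0/1 schedule over 24 hours and improves it with a loop that mixes a harmony-search-style improvisation from a solution memory with GA-style crossover and mutation. The tariff profile, appliance ratings, and required run-hours are invented for the example.

        import numpy as np

        rng = np.random.default_rng(2)
        tariff = np.array([5]*7 + [12]*4 + [20]*5 + [12]*5 + [5]*3)  # cents/kWh over 24 h
        power = np.array([1.5, 0.8, 2.0])      # appliance ratings in kW
        hours_needed = np.array([3, 6, 2])     # required daily run-hours

        def cost(s):
            # bill for a 0/1 schedule s (appliances x 24) plus a penalty that
            # enforces each appliance's required run-hours
            bill = (s * power[:, None] * tariff[None, :]).sum()
            return bill + 1e3 * np.abs(s.sum(axis=1) - hours_needed).sum()

        def random_schedule():
            return (rng.random((len(power), 24)) < 0.3).astype(int)

        memory = sorted((random_schedule() for _ in range(20)), key=cost)
        for _ in range(500):
            # harmony step: take genes from one good memory member, else randomise
            new = np.where(rng.random((len(power), 24)) < 0.9,
                           memory[rng.integers(10)], random_schedule())
            # GA step: uniform crossover with the best member, then bit-flip mutation
            child = np.where(rng.random(new.shape) < 0.5, new, memory[0])
            child ^= (rng.random(child.shape) < 0.02).astype(int)
            if cost(child) < cost(memory[-1]):   # replace the worst harmony
                memory[-1] = child
                memory.sort(key=cost)
        print("best daily bill (cents):", cost(memory[0]))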

    Global Localization based on Evolutionary Optimization Algorithms for Indoor and Underground Environments

    A fully autonomous robot is defined by its capability to sense, understand, and move within the environment to perform a specific task. These qualities are included within the concept of navigation. Among them, however, a fundamental one is localization: the capacity of the system to know its position with respect to its surroundings. The localization issue can therefore be defined as the search for the robot's coordinates and rotation angles within a known environment. This thesis addresses the particular case of Global Localization, in which no information about the initial position is known and the robot relies only on its sensors. The aim is to develop several tools that allow the system to localize itself in the two most common geometric map representations: occupancy maps and point clouds. The former divide the space into equally sized cells coded with a binary value distinguishing between free and occupied space; point clouds define obstacles and environment features as a sparse set of points in space, commonly measured with a laser sensor. Various algorithms are presented to search for the robot's position through laser measurements only, in contrast with the more usual methods that combine external information with the robot's motion information (odometry). The system is thus capable of finding its own position in indoor environments without external positioning and without the uncertainty that motion sensors typically induce. The solution is addressed by implementing several stochastic optimization algorithms, or metaheuristics, specifically the bio-inspired ones commonly known as Evolutionary Algorithms. Inspired by natural phenomena, these algorithms are based on the evolution of a population of particles towards a solution through the optimization of a cost or fitness function that defines the problem. The implemented algorithms are Differential Evolution, Particle Swarm Optimization, and Invasive Weed Optimization, which mimic, respectively, evolution through mutation, the movement of swarms or flocks of animals, and the colonizing behavior of invasive plant species. The implementations address the need to parameterize these algorithms for a search space as wide as a complete three-dimensional map, with exploratory behavior, as well as the convergence conditions that terminate the search; since the process is a recursive search for an optimum estimate, the solution is unknown in advance.
The algorithms search for the optimal localization by comparing the laser measurements taken at the real position with those expected at each candidate particle in the known map. The cost function evaluates the similarity between real and estimated measurements and is therefore the function that defines the optimization problem. The common approach in localization or mapping with laser sensors is to use the mean square error or the absolute error between laser measurements as the optimization function. This work introduces a different perspective that benefits from statistical distances, or divergences, which describe the similarity between probability distributions. By modeling the laser sensor as a probability distribution over the measured distance, the algorithm can exploit the asymmetries of these divergences to favor or penalize different situations: it can evaluate how the laser scans differ, not only by how much. The results obtained in different maps, simulated and real, prove that the Global Localization problem is successfully solved by these methods, both in position and in orientation. Divergence-based weighted cost functions give the localization filters great robustness and accuracy, and a reliable response to different sources and levels of noise, whether from the sensor measurements, the environment, or obstacles not registered in the map.
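    As a toy illustration of one ingredient above (not the thesis' code), the sketch below runs Differential Evolution over candidate poses (x, y, theta) and scores each pose with a beam-wise divergence between Gaussians centred on the measured and expected ranges; the asymmetric weighting is one possible choice of the kind the abstract describes. The closed-form expected_scan stands in for real ray-casting on a map, and all constants are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        # Stand-in for ray-casting a known map: expected range of each laser beam
        # from pose (x, y, theta). A real implementation traces the occupancy grid.
        def expected_scan(pose, n_beams=16):
            x, y, th = pose
            angles = th + np.linspace(0, 2 * np.pi, n_beams, endpoint=False)
            return 5.0 + 2.0 * np.sin(angles + x) + 1.5 * np.cos(2 * angles - y)

        true_pose = np.array([2.0, -1.0, 0.7])
        z = expected_scan(true_pose) + rng.normal(0, 0.05, 16)  # "measured" scan

        def divergence_cost(pose, sigma=0.2):
            # KL divergence between equal-variance Gaussians reduces to a scaled
            # squared error; beams where the candidate predicts more free space
            # than was measured are weighted more heavily (asymmetric choice)
            e = expected_scan(pose)
            kl = (z - e) ** 2 / (2 * sigma ** 2)
            return np.sum(np.where(e > z, 1.5 * kl, kl))

        # Differential Evolution (DE/rand/1/bin) over the pose search space
        lo, hi = np.array([-5.0, -5.0, -np.pi]), np.array([5.0, 5.0, np.pi])
        pop = rng.uniform(lo, hi, (40, 3))
        fit = np.array([divergence_cost(p) for p in pop])
        for _ in range(300):
            for i in range(len(pop)):
                a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
                trial = np.where(rng.random(3) < 0.9,
                                 np.clip(a + 0.8 * (b - c), lo, hi), pop[i])
                f = divergence_cost(trial)
                if f < fit[i]:
                    pop[i], fit[i] = trial, f
        print("estimated pose:", pop[np.argmin(fit)], "true pose:", true_pose)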

    Soft computing applied to optimization, computer vision and medicine

    Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Because of their broad range of application, Soft Computing methods have the potential to significantly improve human living conditions, and the motivation for the present research emerged from this background. This research pursues two main objectives: on the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems; on the other hand, it explores the potential benefits of Soft Computing methodologies as novel, effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. The work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript is intended to provide an overview of the different aspects of Soft Computing so as to give the reader a global understanding of the field; it is assembled as a monograph that summarizes the outcomes of these projects across 12 chapters, structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis is the development of two evolutionary approaches for global optimization, which were tested on complex benchmark datasets and showed promising results, opening the way for future applications. Moreover, the applications to Computer Vision and medicine presented here highlight the utility of different Soft Computing methodologies; a milestone in this area is the translation of Computer Vision and medical issues into optimization problems. Additionally, this work strives to provide tools for combating public health issues by extending these concepts to automated detection and diagnosis aids for pathologies such as Leukemia and breast cancer, a field in which Soft Computing techniques have attracted great interest worldwide due to the growth of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators, is now a reality, and many other industrial and commercial applications of Soft Computing have been integrated into everyday use, a trend expected to increase within the next decade. The research conducted here contributes an important piece to these developments, and the applications presented are intended to serve as technological tools for the development of new devices.
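    One concrete example of translating a Computer Vision issue into an optimization problem (generic textbook material, not taken from the thesis) is multilevel thresholding: Otsu's between-class variance becomes the fitness function, and any of the evolutionary methods above can search for the thresholds. A minimal sketch with synthetic data:

        import numpy as np

        rng = np.random.default_rng(4)
        # synthetic grayscale histogram with three intensity populations
        img = np.concatenate([rng.normal(60, 12, 4000),
                              rng.normal(130, 10, 4000),
                              rng.normal(200, 8, 4000)]).clip(0, 255).astype(int)
        hist = np.bincount(img, minlength=256) / img.size
        levels = np.arange(256)

        def between_class_variance(th):
            # Otsu's criterion for two thresholds: weighted variance of the
            # class means around the global mean (maximise to separate classes)
            t1, t2 = sorted(int(t) for t in th)
            var, mu_all = 0.0, (hist * levels).sum()
            for seg in (levels <= t1, (levels > t1) & (levels <= t2), levels > t2):
                w = hist[seg].sum()
                if w > 0:
                    mu = (hist[seg] * levels[seg]).sum() / w
                    var += w * (mu - mu_all) ** 2
            return var

        # tiny (mu + lambda)-style evolutionary search over threshold pairs
        pop = rng.integers(1, 255, (30, 2))
        for _ in range(100):
            children = np.clip(pop + rng.integers(-10, 11, pop.shape), 1, 254)
            both = np.vstack([pop, children])
            scores = np.array([between_class_variance(t) for t in both])
            pop = both[np.argsort(scores)[-30:]]     # keep the fittest half
        print("thresholds:", sorted(pop[-1]))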

    IoT in smart communities, technologies and applications.

    The Internet of Things (IoT) is a system that integrates different devices and technologies, removing the need for human intervention. This enables smart (or smarter) cities around the world. By hosting different technologies and allowing interactions between them, the Internet of Things has spearheaded the development of smart city systems for sustainable living and increased comfort and productivity for citizens. The IoT for smart cities has many different domains and draws upon various underlying systems for its operation. In this work, we provide a holistic coverage of the Internet of Things in smart cities by discussing the fundamental components that make up the IoT smart city landscape, the technologies that enable these domains to exist, the most prevalent practices and techniques used in these domains, and the challenges that the deployment of IoT systems for smart cities encounters and that need to be addressed for ubiquitous use of smart city applications. We also cover optimization methods and applications from a smart city perspective enabled by the Internet of Things. To this end, a mapping is provided of the most frequently encountered applications of computational optimization within IoT smart cities for five popular optimization methods: ant colony optimization, genetic algorithms, particle swarm optimization, artificial bee colony optimization, and differential evolution. For each application identified, the algorithms used, the objectives considered, the nature of the formulation, and the constraints taken into account are specified and discussed; the data setup used by each covered work is also mentioned, and directions for future work are identified. Within the smart health domain of IoT smart cities, human activity recognition has been a key study topic in the development of cyber-physical systems and assisted living applications. In particular, inertial-sensor-based systems have become increasingly popular because they do not restrict users' movement and are relatively simple to implement compared to other approaches. Fall detection is one of the most important tasks in human activity recognition. With an increasingly aging world population and an inclination among the elderly to live alone, the need to incorporate dependable fall detection schemes in smart devices such as phones and watches has gained momentum. Differentiating between falls and activities of daily living (ADLs) has therefore been a focus of researchers in recent years, with very good results. However, one aspect of fall detection that has not been investigated much is direction- and severity-aware fall detection. Since a fall detection system aims to detect falls and notify medical personnel, it could be of added value for health professionals tending to a patient to know the nature of the accident. In this regard, as a case study for smart health, four experiments have been conducted on the task of fall detection with direction and severity consideration, using two publicly available datasets. These experiments tackle the problem at increasingly complicated levels, from a fall-only scenario to combined ADL-and-fall scenarios, and present methodologies that outperform state-of-the-art techniques. Lastly, recommendations are provided for future researchers.
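    To make the direction- and severity-aware idea concrete, here is a deliberately simple threshold heuristic on a synthetic tri-axial accelerometer trace. It is not one of the surveyed methodologies (which use learned classifiers on public datasets); the sampling rate, thresholds, and spike values are all invented.

        import numpy as np

        rng = np.random.default_rng(5)
        fs = 50                                   # sampling rate in Hz
        t = np.arange(0, 4, 1 / fs)

        # synthetic trace: quiet standing (gravity on z), then a forward fall
        acc = np.tile([0.0, 0.0, 1.0], (len(t), 1)) + rng.normal(0, 0.02, (len(t), 3))
        acc[int(2.5 * fs)] += [2.8, 0.3, -0.6]    # impact spike, x (forward) dominant

        def detect_fall(acc, thr=2.5):
            # flag a fall when the acceleration magnitude exceeds thr (in g);
            # read direction from the dominant horizontal axis at the impact
            # sample and severity from the peak magnitude (crude proxies)
            mag = np.linalg.norm(acc, axis=1)
            peaks = np.flatnonzero(mag > thr)
            if peaks.size == 0:
                return None
            i = peaks[np.argmax(mag[peaks])]
            return {"sample": int(i),
                    "direction": "forward/backward" if abs(acc[i, 0]) > abs(acc[i, 1])
                                 else "sideways",
                    "severity": "severe" if mag[i] > 3.5 else "moderate",
                    "peak_g": float(mag[i])}

        print(detect_fall(acc))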

    Design and optimization of wireless sensor networks for localization and tracking

    Knowledge of the position of the nodes is crucial in most wireless sensor network (WSN) applications. The gathered information needs to be associated with a particular location at a specific time instant in order to appropriately monitor the surveillance area. Moreover, WSNs may be used for tracking objects in monitoring applications, which also requires incorporating the location of the sensor nodes into the tracking algorithms. These requisites make localization and tracking two of the most important tasks of a WSN. Despite the large research effort in this field, considerable technical challenges remain in areas such as data processing and communications. This thesis addresses some of these problems; specifically, we study three different challenges: sensor deployment, model-independent localization, and sensor selection. The first part of the work focuses on sensor deployment, which is considered critical since it affects the cost, detection capability, and localization accuracy of a WSN. There have been significant research efforts on deploying sensors from different points of view, e.g. connectivity or target detection. In the context of target localization, however, we believe it is more convenient to deploy the sensors with a view to obtaining the best possible estimate of the target position. Therefore, we analyze the deployment from the standpoint of the error in the position estimate. To this end, we apply the modified Cramér-Rao bound (MCRB) in a sensor network to perform a prior analysis of the system's localization performance. This analysis provides knowledge about the system behavior without a complete deployment and gives essential information for properly selecting fundamental parameters such as the number of sensors. To do so, a complete formulation of the modified Fisher information matrix (MFIM) and MCRB is developed for the most common measurement models, such as received signal strength (RSS), time-of-arrival (ToA), and angle-of-arrival (AoA), and extended to heterogeneous models that combine different measurement models. Simulation results demonstrate the utility of the proposed analysis and point out the similarity between the MCRB and the CRB. Secondly, we address the problem of target localization, which encompasses many of the challenging issues that commonly arise in WSNs. Many localization algorithms have been proposed in the literature to deal with these issues. Nevertheless, their localization performance usually relies heavily on accurate knowledge of the observation model; when errors in the measurement model are present, their accuracy degrades significantly. To overcome this problem, we propose a novel localization algorithm for applications where the measurement model is inaccurate or incomplete. The independence of the algorithm from the model provides robustness and versatility. We apply radial basis function (RBF) interpolation to evaluate the measurement function over the entire surveillance area and estimate the target position, and we propose LASSO regression to compute the weights of the RBFs and improve the generalization of the interpolated function.
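    A rough sketch of this model-independent idea (RBF features fitted with LASSO, then matching a new reading against the interpolated field) might look as follows. The anchor layout, the hidden log-distance model used only to generate calibration data, and all parameters are illustrative; the thesis' actual formulation may differ.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(6)
        anchors = np.array([[2.0, 3.0], [18.0, 5.0], [9.0, 17.0]])

        def true_rss(p, a):
            # hidden propagation model, used only to synthesise calibration data
            return -40 - 20 * np.log10(np.linalg.norm(p - a, axis=-1) + 1)

        train = rng.uniform(0, 20, (200, 2))      # calibration positions
        centres = np.stack(np.meshgrid(np.linspace(0, 20, 8),
                                       np.linspace(0, 20, 8)), -1).reshape(-1, 2)

        def rbf(x, width=3.0):
            # Gaussian RBF features over a grid of centres
            d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2)
            return np.exp(-(d / width) ** 2)

        # one sparse RBF interpolant of the measurement field per anchor
        models = [Lasso(alpha=0.05, max_iter=50000)
                  .fit(rbf(train), true_rss(train, a) + rng.normal(0, 1, len(train)))
                  for a in anchors]

        # localisation: find the grid point whose predicted readings best match
        target = np.array([7.0, 12.0])
        z = np.array([true_rss(target[None], a)[0] for a in anchors])
        grid = np.stack(np.meshgrid(np.linspace(0, 20, 100),
                                    np.linspace(0, 20, 100)), -1).reshape(-1, 2)
        pred = np.column_stack([m.predict(rbf(grid)) for m in models])
        print("estimated position:", grid[np.argmin(((pred - z) ** 2).sum(axis=1))])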
Simulation results demonstrate the good performance of the proposed algorithm in the localization of single or multiple targets. Finally, we study the sensor selection problem. In order to prolong the network lifetime, sensors alternate their state between active and idle. The decision of which sensor should be activated is based on a variety of factors depending on the algorithm or the sensor application. Here we investigate the centralized selection of sensors in target-tracking applications over huge networks, where a large number of randomly placed sensors are available for taking measurements. Specifically, we focus on the application of optimization algorithms for the selection of sensors using a variant of the CRB, the posterior CRB (PCRB), as the performance-based optimization criterion. This bound provides the performance limit on the mean square error (MSE) for any unbiased estimator of a random parameter and is iteratively computed by a particle filter (in our case, a Rao-Blackwellized particle filter). We analyze and compare three optimization algorithms: a genetic algorithm (GA), particle swarm optimization (PSO), and a new discrete variant of the cuckoo search (CS) algorithm. In addition, we propose local-search versions of these optimization algorithms that provide a significant reduction in computation time. Simulation results demonstrate the utility of these optimization algorithms for solving the sensor selection problem and point out the reduction in computation time when local search is applied.
Sensor networks are a very interesting technology that has attracted considerable interest from researchers [1, 109]. Recent advances in electronics and wireless communications have enabled the development of low-cost, low-power, multi-function sensors of small size and with short-range communication capabilities. Deployed in large numbers and linked through wireless communications, these sensors provide great opportunities in applications such as the monitoring and control of homes, cities, or the environment. A sensor node is a low-power device capable of interacting with the environment through its sensors, processing information locally, and communicating that information to its nearest neighbors. A wide variety of sensors is available on the market (magnetic, acoustic, thermal, etc.), which makes it possible to monitor very diverse environmental conditions (temperature, humidity, etc.) [25]. Consequently, sensor networks have a wide range of applications: home security, environmental monitoring, analysis and prediction of climatic conditions, biomedicine [79], etc. Unlike conventional networks, sensor networks have their own limitations, such as the amount of available energy, the short range of their communications, their low bandwidth, and their limited information processing and storage capabilities. Furthermore, there are design constraints that depend directly on the intended application of the network, such as the network size, the deployment scheme, or the network topology.
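    Returning to the sensor-selection study summarized above: as a loose illustration (not the thesis' PCRB-plus-RBPF machinery), the sketch below selects k active sensors with a small genetic search, scoring subsets with a simpler D-optimality proxy, the log-determinant of the Fisher information of range measurements about a target estimate. All positions, the noise model, and the GA settings are invented.

        import numpy as np

        rng = np.random.default_rng(7)
        sensors = rng.uniform(0, 100, (60, 2))   # candidate sensor positions
        target = np.array([55.0, 40.0])          # current target estimate
        k = 8                                    # sensors to activate

        def cost(subset):
            # negative log-det of the range-measurement Fisher information for
            # the selected sensors (D-optimality proxy; smaller is better)
            diff = target - sensors[subset]
            d = np.linalg.norm(diff, axis=1, keepdims=True)
            u = diff / d                         # unit bearing vectors
            fim = (u[:, :, None] * u[:, None, :] / d[:, :, None] ** 2).sum(axis=0)
            sign, logdet = np.linalg.slogdet(fim)
            return np.inf if sign <= 0 else -logdet

        pop = [rng.choice(60, k, replace=False) for _ in range(30)]
        for _ in range(200):
            elite = sorted(pop, key=cost)[:10]
            children = []
            for _ in range(20):
                a, b = elite[rng.integers(10)], elite[rng.integers(10)]
                child = rng.choice(np.unique(np.concatenate([a, b])), k, replace=False)
                if rng.random() < 0.3:           # mutation: swap in a new sensor
                    child[rng.integers(k)] = rng.integers(60)
                    child = np.unique(child)
                    while len(child) < k:        # repair duplicates
                        child = np.unique(np.append(child, rng.integers(60)))
                children.append(child)
            pop = elite + children
        best = min(pop, key=cost)
        print("activated sensors:", np.sort(best), "cost:", cost(best))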