2,973 research outputs found

    Reinforced Labels: Multi-Agent Deep Reinforcement Learning for Point-Feature Label Placement

    Full text link
    In recent years, Reinforcement Learning combined with Deep Learning has proven able to solve complex problems in various domains, including robotics, self-driving cars, and finance. In this paper, we introduce Reinforcement Learning (RL) to label placement, a complex task in data visualization that seeks optimal positions for labels so that they avoid overlap and remain legible. Our novel point-feature label placement method utilizes Multi-Agent Deep Reinforcement Learning to learn the label placement strategy; it is the first machine-learning-driven labeling method, in contrast to the existing hand-crafted algorithms designed by human experts. To facilitate RL training, we developed an environment in which each agent acts as a proxy for a label, a short textual annotation that augments the visualization. Our results show that the strategy trained by our method significantly outperforms both the random strategy of an untrained agent and the compared expert-designed methods in terms of completeness (i.e., the number of placed labels). The trade-off is increased computation time, making the proposed method slower than the compared methods. Nevertheless, our method is well suited to scenarios where the labeling can be computed in advance and completeness is essential, such as cartographic maps, technical drawings, and medical atlases. Additionally, we conducted a user study to assess the perceived performance. The outcomes revealed that participants considered the proposed method significantly better than the other examined methods, indicating that the improved completeness is reflected not only in the quantitative metrics but also in the participants' subjective evaluation.
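    A minimal sketch of the kind of multi-agent labeling environment the abstract describes, assuming a conventional setup in which each agent controls one label and picks among four candidate anchor positions around its point feature, with a reward that favors conflict-free placements. The names, the four-slot action space, and the reward are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the paper's code): each agent is a label; actions choose one of
# four candidate slots (NE, NW, SE, SW); reward penalizes overlapping labels.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class Label:
    x: float; y: float          # point feature the label annotates
    w: float; h: float          # label box size
    pos: int = 0                # chosen candidate slot (0..3)

    def box(self):
        dx = self.w if self.pos in (0, 2) else -self.w
        dy = self.h if self.pos in (0, 1) else -self.h
        x0, y0 = self.x + min(0, dx), self.y + min(0, dy)
        return (x0, y0, x0 + self.w, y0 + self.h)

def overlaps(a, b):
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

class LabelEnv:
    """Each step, every agent re-chooses its slot; reward = +1 if its label
    is conflict-free, -1 otherwise (completeness-oriented signal)."""
    def __init__(self, labels: List[Label]):
        self.labels = labels

    def step(self, actions: List[int]):
        for lab, a in zip(self.labels, actions):
            lab.pos = a
        boxes = [lab.box() for lab in self.labels]
        return [-1.0 if any(i != j and overlaps(bi, bj)
                            for j, bj in enumerate(boxes)) else 1.0
                for i, bi in enumerate(boxes)]

# Untrained baseline: the random policy the paper compares against.
env = LabelEnv([Label(random.uniform(0, 100), random.uniform(0, 100), 8, 3)
                for _ in range(20)])
print(env.step([random.randrange(4) for _ in range(20)]))
```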

    A point-feature label placement algorithm based on spatial data mining

    Get PDF
    The point-feature label placement (PFLP) problem refers to positioning labels near point features on a map while adhering to specific rules and guidelines, ultimately producing clear, aesthetically pleasing, and conflict-free maps. While various approaches have been suggested for automated point-feature placement on maps, few studies have fully considered the spatial distribution characteristics and label correlations of point datasets, resulting in poor label quality when solving the label placement of dense and complex point datasets. In this paper, we propose a point-feature label placement algorithm based on spatial data mining that analyzes the local spatial distribution characteristics and label correlations of point features. The algorithm quantifies the interference among point features by designing a label frequent pattern framework (LFPF) and constructs an ascending label ordering method based on the pattern to reduce interference. In addition, three classical metaheuristic algorithms (the simulated annealing algorithm, the genetic algorithm, and the ant colony algorithm) are applied to the PFLP in combination with the framework to verify its validity. A bit-based grid spatial index is also proposed to reduce memory use and time consumption in conflict detection (see the sketch below). Performance is tested with randomly obtained POI datasets of 4,000, 10,000, and 20,000 points under various label densities. The results of these experiments showed that: (1) the proposed method outperformed both the original algorithms and recent literature, with label quality improvements ranging from 3 to 6.7 and from 0.1 to 2.6, respectively; and (2) label efficiency was improved by 58.2% compared with the traditional grid index.
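    The following is a hedged sketch of what a bit-based grid index for conflict detection could look like: each grid row is stored as one integer bitset, so a candidate label box is tested against all occupied cells with one bitwise AND per row instead of pairwise box comparisons. The cell size and interface are assumptions for illustration, not the paper's implementation.

```python
# Illustrative bit-based grid index: rows are integer bitsets over columns.
class BitGridIndex:
    def __init__(self, width, height, cell):
        self.cell = cell
        self.rows = [0] * (height // cell + 1)  # one bitset per grid row

    def _span(self, x0, y0, x1, y1):
        c0, c1 = int(x0 // self.cell), int(x1 // self.cell)
        r0, r1 = int(y0 // self.cell), int(y1 // self.cell)
        mask = ((1 << (c1 - c0 + 1)) - 1) << c0  # bits for covered columns
        return r0, r1, mask

    def conflicts(self, box):
        r0, r1, mask = self._span(*box)
        return any(self.rows[r] & mask for r in range(r0, r1 + 1))

    def insert(self, box):
        r0, r1, mask = self._span(*box)
        for r in range(r0, r1 + 1):
            self.rows[r] |= mask

index = BitGridIndex(width=1000, height=1000, cell=10)
label_box = (120, 340, 168, 356)      # (x0, y0, x1, y1) in map units
if not index.conflicts(label_box):
    index.insert(label_box)           # place the label and mark its cells
```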

    Modern Power System Dynamic Performance Improvement through Big Data Analysis

    Get PDF
    Higher penetration of Renewable Energy (RE) is causing generation uncertainty and a reduction of system inertia in the modern power system. This phenomenon brings more challenges to power system dynamic behavior, especially frequency oscillation and excursion, voltage, and transient stability problems. This dissertation extracts the most useful information from power system features and improves system dynamic behavior through big data analysis in three aspects: inertia distribution estimation, actuator placement, and operational studies. First, pioneering work on finding the physical location of the center of inertia (COI) in the system and creating an accurate and useful inertia distribution map is presented. Theoretical proof and dynamic simulation validation support the proposed method for inertia distribution estimation based on PMU measurement data. Estimation results are obtained for a radial system, a meshed system, the IEEE 39-bus test system, the Chilean system, and a real utility system in the US. Second, this work provides two control actuator placement strategies using measurement data samples and machine learning algorithms. The first strategy targets systems with a single oscillation mode: control actuators should be placed at buses that are far away from the COI bus. This rule increased the damping ratio of example systems by up to 14% and greatly reduced the computational complexity in the simulation results of the Chilean system. The second rule is created for systems with multiple dynamic problems; general and effective guidance for planners is obtained for the IEEE 39-bus and IEEE 118-bus systems using machine learning algorithms, by finding the relationship between the most significant system features and system dynamic performance. Lastly, this work studies real-time voltage security assessment and key-link identification in cascading failure analysis. A proposed deep-learning framework achieves the highest accuracy and lower computational time for real-time security analysis. In addition, key links are identified through distance matrix calculation and probability tree generation using 400,000 data samples from the Western Electricity Coordinating Council (WECC) system.
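    As background to the COI discussion above, here is a sketch of the standard inertia-weighted center-of-inertia frequency computed from PMU-style measurements. The weighting below is the textbook definition; the dissertation's estimator of the COI's physical location is more involved, so treat this only as the underlying quantity, with all numbers invented.

```python
# Textbook COI frequency: f_COI = sum(H_i * S_i * f_i) / sum(H_i * S_i).
import numpy as np

H = np.array([4.2, 3.1, 5.0])        # generator inertia constants (s), assumed
S = np.array([500., 300., 800.])     # machine MVA ratings, assumed
f = np.array([59.98, 60.01, 59.99])  # measured bus frequencies (Hz), assumed

M = H * S                            # inertia weights
f_coi = np.sum(M * f) / np.sum(M)    # COI frequency
print(f"COI frequency: {f_coi:.4f} Hz")

# Crude proxy (assumption, not the dissertation's method): the bus whose
# frequency tracks the COI trace most closely is a candidate "COI bus".
print("closest bus:", int(np.argmin(np.abs(f - f_coi))))
```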

    Energy and performance-optimized scheduling of tasks in distributed cloud and edge computing systems

    Get PDF
    Infrastructure resources in distributed cloud data centers (CDCs) are shared by heterogeneous applications in a high-performance and cost-effective way. Edge computing has emerged as a new paradigm that provides access to computing capacities in end devices, yet it suffers from such problems as load imbalance, long scheduling times, and the limited power of its edge nodes. Therefore, intelligent task scheduling in CDCs and edge nodes is critically important for constructing energy-efficient cloud and edge computing systems. Current approaches cannot smartly minimize the total cost of CDCs, maximize their profit, and improve the quality of service (QoS) of tasks because of the aperiodic arrival and heterogeneity of tasks. This dissertation proposes a class of energy- and performance-optimized scheduling algorithms built on top of several intelligent optimization algorithms. It consists of two parts: background work (Chapters 3–6) and new contributions (Chapters 7–11). 1) Background work. Chapter 3 proposes a spatial task scheduling and resource optimization method to minimize the total cost of CDCs where the bandwidth prices of Internet service providers, power grid prices, and renewable energy all vary with location. Chapter 4 presents a geography-aware task scheduling approach that considers spatial variations in CDCs to maximize the profit of their providers by intelligently scheduling tasks. Chapter 5 presents a spatio-temporal task scheduling algorithm to minimize energy cost by scheduling heterogeneous tasks among CDCs while meeting their delay constraints. Chapter 6 gives a temporal scheduling algorithm considering temporal variations in revenue, electricity prices, green energy, and the prices of public clouds. 2) Contributions. Chapter 7 proposes a multi-objective optimization method for CDCs to maximize their profit and minimize the average loss probability of tasks by determining the task allocation among Internet service providers and the task service rates of each CDC. A simulated annealing-based bi-objective differential evolution algorithm is proposed to obtain an approximate Pareto-optimal set, and a knee solution is selected to schedule tasks in a high-profit and high-QoS way. Chapter 8 formulates a bi-objective constrained optimization problem and designs a novel optimization method to cope with energy cost reduction and QoS improvement; it jointly minimizes both the energy cost of CDCs and the average response time of all tasks by intelligently allocating tasks among CDCs and changing the task service rate of each CDC. Chapter 9 formulates a constrained bi-objective optimization problem for the joint optimization of the revenue and energy cost of CDCs, solved with an improved multi-objective evolutionary algorithm based on decomposition; it determines a high-quality trade-off between revenue maximization and energy cost minimization by considering CDCs' spatial differences in energy cost while meeting tasks' delay constraints. Chapter 10 proposes a simulated annealing-based bees algorithm to find a close-to-optimal solution; a fine-grained spatial task scheduling algorithm is then designed to minimize the energy cost of CDCs by allocating tasks among multiple green clouds and specifying the running speeds of their servers. Chapter 11 proposes a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize the profit of systems and guarantee that the response time limits of tasks are met in cloud-edge computing systems; the resulting single-objective constrained optimization problem is solved by a proposed simulated annealing-based migrating birds optimization algorithm. This dissertation evaluates these algorithms, models, and software with real-life data and shows that they improve the scheduling precision and cost-effectiveness of distributed cloud and edge computing systems.
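    A minimal sketch of the flavor of optimization the dissertation describes: simulated-annealing-style search over a task allocation across CDCs, with a scalarized objective trading energy cost against mean response time. This stands in for the bi-objective formulations of Chapters 7–9 (which use differential evolution and decomposition-based MOEAs and select a knee point); the M/M/1 response-time model, rates, and costs are all assumptions.

```python
# Toy SA scheduler: split an arrival rate lam across 3 CDCs (assumed data).
import math, random

COST = [0.12, 0.08, 0.15]      # assumed per-task energy cost of 3 CDCs
RATE = [120.0, 80.0, 150.0]    # assumed service rates (tasks/s), M/M/1-style

def objective(alloc, lam=300.0, w=0.5):
    arr = [lam * a for a in alloc]
    if any(a >= r for a, r in zip(arr, RATE)):   # unstable queue: infeasible
        return float("inf")
    cost = sum(a * c for a, c in zip(arr, COST))
    resp = sum(a / lam * (1.0 / (r - a)) for a, r in zip(arr, RATE))
    return w * cost + (1 - w) * resp * 100       # scalarized trade-off

def anneal(steps=5000, T0=1.0):
    x = [1 / 3] * 3
    best, fx = list(x), objective(x)
    for k in range(steps):
        T = T0 * (1 - k / steps)
        i, j = random.sample(range(3), 2)
        d = min(random.uniform(0, 0.05), x[i])
        y = list(x); y[i] -= d; y[j] += d        # shift load between CDCs
        fy = objective(y)
        if fy < fx or random.random() < math.exp((fx - fy) / max(T, 1e-9)):
            x, fx = y, fy
        if fx < objective(best):
            best = list(x)
    return best, objective(best)

print(anneal())   # allocation fractions and the scalarized objective
```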

    A multi-agent system for on-the-fly web map generation and spatial conflict resolution

    Get PDF
    The Internet has become a fast-growing medium for obtaining and disseminating geospatial information. It provides more and more web mapping services accessible to thousands of users worldwide. However, the quality of these services needs to be improved, especially in terms of personalization. To increase map flexibility, it is important that the map correspond as much as possible to the user's needs, preferences, and context. This may be achieved by applying suitable transformations, in real time, to spatial objects at each map generation cycle. An underlying challenge of such on-the-fly map generation is to solve the spatial conflicts that may appear between objects, especially due to the lack of space on display screens. In this dissertation, we propose a multiagent-based approach to address the problems of on-the-fly web map generation and spatial conflict resolution. The approach is based upon the use of multiple representation and cartographic generalization. It solves conflicts and generates maps according to our innovative progressive map generation by layers of interest. A layer of interest contains objects that have the same importance to the user; this content, which depends on the user's needs and the map's context of use, is determined on the fly. Our multiagent-based approach generates and transfers the data of the required map in parallel: as soon as a given layer of interest is generated, it is transmitted to the user. In order to generate a given map and solve spatial conflicts, we assign a software agent to every spatial object. The agents then compete for space occupation, a competition driven by a set of priorities corresponding to the importance of objects for the user. During processing, agents take into account the user's needs and preferences in order to improve the personalization of the final map. They emphasize important objects by improving their legibility and use symbols that help the user better understand the geographic space. Since the user can stop the map generation process whenever the data already transferred answers his or her needs, waiting delays are reduced. To illustrate our approach, we apply it to tourist web and mobile mapping applications. In these contexts, we categorize our data, which concern Quebec City, into four layers of interest containing explicitly required objects, landmark objects, the road network, and ordinary objects that have no specific importance for the user. Our multiagent system aims at solving the following problems related to on-the-fly web mapping applications: 1. How can we adapt the contents of maps to users' needs on the fly? 
2. How can we solve spatial conflicts in order to improve the legibility of maps while taking into account users' needs? 3. How can we speed up data generation and transfer to users? The main contributions of this thesis are: 1. The resolution of spatial conflicts using multiagent systems, cartographic generalization, and multiple representation. 2. The on-the-fly generation of web and mobile maps using multiagent systems, cartographic generalization, and multiple representation. 3. The real-time adaptation of map contents to users' needs at the source (during the first generation of the map). 4. A new modeling of the geographic space based upon a multi-layer multiagent system architecture. 5. A progressive map generation approach by layers of interest. 6. The parallel generation and transfer of web and mobile maps to users.
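    A hedged sketch of the priority-driven competition for display space described above: objects are processed layer of interest by layer of interest, most important first, and a lower-priority object is displaced (or generalized away) when it conflicts with already-placed, more important objects. The displacement move, the two-layer demo, and the 1-D footprints are illustrative assumptions, not the thesis's agent negotiation protocol.

```python
# Greedy, priority-ordered conflict resolution over layers of interest.
def place_by_layers(layers, conflicts, displace):
    """layers: object lists, most important first.
    conflicts(a, b) -> bool; displace(obj) -> moved obj or None (dropped)."""
    placed = []
    for layer in layers:                  # each finished layer could already
        for obj in layer:                 # be streamed to the user (parallel
            candidate = obj               # generation and transfer)
            while candidate is not None and any(conflicts(candidate, p)
                                                for p in placed):
                candidate = displace(candidate)   # agent tries a new position
            if candidate is not None:
                placed.append(candidate)
    return placed

# Toy usage: 1-D intervals standing in for symbol footprints.
roads = [(0, 10)]; landmarks = [(8, 12), (20, 24)]
overlap = lambda a, b: not (a[1] <= b[0] or b[1] <= a[0])
shift = lambda o: (o[0] + 2, o[1] + 2) if o[1] < 30 else None
print(place_by_layers([roads, landmarks], overlap, shift))
```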

    Electroencephalography brain computer interface using an asynchronous protocol

    Get PDF
    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfillment of the requirements for the degree of Master of Science. October 31, 2016. Brain Computer Interface (BCI) technology is a promising new channel for communication between humans and computers, and consequently between humans. This technology has the potential to form the basis for a paradigm shift in communication for people with disabilities or neuro-degenerative ailments. The objective of this work is to create an asynchronous BCI based on a commercial-grade electroencephalography (EEG) sensor. The BCI is intended to allow a user of possibly low-income means to issue control signals to a computer by using modulated cortical activation patterns as a control signal. The user achieves this modulation by performing a mental task, such as imagining waving the left arm, until the computer performs the action intended by the user. In our work, we make use of the Emotiv EPOC headset to perform the EEG measurements. We validate our models by assessing their performance when the experimental data is collected using clinical-grade EEG technology, making use of a publicly available dataset in the validation phase. We apply signal processing concepts, in particular the fast Fourier transform (FFT), to extract the power spectrum of each electrode from the EEG time-series data. Specific bands in the power spectra, motivated by insights from neuroscience, are used to construct a vector that represents the abstract state the brain is in at that particular moment. The state vector is used in conjunction with a classification model whose purpose is to associate the input data with an abstract classification result, which can then be used to select the appropriate set of instructions to be executed by the computer. In our work, we use probabilistic graphical models to perform this association, and the performance of two such models is evaluated. As a preliminary step, we perform classification on pre-segmented data and assess the performance of the hidden conditional random fields (HCRF) model. The pre-segmented data has a trial structure such that each data file contains the power spectra measurements associated with only one mental task. The objective of this assessment is to determine how well the HCRF models the spatio-spectral and temporal relationships in the EEG data when mental tasks are performed in the aforementioned manner; in other words, the HCRF is to model the internal dynamics of the data corresponding to the mental task. The performance of the HCRF is assessed over three and four classes. We find that the HCRF can model the internal structure of the data corresponding to different mental tasks. As the final step, we perform classification on continuous, unsegmented data and assess the performance of the latent dynamic conditional random fields (LDCRF). The LDCRF is used to perform sequence segmentation and labeling at each time step so as to allow the program to determine which action should be taken at that moment; this sequence segmentation and labeling is the primary capability required to facilitate an asynchronous BCI protocol. The continuous data has a trial structure such that each data file contains the power spectra measurements associated with three different mental tasks, randomly selected at 15-second intervals. The objective of this assessment is to determine how well the LDCRF models the spatio-spectral and temporal relationships in the EEG data, both within each mental task and in the transitions between mental tasks. The performance of the LDCRF is assessed over three classes for both the publicly available data and the data we obtained using the Emotiv EPOC headset. We find that the LDCRF produces a true positive classification rate of 82.31%, averaged over three subjects, on the validation portion of the publicly available data. On the data collected using the Emotiv EPOC, the LDCRF produces a true positive classification rate of 42.55%, averaged over two subjects. In both assessments, a random classification strategy would produce a true positive classification rate of 33.34%, so our classification strategy clearly provides above-random performance on the two groups of datasets. We conclude that our results indicate that creating low-cost EEG-based BCI technology holds potential for future development. However, as discussed in the final chapter, further work on both the software and low-cost hardware aspects is required to improve the performance of the technology as it relates to the low-cost context.
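    A sketch of the band-power feature extraction step described above: FFT each EEG window per electrode and average power over neuroscience-motivated frequency bands to form the "state vector". The band edges, window length, and 14-channel layout are common defaults assumed here, not values taken from the dissertation.

```python
# FFT band-power features per channel -> flat state vector.
import numpy as np

FS = 128                         # Emotiv EPOC sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(window):
    """window: (n_samples, n_channels) EEG segment -> feature vector."""
    n = window.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    power = np.abs(np.fft.rfft(window, axis=0)) ** 2   # per-channel spectrum
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(power[mask].mean(axis=0))         # mean power per channel
    return np.concatenate(feats)                       # brain "state vector"

eeg = np.random.randn(2 * FS, 14)      # 2 s of fake data, 14 EPOC channels
print(band_power_features(eeg).shape)  # (3 bands x 14 channels,) -> (42,)
```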

    Applied Metaheuristic Computing

    Get PDF
    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems, such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because the classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond the local optimality that impairs the capability of traditional computation methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
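    A tiny illustration of the point above: greedy local search stalls at a local optimum, while a metaheuristic acceptance rule (simulated-annealing-style) lets the search move through worse neighbors and escape. The toy objective is invented and not tied to any paper in the series.

```python
# Greedy descent vs. metaheuristic acceptance on a multimodal toy function.
import math, random

def f(x):                       # toy objective to minimize; global min near x=-1.3
    return x * x + 10 * math.sin(x)

def search(accept_worse):
    x, T = 6.0, 2.0
    for _ in range(2000):
        y = x + random.uniform(-0.5, 0.5)
        if f(y) < f(x) or (accept_worse and
                           random.random() < math.exp((f(x) - f(y)) / T)):
            x = y
        T *= 0.999              # cool down the acceptance probability
    return x, f(x)

print("greedy:       ", search(accept_worse=False))  # often stuck near x~3.8
print("metaheuristic:", search(accept_worse=True))   # tends to reach x~-1.3
```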

    Robust leak localization in water distribution networks using machine learning techniques

    Get PDF
    Embargo applied from the date of the thesis defense until 20 December 2019. This PhD thesis presents a methodology to detect, estimate, and localize water leaks (with the main focus on the localization problem) in water distribution networks, using hydraulic models and machine learning techniques. The current state of the art is introduced, the theoretical basis of the applied machine learning techniques is explained, and the hydraulic model is detailed. The whole methodology is presented and tested on different water distribution networks and district metered areas, based on simulated and real case studies, and compared with published methods. The contributions focus on providing methods that are more robust against the uncertainties that affect the leak detection problem, dealing with them by using self-similarity to create features monitored by the intersection-of-confidence-intervals change detection technique, and on leak localization, where the problem is tackled using machine learning techniques. By using those techniques, the leak behavior, together with its uncertainty, is expected to be learned during the training phase and exploited in the diagnosis stage. A leak detection method is presented that can estimate the leak size and the time at which the leak appeared. This method captures the normal, leak-free behavior and contrasts it with new measurements in order to evaluate the state of the network; if the behavior is not normal, the method checks whether this is due to a leak. To make detection more robust, a specific validation layer is designed to operate specifically with leaks and in the temporal region where the leak is most apparent. A methodology to extend the current model-based approach to localize water leaks by means of classifiers is proposed, using the non-parametric k-nearest neighbors classifier and the parametric multi-class Bayesian classifier. A new data-driven approach to localize leaks using a multivariate regression technique, without the use of hydraulic models, is also introduced. This method has a clear benefit over the model-based technique by removing the need for the hydraulic model, although topological information is still required. Moreover, information about the expected leaks is not required, since knowledge of the expected hydraulic behavior under a leak is exploited to find the most likely leak location. This method performs well in practice but is very sensitive to the number of sensors in the network and their placement. The proposed sensor placement techniques reduce the computational load required to account for the amount of data needed to model the uncertainty, compared with other optimization approaches, while being designed for the leak localization problem. More precisely, the proposed hybrid feature selection technique for sensor placement can work with any method that can be evaluated with a confusion matrix, while still being specialized for the leak localization task. This last method works well for a few sensors but lacks precision when the number of sensors to place is large. To overcome this problem, an incremental sensor placement technique is proposed, which is better for a larger number of sensors to place but worse when the number is small.
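    A hedged sketch of classifier-based leak localization in the spirit described above: train a k-nearest neighbors classifier on pressure residuals (measured minus leak-free model prediction) generated for leaks at each candidate node, then predict the leaky node from an observed residual vector. The data here is synthetic random "signatures"; the thesis trains on hydraulic-model simulations, so all sizes and noise levels are assumptions.

```python
# k-NN leak localization on synthetic pressure-residual signatures.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_nodes, n_sensors, runs = 20, 5, 40

# Assumed sensitivity of each sensor to a leak at each node (a stand-in for
# the hydraulic model's leak signatures).
signatures = rng.normal(size=(n_nodes, n_sensors))

X = np.vstack([sig + 0.3 * rng.normal(size=(runs, n_sensors))  # noisy training
               for sig in signatures])                         # residuals
y = np.repeat(np.arange(n_nodes), runs)                        # leak node labels

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

observed = signatures[7] + 0.3 * rng.normal(size=n_sensors)    # leak at node 7
print("estimated leak node:", clf.predict([observed])[0])
```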