
    Generating approximate region boundaries from heterogeneous spatial information: an evolutionary approach

    Spatial information takes different forms in different applications, ranging from accurate coordinates in geographic information systems to the qualitative abstractions that are used in artificial intelligence and spatial cognition. As a result, existing spatial information processing techniques tend to be tailored towards one type of spatial information, and cannot readily be extended to cope with the heterogeneity of spatial information that often arises in practice. In applications such as geographic information retrieval, on the other hand, approximate boundaries of spatial regions need to be constructed using whatever spatial information can be obtained. Motivated by this observation, we propose a novel methodology for generating spatial scenarios that are compatible with the available knowledge. By suitably discretizing space, this task is translated into a combinatorial optimization problem, which is solved using a hybridization of two well-known meta-heuristics: genetic algorithms and ant colony optimization. The result is a flexible method that can cope with both quantitative and qualitative information, and can easily be adapted to the needs of specific applications. Experiments with geographic data demonstrate the potential of the approach.
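
    The hybridization of genetic algorithms and ant colony optimization is only described above at a high level. The Python sketch below shows one plausible way such a hybrid could be structured, assuming space has been discretized into a list of hashable grid cells, a candidate region is a set of cells, a user-supplied fitness function scores how compatible a region is with the available quantitative and qualitative constraints, and pheromone values bias the mutation step. All names and parameters here are illustrative assumptions, not the authors' implementation.

        import random

        def crossover(parent_a, parent_b):
            # Uniform crossover on cell-membership sets: each cell is inherited
            # from one of the two parents with equal probability.
            child = set()
            for cell in parent_a | parent_b:
                source = parent_a if random.random() < 0.5 else parent_b
                if cell in source:
                    child.add(cell)
            return child

        def mutate(cells, grid, pheromone, rate=0.05):
            # Ant-colony flavour: cells carrying more pheromone are more likely
            # to be added to the region, cells carrying less are more likely to
            # be dropped.
            mutated = set(cells)
            for cell in grid:
                if random.random() < rate * pheromone[cell]:
                    mutated.add(cell)
                elif cell in mutated and random.random() < rate / pheromone[cell]:
                    mutated.discard(cell)
            return mutated

        def evolve_region(grid, fitness, pop_size=50, generations=200):
            # grid: list of hashable cells, e.g. (row, col) tuples.
            # fitness: scores how compatible a candidate region (set of cells)
            #          is with the available spatial constraints.
            population = [set(random.sample(grid, len(grid) // 4)) for _ in range(pop_size)]
            pheromone = {cell: 1.0 for cell in grid}
            for _ in range(generations):
                ranked = sorted(population, key=fitness, reverse=True)
                elite = ranked[: max(2, pop_size // 5)]
                # Pheromone update: evaporation, then reinforcement by elite regions.
                for cell in grid:
                    pheromone[cell] *= 0.9
                for region in elite:
                    for cell in region:
                        pheromone[cell] += 1.0 / len(elite)
                # Next generation: keep the elite and fill up with pheromone-biased
                # mutations of crossovers between randomly chosen elite parents.
                population = elite + [
                    mutate(crossover(*random.sample(elite, 2)), grid, pheromone)
                    for _ in range(pop_size - len(elite))
                ]
            return max(population, key=fitness)

    The returned cell set can then be traced to obtain an approximate region boundary; repeated runs yield alternative scenarios compatible with the same knowledge.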

    Simulated annealing with thresheld convergence

    Stochastic search techniques for multi-modal search spaces require the ability to balance exploration with exploitation. Exploration is required to find the best region, and exploitation is required to find the best solution (i.e. the local optimum) within this region. Compared to hill climbing, which is purely exploitative, simulated annealing probabilistically allows "backward" steps which facilitate exploration. However, the balance between exploration and exploitation in simulated annealing is biased towards exploitation: improving moves are always accepted, so local (greedy) search steps can occur even at the earliest stages of the search process. The purpose of "thresheld convergence" is to have these early-stage local search steps "held" back by a threshold function. It is hypothesized that early local search steps can interfere with the effectiveness of a search technique's (concurrent) mechanisms for global search. Experiments show that the addition of thresheld convergence to simulated annealing can lead to significant performance improvements in multi-modal search spaces.
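
    The abstract describes the mechanism only in words; the Python sketch below gives one plausible reading of it: a standard simulated annealing loop in which improving moves smaller than a decaying threshold are "held" back (rejected) early in the search, while worsening moves are still accepted with the usual Metropolis probability. The cooling schedule, threshold schedule, and test function are illustrative assumptions, not the authors' exact setup.

        import math
        import random

        def neighbour(x, scale=0.5):
            # Gaussian perturbation of a real-valued solution vector.
            return [xi + random.gauss(0.0, scale) for xi in x]

        def distance(a, b):
            return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

        def sa_thresheld(f, x0, iters=10000, t0=1.0, threshold0=1.0):
            # Simulated annealing (minimization) with thresheld convergence:
            # small improving steps are held back while the threshold is high.
            x, fx = list(x0), f(x0)
            best, fbest = x, fx
            for i in range(iters):
                temp = t0 * (1.0 - i / iters)               # linear cooling (assumed)
                threshold = threshold0 * (1.0 - i / iters)  # threshold decays to zero
                cand = neighbour(x)
                fcand = f(cand)
                if fcand < fx:
                    # Improving move: accept only if it is a sufficiently large
                    # (exploratory) step; small greedy steps are held back.
                    if distance(cand, x) >= threshold:
                        x, fx = cand, fcand
                else:
                    # Worsening move: standard Metropolis acceptance.
                    if random.random() < math.exp(-(fcand - fx) / max(temp, 1e-9)):
                        x, fx = cand, fcand
                if fx < fbest:
                    best, fbest = x, fx
            return best, fbest

        # Example on a multi-modal test function (Rastrigin, to be minimized).
        rastrigin = lambda x: 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)
        print(sa_thresheld(rastrigin, [random.uniform(-5, 5) for _ in range(5)]))

    Once the threshold has decayed to zero the loop behaves like ordinary simulated annealing, so the held-back local refinement still happens, just later in the search.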

    On the connection of probabilistic model checking, planning, and learning for system verification

    This thesis presents approaches that use techniques from the model checking, planning, and learning communities to make systems more reliable and perspicuous. First, two heuristic search and dynamic programming algorithms are adapted to check extremal reachability probabilities, expected accumulated rewards, and their bounded versions on general Markov decision processes (MDPs). Thereby, the problem space originally solvable by these algorithms is enlarged considerably. Correctness and optimality proofs for the adapted algorithms are given, and a comprehensive case study on established benchmarks shows that the implementation, called Modysh, is competitive with state-of-the-art model checkers and even outperforms them on very large state spaces. Second, Deep Statistical Model Checking (DSMC) is introduced, usable for quality assessment and learning-pipeline analysis of systems incorporating trained decision-making agents, such as neural networks (NNs). The idea of DSMC is to use statistical model checking to assess NNs that resolve nondeterminism in systems modeled as MDPs. The versatility of DSMC is exemplified in a number of case studies on Racetrack, an MDP benchmark designed for this purpose that flexibly models the autonomous driving challenge. A comprehensive scalability study demonstrates that DSMC is a lightweight technique tackling the complexity of NN analysis in combination with the state space explosion problem.
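
    As a rough illustration of the DSMC idea in Python: estimate, by repeated simulation, how often a trained policy (the neural network that resolves the MDP's nondeterminism) reaches the goal. The mdp.reset()/mdp.step() interface, the policy callable, and the episode budget are assumptions made for this sketch, not the API of the actual DSMC tooling or of Modysh.

        def dsmc_estimate(mdp, policy, episodes=10000, horizon=200):
            # Monte Carlo estimate of the goal-reachability probability of `policy`
            # (e.g. a trained neural network) acting in `mdp`.
            # Assumed interface: mdp.reset() -> state,
            #                    mdp.step(state, action) -> (next_state, is_goal, is_crash).
            successes = 0
            for _ in range(episodes):
                state = mdp.reset()
                for _ in range(horizon):
                    action = policy(state)   # the NN resolves the nondeterministic choice
                    state, is_goal, is_crash = mdp.step(state, action)
                    if is_goal:
                        successes += 1
                        break
                    if is_crash:
                        break
            p_hat = successes / episodes
            # 95% confidence half-width (normal approximation) for the estimate.
            half_width = 1.96 * (p_hat * (1 - p_hat) / episodes) ** 0.5
            return p_hat, half_width

    On a benchmark like Racetrack, mdp would encode the track dynamics and policy the trained driving agent; the estimated probability and its confidence interval are the kind of output a statistical model checker reports for a reachability property.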

    Enabling sustainable power distribution networks by using smart grid communications

    Smart grid modernization enables the integration of computing, information, and communications capabilities into the legacy electric power grid, especially the low-voltage distribution networks where various consumers are located. This evolutionary paradigm has initiated the worldwide deployment of an enormous number of smart meters, as well as renewable energy sources, at the end-user level. Future distribution networks, as part of the advanced metering infrastructure (AMI), will involve decentralized power control operations over the associated smart grid communications networks. This dissertation addresses three potential problems anticipated in the future distribution networks of the smart grid: 1) local power congestion due to power surpluses produced by PV solar units in a neighborhood, which demands disconnection/reconnection mechanisms to alleviate power overflow; 2) power balance associated with renewable energy utilization, as well as data traffic across a multi-layered distribution network, which requires decentralized designs to facilitate power control and communications; and 3) a breach of data integrity caused by a typical false data injection attack in a smart metering network, which calls for a hybrid intrusion detection system to detect anomalous/malicious activities. For the first problem, a model of the disconnection process via smart metering communications between smart meters and the utility control center is proposed. By modeling the power surplus congestion issue as a knapsack problem, greedy solutions are proposed. Simulation results and analysis show that computation time and data traffic during the disconnection stage can be reduced. For the second problem, autonomous distribution networks are designed with scalability in mind by dividing the legacy distribution network into a set of subnetworks. A power-control method is proposed to tackle the power flow and power balance issues, and an overlay multi-tier communications infrastructure for the underlying power network is proposed to analyze the traffic of data and control messages required for the associated power flow operations. Simulation results and analysis show that the utilization of renewable energy production can be improved while, at the same time, data traffic is reduced under decentralized operations compared to legacy centralized management. For the third problem, an attack model is proposed that minimizes the number of compromised meters while keeping the aggregated power load unchanged, in order to bypass detection in the conventional radial, tree-like distribution network. A hybrid anomaly detection framework is developed that combines the proposed grid sensor placement algorithm with the observability attribute. Simulation results and analysis show that network observability as well as detection accuracy can be improved by utilizing grid-placed sensors. Finally, a number of future directions are identified to further develop the associated problems and proposed solutions.
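
    The first of the three problems is the most algorithmically concrete: power surplus congestion is modeled as a knapsack problem and resolved greedily. The Python sketch below shows one simple greedy heuristic consistent with that description, assuming each smart meter reports its PV surplus and the utility disconnects units until the feeder is back within capacity; the function name, the largest-surplus-first rule, and the kW figures are illustrative assumptions, not the dissertation's exact formulation.

        def greedy_disconnect(surplus, capacity):
            # surplus: dict mapping meter id -> reported PV surplus (kW).
            # capacity: feeder limit (kW).
            # Returns the meters to disconnect so the remaining surplus fits the feeder.
            overflow = sum(surplus.values()) - capacity
            if overflow <= 0:
                return []          # no congestion, nothing to disconnect
            disconnected = []
            # Largest-surplus-first: each disconnection removes as much overflow as
            # possible, keeping the number of disconnected (and later reconnected)
            # units small.
            for meter, s in sorted(surplus.items(), key=lambda kv: kv[1], reverse=True):
                if overflow <= 0:
                    break
                disconnected.append(meter)
                overflow -= s
            return disconnected

        # Example: feeder capacity 10 kW, three meters exporting 6, 5 and 4 kW.
        print(greedy_disconnect({"m1": 6.0, "m2": 5.0, "m3": 4.0}, capacity=10.0))
        # -> ['m1']  (total surplus drops from 15 kW to 9 kW)

    Because only the selected meters need to receive disconnect commands, a heuristic of this shape also illustrates how the disconnection stage can keep messaging traffic low.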