106 research outputs found

    Scalable Algorithms for the Analysis of Massive Networks

    Get PDF
    Network analysis aims to unveil non-trivial insights from networked data by studying relationship patterns between the entities of a network. Among these insights, a popular one is to quantify the importance of an entity with respect to the others according to some criteria. Another one is to find the most suitable matching partner for each participant of a network, knowing the pairwise preferences of the participants to be matched with each other - known as Maximum Weighted Matching (MWM). Since the notion of importance is tied to the application under consideration, numerous centrality measures have been introduced. Many of these measures, however, were conceived in a time when computing power was very limited and networks were much smaller than today's, and thus scalability to large datasets was not a concern. Today, massive networks with millions of edges are ubiquitous, and a complete exact computation of traditional centrality measures is often too time-consuming. This issue is amplified if the objective is to find the group of k vertices that is most central as a group. Scalable algorithms to identify highly central (groups of) vertices on massive graphs are thus of pivotal importance for large-scale network analysis. In addition to their size, today's networks often evolve over time, which poses the challenge of efficiently updating results after a change occurs. Hence, efficient dynamic algorithms are essential for modern network analysis pipelines. In this work, we propose scalable algorithms for identifying important vertices in a network, and for efficiently updating them in evolving networks.
In real-world graphs with hundreds of millions of edges, most of our algorithms require seconds to a few minutes to perform these tasks. Further, we extend a state-of-the-art algorithm for MWM to dynamic graphs. Experiments show that our dynamic MWM algorithm handles updates in graphs with billions of edges in milliseconds.
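To make the group-centrality task concrete: a common baseline is a greedy strategy that repeatedly adds the vertex with the largest marginal gain in some group score. The sketch below uses group degree (the size of the group's combined neighborhood) as a stand-in score; it is an illustrative baseline, not the thesis' algorithm.

```python
# Greedy selection of a group of k central vertices.
# Illustrative sketch only: group_degree is a stand-in for any monotone
# group-centrality function; the thesis' algorithms are far more involved.

def group_degree(graph, group):
    """Number of vertices covered by the group and its combined neighborhood."""
    covered = set(group)
    for v in group:
        covered.update(graph[v])
    return len(covered)

def greedy_group(graph, k):
    group = []
    candidates = set(graph)
    for _ in range(k):
        best = max(candidates, key=lambda v: group_degree(graph, group + [v]))
        group.append(best)
        candidates.remove(best)
    return group

# Example: adjacency lists as a dict of sets.
g = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4}, 4: {3}}
print(greedy_group(g, 2))
```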

    Algorithms for the Identification of Central Nodes in Large Real-World Networks

    Get PDF

    Recent Advances in Fully Dynamic Graph Algorithms

    Full text link
    In recent years, significant advances have been made in the design and analysis of fully dynamic algorithms. However, these theoretical results have received very little attention from the practical perspective. Few of the algorithms are implemented and tested on real datasets, and their practical potential is far from understood. Here, we present a quick reference guide to recent engineering and theory results in the area of fully dynamic graph algorithms.
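As a point of reference for why the fully dynamic setting is hard: edge insertions alone can be handled with a simple union-find structure, while deletions are what the surveyed algorithms need heavier machinery for. A minimal incremental-only sketch (ours, not from the survey):

```python
# Incremental (insert-only) connectivity via union-find.
# Deletions would require undoing merges, which union-find cannot do;
# fully dynamic connectivity needs more sophisticated structures
# (e.g., hierarchies of spanning forests).

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

    def connected(self, x, y):
        return self.find(x) == self.find(y)

uf = UnionFind(5)
uf.union(0, 1)             # insert edge (0, 1)
uf.union(1, 2)             # insert edge (1, 2)
print(uf.connected(0, 2))  # True
# Deleting edge (1, 2) is not supported: this is the fully dynamic gap.
```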

    Pathfinding Algorithm Optimization Via Evolution

    Get PDF
    Pathfinding is a popular computer science problem in both academic research and industrial development. The objective of pathfinding is to search for a path, often the shortest path, from one location to another on a graph. Many real-world applications can be considered pathfinding problems, including motion planning, video games, logistics, and decision making. Computer scientists have proposed different algorithms to efficiently search for the shortest path. The A* search algorithm is the de facto pathfinding algorithm; it uses a heuristic function to determine the best action to take based on the given information. It is the most popular pathfinding algorithm due to its simplicity and efficiency. The performance of A* is heavily dependent on the quality of the heuristic function, which determines the search speed, accuracy, and memory consumption. Hence, designing good heuristic functions for specific domains has become the primary research focus in pathfinding algorithm optimization. In this dissertation, we address and solve several commonly known challenges in pathfinding problems and the A* algorithm. First, designing new heuristic functions is a difficult and time-consuming task, especially when they are used to solve complex problems. The task requires the user to have expert knowledge of the problem. Moreover, a single heuristic function might not be enough to digest all the provided information and return the best guidance during the search. Previous works suggest that multiple heuristics for complex problems can dramatically speed up the search. However, choosing the appropriate combination of heuristic functions is tricky. Current optimization approaches rely on hand-tuning the parameters via trial and error by engineers over many iterations. There is a need to reduce the difficulty of designing heuristic functions for search performance maximization. Our first contribution is an improved A* with a self-evolving heuristic function named Evolutionary Heuristic A* (EHA*), which reduces the engineering effort to design the heuristic function for A* and maximizes search performance. Our experiment results show that EHA* (i) preserves path optimality; (ii) is not limited to a particular application; (iii) speeds up the path searching process; and (iv) most importantly, dramatically reduces the difficulty for software engineers to design heuristic functions for A* search. Moreover, our work can be applied to other existing works on the performance improvement of A* search. Second, A* search suffers from poor performance on large search spaces. Although EHA* improves the quality of heuristic functions, a large search space still leads to many unnecessary searches. Our second contribution is the Regions Discovery Algorithm (RDA), a map clustering technique that partitions a grid-based map into different categories to reduce search spaces and increase search speed. Our approach reduces the size of search spaces by partitioning a graph into many segments and identifying the segments by their characteristics. By identifying segments in different categories, we can easily eliminate search spaces, such as rooms, that cannot be part of the optimal solution. Unlike existing approaches that might result in non-optimal solutions, our experiment results show that RDA guarantees optimal solutions.
Our third contribution, the Hierarchical Evolutionary Heuristic A* (HEHA*), further improves the ability to handle complex pathfinding problems and boosts search performance by reducing search spaces and exploiting parallelism techniques. HEHA* combines the strengths of EHA* and RDA to reduce search spaces and improve search speed, and our results show that it provides better search performance with less memory consumption. In the pre-processing phase, HEHA* first partitions a graph into different segments and then applies different optimized heuristic functions to each segment to maximize search performance. During the online process, HEHA* searches on the abstract level first to reduce the search area, and exploits parallelism to speed up the search. Fourth, we improve and apply HEHA* to Multi-Agent Pathfinding (MAPF) problems. MAPF is the fundamental problem of many robotic and logistic applications, where the main constraint is that all agents find the shortest paths while not colliding with each other. While the current trend favors centrally controlled systems, our approach is to develop a distributed version of HEHA* that can efficiently plan the optimal path for each agent. Such a system requires data sharing and exchange among the agents, so that each agent can make its own decisions without a supervising system. Our experiment results show that the multi-agent version of HEHA* maintains a high success rate when the number of agents increases. While EHA* and HEHA* provide a novel approach to heuristic function design, their pre-processing times are not trivial. To boost the performance of the pre-processing steps in EHA* and HEHA*, our fifth contribution is an FPGA-based reconfigurable hardware accelerator that is not bound to any specific application. Since the genetic algorithm (GA) consists of many independent processes, it is well suited to implementation in a hardware accelerator for maximum performance. We apply the following techniques to enhance performance: deep pipelining, reconfigurable computing, massive parallel processing, and degree-of-parallelism maximization. Our results show that the FPGA accelerator for EHA* improves scalability, throughput, and latency.
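For readers unfamiliar with A*: the heuristic enters the search as the estimate added to the path cost accumulated so far. A compact, generic A* sketch follows, with the heuristic passed in as a parameter; evolutionary approaches such as EHA* would tune or replace this function (illustrative sketch only, not EHA* itself).

```python
import heapq

def astar(start, goal, neighbors, heuristic):
    """Generic A*: neighbors(n) yields (next_node, step_cost) pairs."""
    open_heap = [(heuristic(start, goal), 0, start)]
    g_cost = {start: 0}
    came_from = {}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if g > g_cost.get(node, float("inf")):
            continue  # stale heap entry
        if node == goal:
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                came_from[nxt] = node
                heapq.heappush(open_heap, (ng + heuristic(nxt, goal), ng, nxt))
    return None

# 4-connected 10x10 grid with a Manhattan-distance heuristic.
def grid_neighbors(node):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 10 and 0 <= y + dy < 10:
            yield (x + dx, y + dy), 1

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(astar((0, 0), (9, 9), grid_neighbors, manhattan))
```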

    The collaborative iterative search approach to multi agent path finding

    Get PDF
    This thesis presents a new approach to obtaining optimal and complete solutions to Multi Agent Path Finding (MAPF) problems called Collaborative Iterative Search (CIS). CIS employs a conflict-based scheme inspired by the Conflict Based Search (CBS) algorithm and extends this to include a linear order lower level search. The structure of planar graphs is leveraged, permitting further optimization of the algorithm. This takes the form of reasoning-based culling of the search space, while maintaining optimality and completeness. Benchmarks provided demonstrate significant performance gains over the existing state of the art, particularly in the case of sparsely populated maps. The thesis draws to a conclusion with a summary of proposed future work.
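Conflict-based schemes such as CBS plan each agent independently and then search for the earliest pairwise conflict to branch on. A hedged sketch of that detection step (generic CBS-style, not the CIS algorithm itself):

```python
# Detect the first vertex or edge (swap) conflict between two agent paths.
# Paths are lists of grid cells, one entry per timestep.

def first_conflict(path_a, path_b):
    horizon = max(len(path_a), len(path_b))
    for t in range(horizon):
        # Agents wait at their goal once their path ends.
        a_now = path_a[min(t, len(path_a) - 1)]
        b_now = path_b[min(t, len(path_b) - 1)]
        if a_now == b_now:
            return ("vertex", t, a_now)
        if t > 0:
            a_prev = path_a[min(t - 1, len(path_a) - 1)]
            b_prev = path_b[min(t - 1, len(path_b) - 1)]
            if a_now == b_prev and b_now == a_prev:
                return ("edge", t, (a_prev, a_now))
    return None

print(first_conflict([(0, 0), (0, 1)], [(0, 1), (0, 0)]))
# -> ('edge', 1, ((0, 0), (0, 1))): the agents swap cells between t=0 and t=1.
```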

    On-board Trajectory Computation for Mars Atmospheric Entry Based on Parametric Sensitivity Analysis of Optimal Control Problems

    Get PDF
    This thesis develops a precision guidance algorithm for the entry of a small capsule into the atmosphere of Mars. The entry problem is treated as a nonlinear optimal control problem, and the thesis focuses on developing a suboptimal feedback law. To this end, parametric sensitivity analysis of optimal control problems is combined with dynamic programming. This approach enables a real-time capable, locally suboptimal feedback scheme. The optimal control problem is initially considered in an open-loop fashion. To synthesize the feedback law, the optimal control problem is embedded into a family of neighboring problems, which are described by a parameter vector. The optimal solution for a nominal set of parameters is determined using direct optimization methods. In addition, the directional derivatives (sensitivities) of the optimal solution with respect to the parameters are computed. Knowledge of the nominal solution and the sensitivities allows, under certain conditions, applying a Taylor series expansion to approximate the optimal solution for disturbed parameters almost instantly. Additional correction steps can be applied to improve the optimality of the solution and to eliminate errors in the constraints. To transfer this strategy to the closed-loop system, the computation of the sensitivities is performed with respect to different initial conditions. Determining the perturbation direction and interpolating between sensitivities of neighboring initial conditions allows the approximation of the extremal field in a neighborhood of the nominal trajectory. This constitutes a locally suboptimal feedback law. The proposed strategy is applied to the atmospheric entry problem. The developed algorithm is part of the main control loop, i.e. optimal controls and trajectories are computed at a fixed rate, taking into account the current state and parameters. This approach is combined with a trajectory tracking controller based on the aerodynamic drag. The performance and the strengths and weaknesses of this two-degree-of-freedom guidance system are analyzed using Monte Carlo simulation. Finally, the real-time capability of the proposed algorithm is demonstrated in a flight-representative processor-in-the-loop environment.
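The sensitivity-based idea admits a compact statement: if z*(p) denotes the optimal solution of the parametric problem and p_0 the nominal parameter vector, the disturbed solution is approximated to first order as (standard parametric-sensitivity result; notation ours, not the thesis'):

```latex
% First-order update of the optimal solution for a perturbed parameter p,
% valid under standard regularity assumptions:
z^*(p) \approx z^*(p_0) + \frac{\partial z^*}{\partial p}(p_0)\,(p - p_0)
```

Since the sensitivities are computed offline for the nominal solution, evaluating this expansion online reduces to a matrix-vector product, which is what makes the feedback scheme real-time capable.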

    A Partially Randomized Approach to Trajectory Planning and Optimization for Mobile Robots with Flat Dynamics

    Get PDF
    Motion planning problems are characterized by huge search spaces and complex obstacle structures with no concise mathematical expression. The fixed-wing airplane application considered in this thesis adds differential constraints and point-wise bounds, i.e. an infinite number of equality and inequality constraints. An optimal trajectory planning approach is presented, based on the randomized Rapidly-exploring Random Trees framework (RRT*). The local planner relies on differential flatness of the equations of motion to obtain tree branch candidates that automatically satisfy the differential constraints. Flat output trajectories, in this case equivalent to the airplane's flight path, are designed using Bézier curves. Segment feasibility in terms of point-wise inequality constraints is tested by an indicator integral, which is evaluated alongside the segment cost functional. Although the RRT* guarantees optimality in the limit of infinite planning time, it is argued by intuition and experimentation that convergence is not approached at a practically useful rate. Therefore, the randomized planner is augmented by a deterministic variational optimization technique. To this end, the optimal planning task is formulated as a semi-infinite optimization problem, using the intermediate result of the RRT(*) as an initial guess. The proposed optimization algorithm follows the feasible flavor of the primal-dual interior point paradigm. Discretization of functional (infinite) constraints is deferred to the linear subproblems, where it is realized implicitly by numeric quadrature. An inherent numerical ill-conditioning of the method is circumvented by a reduction-like approach, which tracks active constraint locations by introducing new problem variables. Obstacle avoidance is achieved by extending the line search procedure and dynamically adding obstacle-awareness constraints to the problem formulation. Experimental evaluation confirms that the hybrid approach is practically feasible and does indeed outperform RRT*'s built-in optimization mechanism, but the computational burden is still significant.
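The flat-output segments mentioned above are ordinary Bézier curves, which can be evaluated with De Casteljau's algorithm; a minimal sketch of that primitive (not the planner itself):

```python
# De Casteljau evaluation of a Bézier curve from its control points.
# Repeated linear interpolation; numerically stable for the short
# segments a local planner would generate.

def de_casteljau(control_points, t):
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# Cubic segment in the plane, sampled at its midpoint.
segment = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(segment, 0.5))  # -> (2.0, 1.5)
```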

    Coordinating decentralized learning and conflict resolution across agent boundaries

    Get PDF
    It is crucial for embedded systems to adapt to the dynamics of open environments. This adaptation process becomes especially challenging in the context of multiagent systems because of scalability issues, partial information accessibility, and the complex interactions of agents. It is challenging for agents to learn good policies when they must plan and coordinate in uncertain, dynamic environments, especially with large state spaces. It is also critical for agents operating in a multiagent system (MAS) to resolve conflicts among the learned policies of different agents, since such conflicts may have a detrimental influence on the overall performance. The focus of this research is to use a reinforcement-learning-based local optimization algorithm within each agent to learn multiagent policies in a decentralized fashion. These policies allow each agent to adapt to changes in environmental conditions while reorganizing the underlying multiagent network when needed. The research takes an adaptive approach to resolving conflicts that can arise between locally optimal agent policies. First, an algorithm that uses heuristic rules to locally resolve simple conflicts is presented. When the environment is more dynamic and uncertain, we harness a mediator-based mechanism to resolve more complicated conflicts and to selectively expand the agents' state space during the learning process. For scenarios where mediator-based mechanisms with partially global views are ineffective, a more rigorous approach to global conflict resolution that synthesizes multiagent reinforcement learning (MARL) and distributed constraint optimization (DCOP) is developed. These mechanisms are evaluated in the context of a multiagent tornado tracking application called NetRads. Empirical results show that these mechanisms significantly improve the performance of the tornado tracking network for a variety of weather scenarios. The major contributions of this work are: a state-of-the-art decentralized learning approach that supports agent interactions and reorganizes the underlying network when needed; the use of abstract classes of scenarios/states/actions that efficiently manages the exploration of the search space; novel conflict resolution algorithms of increasing complexity that use heuristic rules, sophisticated automated negotiation mechanisms and distributed constraint optimization methods respectively; and finally, a rigorous study of the interplay between two popular theories used to solve multiagent problems, namely decentralized Markov decision processes and distributed constraint optimization.
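The reinforcement-learning building block behind such decentralized policy learning is typically a temporal-difference update; a generic tabular Q-learning sketch (states and actions hypothetical, not the thesis' algorithm):

```python
# One tabular Q-learning update: the generic building block behind
# reinforcement-learning-based local policy optimization.
from collections import defaultdict

Q = defaultdict(float)     # Q[(state, action)] -> value
alpha, gamma = 0.1, 0.95   # learning rate, discount factor

def q_update(state, action, reward, next_state, actions):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Example transition in a toy tracking task (names are hypothetical).
q_update("sector_3", "scan_north", 1.0, "sector_4", ["scan_north", "scan_south"])
print(Q[("sector_3", "scan_north")])  # 0.1 after one update
```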

    Configuring heterogeneous wireless sensor networks under quality-of-service constraints

    Get PDF
    Wireless sensor networks (WSNs) are useful for a diversity of applications, such as structural monitoring of buildings, farming, assistance in rescue operations, in-home entertainment systems, or monitoring people's health. A WSN is a large collection of small sensor devices that provide a detailed view on all sides of the area or object one is interested in. A large variety of WSN hardware platforms is readily available these days. Many operating systems and protocols exist to support essential functionality such as communication, power management, data fusion, localisation, and much more. A typical sensor node has a number of settings that affect its behaviour and the function of the network itself, such as the transmission power of its radio and the number of measurements taken by its sensor per minute. As the number of nodes in a WSN may be very large, the collection of independent parameters in these networks – the configuration space – tends to be enormous. The user of the WSN has certain expectations of the Quality of Service (QoS) of the network. A WSN is deployed for a specific purpose, and has a number of measurable properties that indicate how well the network's task is being performed. Examples of such quality metrics are the time needed for measured information to reach the user, the degree of coverage of the area, or the lifetime of the network. Each point in the configuration space of the network gives rise to a certain value in each of the quality metrics. The user may place constraints on the quality metrics, and wishes to optimise the configuration to meet their goals. Work on sensor networks often focuses on optimising only one metric at a time, ignoring the fact that improving one aspect of the system may deteriorate other important performance characteristics. The study of trade-offs between multiple quality metrics, and a method to optimally configure a WSN for several objectives simultaneously – until now a rather unexplored field – is the main contribution of this thesis. There are many steps involved in the realisation of a WSN that is fulfilling a task as desired. First of all, the task needs to be defined and specified, and appropriate hardware (sensor nodes) needs to be selected. After that, the network needs to be deployed and properly configured. This thesis deals with the configuration problem, starting with a possibly heterogeneous collection of nodes distributed in an area of interest, suitable models of the nodes and their interaction, and a set of task-level requirements in terms of quality metrics. We target the class of WSNs with a single data sink that use a routing tree for communication. We introduce two models of tasks running on a sensor network – target tracking and spatial mapping – which are used in the experiments in this thesis. The configuration process is split in a number of phases. After an initialisation phase to collect information about the network, the routing tree is formed in the second configuration phase. We explore the trade-off between two attributes of a tree: the average path length and the maximum node degree. These properties do not only affect the quality metrics, but also the complexity of the remaining optimisation trajectory. We introduce new algorithms to efficiently construct a shortest-path spanning tree in which all nodes have a degree not higher than a given target value.
The next phase represents the core of the configuration method: it features a QoS optimiser that determines the Pareto-optimal configurations of the network given the routing tree. A configuration contains settings for the parameters of all nodes in the network, plus the metric values they give rise to. The Pareto-optimal configurations, also known as Pareto points, represent the best possible trade-offs between the quality metrics. Given the vastness of the configuration space, which is exponential in the size of the network, it is impossible to use a brute-force approach and try all possibilities. Still, our method efficiently finds all Pareto points, by incrementally searching the configuration space, and discarding potential solutions immediately when they appear to be not Pareto-optimal. An important condition for this to work is the ability to compute quality metrics for a group of nodes from the quality metrics of smaller groups of nodes. The precise requirements are derived and shown to hold for the example tasks. Experimental results show that the practical complexity of this algorithm is approximately linear in the number of nodes in the network, and thus scalable to very large networks. After computing the set of Pareto points, a configuration that satisfies the QoS constraints is selected, and the nodes are configured accordingly (the selection and loading phases). The configuration process can be executed in either a centralised or a distributed way. Centralised means that all computations are carried out on a central node, while the distributed algorithms do all the work on the sensor nodes themselves. Simulations show run times in the order of seconds for the centralised configuration of WSNs of hundreds of TelosB sensor nodes. The distributed algorithms take in the order of minutes for the same networks, but have a lower communication overhead. Hence, both approaches have their own pros and cons, and even a combination is possible in which the heavy work is performed by dedicated compute nodes spread across the network. Besides the trade-offs between quality metrics, there is a meta trade-off between the quality and the cost of the configuration process itself. A speed-up of the configuration process can be achieved in exchange for a reduction in the quality of the solutions. We provide complexity-control functionality to fine-tune this quality/cost trade-off. The methods described thus far configure a WSN given a fixed state (node locations, environmental conditions). WSNs, however, are notoriously dynamic during operation: nodes may move or run out of battery, channel conditions may fluctuate, or the demands from the user may change. The final part of this thesis describes methods to adapt the configuration to such dynamism at run time. In particular, the case of a mobile sink is treated in detail. As frequently doing global reconfigurations would likely be too slow and too expensive, we use localised algorithms to maintain the routing tree and reconfigure the node parameters. Again, we are able to control the quality/cost trade-off, this time by adjusting the size of the locality in which the reconfiguration takes place. To conclude the thesis, a case study is presented, which highlights the use of the configuration method on a more complex example containing a lot of heterogeneity.
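The central operation of the QoS optimiser described above is discarding dominated configurations. A minimal Pareto-filter sketch, assuming all metrics are to be minimised (illustrative only; the thesis' incremental algorithm is more elaborate):

```python
# Keep the Pareto-optimal configurations among candidates, assuming all
# quality metrics are to be minimised (e.g., latency, energy per sample).

def dominates(a, b):
    """a dominates b: no worse in every metric, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(configs):
    """configs: list of (settings, metrics) pairs; returns the Pareto set."""
    return [
        (s, m) for s, m in configs
        if not any(dominates(m2, m) for _, m2 in configs)
    ]

candidates = [
    ({"tx_power": 1}, (120, 5.0)),  # (latency ms, energy mJ)
    ({"tx_power": 2}, (80, 7.5)),
    ({"tx_power": 3}, (80, 9.0)),   # dominated by tx_power=2
]
print(pareto_filter(candidates))    # keeps the first two configurations
```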