9 research outputs found

    Multi-Goal Multi-Agent Path Finding via Decoupled and Integrated Goal Vertex Ordering

    Full text link
    We introduce multi-goal multi-agent path finding (MAPF^{MG}), which generalizes the standard discrete multi-agent path finding (MAPF) problem. While the task in MAPF is to navigate agents in an undirected graph from their starting vertices to one individual goal vertex per agent, MAPF^{MG} assigns each agent multiple goal vertices, and the task is to visit each of them at least once. Solving MAPF^{MG} requires not only finding collision-free paths for the individual agents but also determining the order in which each agent visits its goal vertices so that common objectives such as the sum-of-costs are optimized. We suggest two novel algorithms using different paradigms to address MAPF^{MG}: a heuristic search-based algorithm called Hamiltonian-CBS (HCBS) and a compilation-based algorithm built on the SMT paradigm, called SMT-Hamiltonian-CBS (SMT-HCBS). Experimental comparison suggests limitations of the compilation-based approach.
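    To make the goal-ordering subproblem concrete, the sketch below (illustrative only, not the paper's HCBS or SMT-HCBS) enumerates the visit orders of a single agent's goal set and scores each order by summing BFS shortest-path distances on the undirected graph; inter-agent collisions are ignored. The function names and the example graph are assumptions for illustration.

```python
from itertools import permutations
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src in an unweighted undirected graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def best_goal_order(adj, start, goals):
    """Try every visit order of `goals` and return (cost, order) minimizing
    the total path length from `start` through all goals. This is the
    single-agent ordering subproblem of MAPF^MG, assuming a connected graph
    and no collision constraints (illustrative sketch only)."""
    dist = {v: bfs_dist(adj, v) for v in [start, *goals]}
    best = (float("inf"), None)
    for order in permutations(goals):
        cost, cur = 0, start
        for g in order:
            cost += dist[cur][g]
            cur = g
        best = min(best, (cost, order))
    return best

# Example: path graph 0-1-2-3-4, agent starts at 2, must visit {0, 4}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(best_goal_order(adj, 2, [0, 4]))  # (6, (0, 4)): total distance 6
```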

    Adaptive search techniques in AI planning and heuristic search

    Get PDF
    State-space search is a common approach to solve problems appearing in artificial intelligence and other subfields of computer science. In such problems, an agent must find a sequence of actions leading from an initial state to a goal state. However, the state spaces of practical applications are often too large to explore exhaustively. Hence, heuristic functions that estimate the distance to a goal state (such as straight-line distance for navigation tasks) are used to guide the search more effectively. Heuristic search is typically viewed as a static process. The heuristic function is assumed to be unchanged throughout the search, and its resulting values are directly used for guidance without applying any further reasoning to them. Yet critical aspects of the task may only be discovered during the search, e.g., regions of the state space where the heuristic does not yield reliable values. Our work here aims to make this process more dynamic, allowing the search to adapt to such observations. One form of adaptation that we consider is online refinement of the heuristic function. We design search algorithms that detect weaknesses in the heuristic, and address them with targeted refinement operations. If the heuristic converges to perfect estimates, this results in a secondary method of progress, causing search algorithms that are otherwise incomplete to eventually find a solution. We also consider settings that inherently require adaptation: In online replanning, a plan that is being executed must be amended to account for changes in the environment. Similarly, in real-time search, an agent must act under strict time constraints with limited information. The search algorithms we introduce in this work share a common pattern of online adaptation, allowing them to effectively react to challenges encountered during the search. We evaluate our contributions on a wide range of standard benchmarks. Our results show that the flexibility of these algorithms makes them more robust than traditional approaches, and they often yield substantial improvements over current state-of-the-art planners.

    Funded by DFG grant 389792660 as part of TRR 248 – CPEC (see https://perspicuous-computing.science), and DFG grant HO 2169/5-1, "Critically Constrained Planning via Partial Delete Relaxation".
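    One classical form of such online heuristic refinement (used here purely as an illustration, not as the thesis's own refinement operations) is the LRTA*-style update: each time a state is expanded, its heuristic value is raised to the best one-step lookahead value, so repeated trials converge toward exact distances on the states visited. A minimal sketch, assuming unit action costs and a dictionary-backed heuristic:

```python
def lrta_star_step(state, succ, h, goal_test):
    """One LRTA*-style move: refine h(state) to its one-step lookahead
    value, then move to the apparently best successor. Repeated trials
    from the start state converge h toward exact goal distances on the
    visited states (unit action costs assumed; illustrative sketch)."""
    if goal_test(state):
        return state
    # Score each successor by cost-to-move (1) plus its current estimate.
    scored = [(1 + h.get(s, 0), s) for s in succ(state)]
    best_f, best_s = min(scored)
    # Online refinement: if h is admissible, h(state) can never exceed
    # best_f's true counterpart, so raising it preserves admissibility.
    h[state] = max(h.get(state, 0), best_f)
    return best_s

# Example on a 1-D corridor 0..4 with the goal at 4 and h initially empty:
succ = lambda x: [v for v in (x - 1, x + 1) if 0 <= v <= 4]
h = {}
state = 0
for _ in range(30):
    state = lrta_star_step(state, succ, h, lambda x: x == 4)
print(state, h)  # reaches 4; h holds refined estimates for visited states
```

    The same pattern (observe a weakness, patch the estimate, continue searching) is what lets otherwise incomplete search algorithms eventually find a solution once the heuristic stops misleading them.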

    Quantum Machine Intelligence: Mapping AI Applications

    Get PDF

    Solving hard subgraph problems in parallel

    Get PDF
    This thesis improves the state of the art in exact, practical algorithms for finding subgraphs. We study maximum clique, subgraph isomorphism, and maximum common subgraph problems. These are widely applicable: within computing science, subgraph problems arise in document clustering, computer vision, the design of communication protocols, model checking, compiler code generation, malware detection, cryptography, and robotics; beyond, applications occur in biochemistry, electrical engineering, mathematics, law enforcement, fraud detection, fault diagnosis, manufacturing, and sociology. We therefore consider both the "pure" forms of these problems and variants with labels and other domain-specific constraints. Although subgraph-finding should theoretically be hard, the constraint-based search algorithms we discuss can easily solve real-world instances involving graphs with thousands of vertices and millions of edges. We therefore ask: is it possible to generate "really hard" instances for these problems, and if so, what can we learn? By extending research into combinatorial phase transition phenomena, we develop a better understanding of branching heuristics, as well as highlighting a serious flaw in the design of graph database systems. This thesis also demonstrates how to exploit two of the kinds of parallelism offered by current computer hardware. Bit parallelism allows us to carry out operations on whole sets of vertices in a single instruction; this is largely routine. Thread parallelism, to make use of the multiple cores offered by all modern processors, is more complex. We suggest three desirable performance characteristics that we would like when introducing thread parallelism: lack of risk (parallel cannot be exponentially slower than sequential), scalability (adding more processing cores cannot make runtimes worse), and reproducibility (the same instance on the same hardware will take roughly the same time every time it is run). We then detail the difficulties in guaranteeing these characteristics when using modern algorithmic techniques. Besides ensuring that parallelism cannot make things worse, we also increase the likelihood of it making things better. We compare randomised work stealing to new tailored strategies, and perform experiments to identify the factors contributing to good speedups. We show that whilst load balancing is difficult, the primary factor influencing the results is the interaction between branching heuristics and parallelism. By using parallelism to explicitly offset the commitment made to weak early branching choices, we obtain parallel subgraph solvers which are substantially and consistently better than the best sequential algorithms.
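    To illustrate the bit parallelism described above (a sketch under simplifying assumptions, not the thesis's solver): representing each vertex's neighbourhood as an integer bitmask lets a branch-and-bound clique search filter the whole candidate set with a single AND. Python's arbitrary-precision integers stand in for the fixed-width bitset words a C++ implementation would use.

```python
def max_clique(n, edges):
    """Bit-parallel branch-and-bound for maximum clique. Candidate sets
    are int bitmasks, so 'cand & nbr[v]' keeps only v's neighbours in
    one operation (illustrative sketch, not an optimized solver;
    int.bit_count() requires Python 3.10+)."""
    nbr = [0] * n
    for u, v in edges:
        nbr[u] |= 1 << v
        nbr[v] |= 1 << u

    best = [0]  # size of the best clique found so far

    def expand(clique_size, cand):
        if cand == 0:
            best[0] = max(best[0], clique_size)
            return
        # Bound: even taking every candidate cannot beat the incumbent.
        if clique_size + cand.bit_count() <= best[0]:
            return
        while cand:
            v = cand.bit_length() - 1               # pick highest candidate
            cand &= ~(1 << v)                       # remove it at this level
            expand(clique_size + 1, cand & nbr[v])  # keep only v's neighbours

    expand(0, (1 << n) - 1)
    return best[0]

# Triangle 0-1-2 plus a pendant vertex 3: maximum clique has size 3.
print(max_clique(4, [(0, 1), (1, 2), (0, 2), (2, 3)]))  # 3
```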