101 research outputs found

    Planning with Critical Section Macros: Theory and Practice


    Generation and exploitation of intermediate goals in automated planning

    In automated planning, domain-independent planners often scale poorly. This is due to the exponential blow-up of the effort needed to solve a planning task as its size increases. One of the most popular ways of addressing this problem is splitting the planning problem into several smaller ones. Each subproblem is in theory exponentially easier to solve than the original one, so planners that divide the original task tend to scale much better. To divide the task into smaller ones, we need domain-independent methods to derive intermediate goals. In this thesis we study different approaches that generate and exploit intermediate goals, without limiting ourselves to simply splitting the original problem. Three main lines of research are pursued. The first deals with regression, first tackling its shortcomings and then using it both in bidirectional search and as a way to derive novel heuristics based on intermediate goals. In the second we propose sampling the search space randomly and using the sampled subgoals in a tree-based algorithm that effectively balances exploration and exploitation. Finally, in the third we study the properties of the landmark graph, which represents precedence constraints among subgoals of the task. As a contribution, we propose different characterizations of the landmark graph that improve on its original formulation by providing more information, both formal properties of the task and finer orderings of subgoals exploitable by planners that already use landmarks.
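
    The tree-based use of randomly sampled subgoals is only described at a high level above. The toy sketch below is not the thesis's algorithm; the grid size, start/goal states and the explore_prob parameter are invented. It merely illustrates the general idea of alternating between steering a search tree toward random subgoals (exploration) and toward the goal (exploitation).

```python
import random

# Minimal sketch (not the thesis's algorithm): grow a search tree over a toy
# grid world, alternating between expanding toward a randomly sampled state
# (exploration) and toward the goal (exploitation).

GRID = 20                       # hypothetical 20x20 grid world
START, GOAL = (0, 0), (19, 19)  # invented start and goal states

def neighbors(s):
    x, y = s
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID and 0 <= ny < GRID:
            yield (nx, ny)

def closest(states, target):
    # Manhattan-distance nearest state to the target
    return min(states, key=lambda s: abs(s[0] - target[0]) + abs(s[1] - target[1]))

def tree_search(explore_prob=0.5, max_iters=10000, seed=0):
    rng = random.Random(seed)
    parent = {START: None}
    for _ in range(max_iters):
        # Exploration: steer toward a random subgoal; exploitation: toward the goal.
        target = (rng.randrange(GRID), rng.randrange(GRID)) if rng.random() < explore_prob else GOAL
        frontier = closest(parent, target)
        step = closest(list(neighbors(frontier)), target)
        if step not in parent:
            parent[step] = frontier
        if GOAL in parent:
            plan, s = [], GOAL
            while s is not None:        # walk back to START to extract the plan
                plan.append(s)
                s = parent[s]
            return list(reversed(plan))
    return None

print(len(tree_search()) - 1, "steps")
```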

    Semantics-aware planning methodology for automatic web service composition

    Service-Oriented Computing (SOC) has been a major research topic in the past years. It is based on the idea of composing distributed applications, even in heterogeneous environments, by discovering and invoking network-available Web Services to accomplish complex tasks when no existing service can satisfy the user request on its own. Service-Oriented Architecture (SOA) is a key design principle for building these autonomous, platform-independent Web Services. However, in distributed environments, using services without considering their underlying semantics, whether functional semantics or quality guarantees, can negatively affect the composition process by causing intermittent failures or slow performance. More recently, Artificial Intelligence (AI) planning technologies have been exploited to automate composition. However, most AI-planning-based algorithms do not scale well as the number of Web Services increases, and there is no guarantee that a solution to a composition problem will be found even if one exists. AI Planning Graph addresses several limitations of traditional AI planning by providing a unique search space in a directed layered graph. However, the existing AI Planning Graph algorithm focuses only on finding complete solutions, without taking into account services that do not contribute to the goals. As a result, graph construction can fail when many services are available, even though most of them are irrelevant to the goals. This dissertation puts forward the concept of a more intelligent planning mechanism that combines semantics-aware service selection with a goal-directed planning algorithm. Based on this concept, a new planning system called Semantics Enhanced web service Mining (SEwsMining) has been developed. Semantics-aware service selection is achieved by calculating on-demand multi-attribute semantic similarity based on semantic annotations (QWSMO-Lite). The planning algorithm is a substantial revision of the AI GraphPlan algorithm. To reduce the size of the planning graph, a bi-directional planning strategy has been developed.
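
    As a rough illustration of the semantics-aware selection step (this is not the SEwsMining/QWSMO-Lite implementation; the concept hierarchy, attribute weights and services below are invented), candidate services can be ranked by a weighted multi-attribute similarity between the requested concepts and their annotations before planning:

```python
# Hedged sketch, not the SEwsMining/QWSMO-Lite implementation: rank candidate
# services by a weighted, multi-attribute semantic similarity between the
# requested concepts and each service's annotations. The concept hierarchy,
# weights and services below are invented for illustration.

SUBSUMES = {"CityName": "Location", "Airport": "Location", "Price": "Amount"}
WEIGHTS = {"inputs": 0.5, "outputs": 0.5}      # hypothetical attribute weights

def concept_sim(a, b):
    """Crude similarity: exact match > direct subsumption > unrelated."""
    if a == b:
        return 1.0
    if SUBSUMES.get(a) == b or SUBSUMES.get(b) == a:
        return 0.7
    return 0.0

def service_score(request, service):
    score = 0.0
    for attr, weight in WEIGHTS.items():
        wanted, offered = request[attr], service[attr]
        if not wanted:                          # nothing requested: full credit
            score += weight
            continue
        best = [max(concept_sim(c, o) for o in offered) for c in wanted]
        score += weight * sum(best) / len(best)
    return score

request = {"inputs": ["CityName"], "outputs": ["Price"]}
services = [
    {"name": "FlightQuote", "inputs": ["Airport"],  "outputs": ["Price"]},
    {"name": "WeatherInfo", "inputs": ["Location"], "outputs": ["Temperature"]},
]
for s in sorted(services, key=lambda s: -service_score(request, s)):
    print(s["name"], round(service_score(request, s), 2))
```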

    Lifted Successor Generation using Query Optimization Techniques

    The standard PDDL language for classical planning uses several first-order features, such as schematic actions. Yet, most classical planners ground this first-order representation into a propositional one as a preprocessing step. While this simplifies the design of other parts of the planner, in several benchmarks the grounding process causes an exponential blowup that puts otherwise solvable tasks out of reach of the planners. In this work, we take a step towards planning with lifted representations. We tackle the successor generation task, a key operation in forward-search planning, directly on the lifted representation using well-known techniques from database theory. We show how computing the variable substitutions that make an action schema applicable in a given state is essentially a query evaluation problem. Interestingly, a large number of the action schemas in the standard benchmarks result in acyclic conjunctive queries, for which query evaluation is tractable. Our empirical results show that our approach is competitive with the standard (grounded) successor generation techniques in a few domains and outperforms them on benchmarks where grounding is challenging or infeasible.
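
    To make the query-evaluation view concrete, here is a toy sketch. The predicates, action schema and state are invented, and no query optimization is performed, unlike in the approach described above: the substitutions that make a lifted schema applicable are simply computed by joining its preconditions against the state's atoms, one precondition at a time.

```python
# Toy sketch of the query-evaluation view (invented predicates and schema; no
# query optimization): the substitutions that make a lifted action schema
# applicable in a state are computed by joining its preconditions against the
# state's atoms, one precondition at a time. Types and inequality constraints
# are ignored here for brevity.

state = {
    "at":    {("truck1", "locA"), ("pkg1", "locA"), ("pkg2", "locB")},
    "truck": {("truck1",)},
}

# load(?t, ?p, ?l) with preconditions truck(?t), at(?t, ?l), at(?p, ?l)
precond = [("truck", ("?t",)), ("at", ("?t", "?l")), ("at", ("?p", "?l"))]

def applicable_substitutions(precond, state):
    subs = [{}]                                   # start from the empty substitution
    for pred, args in precond:                    # one join per precondition
        extended = []
        for sub in subs:
            for tup in state.get(pred, ()):
                candidate = dict(sub)
                if all(candidate.setdefault(v, c) == c for v, c in zip(args, tup)):
                    extended.append(candidate)    # consistent with earlier bindings
        subs = extended
    return subs

for sub in applicable_substitutions(precond, state):
    print(sub)
```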

    Symbolic search and abstraction heuristics for cost-optimal planning in automated planning

    Domain-independent planning is the problem of finding a sequence of actions for achieving a goal from an initial state, assuming that actions have deterministic effects. It is domain-independent because planners take as input the description of a problem and must solve it without any additional information.
In this thesis, we deal with cost-optimal planning problems, in which actions have an associated cost and the planner must find a plan and prove that no other plan of lower cost exists. Most cost-optimal planners are based on explicit-state search. While this has undoubtedly been the dominant approach to cost-optimal planning in recent years, symbolic search is an interesting alternative. In symbolic search, sets of states are succinctly represented as binary decision diagrams (BDDs). The BDD representation not only reduces the memory needed to store sets of states, but also allows the planner to manipulate sets of states efficiently, reducing search time. We propose two orthogonal enhancements for symbolic search planning. On the one hand, we study different methods for image computation, which is usually the bottleneck of symbolic search planners. On the other hand, we analyze how to exploit state invariants to prune symbolic search. Our techniques significantly improve the performance of symbolic search algorithms in most benchmark domains. Moreover, the enhanced version of symbolic bidirectional search is one of the strongest approaches to domain-independent planning even though it does not use any heuristic. Explicit-state search planners are commonly guided by admissible heuristics, which optimistically estimate the cost from any state to the goal. Heuristics are automatically derived from the problem description and can be classified into different families according to their underlying ideas. In order to bring the improvements on heuristics made in explicit-state search to symbolic search, we analyze two types of abstraction heuristics: pattern databases (PDBs) and their generalization, merge-and-shrink (M&S). While PDBs had already been used in symbolic search, we analyze the use of the more general M&S heuristics. We show that certain types of M&S heuristics (those generated with a linear merging strategy) can be represented as BDDs with at most a polynomial overhead and thus efficiently used in symbolic search. We also propose a new heuristic, symbolic perimeter merge-and-shrink (SPM&S), that combines the strength of symbolic regression search with the flexibility of M&S heuristics. Our experiments show that SPM&S is able to beat not only the two techniques it combines, but also other state-of-the-art heuristics. Finally, we integrate our symbolic perimeter abstraction heuristics in symbolic bidirectional search. The heuristic used by the bidirectional search is computed by means of another symbolic bidirectional search in an abstract state space. We show how, even though the combination of symbolic bidirectional search and abstraction heuristics has an overall performance similar to the simpler symbolic bidirectional blind search, it can sometimes solve more problems in particular domains. In summary, this thesis studies different enhancements on symbolic search. We implement different symbolic search planners based on bidirectional search and perimeter abstraction heuristics. Experimental results show that the resulting planners are highly competitive and often outperform other state-of-the-art planners.
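
    The image computation mentioned above is the core set-at-a-time operation of symbolic search. The sketch below is only an explicit-set illustration of that operation on an invented two-operator STRIPS-style task; real symbolic planners represent the state sets as BDDs rather than Python sets.

```python
# Hedged illustration of the "image" operation: the successor set of a *set*
# of states under all operators, computed set-at-a-time. Real symbolic planners
# represent these sets as BDDs; plain frozensets are used here only to make the
# idea concrete. The toy STRIPS-style task is invented.

ops = [  # (name, preconditions, add effects, delete effects) over proposition strings
    ("pick", {"handempty", "clear_b"}, {"holding_b"},            {"handempty", "clear_b"}),
    ("drop", {"holding_b"},            {"handempty", "clear_b"}, {"holding_b"}),
]

def image(states, ops):
    """Successors of every state in `states`, computed one layer at a time."""
    succ = set()
    for s in states:
        for _, pre, add, delete in ops:
            if pre <= s:                              # operator applicable in s
                succ.add(frozenset((s - delete) | add))
    return succ

def symbolic_style_bfs(init, goal_prop, ops):
    layer, seen = {frozenset(init)}, {frozenset(init)}
    depth = 0
    while layer:
        if any(goal_prop in s for s in layer):
            return depth
        layer = image(layer, ops) - seen              # expand the whole layer at once
        seen |= layer
        depth += 1
    return None

print(symbolic_style_bfs({"handempty", "clear_b"}, "holding_b", ops))  # -> 1
```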

    Simple Algorithm for Simple Timed Games

    We propose a subclass of timed game automata (TGA), called Task TGA, representing networks of communicating tasks where the system can choose when to start a task and the environment can choose the task's duration. We seek to solve finite-horizon reachability games on Task TGA by building strategies in the form of Simple Temporal Networks with Uncertainty (STNU). Such strategies have the advantage of being very succinct due to the partial-order reduction of independent tasks. We show that the existence of such strategies is an NP-complete problem. A practical consequence of this result is a fully forward algorithm for building STNU strategies. Potential applications of this work are planning and scheduling under temporal uncertainty.
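
    The STNU strategies above require dynamic-controllability reasoning that is well beyond a short snippet. As background only, the sketch below shows the simpler, standard building block on which such temporal reasoning rests: checking whether a plain Simple Temporal Network is consistent, via negative-cycle detection on its distance graph (the example constraints are invented).

```python
# Background sketch only: the STNU strategies above need dynamic-controllability
# reasoning, which is considerably more involved than what is shown here. This
# snippet illustrates the simpler standard building block: consistency of a
# plain Simple Temporal Network, checked by looking for a negative cycle in its
# distance graph with Bellman-Ford. The example constraints are invented.

def stn_consistent(num_vars, constraints):
    """constraints: list of (u, v, w) encoding  t_v - t_u <= w."""
    dist = [0.0] * num_vars            # implicit source node at distance 0 to all
    for _ in range(num_vars):          # enough relaxation rounds for V = num_vars + 1
        for u, v, w in constraints:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # any further improvement means a negative cycle, i.e. an inconsistent STN
    return all(dist[u] + w >= dist[v] for u, v, w in constraints)

# t0 = start of A, t1 = end of A, t2 = start of B:
# A lasts between 2 and 5, B starts at least 1 after A ends,
# B starts within 6 time units of t0 and not before t0.
C = [(0, 1, 5), (1, 0, -2), (2, 1, -1), (0, 2, 6), (2, 0, 0)]
print(stn_consistent(3, C))   # True: a consistent schedule exists
```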

    A Review of Symbolic, Subsymbolic and Hybrid Methods for Sequential Decision Making

    The field of Sequential Decision Making (SDM) provides tools for solving Sequential Decision Processes (SDPs), where an agent must make a series of decisions in order to complete a task or achieve a goal. Historically, two competing SDM paradigms have vied for supremacy. Automated Planning (AP) proposes to solve SDPs by performing a reasoning process over a model of the world, often represented symbolically. Conversely, Reinforcement Learning (RL) proposes to learn the solution of the SDP from data, without a world model, and to represent the learned knowledge subsymbolically. In the spirit of reconciliation, we provide a review of symbolic, subsymbolic and hybrid methods for SDM. We cover both methods for solving SDPs (e.g., AP, RL and techniques that learn to plan) and methods for learning aspects of their structure (e.g., world models, state invariants and landmarks). To the best of our knowledge, no other review in the field provides the same scope. As an additional contribution, we discuss what properties an ideal method for SDM should exhibit and argue that neurosymbolic AI is the current approach that most closely resembles this ideal method. Finally, we outline several proposals to advance the field of SDM via the integration of symbolic and subsymbolic AI.

    Variational methods and its applications to computer vision

    Many computer vision applications, such as image segmentation, can be formulated in a "variational" way as energy minimization problems. Unfortunately, the computational task of minimizing these energies is usually difficult, as it generally involves non-convex functions in a space with thousands of dimensions, and the associated combinatorial problems are often NP-hard to solve. Furthermore, they are ill-posed inverse problems and therefore extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, it is necessary to incorporate into the mathematical model appropriate regularizations that require complex computations. The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to their complex geometry, classical regularization techniques cannot be adopted because they lead to the loss of most low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures, but also reconnects parts that may have been disconnected by noise. Moreover, it can easily be extended to graphs and successfully applied to different types of data such as medical imagery (e.g. vessels, heart coronaries), material samples (e.g. concrete) and satellite signals (e.g. streets, rivers). In particular, we will show results and performance figures for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
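
    For readers unfamiliar with the variational formulation, the sketch below is a generic illustration only and is not the curvilinear-structure method of this work: a noisy 1-D signal is restored by gradient descent on the classical Tikhonov energy; edge- and curve-preserving models replace the quadratic regularizer with non-smooth ones such as total variation, which require different solvers. The signal, lambda and step size are invented.

```python
import numpy as np

# Generic illustration only (not the curvilinear-structure method above):
# variational methods cast restoration as energy minimization. Here a noisy
# 1-D signal f is denoised by gradient descent on the Tikhonov energy
#   E(u) = 0.5 * ||u - f||^2 + 0.5 * lam * ||grad u||^2,
# the simplest regularizer.

rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * rng.standard_normal(100)

def denoise(f, lam=5.0, step=0.05, iters=2000):
    u = f.copy()
    for _ in range(iters):
        grad_u = np.diff(u)                              # forward differences
        div = np.concatenate([[grad_u[0]], np.diff(grad_u), [-grad_u[-1]]])
        u -= step * ((u - f) - lam * div)                # gradient of E(u)
    return u

u = denoise(f)
print("data-term energy after smoothing:", round(0.5 * np.sum((u - f) ** 2), 3))
```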

    OSCAR. A Noise Injection Framework for Testing Concurrent Software

    "Moore's Law" is a well-known observable phenomenon in computer science that describes a visible yearly pattern in the growth of processors' transistor density. Even though it has held true for the last 57 years, thermal limits on how much a processor's core frequency can be increased have placed physical limits on performance scaling. The industry has since shifted towards multicore architectures, which offer much better and more scalable performance, while in turn forcing programmers to adopt the concurrent programming paradigm when designing new software if they wish to make use of this added performance. The use of this paradigm comes with the unfortunate downside of a plethora of additional errors in programs, stemming directly from the (poor) use of concurrency techniques. Furthermore, concurrent programs are notoriously hard to design and to verify, with researchers continuously developing new, more effective and efficient methods of doing so. Noise injection, the theme of this dissertation, is one such method. It relies on the "probe effect": the observable shift in the behaviour of concurrent programs upon the introduction of noise into their routines. The abandonment of ConTest, a popular proprietary and closed-source noise injection framework for testing concurrent software written in the Java programming language, has left a void in the availability of noise injection frameworks for this language. To mitigate this void, this dissertation proposes OSCAR, a novel open-source noise injection framework for the Java programming language that relies on static bytecode instrumentation to inject noise. OSCAR will provide a free and well-documented noise injection tool for research, pedagogical and industry usage. Additionally, we propose a novel taxonomy for categorizing new and existing noise injection heuristics, together with a new method for generating and analysing concurrent software traces based on string comparison metrics. After injecting noise into programs from the IBM Concurrent Benchmark with different heuristics, we observed that OSCAR is highly effective at increasing the coverage of the interleaving space, and that the different heuristics provide diverse trade-offs between the cost and benefit (time/coverage) of the noise injection process.
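
    OSCAR itself injects noise through static Java bytecode instrumentation, which cannot be reproduced in a few lines. The Python toy below only illustrates the underlying probe-effect idea that the dissertation builds on (the NOISE flag, delay range and racy counter are invented): random delays around shared-memory accesses perturb thread interleavings and make a latent race surface far more often.

```python
import random
import threading
import time

# Conceptual sketch only, not OSCAR: inserting random delays around shared-
# memory accesses perturbs thread interleavings (the "probe effect"), making
# the lost-update race in the unsynchronized counter below surface much more
# often than it would without noise.

NOISE = True          # toggle noise injection on/off
counter = 0

def noise():
    if NOISE and random.random() < 0.5:
        time.sleep(random.uniform(0, 0.001))   # small random delay

def worker(iterations):
    global counter
    for _ in range(iterations):
        noise()
        tmp = counter        # racy read-modify-write, on purpose
        noise()
        counter = tmp + 1

threads = [threading.Thread(target=worker, args=(200,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("expected 800, got", counter)   # with noise, lost updates are very likely
```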