
    An Empirical Study of Perfect Potential Heuristics

    Potential heuristics are weighted functions over state features of a planning task. A recent study defines the complexity of a task as the minimum feature complexity required for a potential heuristic that makes a search backtrack-free. This indicates how complex potential heuristics need to be to achieve good results in satisficing planning, but the results do not directly transfer to optimal planning. In this paper, we empirically study how complex potential heuristics must be to represent the perfect heuristic, and how close potential heuristics with a limited number of features can come to perfection. We aim to identify the practical trade-offs between size, complexity, and time for the quality of potential heuristics. Our results show that, even for simple planning tasks, finding perfect potential heuristics might be harder than expected.
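    As a rough illustration of the term (a minimal sketch, not taken from the paper; the facts and weights below are invented), a potential heuristic over atomic features evaluates a state as the weighted sum of the features that hold in it:

```python
# Minimal illustrative sketch (not from the paper): a potential heuristic
# assigns a numeric weight to each state feature and evaluates a state as
# the sum of the weights of the features that are true in it.

def potential_heuristic(state, weights):
    """state: set of (invented) propositional facts; weights: dict fact -> float."""
    return sum(w for fact, w in weights.items() if fact in state)

# Example with made-up facts and weights:
weights = {"at-A": 3.0, "at-B": 1.0, "holding-key": -2.0}
print(potential_heuristic({"at-A", "holding-key"}, weights))  # 3.0 + (-2.0) = 1.0
```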

    Mechanically Proving Guarantees of Generalized Heuristics: First Results and Ongoing Work

    The goal of generalized planning is to find a solution that works for all tasks of a given planning domain. Ideally, this solution is also efficient (i.e., polynomial) in all tasks. One possible approach is to learn such a solution from training examples and then prove that it generalizes to any task of the domain. However, such proofs are usually pen-and-paper proofs written by a human. In our paper, we aim at automating these proofs so that a theorem prover can show that a solution generalizes to any task. Furthermore, we want to prove that this generalization preserves efficiency. Our focus is on generalized potential heuristics encoding tiered measures of progress, which can be proven to lead to a goal in a polynomial number of steps in all tasks of a domain. We show our ongoing work in this direction using the interactive theorem prover Isabelle/HOL. We illustrate the key aspects of our implementation on the Miconic domain and then discuss possible obstacles and challenges in fully automating this pipeline.
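    The kind of guarantee to be mechanized can be stated roughly as follows (a hedged paraphrase of the descending and dead-end avoiding property from this line of work; the exact formalization used in the paper may differ):

```latex
% Hedged paraphrase (not the paper's exact definition): h is descending and
% dead-end avoiding if (1) every alive non-goal state has a successor with
% strictly smaller h, and (2) from alive states, any such improving successor
% is itself solvable. Greedy descent on such an h then reaches a goal without
% backtracking, and with tiered measures in polynomially many steps.
\forall s \in \mathit{Alive} \setminus S_G \;\; \exists s' \in \mathit{succ}(s):\; h(s') < h(s)
\qquad\text{and}\qquad
\forall s \in \mathit{Alive},\; \forall s' \in \mathit{succ}(s):\; h(s') < h(s) \;\Rightarrow\; s' \text{ is solvable}
```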

    The FF Heuristic for Lifted Classical Planning

    Heuristics for lifted planning are not yet as informed as the best heuristics for ground planning. Recent work introduced the idea of using Datalog programs to compute the additive heuristic over lifted tasks. Building on this work, we show how to compute the more informed FF heuristic in a lifted manner. We extend the Datalog program with executable annotations that can also be used to define other delete-relaxation heuristics. In our experiments, we show that a planner using our lifted FF implementation produces state-of-the-art results among lifted planners. It also reduces the gap to state-of-the-art ground planners in domains where grounding is feasible.
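    For readers unfamiliar with the delete-relaxation heuristics involved, the sketch below illustrates the ground additive and FF heuristics: hadd propagates costs through a delete-free task, and hFF sums the costs of a relaxed plan extracted from the recorded best supporters. This is a generic, ground illustration with an invented task encoding, not the lifted Datalog-based computation described above:

```python
# Generic illustration of hadd / hFF on a ground, delete-free task.
# Not the lifted Datalog computation from the paper; task encoding is invented.

def hadd_and_hff(facts0, actions, goal):
    """actions: list of (name, preconditions, add effects, cost); facts are strings."""
    INF = float("inf")
    cost = {f: 0 for f in facts0}
    best_supporter = {}
    changed = True
    while changed:                      # fixpoint iteration for hadd fact costs
        changed = False
        for name, pre, eff, c in actions:
            if any(cost.get(p, INF) == INF for p in pre):
                continue                # some precondition not yet reachable
            new = c + sum(cost[p] for p in pre)
            for f in eff:
                if new < cost.get(f, INF):
                    cost[f] = new
                    best_supporter[f] = (name, pre, c)
                    changed = True
    hadd = sum(cost.get(g, INF) for g in goal)

    # hFF: backchain over best supporters to collect a relaxed plan.
    plan, open_facts, closed = set(), set(goal), set()
    while open_facts:
        f = open_facts.pop()
        closed.add(f)
        if f in facts0 or f not in best_supporter:
            continue
        name, pre, c = best_supporter[f]
        plan.add((name, c))
        open_facts |= {p for p in pre if p not in closed}
    hff = sum(c for _, c in plan)
    return hadd, hff

# Tiny invented example: load then unload, both of cost 1.
actions = [("load", {"at-pkg"}, {"in-truck"}, 1), ("unload", {"in-truck"}, {"at-goal"}, 1)]
print(hadd_and_hff({"at-pkg"}, actions, {"at-goal"}))   # (2, 2)
```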

    Lifted Successor Generation using Query Optimization Techniques

    The standard PDDL language for classical planning uses several first-order features, such as schematic actions. Yet, most classical planners ground this first-order representation into a propositional one as a preprocessing step. While this simplifies the design of other parts of the planner, in several benchmarks the grounding process causes an exponential blowup that puts otherwise solvable tasks out of reach of the planners. In this work, we take a step towards planning with lifted representations. We tackle the successor generation task, a key operation in forward-search planning, directly on the lifted representation using well-known techniques from database theory. We show how computing the variable substitutions that make an action schema applicable in a given state is essentially a query evaluation problem. Interestingly, a large number of the action schemas in the standard benchmarks result in acyclic conjunctive queries, for which query evaluation is tractable. Our empirical results show that our approach is competitive with the standard (grounded) successor generation techniques in a few domains and outperforms them on benchmarks where grounding is challenging or infeasible.
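    The correspondence between applicability and query evaluation can be pictured with a small sketch: treat the state as a set of ground atoms and each precondition atom as one conjunct of a conjunctive query; the applicable substitutions are exactly the answers to that query. The sketch below uses a naive nested join with invented relation names, whereas the paper relies on proper query optimization:

```python
# Naive illustration: enumerate substitutions that satisfy a conjunction of
# precondition atoms against a state given as a set of ground atoms.
# Relation and variable names are invented; real systems use optimized joins.

def matches(precondition, state, subst=None):
    """precondition: list of atoms like ("at", "?t", "?loc"); state: set of ground tuples."""
    subst = subst or {}
    if not precondition:
        yield dict(subst)
        return
    pred, *args = precondition[0]
    for atom in state:
        if atom[0] != pred or len(atom) != len(args) + 1:
            continue
        new, ok = dict(subst), True
        for a, v in zip(args, atom[1:]):
            if a.startswith("?"):                 # variable: bind or check consistency
                if new.setdefault(a, v) != v:
                    ok = False
                    break
            elif a != v:                          # constant: must match exactly
                ok = False
                break
        if ok:
            yield from matches(precondition[1:], state, new)

state = {("at", "truck1", "depot"), ("at", "truck2", "market"), ("package", "p1", "depot")}
pre = [("at", "?t", "?loc"), ("package", "?p", "?loc")]     # conjunctive query joining on ?loc
print(list(matches(pre, state)))   # [{'?t': 'truck1', '?loc': 'depot', '?p': 'p1'}]
```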

    Generalized Potential Heuristics for Classical Planning

    Generalized planning aims at computing solutions that work for all instances of the same domain. In this paper, we show that several interesting planning domains possess compact generalized heuristics that can guide a greedy search in guaranteed polynomial time to the goal, and which work for any instance of the domain. These heuristics are weighted sums of state features that capture the number of objects satisfying a certain first-order logic property in any given state. These features have a meaningful interpretation and generalize naturally to the whole domain. Additionally, we present an approach based on mixed integer linear programming to compute such heuristics automatically from the observation of small training instances. We develop two variations of the approach that progressively refine the heuristic as new states are encountered. We illustrate the approach empirically on a number of standard domains, where we show that the generated heuristics will correctly generalize to all possible instances.
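    In symbols, the heuristics described here take roughly the following form (notation ours, for illustration): a weighted sum of counting features, where each feature counts the objects satisfying a first-order formula in the current state.

```latex
% Illustrative notation (ours, not the paper's): a generalized potential
% heuristic as a weighted sum of first-order counting features.
h(s) \;=\; \sum_{i=1}^{n} w_i \, f_i(s),
\qquad
f_i(s) \;=\; \bigl|\{\, o \;\mid\; s \models \varphi_i(o) \,\}\bigr|
```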

    Machetli: Simplifying Input Files for Debugging

    Debugging can be a painful task, especially when bugs only occur for large input files. We present Machetli, a tool that helps with debugging in such situations. It takes a large input file and cuts away parts of it while still provoking the bug. The resulting file is much smaller than the original, making the bug easier to find and fix. In our experience, Machetli was able to reduce planning tasks with thousands of actions to trivial tasks that could even be solved by hand. Machetli is an open-source project, and it can be extended to other use cases such as debugging SAT solvers or LaTeX compilation bugs.
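    The core idea can be sketched as a greedy reduction loop (our generic sketch, not Machetli's actual API or algorithm; the planner command is hypothetical): repeatedly try dropping chunks of the input and keep any removal after which the bug still appears.

```python
# Generic sketch of input minimization (not Machetli's actual API or algorithm):
# repeatedly try removing chunks of the input and keep a removal whenever the
# bug still shows up. The buggy planner command below is hypothetical.
import subprocess, tempfile

def still_buggy(lines, command):
    """Run the command on a temporary copy and report whether it crashes."""
    with tempfile.NamedTemporaryFile("w", suffix=".pddl", delete=False) as f:
        f.writelines(lines)
        path = f.name
    return subprocess.run(command + [path], capture_output=True).returncode != 0

def shrink(lines, command, chunk=64):
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]      # drop one chunk of lines
            if candidate and still_buggy(candidate, command):
                lines = candidate                           # bug persists: keep removal
            else:
                i += chunk                                  # bug gone: keep chunk, move on
        chunk //= 2                                         # refine with smaller chunks
    return lines

# Hypothetical usage: minimize task.pddl while ./buggy-planner keeps crashing.
# minimal = shrink(open("task.pddl").readlines(), ["./buggy-planner"])
```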

    Relaxed Decision Diagrams for Delete-Free Planning

    In this work, we investigate the computation of optimal plans for delete-free tasks using relaxed decision diagrams (RDDs). We introduce a new method to compute approximations of h+ in time polynomial in the width of the RDD and show that these approximations are lower bounds on the optimal solution. We analyze different strategies for constructing the decision diagrams and compare the resulting approximations to h+.
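    To make the node-merging idea concrete, here is a toy unit-cost sketch (our simplification, not the construction from the paper): layer k holds fact sets reachable after k relaxed steps, and whenever a layer exceeds the width limit, nodes are merged by taking the union of their fact sets. Merging can only make the goal appear earlier, so the first layer whose node subsumes the goal is a lower bound on h+.

```python
# Toy illustration (our simplification, not the paper's construction):
# breadth-first layers of a delete-free task, merged by fact-set union when a
# layer exceeds the width limit; the resulting depth lower-bounds unit-cost h+.

def relaxed_dd_lower_bound(init, actions, goal, width):
    """actions: list of (preconditions, add effects) given as frozensets of facts."""
    layer, depth = [frozenset(init)], 0
    while True:
        if any(goal <= node for node in layer):
            return depth                       # goal subsumed: lower bound reached
        nxt = set()
        for node in layer:
            for pre, add in actions:
                if pre <= node and not add <= node:
                    nxt.add(node | add)        # apply one relaxed action
        if not nxt:
            return float("inf")                # goal unreachable even in the relaxation
        layer = sorted(nxt, key=len)
        while len(layer) > width:              # merge nodes by union to respect the width
            a, b = layer.pop(), layer.pop()
            layer.append(a | b)
        depth += 1

init = {"a"}
actions = [(frozenset({"a"}), frozenset({"b"})), (frozenset({"b"}), frozenset({"g"}))]
print(relaxed_dd_lower_bound(init, actions, frozenset({"g"}), width=2))   # 2
```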

    Forest structure, diversity, and primary production in relation to disturbance severity

    Differential disturbance severity effects on forest vegetation structure, species diversity, and net primary production (NPP) have long been theorized and observed. Here, we examined these factors concurrently to explore the potential for a mechanistic pathway linking disturbance severity, changes in light environment, leaf functional response, and wood NPP in a temperate hardwood forest. Using a suite of measurements spanning an experimental gradient of tree mortality, we evaluated the direction and magnitude of change in vegetation structural and diversity indexes in relation to wood NPP. Informed by prior observations, we hypothesized that forest structural and species diversity changes and wood NPP would exhibit either a linear, unimodal, or threshold response in relation to disturbance severity. We expected increasing disturbance severity to progressively shift subcanopy light availability and leaf traits, thereby coupling structural and species diversity changes with primary production. Linear or unimodal changes in three of four vegetation structural indexes were observed across the gradient in disturbance severity. However, disturbance-related changes in vegetation structure were not consistently correlated with shifts in light environment, leaf traits, and wood NPP. Species diversity indexes did not change in response to rising disturbance severity. We conclude that, in our study system, the sensitivity of wood NPP to rising disturbance severity is generally tied to changing vegetation structure but not species diversity. Changes in vegetation structure are inconsistently coupled with light environment and leaf traits, resulting in mixed support for our hypothesized cascade linking disturbance severity to wood NPP.

    A proposed architecture and method of operation for improving the protection of privacy and confidentiality in disease registers

    BACKGROUND: Disease registers aim to collect information about all instances of a disease or condition in a defined population of individuals. Traditionally, methods of operating disease registers have required that notifications of cases be identified by unique identifiers such as a social security number or national identification number, or by ensembles of non-unique identifying data items such as name, sex, and date of birth. However, growing concern over the privacy and confidentiality aspects of disease registers may hinder their future operation. Technical solutions to these legitimate concerns are needed. DISCUSSION: An alternative method of operation is proposed which involves splitting the personal identifiers from the medical details at the source of notification and encrypting each part separately using asymmetric (public key) cryptographic methods. The identifying information is sent to a single Population Register, and the medical details to the relevant disease register. The Population Register uses probabilistic record linkage to assign a unique personal identification (UPI) number to each person notified to it, although not necessarily to everyone in the entire population. This UPI is shared only with a single trusted third party whose sole function is to translate between this UPI and separate series of personal identification numbers specific to each disease register. SUMMARY: The proposed system would significantly improve the protection of privacy and confidentiality while still allowing the efficient linkage of records between disease registers, under the control and supervision of the trusted third party and independent ethics committees. The proposed architecture could accommodate genetic databases and tissue banks as well as a wide range of other health and social data collections. It is important that proposals such as this are subject to widespread scrutiny by information security experts, researchers, and interested members of the general public alike.
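    A minimal sketch of the split-and-encrypt step, assuming the `cryptography` Python package (the field names, keys, and notification layout are invented; a real deployment would add hybrid encryption, signatures, and proper key management):

```python
# Minimal sketch of the split-and-encrypt step (field names and keys invented;
# a real deployment would use hybrid encryption, signing, and key management).
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Stand-ins for the Population Register's and disease register's key pairs.
population_register_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
disease_register_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

notification = {
    "identifiers": {"name": "Jane Doe", "sex": "F", "dob": "1970-01-01"},
    "medical": {"diagnosis": "C50", "date": "2003-05-12"},
}

# Split at the source: identifiers go to the Population Register, medical
# details to the disease register, each readable only by its intended recipient.
to_population_register = population_register_key.public_key().encrypt(
    json.dumps(notification["identifiers"]).encode(), oaep)
to_disease_register = disease_register_key.public_key().encrypt(
    json.dumps(notification["medical"]).encode(), oaep)

# Each recipient decrypts only its own part.
print(json.loads(population_register_key.decrypt(to_population_register, oaep)))
print(json.loads(disease_register_key.decrypt(to_disease_register, oaep)))
```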

    Some methods for blindfolded record linkage

    BACKGROUND: The linkage of records that refer to the same entity in separate data collections is a common requirement in public health and biomedical research. Traditionally, record linkage techniques have required that all the identifying data in which links are sought be revealed to at least one party, often a third party. This necessarily invades personal privacy and requires complete trust in the intentions of that party and their ability to maintain security and confidentiality. Dusserre, Quantin, Bouzelat and colleagues have demonstrated that it is possible to use secure one-way hash transformations to carry out follow-up epidemiological studies without any party having to reveal identifying information about any of the subjects – a technique which we refer to as "blindfolded record linkage". A limitation of their method is that only exact comparisons of values are possible, although phonetic encoding of names and other strings can be used to allow for some types of typographical variation and data errors. METHODS: A method is described which permits the calculation of a general similarity measure, the n-gram score, without having to reveal the data being compared, albeit at some cost in computation and data communication. This method can be combined with public key cryptography and automatic estimation of linkage model parameters to create an overall system for blindfolded record linkage. RESULTS: The system described offers good protection against misdeeds or security failures by any one party, but remains vulnerable to collusion between, or simultaneous compromise of, two or more parties involved in the linkage operation. In order to reduce the likelihood of this, the use of last-minute allocation of tasks to substitutable servers is proposed. Proof-of-concept computer programmes written in the Python programming language are provided to illustrate the similarity comparison protocol. CONCLUSION: Although the protocols described in this paper are not unconditionally secure, they do suggest the feasibility, with the aid of modern cryptographic techniques and high-speed communication networks, of a general-purpose probabilistic record linkage system which permits record linkage studies to be carried out with negligible risk of invasion of personal privacy.
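    The flavor of the similarity protocol can be conveyed with a simplified sketch (ours, not the full multi-party protocol from the paper): the two data holders share a secret key, replace each name with the set of keyed hashes of its bigrams, and the comparing party computes a Dice-style n-gram score from the hashed sets without ever seeing the underlying strings.

```python
# Simplified illustration of the idea (not the paper's full protocol): compare
# names via keyed hashes of their bigrams so the comparing party never sees
# the plaintext. The shared key below is invented for the example.
import hmac, hashlib

SHARED_KEY = b"secret shared by the two data holders"

def hashed_bigrams(name):
    s = name.lower()
    bigrams = {s[i:i + 2] for i in range(len(s) - 1)}
    return {hmac.new(SHARED_KEY, b.encode(), hashlib.sha256).hexdigest() for b in bigrams}

def dice_score(hashed_a, hashed_b):
    """n-gram (Dice) similarity computed purely on the hashed bigram sets."""
    if not hashed_a or not hashed_b:
        return 0.0
    return 2 * len(hashed_a & hashed_b) / (len(hashed_a) + len(hashed_b))

# The comparing party only ever receives the hashed sets.
print(dice_score(hashed_bigrams("catherine"), hashed_bigrams("katherine")))  # 0.875
```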
