
    PDDLStream: Integrating Symbolic Planners and Blackbox Samplers via Optimistic Adaptive Planning

    Many planning applications involve complex relationships defined on high-dimensional, continuous variables. For example, robotic manipulation requires planning with kinematic, collision, visibility, and motion constraints involving robot configurations, object poses, and robot trajectories. These constraints typically require specialized procedures to sample satisfying values. We extend PDDL to support a generic, declarative specification of these procedures that treats their implementation as a black box. We provide domain-independent algorithms that reduce PDDLStream problems to a sequence of finite PDDL problems. We also introduce an algorithm that dynamically balances exploring new candidate plans and exploiting existing ones. This enables the algorithm to greedily search the space of parameter bindings, solving tightly constrained problems more quickly and locally optimizing to produce low-cost solutions. We evaluate our algorithms on three simulated robotic planning domains as well as several real-world robotic tasks. Comment: International Conference on Automated Planning and Scheduling (ICAPS) 2020.
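    To make the stream idea concrete, the following minimal Python sketch models a black-box sampler as a generator paired with a declarative record of the facts it certifies. All names here (sample_placement, STREAMS, the region dictionary) are illustrative assumptions, not the paper's actual API.

        import random

        def sample_placement(obj, region):
            """Black-box sampler: yield poses of obj supported by region."""
            while True:
                x = random.uniform(region['xmin'], region['xmax'])
                y = random.uniform(region['ymin'], region['ymax'])
                yield (x, y, 0.0)

        # A stream declaration ties typed inputs and outputs to a sampler and
        # to the facts the sampler certifies about its output values.
        STREAMS = {
            'sample-placement': {
                'inputs': ('?obj', '?region'),
                'outputs': ('?pose',),
                'certified': ('Pose ?obj ?pose', 'Supported ?obj ?pose ?region'),
                'generator': sample_placement,
            },
        }

        # The planner treats the generator as opaque: it draws a few candidate
        # bindings, adds the certified facts to a finite PDDL problem, and replans.
        gen = STREAMS['sample-placement']['generator'](
            'block', {'xmin': 0.0, 'xmax': 1.0, 'ymin': 0.0, 'ymax': 1.0})
        print([next(gen) for _ in range(3)])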

    Certified Roundoff Error Bounds Using Semidefinite Programming

    Roundoff errors cannot be avoided when implementing numerical programs with finite precision. The ability to reason about rounding is especially important if one wants to explore a range of potential representations, for instance for FPGAs or custom hardware implementations. This problem becomes challenging when the program does not employ solely linear operations, as nonlinearities are inherent to many interesting computational problems in real-world applications. Existing approaches to this reasoning problem are limited in the presence of nonlinear correlations between variables, leading to either imprecise bounds or high analysis time. Furthermore, while it is easy to implement a straightforward method such as interval arithmetic, sophisticated techniques are less straightforward to implement in a formal setting. Thus there is a need for methods that output certificates that can be formally validated inside a proof assistant. We present a framework that provides upper bounds on absolute roundoff errors. It is based on optimization techniques employing semidefinite programming and sums-of-squares certificates, which can be formally checked inside the Coq theorem prover. Our tool covers a wide range of nonlinear programs, including polynomials and transcendental operations as well as conditional statements. We illustrate the efficiency and precision of this tool on non-trivial programs coming from biology, optimization, and space control. Our tool produces more precise error bounds for 37 percent of all programs and yields better performance on 73 percent of all programs.
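    For contrast with the semidefinite approach, the sketch below bounds the absolute roundoff error of a tiny program using plain interval arithmetic and the standard model fl(a op b) = (a op b)(1 + d) with |d| <= u. It is a naive baseline of the kind the abstract alludes to, not the paper's method; the example program x*x + x and its input interval are assumptions.

        U = 2.0 ** -53  # unit roundoff for IEEE-754 double precision

        def iv_add(a, b):  # interval addition
            return (a[0] + b[0], a[1] + b[1])

        def iv_mul(a, b):  # interval multiplication
            ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
            return (min(ps), max(ps))

        def iv_absmax(a):  # largest magnitude contained in an interval
            return max(abs(a[0]), abs(a[1]))

        def roundoff_bound(x):
            """Bound |fl(fl(x*x) + x) - (x*x + x)| over the input interval x."""
            sq = iv_mul(x, x)
            e_mul = iv_absmax(sq) * U                # error of the multiplication
            total = iv_add(sq, x)
            e_add = (iv_absmax(total) + e_mul) * U   # error of the addition
            return e_mul + e_add

        print(roundoff_bound((1.0, 2.0)))  # bound valid for all x in [1, 2]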

    Optimal stability polynomials for numerical integration of initial value problems

    We consider the problem of finding optimally stable polynomial approximations to the exponential for application to one-step integration of initial value ordinary and partial differential equations. The objective is to find the largest stable step size, and the corresponding method, for a given problem when the spectrum of the initial value problem is known. The problem is expressed as a general least deviation feasibility problem, which is solved by a new fast, accurate, and robust algorithm based on convex optimization techniques. Global convergence of the algorithm is proven in the case that the order of approximation is one and in the case that the spectrum encloses a starlike region. Examples demonstrate the effectiveness of the proposed algorithm even when these conditions are not satisfied.
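    The least deviation subproblem lends itself to a short sketch: for a fixed step size h, minimize the maximum modulus of the stability polynomial over a sampled spectrum, then bisect on h, declaring h stable when the minimal deviation is at most one. This sketch assumes a real, negative spectrum, first-order consistency (a0 = a1 = 1), and the off-the-shelf cvxpy solver; the discretization and tolerances are illustrative choices, not the paper's implementation.

        import numpy as np
        import cvxpy as cp

        def min_deviation(h, lam, degree):
            """Minimize max_i |p(h*lam_i)| over polynomials with a0 = a1 = 1."""
            a = cp.Variable(degree + 1)
            V = np.vander(h * lam, degree + 1, increasing=True)  # V[i,k] = (h*lam_i)^k
            prob = cp.Problem(cp.Minimize(cp.norm_inf(V @ a)),
                              [a[0] == 1, a[1] == 1])
            prob.solve()
            return prob.value

        lam = np.linspace(-1.0, 0.0, 200)  # sampled spectrum (e.g., pure diffusion)
        degree, lo, hi = 5, 0.0, 100.0
        for _ in range(40):                # bisection on the step size h
            mid = 0.5 * (lo + hi)
            if min_deviation(mid, lam, degree) <= 1.0 + 1e-9:
                lo = mid
            else:
                hi = mid
        print("largest stable step size found:", lo)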

    Interval Constraint-Based Mutation Testing of Numerical Specifications


    When Interval Analysis Helps Inter-Block Backtracking

    Inter-block backtracking (IBB) computes all the solutions of sparse systems of nonlinear equations over the reals. This algorithm, introduced in 1998 by Bliek et al., handles a system of equations previously decomposed into a set of (small) k × k sub-systems, called blocks. Partial solutions are computed in the different blocks and combined to obtain the set of global solutions. When solutions inside blocks are computed with interval-based techniques, IBB can be viewed as a new interval-based algorithm for solving decomposed equation systems. Previous implementations used Ilog Solver and its IlcInterval library; the fact that this interval-based solver was more or less a black box imposed several strong limitations. The new results described in this paper come from the integration of IBB with the interval-based library developed by the second author. This new library allows IBB to become reliable (no solution is lost) while still gaining several orders of magnitude in speed over solving the whole system at once. We compare several variants of IBB on a sample of benchmarks, which allows us to better understand its behavior. The main conclusion is that the use of an interval Newton operator inside blocks has the most positive impact on the robustness and performance of IBB, and it modifies the influence of other features such as intelligent backtracking and filtering strategies.
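    The univariate interval Newton contraction behind that conclusion fits in a few lines; the operator applied inside IBB blocks is its multivariate generalization. In this sketch the test function, its derivative enclosure, and the starting interval are illustrative.

        def iv_sub(a, b):
            return (a[0] - b[1], a[1] - b[0])

        def iv_div(a, b):  # assumes the divisor interval b excludes zero
            qs = [a[0]/b[0], a[0]/b[1], a[1]/b[0], a[1]/b[1]]
            return (min(qs), max(qs))

        def iv_intersect(a, b):
            lo, hi = max(a[0], b[0]), min(a[1], b[1])
            return (lo, hi) if lo <= hi else None

        def interval_newton(f, df_enclosure, x, tol=1e-12):
            """Contract x toward the zeros of f; None proves there is no zero."""
            while x[1] - x[0] > tol:
                m = 0.5 * (x[0] + x[1])                       # midpoint
                step = iv_div((f(m), f(m)), df_enclosure(x))  # f(m) / F'(x)
                nx = iv_intersect(x, iv_sub((m, m), step))    # x ∩ (m - step)
                if nx is None:
                    return None
                if nx == x:
                    break                                     # no more contraction
                x = nx
            return x

        # Example: f(x) = x^2 - 2 on [1, 2], with F'([a, b]) = [2a, 2b] for a >= 0.
        print(interval_newton(lambda x: x*x - 2.0,
                              lambda x: (2*x[0], 2*x[1]),
                              (1.0, 2.0)))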

    A Review of Formal Methods applied to Machine Learning

    We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems. Formal methods can provide rigorous correctness guarantees on hardware and software systems. Thanks to the availability of mature tools, their use is well established in industry, in particular for checking safety-critical applications as they undergo a stringent certification process. As machine learning is becoming more popular, machine-learned components are now considered for inclusion in critical systems. This raises the question of their safety and their verification. Yet, established formal methods are limited to classic, i.e., non-machine-learned, software. Applying formal methods to verify systems that include machine learning has only been considered recently and poses novel challenges in soundness, precision, and scalability. We first recall established formal methods and their current use in an exemplar safety-critical field, avionic software, with a focus on abstract-interpretation-based techniques, as they provide a high level of scalability. This provides a gold standard and sets high expectations for machine learning verification. We then provide a comprehensive and detailed review of the formal methods developed so far for machine learning, highlighting their strengths and limitations. The large majority of them verify trained neural networks and employ either SMT, optimization, or abstract interpretation techniques. We also discuss methods for support vector machines and decision tree ensembles, as well as methods targeting training and data preparation, which are critical but often neglected aspects of machine learning. Finally, we offer perspectives for future research directions towards the formal verification of machine learning systems.
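    As one concrete instance of the abstract-interpretation techniques the review surveys, the sketch below propagates an input box through a tiny ReLU network with sound interval bounds; the weights and the input box are made-up examples, not taken from any reviewed tool.

        import numpy as np

        def interval_affine(lo, hi, W, b):
            """Sound bounds for x -> W @ x + b over the box [lo, hi]."""
            c, r = (lo + hi) / 2.0, (hi - lo) / 2.0  # center and radius
            c2, r2 = W @ c + b, np.abs(W) @ r
            return c2 - r2, c2 + r2

        def interval_relu(lo, hi):
            return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

        # A two-layer ReLU network with illustrative weights.
        W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
        W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.0])

        lo, hi = np.array([-0.1, 0.9]), np.array([0.1, 1.1])  # input box
        lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
        lo, hi = interval_affine(lo, hi, W2, b2)
        print("output enclosure:", lo, hi)  # encloses every output on the box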