    A Theory of Formal Synthesis via Inductive Learning

    Formal synthesis is the process of generating a program satisfying a high-level formal specification. In recent times, effective formal synthesis methods have been proposed based on the use of inductive learning. We refer to this class of methods that learn programs from examples as formal inductive synthesis. In this paper, we present a theoretical framework for formal inductive synthesis. We discuss how formal inductive synthesis differs from traditional machine learning. We then describe oracle-guided inductive synthesis (OGIS), a framework that captures a family of synthesizers that operate by iteratively querying an oracle. An instance of OGIS that has had much practical impact is counterexample-guided inductive synthesis (CEGIS). We present a theoretical characterization of CEGIS for learning any program that computes a recursive language. In particular, we analyze the relative power of CEGIS variants in which the types of counterexamples generated by the oracle vary. We also consider the impact of bounded versus unbounded memory available to the learning algorithm. In the special case where the universe of candidate programs is finite, we relate the speed of convergence to the notion of teaching dimension studied in machine learning theory. Altogether, the results of the paper take a first step towards a theoretical foundation for the emerging field of formal inductive synthesis.
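    To make the OGIS/CEGIS loop concrete, the following is a minimal sketch of counterexample-guided inductive synthesis over a finite candidate space. The specification, candidate set, and bounded integer domain are illustrative toys, not taken from the paper; the verifier plays the role of the oracle, either confirming a candidate or returning a counterexample input that the learner must subsequently respect.

```python
# A minimal CEGIS loop over a finite candidate space. The specification,
# candidate programs, and bounded input domain are illustrative toys.

def spec(x, y):
    # Desired input/output behavior: y should equal 2 * x.
    return y == 2 * x

# Finite universe of candidate programs (simple unary functions).
CANDIDATES = [
    lambda x: x,
    lambda x: x * x,
    lambda x: x + x,
    lambda x: 3 * x,
]

def verify(prog, domain):
    """Verification oracle: return a counterexample input, or None if prog
    satisfies the spec on every input in the (bounded) domain."""
    for x in domain:
        if not spec(x, prog(x)):
            return x
    return None

def cegis(domain):
    examples = []                            # counterexamples seen so far
    for prog in CANDIDATES:                  # learner enumerates candidates
        if all(spec(x, prog(x)) for x in examples):
            cex = verify(prog, domain)       # ask the oracle
            if cex is None:
                return prog                  # verified on the full domain
            examples.append(cex)             # learn from the counterexample
    return None                              # candidate space exhausted

prog = cegis(range(-5, 6))
print(prog(4) if prog else "no program found")   # prints 8
```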

    Neuro Symbolic Reasoning for Planning: Counterexample Guided Inductive Synthesis using Large Language Models and Satisfiability Solving

    Generative large language models (LLMs) with instruct training such as GPT-4 can follow human-provided instruction prompts and generate human-like responses to these prompts. Apart from natural language responses, they have also been found to be effective at generating formal artifacts such as code, plans, and logical specifications from natural language prompts. Despite their remarkably improved accuracy, these models are still known to produce factually incorrect or contextually inappropriate results even when syntactically coherent - a phenomenon often referred to as hallucination. This limitation makes it difficult to use these models to synthesize formal artifacts that are used in safety-critical applications. Unlike tasks such as text summarization and question answering, bugs in code, plans, and other formal artifacts produced by LLMs can be catastrophic. We posit that we can use satisfiability modulo theories (SMT) solvers as deductive reasoning engines to analyze the solutions generated by the LLMs, produce counterexamples when the solutions are incorrect, and provide that feedback to the LLMs by exploiting the dialog capability of instruct-trained LLMs. This interaction between inductive LLMs and deductive SMT solvers can iteratively steer the LLM to generate the correct response. In our experiments, we use planning over the blocks-world domain as our synthesis task for evaluating our approach. We use GPT-4, GPT-3.5 Turbo, Davinci, Curie, Babbage, and Ada as the LLMs and Z3 as the SMT solver. Our method allows the user to communicate the planning problem in natural language; even the formulation of queries to the SMT solver is automatically generated from natural language. Thus, the proposed technique can enable non-expert users to describe their problems in natural language, and the combination of LLMs and SMT solvers can produce provably correct solutions.
    Comment: 25 pages, 7 figures
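    The sketch below illustrates the inductive/deductive loop the abstract describes, with Z3 as the deductive checker. To keep it self-contained it verifies a toy arithmetic specification rather than the paper's blocks-world planning encoding, and `llm_propose` is a hypothetical stand-in for a call to an instruct-tuned LLM.

```python
# A minimal sketch of the LLM (inductive) / SMT solver (deductive) loop,
# on a toy spec rather than the paper's blocks-world encoding.
# `llm_propose` is a hypothetical placeholder for a chat-model call.
from z3 import Int, Solver, sat

def llm_propose(dialog):
    """Placeholder for an instruct-tuned LLM call; returns a candidate
    expression for f(x) as a string. Here we fake improving answers."""
    canned = ["x", "x + 1", "2 * x"]
    return canned[min(len(dialog), len(canned) - 1)]

def smt_counterexample(expr_src):
    """Deductive check with Z3: does expr(x) == x + x hold for all ints?
    Returns a counterexample value of x, or None if the claim is valid."""
    x = Int("x")
    candidate = eval(expr_src, {"x": x})   # toy parsing; real code would sanitize
    s = Solver()
    s.add(candidate != x + x)              # search for a violating input
    if s.check() == sat:
        return s.model()[x]
    return None

dialog = []                                # the running LLM/solver conversation
while True:
    expr = llm_propose(dialog)
    cex = smt_counterexample(expr)
    if cex is None:
        print("verified candidate:", expr)
        break
    dialog.append(f"'{expr}' is wrong for x = {cex}; please revise.")
```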

    Techniques for automated parameter estimation in computational models of probabilistic systems

    The main contribution of this dissertation is the design of two new algorithms for automatically synthesizing values of numerical parameters of computational models of complex stochastic systems such that the resulting model meets user-specified behavioral specifications. These algorithms are designed to operate on probabilistic systems – systems that, in general, behave differently under identical conditions. The algorithms work using an approach that combines formal verification and mathematical optimization to explore a model's parameter space. The problem of determining whether a model instantiated with a given set of parameter values satisfies the desired specification is first defined using formal verification terminology, and then reformulated in terms of statistical hypothesis testing. Parameter space exploration involves determining the outcome of the hypothesis testing query for each parameter point and is guided using simulated annealing. The first algorithm uses the sequential probability ratio test (SPRT) to solve the hypothesis testing problems, whereas the second algorithm uses an approach based on Bayesian statistical model checking (BSMC). The SPRT-based parameter synthesis algorithm was used to validate that a given model of glucose-insulin metabolism has the capability of representing diabetic behavior by synthesizing values of three parameters that ensure that the glucose-insulin subsystem spends at least 20 minutes in a diabetic scenario. The BSMC-based algorithm was used to discover the values of parameters in a physiological model of the acute inflammatory response that guarantee a set of desired clinical outcomes. These two applications demonstrate how our algorithms use formal verification, statistical hypothesis testing, and mathematical optimization to automatically synthesize parameters of complex probabilistic models in order to meet user-specified behavioral properties.
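    A rough sketch of the first algorithm's ingredients follows: Wald's sequential probability ratio test decides whether a parameter point satisfies the probabilistic property, and simulated annealing explores the parameter space. The stochastic model, parameter ranges, thresholds, and annealing schedule are placeholders, not those from the dissertation.

```python
# SPRT-guided parameter synthesis sketch. The model, parameter bounds,
# and annealing schedule below are illustrative placeholders.
import math, random

def simulate(params):
    """Hypothetical stochastic model: returns True iff one simulated trace
    satisfies the behavioral specification for these parameter values."""
    a, b = params
    return random.random() < max(0.0, min(1.0, 1.0 - abs(a - 0.7) - abs(b - 0.3)))

def sprt(params, theta=0.9, delta=0.05, alpha=0.05, beta=0.05, max_samples=2000):
    """Wald's SPRT for H1: P(satisfy) >= theta+delta vs H0: P(satisfy) <= theta-delta.
    Returns (decision, empirical success rate)."""
    p0, p1 = theta - delta, theta + delta
    upper, lower = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
    llr, successes = 0.0, 0
    for n in range(1, max_samples + 1):
        x = simulate(params)
        successes += x
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return True, successes / n       # property satisfied
        if llr <= lower:
            return False, successes / n      # property violated
    return False, successes / max_samples    # undecided: treat as violated

def synthesize(steps=200, temp=1.0, cooling=0.97):
    params = (random.random(), random.random())
    _, score = sprt(params)
    for _ in range(steps):
        cand = tuple(min(1.0, max(0.0, p + random.gauss(0, 0.1))) for p in params)
        ok, cand_score = sprt(cand)
        if ok:
            return cand                      # found satisfying parameters
        # simulated annealing acceptance on the empirical satisfaction rate
        if cand_score > score or random.random() < math.exp((cand_score - score) / temp):
            params, score = cand, cand_score
        temp *= cooling
    return None

print(synthesize())
```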

    On the Generation of Precise Fixed-Point Expressions

    Several problems in the implementations of control systems, signal-processing systems, and scientific computing systems reduce to compiling a polynomial expression over the reals into an imperative program using fixed-point arithmetic. Fixed-point arithmetic only approximates real values, and its operators do not have the fundamental properties of real arithmetic, such as associativity. Consequently, a naive compilation process can yield a program that significantly deviates from the real polynomial, whereas a different order of evaluation can result in a program that is close to the real value on all inputs in its domain. We present a compilation scheme for real-valued arithmetic expressions to fixed-point arithmetic programs. Given a real-valued polynomial expression t, we find an expression t' that is equivalent to t over the reals, but whose implementation as a series of fixed-point operations minimizes the error between the fixed-point value and the value of t over the space of all inputs. We show that the corresponding decision problem, checking whether there is an implementation t' of t whose error is less than a given constant, is NP-hard. We then propose a solution technique based on genetic programming. Our technique evaluates the fitness of each candidate program using a static analysis based on affine arithmetic. We show that our tool can significantly reduce the error in the fixed-point implementation on a set of linear control system benchmarks. For example, our tool found implementations whose errors are only one half of the errors in the original fixed-point expressions.
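    The following toy example illustrates the core phenomenon: two expressions that are equal over the reals can incur different rounding error once every intermediate result is quantized to fixed point. The Q4.8 format, polynomial, and sampling-based error measurement are illustrative assumptions; the paper instead bounds the error statically with affine arithmetic and searches for good orderings with genetic programming.

```python
# Why evaluation order matters in fixed-point arithmetic: two real-equivalent
# forms of the same polynomial compared under a toy Q4.8 fixed-point model.
# The error is measured here by sampling (the paper uses a static analysis).
SCALE = 1 << 8                     # Q4.8: 8 fractional bits

def fx(v):
    """Quantize a real value to the nearest representable fixed-point number."""
    return round(v * SCALE) / SCALE

def fx_mul(a, b):
    return fx(a * b)

def fx_add(a, b):
    return fx(a + b)

def eval_horner(x):
    # ((0.1*x + 0.2)*x + 0.3), Horner form in fixed point
    return fx_add(fx_mul(fx_add(fx_mul(fx(0.1), x), fx(0.2)), x), fx(0.3))

def eval_expanded(x):
    # 0.1*x*x + 0.2*x + 0.3, expanded form in fixed point
    return fx_add(fx_add(fx_mul(fx_mul(fx(0.1), x), x), fx_mul(fx(0.2), x)), fx(0.3))

def real(x):
    return 0.1 * x * x + 0.2 * x + 0.3

def max_error(impl, lo=-2.0, hi=2.0, steps=400):
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return max(abs(impl(fx(x)) - real(x)) for x in xs)

print("Horner form   max error:", max_error(eval_horner))
print("Expanded form max error:", max_error(eval_expanded))
```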

    Heuristics for the refinement of assumptions in generalized reactivity formulae

    Reactive synthesis is concerned with automatically generating implementations from formal specifications. These specifications are typically written in the language of generalized reactivity (GR(1)), a subset of linear temporal logic capable of expressing the most common industrial specification patterns, and describe the requirements about the behavior of a system under assumptions about the environment where the system is to be deployed. Oftentimes no implementation exists which guarantees the required behavior under all possible environments, typically due to missing assumptions (this is usually referred to as unrealizability). To address this issue, new assumptions need to be added to complete the specification, a problem known as assumptions refinement. Since the space of candidate assumptions is intractably large, searching for the best solutions is inherently hard. In particular, new methods are needed to (i) increase the effectiveness of the search procedures, measured as the ratio between the number of solutions found and the number of refinements explored; and (ii) improve the results' quality, defined as the weakness of the solutions. In this thesis we propose a set of heuristics to meet these goals, and a methodology to assess and compare assumptions refinement methods based on quantitative metrics. The heuristics are in the form of algorithms to generate candidate refinements during the search, and quantitative measures to assess the quality of the candidates. We first discuss a heuristic method to generate assumptions that target the cause of unrealizability. This is done by selecting candidate refinement formulas based on Craig interpolation. We provide a formal underpinning of the technique and evaluate it in terms of our new metric of effectiveness, as defined above, whose value is improved with respect to the state of the art. We demonstrate this on a set of popular benchmarks of embedded software. We then provide a formal, quantitative characterization of the permissiveness of environment assumptions in the form of a weakness measure. We prove that the partial order induced by this measure is consistent with the one induced by implication. The key advantage of this measure is that it allows for prioritizing candidate solutions, as we show experimentally. Lastly, we propose a notion of minimal refinements with respect to the observed counterstrategies. We demonstrate that exploring minimal refinements produces weaker solutions, and reduces the amount of computation needed to explore each refinement. However, this may come at the cost of reducing the effectiveness of the search. To counteract this effect, we propose a hybrid search approach in which both minimal and non-minimal refinements are explored.
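    As a small illustration of prioritizing candidate assumptions by permissiveness, the sketch below orders candidates using implication checks in Z3: a formula implied by another is at least as permissive (weak). Real GR(1) assumptions are LTL formulas and the thesis defines a dedicated weakness measure; the propositional stand-ins and ranking heuristic here are purely illustrative assumptions.

```python
# Toy prioritization of candidate assumption refinements by permissiveness.
# GR(1) assumptions are approximated by propositional constraints over
# environment/system variables; implication is checked with Z3.
from z3 import Bools, Implies, Not, Or, Solver, unsat

req, grant = Bools("req grant")

candidates = {
    "never request":            Not(req),
    "no request while granted": Implies(grant, Not(req)),
    "no constraint":            Or(req, Not(req)),
}

def implies(a, b):
    """Return True iff formula a implies formula b (the implication is valid)."""
    s = Solver()
    s.add(a, Not(b))
    return s.check() == unsat

def weaker_or_equal(a, b):
    """a is at least as permissive as b if every environment allowed by b
    is also allowed by a, i.e. b implies a."""
    return implies(b, a)

# Rank candidates: prefer assumptions implied by many others (the weak ones).
ranked = sorted(
    candidates.items(),
    key=lambda kv: sum(weaker_or_equal(kv[1], other) for other in candidates.values()),
    reverse=True,
)
for name, _ in ranked:
    print(name)   # weakest candidate first
```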