Scalable Synthesis and Verification: Towards Reliable Autonomy
We have seen the growing deployment of autonomous systems in our daily lives, ranging from safety-critical self-driving cars to dialogue agents. While impactful and impressive, these systems often come without guarantees and are not rigorously evaluated for failure cases. This is due in part to the limited scalability of the tools available for designing correct-by-construction systems or for verifying them post hoc. Another key limitation is the lack of models for the complex environments with which autonomous systems must interact. To overcome these bottlenecks to designing reliable autonomous systems, this thesis makes contributions along three fronts.
First, we develop an approach for parallelized synthesis from linear-time temporal logic specifications in the generalized reactivity (1) fragment. We begin by identifying a special case, corresponding to singleton liveness goals, that allows a decomposition of the synthesis problem and thus facilitates parallelization. Building on the intuition from this special case, we propose a more general approach to parallelized synthesis that relies on identifying equicontrollable states.
Second, we consider learning-based approaches to enable verification at scale for complex systems, and for autonomous systems that interact with black-box environments. For the former, we propose a new abstraction refinement procedure based on machine learning to improve the performance of nonlinear constraint solving algorithms on large-scale problems. For the latter, we present a data-driven approach based on chance-constrained optimization that allows for a system to be evaluated for specification conformance without an accurate model of the environment. We demonstrate this approach on several tasks, including a lane-change scenario with real-world driving data.
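The data-driven evaluation idea above can be illustrated with a simpler stand-in: estimate the probability of specification conformance from sampled trials of the black-box environment, and attach a confidence bound to the estimate. The sketch below uses a Hoeffding bound in place of the chance-constrained optimization the thesis describes, and the trial model, gap threshold, and all names are hypothetical:

```python
import math
import random

def conformance_estimate(run_trial, n_samples, delta=0.05, seed=0):
    """Estimate the probability that the system satisfies its specification,
    returning the empirical rate and a (1 - delta)-confidence lower bound."""
    rng = random.Random(seed)
    successes = sum(run_trial(rng) for _ in range(n_samples))
    p_hat = successes / n_samples
    # Hoeffding deviation bound: |p_hat - p| <= eps with prob. >= 1 - delta.
    eps = math.sqrt(math.log(2 / delta) / (2 * n_samples))
    return p_hat, max(0.0, p_hat - eps)

# Toy stand-in for a lane-change trial: succeed when the sampled gap to the
# neighbouring vehicle exceeds a required clearance (hypothetical model).
def trial(rng):
    gap = rng.gauss(3.0, 1.0)   # sampled gap in metres
    return gap > 1.5            # specification: keep at least 1.5 m

p_hat, p_lower = conformance_estimate(trial, n_samples=10_000)
```

In the thesis the environment samples come from data (e.g. real-world driving logs) rather than a known distribution; the confidence machinery plays the same role either way.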
Lastly, we consider the problem of interpreting and verifying learning-based components such as neural networks. We introduce a new method based on Craig interpolants for computing compact symbolic abstractions of neural network pre-images. Our approach iteratively computes approximations that provably over-approximate and under-approximate the pre-images at all layers. Further, building on existing work on training neural networks for verifiability in the classification setting, we propose extensions that generalize the approach to broader architectures and to temporal specifications.
The 1990 progress report and future plans
This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research through engineering development to fielded NASA applications, particularly those applications enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes, with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and with AI applications groups at all NASA centers.
Faster Constraint Solving Using Learning Based Abstractions
This work addresses the problem of scalable constraint solving. Our
technique combines traditional constraint-solving approaches with
machine learning techniques to propose abstractions that simplify the
problem. First, we use a collection of heuristics to learn sets of constraints
that may be well abstracted as a single, simpler constraint. Next, we
use an asymmetric machine learning procedure to abstract each such set,
using satisfying and falsifying instances as training data. We then solve a
reduced constraint problem to check that the learned formula is indeed a
consequent (or antecedent) of the formula we sought to abstract, and
finally we use the learned formula to check the original property.
Our experiments show that our technique allows improved handling of
constraint solving instances that are slow to complete on a conventional
solver. Our technique is complementary to existing constraint solving
approaches, in the sense that it can be used to improve the scalability
of any existing tool.
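The loop described above can be sketched end to end in miniature: sample the constraint, learn a one-sided (asymmetric) abstraction from the satisfying instances, and then validate that the abstraction really is a consequent before using it. The constraint `phi`, the box-shaped hypothesis class, and the margin below are illustrative choices, not the paper's implementation, and random sampling stands in for the reduced solver query:

```python
import random

def learn_box(samples, margin=0.1):
    """One-sided learner: an axis-aligned box containing every satisfying
    sample (plus a small margin) is a candidate consequent of the formula."""
    sat = [x for x, ok in samples if ok]
    dims = range(len(sat[0]))
    return [(min(s[d] for s in sat) - margin,
             max(s[d] for s in sat) + margin) for d in dims]

def in_box(x, box):
    return all(lo <= v <= hi for v, (lo, hi) in zip(x, box))

def abstract_and_check(phi, sample, n=5000, seed=0):
    """Learn a box abstraction of phi, then search for a counterexample:
    a point satisfying phi but falling outside the box.  In the full
    technique this validation is a reduced constraint-solver query;
    fresh random samples stand in for it here."""
    rng = random.Random(seed)
    box = learn_box([(x, phi(x)) for x in (sample(rng) for _ in range(n))])
    for _ in range(n):
        x = sample(rng)
        if phi(x) and not in_box(x, box):
            return None          # abstraction refuted; refine and retry
    return box

# Hypothetical nonlinear conjunction over [0, 4]^2 standing in for a
# constraint set that is slow for a conventional solver.
phi = lambda x: x[0] ** 2 + x[1] ** 2 <= 4 and x[0] + x[1] >= 1
sample = lambda rng: (rng.uniform(0, 4), rng.uniform(0, 4))
box = abstract_and_check(phi, sample)
```

Because the learned box implies nothing the original formula does not, any property proved of the box transfers to the original constraint, which is what makes the abstraction useful downstream.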
Sciduction: Combining Induction, Deduction, and Structure for Verification and Synthesis
Even with impressive advances in automated formal methods, certain problems
in system verification and synthesis remain challenging. Examples include the
verification of quantitative properties of software involving constraints on
timing and energy consumption, and the automatic synthesis of systems from
specifications. The major challenges include environment modeling,
incompleteness in specifications, and the complexity of underlying decision
problems.
This position paper proposes sciduction, an approach to tackle these
challenges by integrating inductive inference, deductive reasoning, and
structure hypotheses. Deductive reasoning, which leads from general rules or
concepts to conclusions about specific problem instances, includes techniques
such as logical inference and constraint solving. Inductive inference, which
generalizes from specific instances to yield a concept, includes algorithmic
learning from examples. Structure hypotheses are used to define the class of
artifacts, such as invariants or program fragments, generated during
verification or synthesis. Sciduction constrains inductive and deductive
reasoning using structure hypotheses, and actively combines inductive and
deductive reasoning: for instance, deductive techniques generate examples for
learning, and inductive reasoning is used to guide the deductive engines.
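The interplay just described — deductive techniques generating examples for learning, and inductive reasoning guiding the deductive engine — is the shape of counterexample-guided inductive synthesis (CEGIS), one concrete instantiation of sciduction. A minimal sketch, in which an exhaustive check stands in for the deductive engine and a finite candidate pool plays the role of the structure hypothesis (the task and all names are hypothetical):

```python
def cegis(candidates, spec, domain, max_iters=100):
    """Counterexample-guided loop: an inductive learner proposes a candidate
    consistent with the examples seen so far; a deductive verifier either
    certifies it on the whole domain or returns a counterexample input."""
    examples = []
    for _ in range(max_iters):
        # Inductive step: pick any candidate agreeing with all examples.
        viable = [f for f in candidates
                  if all(spec(x, f(x)) for x in examples)]
        if not viable:
            return None                     # hypothesis class too weak
        f = viable[0]
        # Deductive step: exhaustive check stands in for a solver query.
        cex = next((x for x in domain if not spec(x, f(x))), None)
        if cex is None:
            return f                        # verified on the full domain
        examples.append(cex)
    return None

# Hypothetical task: find a straight-line program x -> a*x + b (a, b small
# integers) meeting the "double and add three" specification.
candidates = [(lambda x, a=a, b=b: a * x + b)
              for a in range(-3, 4) for b in range(-3, 4)]
spec = lambda x, y: y == 2 * x + 3
f = cegis(candidates, spec, domain=range(-10, 11))
```

The structure hypothesis (here, straight-line affine programs with small coefficients) is what makes both directions tractable: it bounds the learner's search and keeps each verification query decidable.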
We illustrate this approach with three applications: (i) timing analysis of
software; (ii) synthesis of loop-free programs; and (iii) controller synthesis
for hybrid systems. Some future applications are also discussed.
PowerModels.jl: An Open-Source Framework for Exploring Power Flow Formulations
In recent years, the power system research community has seen an explosion of
novel methods for formulating and solving power network optimization problems.
These emerging methods range from new power flow approximations, which go
beyond the traditional DC power flow by capturing reactive power, to convex
relaxations, which provide solution quality and runtime performance guarantees.
Unfortunately, the sophistication of these emerging methods often presents a
significant barrier to evaluating them on a wide variety of power system
optimization applications. To address this issue, this work proposes
PowerModels, an open-source platform for comparing power flow formulations.
From its inception, PowerModels was designed to streamline the process of
evaluating different power flow formulations on shared optimization problem
specifications. This work provides a brief introduction to the design of
PowerModels, validates its implementation, and demonstrates its effectiveness
with a proof-of-concept study analyzing five different formulations of the
Optimal Power Flow problem.
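PowerModels itself is a Julia package, so the sketch below is only a language-neutral illustration of the simplest formulation in the space it explores: the traditional DC approximation mentioned above, which linearizes power flow into a single linear solve. The three-bus network data is hypothetical and the code is not the PowerModels API:

```python
import numpy as np

def dc_power_flow(branches, injections, slack=0):
    """Solve the linearized DC power flow B * theta = P.
    branches: list of (bus_i, bus_j, susceptance); injections: net MW/bus."""
    n = len(injections)
    B = np.zeros((n, n))
    for i, j, b in branches:          # assemble the bus susceptance matrix
        B[i, i] += b
        B[j, j] += b
        B[i, j] -= b
        B[j, i] -= b
    keep = [k for k in range(n) if k != slack]   # fix slack angle at zero
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)],
                                  np.asarray(injections)[keep])
    # Branch flow in the DC model: b_ij * (theta_i - theta_j).
    flows = {(i, j): b * (theta[i] - theta[j]) for i, j, b in branches}
    return theta, flows

# Hypothetical 3-bus case: bus 0 is slack, bus 1 injects 100 MW,
# bus 2 withdraws 100 MW; every line has susceptance 10 p.u.
branches = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 10.0)]
theta, flows = dc_power_flow(branches, injections=[0.0, 100.0, -100.0])
```

The DC model's omissions — reactive power, voltage magnitudes, losses — are exactly what the formulations compared in PowerModels (AC variants, convex relaxations) reintroduce in different ways.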