Constraint-Based Qualitative Simulation
We consider qualitative simulation involving a finite set of qualitative
relations in the presence of complete knowledge about their interrelationship. We
show how it can be naturally captured by means of constraints expressed in
temporal logic and constraint satisfaction problems. The constraints relate at
each stage the 'past' of a simulation with its 'future'. The benefit of this
approach is that it readily leads to an implementation based on constraint
technology that can be used to generate simulations and to answer queries about
them.
Comment: 10 pages, to appear at the conference TIME 200
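As a toy illustration of the idea (a hypothetical encoding, not the paper's actual formalism), one can model a single quantity whose qualitative relation at each stage takes one of three values, impose a constraint linking each stage's 'past' to its 'future', and enumerate the resulting simulations to answer queries:

```python
from itertools import product

# Qualitative values a quantity's trend can take at each stage.
VALUES = ("dec", "std", "inc")

def continuous(prev, curr):
    """Stage-to-stage constraint: the trend cannot jump between
    'dec' and 'inc' without passing through 'std'."""
    return {prev, curr} != {"dec", "inc"}

def simulations(length, start="std"):
    """Enumerate every simulation of the given length whose
    consecutive stages satisfy the continuity constraint."""
    return [run for run in product(VALUES, repeat=length)
            if run[0] == start
            and all(continuous(a, b) for a, b in zip(run, run[1:]))]

runs = simulations(4)
# A query over the generated simulations: can the trend be
# increasing by the final stage?
reachable = any(run[-1] == "inc" for run in runs)
```

A real implementation would hand such constraints to a constraint solver rather than enumerate, but the per-stage coupling of past and future is the same.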
Extending classical planning with state constraints: Heuristics and search for optimal planning
We present a principled way of extending a classical AI planning formalism with systems of state constraints, which relate - sometimes determine - the values of variables in each state traversed by the plan. This extension occupies an attractive middle ground between expressivity and complexity. It enables modelling a new range of problems, as well as formulating more efficient models of classical planning problems. An example of the former is planning-based control of networked physical systems - power networks, for example - in which a local, discrete control action can have global effects on continuous quantities, such as altering flows across the entire network. At the same time, our extension remains decidable as long as the satisfiability of sets of state constraints is decidable, including in the presence of numeric state variables, and we demonstrate that effective techniques for cost-optimal planning known in the classical setting - in particular, relaxation-based admissible heuristics - can be adapted to the extended formalism. In this paper, we apply our approach to constraints in the form of linear or non-linear equations over numeric state variables, but the approach is independent of the type of state constraints, as long as there exists a procedure that decides their consistency. The planner and the constraint solver interact through a well-defined, narrow interface, in which the solver requires no specialisation to the planning context.
This work was supported by ARC project DP140104219, "Robust AI Planning for Hybrid Systems", and in part by ARO grant W911NF1210471 and ONR grant N000141210430.
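The narrow planner/solver interface described above can be pictured as a single consistency procedure for systems of linear equations (an illustrative stand-in; the names and the toy flow constraints are hypothetical):

```python
from fractions import Fraction

def consistent(equations, variables):
    """Decide satisfiability of linear equations a1*x1+...+an*xn = b
    by Gaussian elimination: the system is inconsistent exactly when
    elimination yields a row 0 = b with b != 0."""
    rows = [[Fraction(eq.get(v, 0)) for v in variables] + [Fraction(eq["="])]
            for eq in equations]
    pivot = 0
    for col in range(len(variables)):
        for r in range(pivot, len(rows)):
            if rows[r][col]:
                rows[pivot], rows[r] = rows[r], rows[pivot]
                break
        else:
            continue
        for r in range(len(rows)):
            if r != pivot and rows[r][col]:
                f = rows[r][col] / rows[pivot][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[pivot])]
        pivot += 1
    return all(any(row[:-1]) or row[-1] == 0 for row in rows)

# Toy network flows: a total demand of 5 with a fixed difference is
# consistent; two contradictory settings of one flow are not.
feasible = consistent(
    [{"f1": 1, "f2": 1, "=": 5}, {"f1": 1, "f2": -1, "=": 1}], ["f1", "f2"])
infeasible = consistent([{"f1": 1, "=": 1}, {"f1": 1, "=": 2}], ["f1"])
```

The planner never inspects how `consistent` works; it only asks whether the constraints accumulated in a state admit a solution, which is what keeps the interface narrow and solver-agnostic.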
Optimal Planning with State Constraints
In the classical planning model, state variables are assigned
values in the initial state and remain unchanged unless
explicitly affected by action effects. However, some properties
of states are more naturally modelled not as direct effects of
actions but instead as derived, in each state, from the primary
variables via a set of rules. We refer to those rules as state
constraints. The two types of state constraints that will be
discussed here are numeric state constraints and logical rules
that we will refer to as axioms.
When using state constraints we make a distinction between
primary variables, whose values are directly affected by action
effects, and secondary variables, whose values are determined by
state constraints. While primary variables have finite and
discrete domains, as in classical planning, there is no such
requirement for secondary variables. For example, using numeric
state constraints allows us to have secondary variables whose
values are real numbers. We show that state constraints are a
construct that lets us combine classical planning methods with
specialised solvers developed for other types of problems. For
example, introducing numeric state constraints enables us to
apply planning techniques in domains involving interconnected
physical systems, such as power networks.
To solve these types of problems optimally, we adapt commonly
used methods from optimal classical planning, namely state-space
search guided by admissible heuristics. In heuristics based on
monotonic relaxation, the idea is that in a relaxed state each
variable assumes a set of values instead of just a single value.
With state constraints, the challenge becomes to evaluate the
conditions, such as goals and action preconditions, that involve
secondary variables. We employ consistency checking tools to
evaluate whether these conditions are satisfied in the relaxed
state. In our work with numeric constraints we use linear
programming, while with axioms we use answer set programming and
a three-valued semantics. This allows us to build a relaxed planning
graph and compute constraint-aware versions of heuristics based on
monotonic relaxation.
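The relaxed-state evaluation can be sketched as follows (hypothetical names; brute-force enumeration over tiny domains stands in for the linear-programming call used in the actual work):

```python
from itertools import product

def condition_reachable(relaxed_state, constraint, condition):
    """In a relaxed state each primary variable holds a SET of values.
    A condition on a secondary variable is optimistically satisfied if
    SOME combination of primary values yields, through the state
    constraint, a secondary value meeting the condition."""
    names = sorted(relaxed_state)
    for combo in product(*(relaxed_state[v] for v in names)):
        if condition(constraint(dict(zip(names, combo)))):
            return True
    return False

# Toy power-network flavour: the secondary variable `flow` is fixed by
# two generator set-points via a numeric state constraint.
relaxed = {"gen_a": {0, 1, 2}, "gen_b": {0, 1, 3}}
flow = lambda s: s["gen_a"] + s["gen_b"]
ok = condition_reachable(relaxed, flow, lambda f: f >= 4)       # satisfiable
blocked = condition_reachable(relaxed, flow, lambda f: f >= 6)  # not reachable
```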
We also adapt pattern database heuristics. We notice that an
abstract state can be thought of as a state in the monotonic
relaxation in which the variables in the pattern hold only one
value, while the variables not in the pattern simultaneously hold
all the values in their domains. This means that we can apply the
same technique for evaluating conditions on secondary variables
as we did for the monotonic relaxation, and build pattern
databases in much the same way as in classical planning.
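The observation that an abstract state is a special relaxed state can be made concrete with a small sketch (illustrative names and domains):

```python
def project(state, pattern, domains):
    """Abstract state for a pattern database: variables in the pattern
    keep their single value, while every other variable relaxes to its
    full domain -- exactly the shape of a monotonically relaxed state."""
    return {v: {state[v]} if v in pattern else set(domains[v])
            for v in state}

state = {"x": 1, "y": 2, "z": 0}
domains = {"x": {0, 1}, "y": {0, 1, 2}, "z": {0, 1}}
abstract = project(state, pattern={"x"}, domains=domains)
# "x" stays pinned to {1}; "y" and "z" hold their whole domains, so
# conditions on secondary variables can be evaluated exactly as in the
# monotonic relaxation.
```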
To make better use of our heuristics, we modify the A* algorithm
by combining two techniques that were previously used
independently – partial expansion and preferred operators. Our
modified algorithm, which we call PrefPEA, is most beneficial in
cases where the heuristic is expensive to compute but accurate, and
states have many successors.
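A much-simplified sketch of how partial expansion and preferred operators can combine in A* (hypothetical code, not the thesis's PrefPEA implementation): expanding a node first generates only its preferred successors and re-queues the node, so its non-preferred successors are generated only if the search comes back to it.

```python
import heapq

def pref_pea_star(start, goal, successors, h):
    """successors(state) -> list of (next_state, cost, preferred_flag).
    Heap entries are (f, state, g, expanded_preferred_already)."""
    open_heap = [(h(start), start, 0, False)]
    best_g = {start: 0}
    while open_heap:
        f, state, g, did_pref = heapq.heappop(open_heap)
        if state == goal:
            return g
        if g > best_g.get(state, float("inf")):
            continue  # stale entry: state was reached more cheaply
        succs = successors(state)
        if not did_pref:
            batch = [s2 for s2 in succs if s2[2]]  # preferred only
            # Partial expansion: re-queue this node for its deferred
            # (non-preferred) successors, at their best possible f.
            deferred = [g + c + h(n) for n, c, p in succs if not p]
            if deferred:
                heapq.heappush(open_heap, (min(deferred), state, g, True))
        else:
            batch = [s2 for s2 in succs if not s2[2]]
        for nxt, cost, _ in batch:
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), nxt, ng, False))
    return None

# Tiny made-up graph: the preferred path s -> a -> goal is cheaper, so
# the non-preferred successor b is never generated.
graph = {
    "s": [("a", 1, True), ("b", 5, False)],
    "a": [("goal", 1, True)],
    "b": [("goal", 1, True)],
    "goal": [],
}
cost = pref_pea_star("s", "goal", lambda st: graph[st], lambda st: 0)
```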
Tools and Experiments for Software Security
The computer security problems that we face begin in computer programs that we write.
The exploitation of vulnerabilities that leads to the theft of private
information and other nefarious activities often begins with a vulnerability
accidentally created in a computer program by that program's author. What
are the factors that lead to the creation of these vulnerabilities? Software
development and programming are in part synthetic activities that we can
control with technology, i.e. different programming languages and software
development tools. Does changing the technology used to program software help
programmers write more secure code? Can we create technology that will
help programmers make fewer mistakes?
This dissertation examines these questions. We start with the Build It
Break It Fix It project, a security-focused programming competition. This project
provides data on software security problems by allowing contestants
to write security-focused software in any programming language. We discover that
using C leads to memory-safety issues that can compromise security.
Next, we consider making C safer. We develop and examine the Checked C
programming language, a strict superset of C that adds types for spatial safety.
We also introduce an automatic rewriting tool that can convert C code into Checked C
code. We evaluate the overall approach on benchmarks used by prior work on making
C safer.
We then consider static analysis. After examining different parameters of
numeric static analyzers, we develop a disjunctive abstract domain that uses a
novel merge heuristic based on a notion of volumetric difference, either approximated via
MCMC sampling or computed precisely via conical decomposition. This domain is
implemented in a static analyzer for C programs and evaluated.
After static analysis, we consider fuzzing. We examine what it takes to perform
a good evaluation of a fuzzing technique, through our own experiments and a review of
recent fuzzing papers. We develop a checklist for conducting new fuzzing research
and a general strategy for identifying root causes of failures found during fuzzing.
We evaluate new root cause analysis approaches using coverage information as
input to statistical clustering algorithms.
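One way coverage information can feed a clustering step is sketched below (illustrative code with made-up coverage sets; the dissertation's statistical clustering is more involved):

```python
def jaccard(a, b):
    """Similarity of two coverage sets (e.g. branches hit by an input)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_failures(coverage, threshold=0.5):
    """Greedily group failing inputs whose coverage is similar: failures
    that execute similar code plausibly share a root cause."""
    clusters = []
    for name, cov in coverage.items():
        for cl in clusters:
            if any(jaccard(cov, coverage[m]) >= threshold for m in cl):
                cl.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical coverage sets from three crashing inputs: the first two
# exercise the same function and end up in one cluster.
cov = {
    "crash1": {"f:1", "f:2", "g:7"},
    "crash2": {"f:1", "f:2", "g:8"},
    "crash3": {"h:3", "h:4"},
}
groups = cluster_failures(cov)
```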
The 2011 International Planning Competition
After a three-year gap, the 2011 edition of the IPC involved a total of 55 planners,
some of them versions of the same planner, distributed among four tracks: the sequential
satisficing track (27 planners submitted out of 38 registered), the sequential multicore
track (8 planners submitted out of 12 registered), the sequential optimal track (12
planners submitted out of 24 registered) and the temporal satisficing track (8 planners
submitted out of 14 registered). Three more tracks were open to participation: temporal
optimal, preferences satisficing and preferences optimal. Unfortunately, the number of submitted planners was too low for these tracks to be included in the final competition.
A total of 55 people participated, grouped into 31 teams. Participants came
from Australia, Canada, China, France, Germany, India, Israel, Italy, Spain, UK and
USA.
For the sequential tracks 14 domains, with 20 problems each, were selected, while
the temporal one had 12 domains, also with 20 problems each. Both new and past
domains were included. As in previous competitions, domains and problems were
unknown to the participants, and all the experimentation was carried out by the organizers.
To run the competition, a cluster of eleven 64-bit computers (Intel Xeon 2.93 GHz
quad-core processors) running Linux was set up. Up to 1800 seconds, 6 GB of RAM and 750 GB of hard disk were available for each planner to solve a problem. This resulted in 7540 computing hours (about 315 days), plus a large number of hours devoted to preliminary experimentation with new domains, reruns and bug fixing.
The detailed results of the competition, the software used for automating most
tasks, the source code of all the participating planners and the description of domains and problems can be found at the competition’s web page:
http://www.plg.inf.uc3m.es/ipc2011-deterministic
This booklet summarizes the participants in the Deterministic Track of the International
Planning Competition (IPC) 2011. Papers describing all the participating planners
are included.
Invariant Generation through Strategy Iteration in Succinctly Represented Control Flow Graphs
We consider the problem of computing numerical invariants of programs, for
instance bounds on the values of numerical program variables. More
specifically, we study the problem of performing static analysis by abstract
interpretation using template linear constraint domains. Such invariants can be
obtained by Kleene iterations that are, in order to guarantee termination,
accelerated by widening operators. In many cases, however, applying this form
of extrapolation leads to invariants that are weaker than the strongest
inductive invariant that can be expressed within the abstract domain in use.
Another well-known source of imprecision of traditional abstract interpretation
techniques stems from their use of join operators at merge nodes in the control
flow graph. The mentioned weaknesses may prevent these methods from proving
safety properties. The technique we develop in this article addresses both of
these issues: contrary to Kleene iterations accelerated by widening operators,
it is guaranteed to yield the strongest inductive invariant that can be
expressed within the template linear constraint domain in use. It also eschews
join operators by distinguishing all paths of loop-free code segments. Formally
speaking, our technique computes the least fixpoint within a given template
linear constraint domain of a transition relation that is succinctly expressed
as an existentially quantified linear real arithmetic formula. In contrast to
previously published techniques that rely on quantifier elimination, our
algorithm is proved to have optimal complexity: we prove that the decision
problem associated with our fixpoint problem is in the second level of the
polynomial-time hierarchy.
Comment: 35 pages, conference version published at ESOP 2011; this version is
a CoRR version of our submission to Logical Methods in Computer Science
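The imprecision of widening that motivates this technique can be seen in a small interval-domain sketch (standard textbook widening, not the article's strategy-iteration algorithm):

```python
def widen(a, b):
    """Standard interval widening: any bound that grows is pushed to
    infinity, forcing the Kleene iteration to terminate."""
    lo = a[0] if a[0] <= b[0] else float("-inf")
    hi = a[1] if a[1] >= b[1] else float("inf")
    return (lo, hi)

def analyze(transfer, init, widening=True):
    """Kleene iteration for one loop head in the interval domain.
    `transfer` maps the interval at the loop head to its value after
    one more iteration (join of entry and back-edge contributions)."""
    x = init
    while True:
        nxt = widen(x, transfer(x)) if widening else transfer(x)
        if nxt == x:
            return x
        x = nxt

# Loop `i = 0; while i < 100: i += 1` -- the transfer function joins
# the entry value {0} with the guarded, incremented back-edge value.
def transfer(iv):
    lo, hi = iv
    back = (lo + 1, min(hi, 99) + 1)  # i < 100 held, then i += 1
    return (min(0, back[0]), max(0, back[1]))

inv_widened = analyze(transfer, (0, 0))                  # overshoots to (0, inf)
inv_lfp = analyze(transfer, (0, 0), widening=False)      # least fixpoint (0, 100)
```

Widening returns the weak post-fixpoint (0, inf), whereas the strongest inductive invariant expressible in the domain is (0, 100); computing the latter directly, without widening or narrowing heuristics, is what the article's method guarantees for template linear constraint domains.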
An integrated approach to high integrity software verification.
Computer software is developed through software engineering. At its most precise, software
engineering applies mathematical rigour in the form of formal methods. High integrity software
is associated with safety-critical and security-critical applications, where failure
would bring significant costs. The development of high integrity software is subject to
stringent standards, prescribing best practices to increase quality. Typically, these standards
will strongly encourage or enforce the application of formal methods.
The application of formal methods can entail a significant amount of mathematical
reasoning. Thus, the development of automated techniques is an active area of research.
The trend is to deliver increased automation through two complementary approaches.
Firstly, lightweight formal methods are adopted, sacrificing expressive power, breadth of
coverage, or both in favour of tractability. Secondly, integrated solutions are sought,
exploiting the strengths of different technologies to increase automation.
The objective of this thesis is to support the production of high integrity software by
automating an aspect of formal methods. To develop tractable techniques we focus on
the niche activity of verifying exception freedom. To increase effectiveness, we integrate
the complementary technologies of proof planning and program analysis. Our approach
is investigated by enhancing the SPARK Approach, as developed by Altran Praxis Limited.
Our approach is implemented and evaluated as the SPADEase system. The key
contributions of the thesis are summarised below:
• Configurable and Sound - Present a configurable and justifiably sound approach
to software verification.
• Cooperative Integration - Demonstrate that more targeted and effective automation
can be achieved through the cooperative integration of distinct technologies.
• Proof Discovery - Present proof plans that support the verification of exception
freedom.
• Invariant Discovery - Present invariant discovery heuristics that support the verification
of exception freedom.
• Implementation as SPADEase - Implement our approach as SPADEase.
• Industrial Evaluation - Evaluate SPADEase against both textbook and industrial
subprograms.
Design and implementation of a Multi-Agent Planning System
This work introduces the design and implementation of a Multi-Agent Planning framework, in which a set of agents work jointly to devise a course of action that solves a given planning problem.
Torreño Lerma, A. (2011). Design and implementation of a Multi-Agent Planning System. http://hdl.handle.net/10251/15358
Deductive Evaluation: Formal Code Analysis With Low User Burden
We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions
An array content static analysis based on non-contiguous partitions
Conventional array partitioning analyses split arrays into contiguous partitions to infer properties of sets of cells. Such analyses cannot group together non-contiguous cells, even when they have similar properties. In this paper, we propose an abstract domain which uses semantic properties to split array cells into groups. Cells with similar properties are packed into groups and abstracted together. Additionally, groups are not necessarily contiguous. This abstract domain makes it possible to infer complex array invariants in a fully automatic way. Experiments on examples from the Minix 1.1 memory management code and a tiny industrial operating system demonstrate the effectiveness of the analysis.
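The grouping idea can be sketched in a few lines (hypothetical classification and summaries; the paper's abstract domain tracks far richer per-group properties):

```python
def group_cells(array, classify):
    """Partition array cells by a semantic property rather than by
    position, so a group may be non-contiguous.  Each group is then
    abstracted by one summary (here: its indices and value range)."""
    groups = {}
    for i, v in enumerate(array):
        groups.setdefault(classify(i, v), []).append(i)
    return {key: (idxs, (min(array[i] for i in idxs),
                         max(array[i] for i in idxs)))
            for key, idxs in groups.items()}

# Hypothetical memory-manager flavour: slots holding 0 are "free", the
# rest "used" -- the two groups interleave within the array.
slots = [7, 0, 3, 0, 0, 9]
summary = group_cells(slots, lambda i, v: "free" if v == 0 else "used")
# The "free" group covers the non-contiguous indices 1, 3, 4.
```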