Verification of Agent-Based Artifact Systems
Artifact systems are a novel paradigm for specifying and implementing
business processes described in terms of interacting modules called artifacts.
Artifacts consist of data and lifecycles, accounting respectively for the
relational structure of the artifacts' states and their possible evolutions
over time. In this paper we put forward artifact-centric multi-agent systems, a
novel formalisation of artifact systems in the context of multi-agent systems
operating on them. Differently from the usual process-based models of services,
the semantics we give explicitly accounts for the data structures on which
artifact systems are defined. We study the model checking problem for
artifact-centric multi-agent systems against specifications written in a
quantified version of temporal-epistemic logic expressing the knowledge of the
agents in the exchange. We begin by noting that the problem is undecidable in
general. We then identify two noteworthy restrictions, one syntactical and one
semantical, that enable us to find bisimilar finite abstractions and therefore
reduce the model checking problem to the instance on finite models. Under these
assumptions we show that the model checking problem for these systems is
EXPSPACE-complete. We then introduce artifact-centric programs, compact and
declarative representations of the programs governing both the artifact system
and the agents. We show that, while these in principle generate infinite-state
systems, under natural conditions their verification problem can be solved on
finite abstractions that can be effectively computed from the programs. Finally
we exemplify the theoretical results of the paper through a mainstream
procurement scenario from the artifact systems literature.
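As a purely illustrative example (our own, not a formula from the paper), a quantified temporal-epistemic specification over a procurement artifact might read

    \forall o\, AG\big(submitted(o) \rightarrow AF\, K_{buyer}(accepted(o) \vee rejected(o))\big),

stating that for every order artifact o, once the order has been submitted the buyer eventually comes to know whether it was accepted or rejected; the first-order quantifier ranges over the data held in the artifact states.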
MCMAS-SLK: A Model Checker for the Verification of Strategy Logic Specifications
We introduce MCMAS-SLK, a BDD-based model checker for the verification of
systems against specifications expressed in a novel, epistemic variant of
strategy logic. We give syntax and semantics of the specification language and
introduce a labelling algorithm for epistemic and strategy logic modalities. We
provide details of the checker, which can also be used for synthesising agents'
strategies so that a specification is satisfied by the system. We evaluate the
efficiency of the implementation by discussing the results obtained for the
dining cryptographers protocol and a variant of the cake-cutting problem.
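As a hedged illustration (the atoms and agent names are our assumptions, not taken from the tool's distribution), an epistemic property one might check for the dining cryptographers reads

    AG\big(\neg paid_1 \rightarrow ((K_1\, someone\_paid \vee K_1\, nobody\_paid) \wedge \neg K_1\, paid_2 \wedge \neg K_1\, paid_3)\big),

i.e., a non-paying cryptographer learns whether one of the diners paid without learning which one. The strategy-logic fragment additionally allows strategies to be quantified over and bound to agents, as in \langle\langle x \rangle\rangle (a, x)\, \varphi, read as "agent a has a strategy x enforcing \varphi".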
Finite Abstractions for the Verification of Epistemic Properties in Open Multi-Agent Systems
We develop a methodology to model and verify open multi-agent systems (OMAS), where agents may join or leave at run time. Further, we specify properties of interest on OMAS in a variant of first-order temporal-epistemic logic, whose characterising features include epistemic modalities indexed to individual terms, interpreted on the agents appearing at a given state. This formalism notably allows group knowledge to be expressed dynamically. We study the verification problem for these systems and show that, under specific conditions, finite bisimilar abstractions can be obtained.
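To illustrate what an indexed epistemic modality can express (again an illustration of ours, not a formula from the paper), one might write

    AG\, \forall x\, (present(x) \rightarrow K_x\, alarm),

stating that at every reachable state, every agent present at that state knows the alarm has been raised; since the quantifier ranges over the agents appearing at the state, the group whose knowledge is expressed changes dynamically as agents join and leave.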
Tightening the Evaluation of PAC Bounds Using Formal Verification Results
Probably Approximately Correct (PAC) bounds are widely used to derive
probabilistic guarantees for the generalisation of machine learning models.
They highlight the components of the model which contribute to its
generalisation capacity. However, current state-of-the-art results are loose in
approximating the generalisation capacity of deployed machine learning models.
Consequently, while PAC bounds are theoretically useful, their applicability
for evaluating a model's generalisation property in a given operational design
domain is limited. The underlying classical theory is supported by the idea
that bounds can be tightened when the number of test points available to the
user to evaluate the model increases. Yet, in the case of neural networks, the
number of test points required to obtain bounds of interest is often
impractically large, even for small problems.
In this paper, we take the novel approach of using the formal verification of
neural systems to inform the evaluation of PAC bounds. Rather than using
pointwise information obtained from repeated tests, we use verification results
on regions around test points. We show that conditioning existing bounds on
verification results leads to a tightening proportional to the underlying
probability mass of the verified region.
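To see why such conditioning helps, consider the following schematic sketch (our own illustration; the precise statement in the paper may differ). A standard Hoeffding-style test-set bound states that, with probability at least 1 - \delta over m i.i.d. test points,

    R(f) \le \hat{R}_m(f) + \sqrt{\ln(1/\delta) / (2m)}.

If regions of total probability mass p have been formally verified to contain no misclassifications, errors can only arise on the remaining mass 1 - p, so a conditioned bound of the form

    R(f) \le (1 - p)\Big(\hat{R}_{m'}(f) + \sqrt{\ln(1/\delta) / (2m')}\Big),

with \hat{R}_{m'} the empirical error on the m' test points falling outside the verified regions, shrinks in direct proportion to p, matching the tightening described above.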
Repairing misclassifications in neural networks using limited data
We present a novel and computationally efficient method for repairing a feed-forward neural network with respect to a finite set of inputs that are misclassified. The method assumes no access to the training set. We present a formal characterisation for repairing the neural network and study its resulting properties in terms of soundness and minimality. We introduce a gradient-based algorithm that performs localised modifications to the network's weights such that misclassifications are repaired while marginally affecting network accuracy on correctly classified inputs. We introduce an implementation, I-REPAIR, and show that it is able to repair neural networks while reducing accuracy drops by up to 90% when compared to other state-of-the-art approaches for repair.
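The abstract does not give the algorithm's details, so the following is only a minimal sketch of the general idea of localised, gradient-based repair; the function names, the choice of losses, and the PyTorch setting are our assumptions, not the actual I-REPAIR procedure.

    # Illustrative sketch only: adjust one layer so misclassified inputs are fixed
    # while outputs on already-correct inputs change as little as possible.
    import torch
    import torch.nn.functional as F

    def repair(model, layer, bad_x, bad_y, ok_x, epochs=200, lr=1e-3, drift_weight=1.0):
        # Freeze all weights, then unfreeze only the layer chosen for repair.
        for p in model.parameters():
            p.requires_grad_(False)
        for p in layer.parameters():
            p.requires_grad_(True)
        with torch.no_grad():
            ref = model(ok_x)  # reference outputs to preserve on correct inputs
        opt = torch.optim.Adam(layer.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            repair_loss = F.cross_entropy(model(bad_x), bad_y)  # push misclassified inputs to their true labels
            drift_loss = F.mse_loss(model(ok_x), ref)           # keep behaviour on correct inputs close to the original
            (repair_loss + drift_weight * drift_loss).backward()
            opt.step()
        return model

Restricting the optimiser to the parameters of a single layer keeps the modification localised, while the drift term penalises changes to outputs the network already produced correctly.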
