Verification of temporal-epistemic properties of access control systems
Verification of access control systems against vulnerabilities has always been a challenging problem in computer security. The complexity of security policies in large-scale multi-agent systems increases the likelihood of vulnerabilities arising from mistakes in policy definition. This thesis explores automated methods for verifying temporal and epistemic properties of access control systems. While temporal property verification can reveal a considerable number of security holes, verification of epistemic properties in multi-agent systems enables us to reason about agents' knowledge in the system and hence to detect unauthorized information flow. This thesis first presents a framework for knowledge-based verification of dynamic access control policies. This framework models a coalition-based system, which evaluates whether a property or a goal can be achieved by a coalition of agents restricted by a set of permissions defined in the policy. Knowledge is restricted to the information that agents can acquire by reading system information, in order to improve time and memory efficiency. The framework has its own model checking method and is implemented in Java and released as an open-source tool named PoliVer. In order to detect information leakage resulting from reasoning, the second part of this thesis presents a complementary technique that evaluates access control policies over temporal-epistemic properties where the knowledge is gained by reasoning. We demonstrate several case studies for a subset of properties that deal with reasoning about knowledge. To improve efficiency, we develop an automated abstraction refinement technique for evaluating temporal-epistemic properties.
For the last part of the thesis, we develop a sound and complete algorithm for identifying information leakage in Datalog-based trust management systems.
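The epistemic side of the verification problem described above rests on a standard idea: an agent knows a fact if the fact holds in every state the agent cannot distinguish from the current one. As an illustrative sketch only (not the thesis's PoliVer implementation; all names here are hypothetical), this is how that check looks over an explicit state space:

```python
# Illustrative sketch: evaluating an epistemic operator K_a(phi).
# An agent "knows" phi in a state iff phi holds in every reachable state
# that is indistinguishable from it, i.e. every state sharing the agent's
# local observation.

def knows(agent_view, phi, state, states):
    """True iff `phi` holds in all states indistinguishable from `state`.

    agent_view : maps a global state to the agent's local observation
    phi        : predicate over global states
    states     : the set of reachable global states
    """
    view = agent_view(state)
    return all(phi(s) for s in states if agent_view(s) == view)

# Toy example: states are (secret_bit, public_bit) pairs; the agent
# observes only the public bit.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
observe = lambda s: s[1]                 # agent sees the public component
secret_is_zero = lambda s: s[0] == 0

# The agent cannot distinguish (0, 0) from (1, 0), so it does not know
# the secret: no information has leaked through the public bit.
print(knows(observe, secret_is_zero, (0, 0), states))  # False
```

If the policy allowed the secret to influence the public bit, the indistinguishability classes would shrink and `knows` could become true, which is exactly the kind of unauthorized information flow such verification aims to detect.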
Automatic Verification of Communicative Commitments using Reduction
Although modeling and verification of Multi-Agent Systems (MASs) have long been under study, several related challenges remain to be addressed. Several frameworks have been established for modeling and verifying MASs with regard to communicative commitments, and a substantial body of research has been devoted to defining the semantics of these systems. However, formal verification of these systems remains an unresolved research problem. Within this context, this paper presents CTLcom, which reforms CTLC, the temporal logic of commitments, to enable reasoning about commitments and their fulfillment. Moreover, the paper introduces a fully automated verification method for the logic by reducing the problem of model checking CTLcom to the problem of model checking GCTL*, a generalized version of CTL* with action formulae. By doing so, we take advantage of the CWB-NC automata-based model checker as a verification tool. Lastly, this paper presents a case study drawn from the business domain, the NetBill protocol, illustrates its implementation, and discusses the associated experimental results in order to illustrate the efficiency and effectiveness of the suggested technique. Keywords: Multi-Agent Systems, Model Checking, Communicative commitments, Reduction
Minimal Proof Search for Modal Logic K Model Checking
Most modal logics such as S5, LTL, or ATL are extensions of Modal Logic K.
While the model checking problems for LTL and to a lesser extent ATL have been
very active research areas for the past decades, the model checking problem for
the more basic Multi-agent Modal Logic K (MMLK) has important applications as a
formal framework for perfect information multi-player games on its own.
We present Minimal Proof Search (MPS), an effort number based algorithm
solving the model checking problem for MMLK. We prove two important properties
for MPS beyond its correctness. The (dis)proof exhibited by MPS is of minimal
cost for a general definition of cost, and MPS is an optimal algorithm for
finding (dis)proofs of minimal cost. Optimality means that any comparable
algorithm either needs to explore a bigger or equal state space than MPS, or is
not guaranteed to find a (dis)proof of minimal cost on every input.
As such, our work relates to A* and AO* in heuristic search, to Proof Number
Search and DFPN+ in two-player games, and to counterexample minimization in
software model checking.
Comment: Extended version of the JELIA 2012 paper with the same title.
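The underlying problem MPS solves, model checking multi-agent modal logic K, admits a very small recursive formulation. The following is an illustrative sketch of that baseline problem, not of MPS itself (the model encoding and formula grammar are my own choices, not the paper's):

```python
# Plain recursive model checking for multi-agent modal logic K (MMLK).
# A model has one accessibility relation per agent; under the game
# reading, ('box', a, f) means "after every move of agent a, f holds"
# and ('dia', a, f) means "agent a has some move after which f holds".

def check(model, world, formula):
    """Evaluate an MMLK formula at `world`.

    model: {'val': {world: set_of_atoms},
            'acc': {agent: {world: [successor_worlds]}}}
    formula: atom | ('not', f) | ('and', f, g)
             | ('box', agent, f) | ('dia', agent, f)
    """
    if isinstance(formula, str):                      # atomic proposition
        return formula in model['val'][world]
    op = formula[0]
    if op == 'not':
        return not check(model, world, formula[1])
    if op == 'and':
        return check(model, world, formula[1]) and check(model, world, formula[2])
    _, agent, f = formula
    successors = model['acc'][agent].get(world, [])
    if op == 'box':                                   # all successors
        return all(check(model, w, f) for w in successors)
    if op == 'dia':                                   # some successor
        return any(check(model, w, f) for w in successors)
    raise ValueError(f"unknown operator: {op}")

# Toy two-agent model: agent a moves from s0 to s1 or s2; only s1 wins.
model = {
    'val': {'s0': set(), 's1': {'win'}, 's2': set()},
    'acc': {'a': {'s0': ['s1', 's2']}, 'b': {'s2': ['s1']}},
}
print(check(model, 's0', ('dia', 'a', 'win')))   # True: a can reach s1
print(check(model, 's0', ('box', 'a', 'win')))   # False: s2 lacks 'win'
```

MPS improves on this naive recursion by ordering the proof search with effort numbers so that a minimal-cost (dis)proof is found while exploring as little of the state space as possible.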
Model checking and compositional reasoning for multi-agent systems
Multi-agent systems are distributed systems containing interacting autonomous agents designed to achieve shared and private goals. For safety-critical systems where we wish to replace a human role with an autonomous entity, we need to make assurances about the correctness of the autonomous delegate. Specialised techniques have been proposed recently for the verification of agents against mentalistic logics. Problematically, these approaches treat the system in a monolithic way. When verifying a property against a single agent, the approaches examine all behaviours of every component in the system. This is both inefficient and can lead to intractability: the so-called state-space explosion problem. In this thesis, we consider techniques to support the verification of agents in isolation. We avoid the state-space explosion problem by verifying an individual agent in the context of a specification of the rest of the system, rather than the system itself. We show that it is possible to verify an agent against its desired properties without needing to consider the behaviours of the remaining components. We first introduce a novel approach for verifying a system as a whole against specifications expressed in a logic of time and knowledge. The technique, based on automata over trees, supports an efficient procedure to verify systems in an automata-theoretic way using language containment. We show how the automata-theoretic approach can be used as an underpinning for assume-guarantee reasoning for multi-agent systems. We use a temporal logic of actions to specify the expected behaviour of the other components in the system. When performing modular verification, this specification is used to exclude behaviours that are inconsistent with the concrete system. 
We implement both approaches within the open-source model checker MCMAS and show that, for the relevant properties, the assume-guarantee approach can significantly increase the tractability of individual agent verification.
Model Checking Trust-based Multi-Agent Systems
Trust has been the focus of many research projects, both theoretical and practical, in recent years, particularly in domains where open multi-agent technologies are applied (e.g., Internet-based markets, information retrieval, etc.). The importance of trust in such domains arises mainly because it provides a social control that regulates the relationships and interactions among agents. Despite the growing number of multi-agent applications, they still encounter many challenges in their formal modeling and in the verification of agents' behaviors. Many formalisms and approaches that facilitate the specification of trust in Multi-Agent Systems (MASs) can be found in the literature. However, most of these approaches focus on the cognitive side of trust, where the trusting entity is normally capable of exhibiting properties about beliefs, desires, and intentions. Hence, trust is considered a belief of an agent (the truster) about the ability and willingness of the trustee to perform some actions for the truster. Nevertheless, in open MASs, entities can join and leave the interactions at any time. This means that MASs provide no guarantee about the behavior of their agents, which makes the ability to reason about trust and to check for untrusted computations highly desirable.
This thesis aims to address the problem of modeling and verifying trust in MASs at design time by (1) considering a cognitive-independent view of trust, where trust ingredients are seen from a non-epistemic angle; (2) introducing a logical language named Trust Computation Tree Logic (TCTL), which extends CTL with preconditional, conditional, and graded trust operators, along with a set of reasoning postulates to explore its capabilities; (3) proposing a new accessibility relation needed to define the semantics of the trust modal operators, constructed so that it captures the intuition of trust while being easily computable; (4) investigating the most intuitive and efficient algorithm for computing the trust set by developing, implementing, and experimenting with different model checking techniques in order to compare them in terms of memory consumption, efficiency, and scalability with regard to the number of considered agents; and (5) evaluating the performance of the model checking techniques by analyzing their time and space complexity.
The approach has been applied to different application domains to evaluate its computational performance and scalability. The obtained results reveal the effectiveness of the proposed approach, making it a promising methodology in practice.
An integration framework for managing rich organisational process knowledge
The problem we have addressed in this dissertation is that of designing a pragmatic
framework for integrating the synthesis and management of organisational process
knowledge which is based on domain-independent AI planning and plan representations. Our solution has focused on a set of framework components which provide
methods, tools and representations to accomplish this task. In the framework we address a lifecycle of this knowledge which begins with a
methodological approach to acquiring information about the process domain. We show
that this initial domain specification can be translated into a common constraint-based
model of activity (based on the work of Tate, 1996c and 1996d) which can then be
operationalised for use in an AI planner. This model of activity is ontologically underpinned and may be expressed with a flexible and extensible language based on a
sorted first-order logic. The model combines perspectives covering both the space of
behaviour as well as the space of decisions. Synthesised or modified processes/plans can
be translated to and from the common representation in order to support knowledge
sharing, visualisation and mixed-initiative interaction. This work united past and present Edinburgh research on planning and infused it
with perspectives from design rationale, requirements engineering, and process knowledge sharing. The implementation has been applied to a portfolio of scenarios which
include process examples from business, manufacturing, construction and military operations. An archive of this work is available at: http://www.aiai.ed.ac.uk/~oplan/cpf
Model checking multi-agent systems
A multi-agent system (MAS) is usually understood as a system composed of interacting
autonomous agents. In this sense, MAS have been employed successfully as a modelling
paradigm in a number of scenarios, especially in Computer Science. However, the process
of modelling complex and heterogeneous systems is intrinsically prone to errors: for this
reason, computer scientists are typically concerned with the issue of verifying that a system
actually behaves as it is supposed to, especially when a system is complex.
Techniques have been developed to perform this task: testing is the most common technique,
but in many circumstances a formal proof of correctness is needed. Techniques
for formal verification include theorem proving and model checking. Model checking
techniques, in particular, have been successfully employed in the formal verification of
distributed systems, including hardware components, communication protocols, and security
protocols.
In contrast to traditional distributed systems, formal verification techniques for MAS are
still in their infancy, due to the more complex nature of agents, their autonomy, and
the richer language used in the specification of properties. This thesis aims at making
a contribution in the formal verification of properties of MAS via model checking. In
particular, the following points are addressed:
• Theoretical results about model checking methodologies for MAS, obtained by extending traditional methodologies based on Ordered Binary Decision Diagrams (OBDDs) for temporal logics to multi-modal logics for time, knowledge, correct behaviour, and strategies of agents, together with complexity results for model checking these logics (and their symbolic representations).
• Development of a software tool (MCMAS) that permits the specification and verification
of MAS described in the formalism of interpreted systems.
• Examples of application of MCMAS to various MAS scenarios (communication, anonymity, games, hardware diagnosability), including experimental results and a comparison with other available tools.
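The OBDD-based methodologies mentioned above compute temporal operators as fixpoints over sets of states. As an illustrative sketch only (not MCMAS's implementation; explicit frozensets stand in here for the symbolic BDD representation), the characterisation EF p = μZ . p ∨ EX Z can be computed as follows:

```python
# Least-fixpoint computation of EF p ("some path eventually reaches a
# p-state"), the pattern symbolic model checkers evaluate over BDDs.

def pre_exists(transitions, targets):
    """States with at least one successor in `targets` (the EX operator)."""
    return frozenset(s for s, t in transitions if t in targets)

def ef(transitions, p_states):
    """States from which some path reaches a p-state (EF p)."""
    reach = frozenset(p_states)
    while True:
        new = reach | pre_exists(transitions, reach)
        if new == reach:          # fixpoint: no new states were added
            return reach
        reach = new

# Chain 0 -> 1 -> 2 plus a sink 3; only state 2 satisfies p.
transitions = [(0, 1), (1, 2), (3, 3)]
print(sorted(ef(transitions, {2})))   # [0, 1, 2]; the sink 3 cannot reach p
```

In a real symbolic checker the set operations above are performed directly on OBDDs, which is what makes the approach scale to state spaces far too large to enumerate explicitly.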
Modeling and Verifying Probabilistic Social Commitments in Multi-Agent Systems
Interaction among autonomous agents in Multi-Agent Systems (MASs) is the key aspect for solving complex problems that an individual agent cannot handle alone. In this context, social approaches, as opposed to mental approaches, have recently received considerable attention in the area of agent communication. They exploit observable social
commitments to develop a verifiable formal semantics by which communication protocols can be specified. However, existing approaches for defining social commitments tend to
assume an absolute guarantee of correctness, so that systems run in a certain manner. That is, social commitments have always been modeled under the assumption of certainty. Moreover, the widespread use of MASs increases the interest in exploring the interactions between different aspects of the participating agents, such as the interaction between agents' knowledge and social commitments in the presence of uncertainty. This leaves a gap in the agent communication literature on modeling and verifying social commitments in probabilistic settings.
In this thesis, we aim to address the above-mentioned problems by presenting a practical formal framework that is capable of handling the problem of uncertainty in social
commitments. First, we develop an approach for representing, reasoning about, and verifying
probabilistic social commitments in MASs. This includes defining a new logic called the probabilistic logic of commitments (PCTLC), and a reduction-based model checking
procedure for verifying the proposed logic. In the reduction technique, the problem of model checking PCTLC is transformed into the problem of model checking PCTL so that
the use of the PRISM (Probabilistic Symbolic Model Checker) is made possible. Formulae of PCTLC are interpreted over an extended version of the probabilistic interpreted systems
formalism. Second, we extend the work we proposed for probabilistic social commitments to be able to capture and verify the interactions between knowledge and commitments.
Properties representing the interactions between the two aspects are expressed in a new developed logic called the probabilistic logic of knowledge and commitment (PCTLkc).
Third, we develop an adequate semantics for the group social commitments, for the first time in the literature, and integrate it into the framework. We then introduce an improved version of PCTLkc and extend it with operators for the group knowledge and group social commitments. The new refined logic is called PCTLkc+. In each of the latter stages, we respectively develop a new version of the probabilistic interpreted systems over which the
presented logic is interpreted, and introduce a new reduction-based verification technique to verify the proposed logic. To evaluate our work, we implement the proposed verification techniques on top of the PRISM model checker and apply them to several case studies. The results demonstrate the usefulness and effectiveness of the proposed approach.
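After a reduction of the kind described above, the work left to a probabilistic model checker such as PRISM is computing reachability probabilities over a discrete-time Markov chain. As an illustrative sketch only (not the thesis's reduction; the commitment scenario and all names are hypothetical), the core computation can be approximated by value iteration:

```python
# Value iteration for P(F target) on a discrete-time Markov chain:
# the probability, from each state, of eventually reaching a target state.

def prob_reach(trans, targets, states, iters=1000):
    """Approximate the probability of reaching `targets` from each state.

    trans: {state: {successor: probability}} with rows summing to 1
    """
    x = {s: 1.0 if s in targets else 0.0 for s in states}
    for _ in range(iters):
        x = {s: 1.0 if s in targets
             else sum(p * x[t] for t, p in trans[s].items())
             for s in states}
    return x

# Toy commitment scenario: from 's' the commitment is fulfilled ('ok')
# with probability 0.9 per step, otherwise the run falls into a failure
# sink 'f'.
trans = {'s': {'ok': 0.9, 'f': 0.1},
         'ok': {'ok': 1.0},
         'f': {'f': 1.0}}
probs = prob_reach(trans, {'ok'}, ['s', 'ok', 'f'])
print(round(probs['s'], 3))   # 0.9
```

A probabilistic commitment property then amounts to comparing such a computed probability against the bound stated in the formula (e.g., "the commitment is fulfilled with probability at least 0.9").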
Fourth Conference on Artificial Intelligence for Space Applications
Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: space applications of expert systems in fault diagnostics, in telemetry monitoring and data collection, in design and systems integration, and in planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.
Reducing model checking commitments for agent communication to model checking ARCTL and GCTL*
Social commitments have been extensively and effectively used to represent and model business contracts among autonomous agents with competing objectives in a variety of areas (e.g., modeling business processes and commitment-based protocols). However, the formal verification of social commitments and their fulfillment is still an active research topic. This paper presents CTLC+, which modifies CTLC, a temporal logic of commitments for agent communication that extends computation tree logic (CTL) to allow reasoning about communicating commitments and their fulfillment. The verification technique is based on reducing the problem of model checking CTLC+ into the problem of model checking ARCTL (the combination of CTL with action formulae) and the problem of model checking GCTL* (a generalized version of CTL* with action formulae), in order to use, respectively, the extended NuSMV symbolic model checker and the CWB-NC automata-based model checker as benchmarks. We also prove that the reduction techniques are sound and that the complexity of model checking CTLC+ for concurrent programs, with respect to the size of the components of these programs and the length of the formula, is PSPACE-complete. This matches the complexity of model checking CTL for concurrent programs, as shown by Kupferman et al. We finally provide two case studies taken from the business domain, along with their respective implementations and experimental results, to illustrate the effectiveness and efficiency of the proposed technique. The first is the NetBill protocol and the second considers the Contract Net protocol.