Securing open multi-agent systems governed by electronic institutions

Abstract

One way to build large-scale autonomous systems is to develop an open multi-agent system using peer-to-peer architectures, in which agents are not pre-engineered to work together and in which the agents themselves determine the social norms that govern collective behaviour. The social norms and agent interaction models can be described by electronic institutions such as those expressed in the Lightweight Coordination Calculus (LCC), a compact executable specification language based on logic programming and pi-calculus. Open multi-agent systems have grown in popularity in the multi-agent community and are expected to find many applications in the near future as large-scale distributed systems become more widespread, e.g. in emergency response, electronic commerce and cloud computing. A major practical limitation of such systems is security, because their very openness opens the door to adversaries seeking to exploit existing vulnerabilities. This thesis addresses the security of open multi-agent systems governed by electronic institutions.

First, the main forms of attack on open multi-agent systems are introduced and classified in a proposed attack taxonomy. Various security techniques from the literature are then surveyed and analysed, categorised as either prevention or detection approaches, and appropriate countermeasures for each class of attack are suggested. A fundamental limitation of conventional security mechanisms (e.g. access control and encryption) is their inability to prevent information from being propagated. Focusing on information leakage in choreography systems that use LCC, two frameworks for detecting insecure information flows are then suggested: conceptual modelling of interaction models and language-based information flow analysis. A novel security-typed LCC language is proposed to support the latter approach. Both static (design-time) and dynamic (run-time) security type checking are employed to guarantee that no information leakage can occur in annotated LCC interaction models. The proposed security type system is then formally evaluated by proving its properties. A limitation of both the conceptual modelling and language-based frameworks is the difficulty of formalising realistic policies using annotations.

Finally, the proposed security-typed LCC is applied to a cloud computing configuration case study in which virtual machine migration is managed. The secrecy of the LCC interaction models for virtual machine management is analysed and information leaks are discussed.
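As a rough illustration of the kind of specification involved, the sketch below gives a minimal LCC-style clause for a hypothetical buyer role, with comments marking where confidentiality levels might be attached. The role names and the annotation style are illustrative assumptions and do not reproduce the thesis's actual security-typed LCC notation.

    % Minimal LCC-style sketch (illustrative only; the concrete security-typed
    % annotation syntax developed in the thesis may differ).
    a(buyer(Item), B) ::
        ask(Item) => a(seller, S) then          % outgoing message: Item treated as public (low)
        price(Item, P) <= a(seller, S) then     % incoming message: P treated as confidential (high)
        buy(Item, P) => a(seller, S)            % outgoing message guarded by a local constraint
            <- affordable(Item, P).

Informally, a static check over clauses annotated in this spirit would reject any clause in which a value marked confidential could flow into a message observable by a role not cleared to receive it.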
