Runtime Verification of Deontic and Trust Models in Multiagent Interactions
In distributed open systems, such as multiagent systems, new interactions are constantly
appearing and new agents are continuously joining or leaving. It is unrealistic
to expect agents to automatically trust new interactions. It is also unrealistic to expect
agents to refer to their users for help every time a new interaction is encountered.
An agent should decide for itself whether a specific interaction with a given group of
agents is suitable or not. This thesis presents a runtime verification mechanism for
addressing this problem.
Verifying multiagent systems has its challenges. It is hard to predict the reliability of interactions in systems that are heavily influenced by autonomous agents without access to the agent specifications. Available verification mechanisms may
roughly be divided into two categories: (1) those that verify interaction models independently
of specific agents, and (2) those that verify agent models whose constraints
shape the interactions. Interaction models are not sufficient when verifying dynamic
properties that depend on the agents engaged in an interaction. On the other hand, verifying
agent specifications, such as BDI models, is extremely inefficient. Specifications
are usually not explicit enough, resulting in the verification of a massive number of permissible
interactions. Furthermore, in open systems, an agent’s internal specification
is usually not accessible for many reasons, including security and privacy.
This thesis proposes a model checker that verifies a combination of a global interaction
model and local deontic models. The deontic model may be viewed as a list of
agent constraints that are deemed necessary to share and verify, such as the inability
of the buyer to pay by credit card. The result is a lightweight, efficient, and powerful
model checker that is capable of verifying rich properties of multiagent systems
without the need for accessing agents’ internal specifications.
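The combination of a global interaction model with shared local constraints can be illustrated with a minimal sketch. All names and data structures below are hypothetical illustrations, not the thesis's actual formalism; deontic models are reduced here to per-role prohibition sets, checked against the steps of an interaction:

```python
# Illustrative sketch: checking shared deontic constraints against a global
# interaction model. Structures are hypothetical, not the thesis's formalism.

# Global interaction model: an ordered list of (agent_role, action) steps.
interaction = [
    ("buyer", "request_quote"),
    ("seller", "send_quote"),
    ("buyer", "pay_by_credit_card"),
]

# Local deontic models: the constraints each agent deems necessary to share,
# e.g. a buyer unable to pay by credit card.
deontic_models = {
    "buyer": {"prohibited": {"pay_by_credit_card"}},
    "seller": {"prohibited": set()},
}

def violations(interaction, deontic_models):
    """Return the interaction steps that violate a shared prohibition."""
    return [
        (role, action)
        for role, action in interaction
        if action in deontic_models.get(role, {}).get("prohibited", set())
    ]

print(violations(interaction, deontic_models))
```

An empty result would indicate that no shared constraint rules out the interaction; here the credit-card step is flagged, so the buyer would judge this interaction unsuitable without anyone exposing a full internal agent specification.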
Although the proposed model checker has potential for addressing a variety of
problems, the trust domain receives special attention due to the criticality of the trust
issue in distributed open systems and the lack of reliable trust solutions. The thesis
illustrates how a dynamic model checker, using deontic/trust models, can help agents
decide whether the scenarios they wish to join are trustworthy or not.
In summary, the main contribution of this research is the introduction of interaction-time
verification for checking deontic and trust models of multiagent interactions. When faced
with new, unexplored interactions, agents can verify whether joining a given interaction
with a given set of collaborating agents would violate any of their constraints.
Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"
According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient.
The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself.
Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
• The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
• The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
• The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
• The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems.
This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. A year after the start of the projects, the goal of the workshop is to assess the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, and to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative.
We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for the valuable collaboration.
A Formal Framework for Modeling Trust and Reputation in Collective Adaptive Systems
Trust and reputation models for distributed, collaborative systems have been
studied and applied in several domains, in order to stimulate cooperation while
preventing selfish and malicious behaviors. Nonetheless, such models have
received less attention when formally specifying and analyzing the
functionalities of such systems. The objective of this paper is
to define a process algebraic framework for the modeling of systems that use
(i) trust and reputation to govern the interactions among nodes, and (ii)
communication models characterized by a high level of adaptiveness and
flexibility. Hence, we propose a formalism for verifying, through model
checking techniques, the robustness of these systems with respect to the
typical attacks conducted against webs of trust.
Comment: In Proceedings FORECAST 2016, arXiv:1607.0200
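The paper's process-algebraic framework is beyond a short excerpt, but the underlying idea of trust and reputation governing interactions can be sketched informally. The update rule and threshold below are illustrative assumptions, not the paper's formalism:

```python
# Illustrative sketch (not the paper's process algebra): a node maintains a
# reputation score for a peer from observed interaction outcomes in [0, 1],
# and interacts only with peers above a trust threshold.

def update_reputation(rep, outcome, weight=0.2):
    """Exponential moving average of interaction outcomes."""
    return (1 - weight) * rep + weight * outcome

def trusts(rep, threshold=0.5):
    """Decide whether the peer's reputation clears the trust threshold."""
    return rep >= threshold

rep = 0.5  # neutral prior
for outcome in (1.0, 1.0, 0.0, 1.0):  # observed cooperation / one defection
    rep = update_reputation(rep, outcome)

print(round(rep, 3), trusts(rep))
```

A model checker in the spirit of the paper would explore all reachable configurations of such update rules under adversarial outcomes, e.g. to verify that a coalition of malicious nodes cannot drive an honest peer's reputation below the threshold.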
A Vision of Collaborative Verification-Driven Engineering of Hybrid Systems
Hybrid systems with both discrete and continuous dynamics are an important model for real-world physical systems. The key challenge is how to ensure their correct functioning w.r.t. safety requirements. Promising techniques to ensure safety seem to be model-driven engineering to develop hybrid systems in a well-defined and traceable manner, and formal verification to prove their correctness. Their combination forms the vision of verification-driven engineering. Despite the remarkable progress in automating formal verification of hybrid systems, the construction of proofs of complex systems often requires significant human guidance, since hybrid systems verification tools solve undecidable problems. It is thus not uncommon for verification teams to consist of many players with diverse expertise. This paper introduces a verification-driven engineering toolset that extends our previous work on hybrid and arithmetic verification with tools for (i) modeling hybrid systems, (ii) exchanging and comparing models and proofs, and (iii) managing verification tasks. This toolset makes it easier to tackle large-scale verification tasks.