Proceedings of Monterey Workshop 2001: Engineering Automation for Software Intensive System Integration
The 2001 Monterey Workshop on Engineering Automation for Software Intensive System Integration was sponsored by the Office of Naval Research, the Air Force Office of Scientific Research, the Army Research Office, and the Defense Advanced Research Projects Agency. It is our pleasure to thank the workshop advisory board and sponsors for their vision of a principled engineering solution for software and for their tireless effort over many years in supporting a series of workshops to bring everyone together. This workshop is the 8th in a series of international workshops. The workshop was held at the Monterey Beach Hotel, Monterey, California, during June 18-22, 2001. The general theme of the workshop series has been to present and discuss research that aims at increasing the practical impact of formal methods for software and systems engineering. The particular focus of this workshop was "Engineering Automation for Software Intensive System Integration". Previous workshops focused on topics including "Real-time & Concurrent Systems", "Software Merging and Slicing", "Software Evolution", "Software Architecture", "Requirements Targeting Software", and "Modeling Software System Structures in a fastly moving scenario". Office of Naval Research; Air Force Office of Scientific Research; Army Research Office; Defense Advanced Research Projects Agency. Approved for public release; distribution unlimited.
Proceedings of the 1994 Monterey Workshop, Increasing the Practical Impact of Formal Methods for Computer-Aided Software Development: Evolution Control for Large Software Systems Techniques for Integrating Software Development Environments
Office of Naval Research, Advanced Research Projects Agency, Air Force Office of Scientific Research, Army Research Office, Naval Postgraduate School, National Science Foundation
Runtime Verification of Deontic and Trust Models in Multiagent Interactions
In distributed open systems, such as multiagent systems, new interactions are constantly
appearing and new agents are continuously joining or leaving. It is unrealistic
to expect agents to automatically trust new interactions. It is also unrealistic to expect
agents to refer to their users for help every time a new interaction is encountered.
An agent should decide for itself whether a specific interaction with a given group of
agents is suitable or not. This thesis presents a runtime verification mechanism for
addressing this problem.
Verifying multiagent systems has its challenges. It is hard to predict the reliability
of interactions, in systems that are heavily influenced by autonomous agents, without
having access to the agent specifications. Available verification mechanisms may
roughly be divided into two categories: (1) those that verify interaction models independently
of specific agents, and (2) those that verify agent models whose constraints
shape the interactions. Interaction models are not sufficient when verifying dynamic
properties that depend on the agents engaged in an interaction. On the other hand, verifying
agent specifications, such as BDI models, is extremely inefficient. Specifications
are usually not explicit enough, resulting in the verification of a massive number of permissible
interactions. Furthermore, in open systems, an agent’s internal specification
is usually not accessible for many reasons, including security and privacy.
This thesis proposes a model checker that verifies a combination of a global interaction
model and local deontic models. The deontic model may be viewed as a list of
agent constraints that are deemed necessary to share and verify, such as the inability
of the buyer to pay by credit card. The result is a lightweight, efficient, and powerful
model checker that is capable of verifying rich properties of multiagent systems
without the need for accessing agents’ internal specifications.
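The combination described above can be illustrated with a small sketch: a global interaction model enumerating the possible actions at each step, filtered against the deontic constraints each agent chooses to share. All names and data structures here are illustrative assumptions for exposition, not the thesis's actual model checker.

```python
from itertools import product

# Hypothetical global interaction model: at each step, one agent chooses
# among a set of permitted actions (e.g., how the buyer may pay).
interaction_model = [
    ("buyer", {"pay_cash", "pay_credit_card"}),
    ("seller", {"ship_goods"}),
]

# Hypothetical local deontic models: actions each agent is prohibited
# from performing, e.g., this buyer cannot pay by credit card.
deontic_models = {
    "buyer": {"pay_credit_card"},
    "seller": set(),
}

def violating_traces(model, deontic):
    """Enumerate all interaction traces and return those that include
    an action prohibited by some agent's deontic model."""
    choice_sets = [{(agent, a) for a in acts} for agent, acts in model]
    return [
        trace
        for trace in product(*choice_sets)
        if any(action in deontic.get(agent, set()) for agent, action in trace)
    ]

def interaction_is_trustworthy(model, deontic):
    """The interaction is joinable if every step leaves the acting agent
    at least one action that its deontic model does not prohibit."""
    return all(acts - deontic.get(agent, set()) for agent, acts in model)
```

Because only the shared constraints are checked against the interaction model, the check stays lightweight: it never inspects an agent's full internal (e.g., BDI) specification.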
Although the proposed model checker has potential for addressing a variety of
problems, the trust domain receives special attention due to the criticality of the trust
issue in distributed open systems and the lack of reliable trust solutions. The thesis
illustrates how a dynamic model checker, using deontic/trust models, can help agents
decide whether the scenarios they wish to join are trustworthy or not.
In summary, the main contribution of this research is in introducing interaction-time
verification for checking deontic and trust models in multiagent interactions. When faced
with new, unexplored interactions, agents can verify whether joining a given interaction
with a given set of collaborating agents would violate any of their constraints.