2 research outputs found

    Determining the Trustworthiness of New Electronic Contracts

    Expressing contractual agreements electronically potentially allows agents to automatically perform functions surrounding contract use: establishment, fulfilment, renegotiation, etc. For such automation to be used for real business concerns, there needs to be a high level of trust in the agent-based system. While there has been much research on simulating trust between agents, there are areas where such trust is harder to establish. In particular, contract proposals may come from parties that an agent has had no prior interaction with and, in competitive business-to-business environments, little reputation information may be available. In human practice, trust in a proposed contract is determined in part from the content of the proposal itself, and the similarity of the content to that of prior contracts, executed to varying degrees of success. In this paper, we argue that such analysis is also appropriate in automated systems, and that to provide it we need systems to record salient details of prior contract use and algorithms for assessing proposals on their content. We use provenance technology to provide the former and detail algorithms for measuring contract success and similarity for the latter, applying them to an aerospace case study.
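    As a concrete illustration of this kind of content-based assessment, the Python sketch below is a hypothetical, simplified stand-in for the paper's algorithms: it assumes each contract is reduced to a set of clause identifiers, uses Jaccard overlap as the similarity measure, and estimates trust in a proposal as a similarity-weighted average of the recorded success of prior contracts. The representation, the similarity measure, and the function names are illustrative assumptions, not the paper's actual definitions.

# Hypothetical sketch: content-based trust estimate for a new contract proposal.
# Assumes each contract is represented as a set of clause identifiers and that
# each prior contract carries a success score in [0, 1] recorded from provenance data.

def jaccard_similarity(clauses_a: set, clauses_b: set) -> float:
    """Similarity of two contracts measured as the overlap of their clause sets."""
    if not clauses_a and not clauses_b:
        return 0.0
    return len(clauses_a & clauses_b) / len(clauses_a | clauses_b)

def trust_estimate(proposal: set, prior_contracts: list[tuple[set, float]]) -> float:
    """Similarity-weighted average of prior contracts' success scores."""
    weighted = [(jaccard_similarity(proposal, clauses), success)
                for clauses, success in prior_contracts]
    total = sum(w for w, _ in weighted)
    if total == 0:
        return 0.0  # no comparable prior experience to draw on
    return sum(w * s for w, s in weighted) / total

# Usage: one similar, successful prior contract and one dissimilar, unsuccessful one.
prior = [({"delivery_30d", "penalty_late", "quality_iso"}, 0.9),
         ({"exclusivity", "arbitration_nyc"}, 0.2)]
proposal = {"delivery_30d", "penalty_late", "warranty_12m"}
print(trust_estimate(proposal, prior))  # -> 0.9, driven by the similar, successful contract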

    Modeling and Verifying Probabilistic Social Commitments in Multi-Agent Systems

    Interaction among autonomous agents in Multi-Agent Systems (MASs) is the key aspect for solving complex problems that an individual agent cannot handle alone. In this context, social approaches, as opposed to mental approaches, have recently received considerable attention in the area of agent communication. They exploit observable social commitments to develop a verifiable formal semantics by which communication protocols can be specified. However, existing approaches for defining social commitments tend to assume an absolute guarantee of correctness, so that systems run in a certain manner; that is, social commitments have always been modeled under an assumption of certainty. Moreover, the widespread use of MASs increases interest in exploring the interactions between different aspects of the participating agents, such as the interaction between agents’ knowledge and social commitments in the presence of uncertainty. This leaves a gap in the agent-communication literature on modeling and verifying social commitments in probabilistic settings. In this thesis, we aim to address the above-mentioned problems by presenting a practical formal framework capable of handling uncertainty in social commitments. First, we develop an approach for representing, reasoning about, and verifying probabilistic social commitments in MASs. This includes defining a new logic called the probabilistic logic of commitments (PCTLC), along with a reduction-based model checking procedure for verifying it. In the reduction technique, the problem of model checking PCTLC is transformed into the problem of model checking PCTL, so that the PRISM (Probabilistic Symbolic Model Checker) tool can be used. Formulae of PCTLC are interpreted over an extended version of the probabilistic interpreted systems formalism. Second, we extend the work on probabilistic social commitments to capture and verify the interactions between knowledge and commitments. Properties representing the interactions between the two aspects are expressed in a newly developed logic called the probabilistic logic of knowledge and commitment (PCTLkc). Third, we develop a semantics for group social commitments, for the first time in the literature, and integrate it into the framework. We then introduce an improved version of PCTLkc and extend it with operators for group knowledge and group social commitments; the refined logic is called PCTLkc+. At each of these stages, we develop a new version of the probabilistic interpreted systems over which the presented logic is interpreted and introduce a new reduction-based verification technique for the proposed logic. To evaluate the proposed work, we implement the verification techniques on top of the PRISM model checker and apply them to several case studies. The results demonstrate the usefulness and effectiveness of our proposed work.
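    To give a flavour of the probabilistic model checking that such a reduction ultimately targets, the Python sketch below evaluates a PCTL-style reachability property (informally, "the probability of eventually reaching a state where the commitment is fulfilled is at least 0.9") on a small discrete-time Markov chain by iterating the standard reachability equations. The chain, its state names, and the threshold are invented for illustration only; this is not the thesis's PCTLC-to-PCTL reduction or its PRISM encoding.

# Hypothetical sketch: checking a PCTL-style reachability property
# P>=0.9 [ F fulfilled ] on a tiny discrete-time Markov chain.
# States and transition probabilities are invented for illustration.

# Transition probabilities: state -> {successor: probability}
dtmc = {
    "proposed":  {"committed": 0.8, "rejected": 0.2},
    "committed": {"fulfilled": 0.9, "violated": 0.1},
    "fulfilled": {"fulfilled": 1.0},   # absorbing
    "violated":  {"violated": 1.0},    # absorbing
    "rejected":  {"rejected": 1.0},    # absorbing
}

TARGET = {"fulfilled"}

def reachability_probabilities(chain, target, iterations=1000):
    """For every state, compute the probability of eventually reaching a target
    state by value iteration on the standard reachability equations."""
    prob = {s: (1.0 if s in target else 0.0) for s in chain}
    for _ in range(iterations):
        for s in chain:
            if s in target:
                continue
            prob[s] = sum(p * prob[t] for t, p in chain[s].items())
    return prob

probs = reachability_probabilities(dtmc, TARGET)
print(probs["proposed"])             # 0.8 * 0.9 ~= 0.72
print(probs["proposed"] >= 0.9)      # False: P>=0.9 [F fulfilled] fails at "proposed"
print(probs["committed"] >= 0.9)     # True: the property holds at "committed"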