3 research outputs found

    Multi-criteria decision analysis for non-conformance diagnosis: A priority-based strategy combining data and business rules

    Business process analytics and verification have become a major challenge for companies, especially when process data is stored across different systems. It is important to ensure Business Process Compliance from both the data-flow perspective and the business rules that govern the organisation. In verifying data-flow accuracy, the conformance of data to business rules is a key element, since it is essential to fulfil the policies and statements that govern corporate behaviour. Incorporating business rules into an existing, already deployed process, which therefore already relies on stored data, requires checking those rules against the data to guarantee compliance. If an inconsistency is detected, the source of the problem should be determined by discerning whether it is due to an erroneous rule or to erroneous data. To automate this, a diagnosis methodology applied after the incorporation of business rules is proposed, which combines the business rules with the data produced during the execution of the company's processes. Because the number of possible fault explanations (data and/or business rules) is high, the likelihood of each fault is used to produce an ordered list of candidates. To narrow down these possibilities, we rely on a ranking calculated by means of the AHP (Analytic Hierarchy Process), which incorporates the experience described by users and/or experts. The proposed methodology is based on the Constraint Programming paradigm and is evaluated on a real example. Ministerio de Ciencia y Tecnología RTI2018–094283-B-C3
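    As a hedged illustration of the AHP ranking step mentioned in this abstract, the Python sketch below (assuming numpy is available) derives priority weights from a hypothetical pairwise-comparison matrix over three candidate fault explanations and orders them; the actual criteria, expert judgements, and their combination with the Constraint Programming model are defined in the paper and are not reproduced here.

import numpy as np

# Hypothetical 3x3 pairwise-comparison matrix over fault candidates A, B, C
# (the concrete criteria and judgement values are illustrative assumptions).
comparisons = np.array([
    [1.0, 3.0, 5.0],   # A compared against A, B, C
    [1/3, 1.0, 2.0],   # B compared against A, B, C
    [1/5, 1/2, 1.0],   # C compared against A, B, C
])

# Principal-eigenvector approximation: normalise each column,
# then average across rows to obtain the AHP priority weights.
normalised = comparisons / comparisons.sum(axis=0)
priorities = normalised.mean(axis=1)

# Rank fault explanations from most to least likely.
ranking = sorted(zip("ABC", priorities), key=lambda p: p[1], reverse=True)
print(ranking)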

    Specification and automatic verification of trust-based multi-agent systems

    We present a new logic-based framework for modeling and automatically verifying trust in Multi-Agent Systems (MASs). We start by refining TCTL, a temporal logic of trust that extends Computation Tree Logic (CTL) to enable reasoning about trust with preconditions. A new vector-based version of interpreted systems is defined to capture the trust relationship between the interacting parties. We introduce a set of reasoning postulates, along with formal proofs, to support our logic. Moreover, we present new symbolic model checking algorithms to formally and automatically verify the system under consideration against desirable properties expressed in the proposed logic. We fully implemented the proposed algorithms in a model checker called MCMAS-T, built on top of the MCMAS model checker for MASs, together with its new input language VISPL (Vector-extended ISPL). We evaluated the tool and report experimental results on a real-life scenario from the healthcare field.
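    To make the model-checking background concrete, the Python sketch below implements the classic explicit-state fixpoint labelling for the CTL operators EX, E[· U ·], and EG on a toy transition system. It is only an assumption-level illustration of the kind of computation that TCTL model checking builds on; the trust operators, the vector-based interpreted systems, and the symbolic algorithms of MCMAS-T are not reproduced.

from typing import Dict, Set

State = str

def sat_ex(phi_states: Set[State], trans: Dict[State, Set[State]]) -> Set[State]:
    """EX phi: states with at least one successor satisfying phi."""
    return {s for s, succs in trans.items() if succs & phi_states}

def sat_eu(phi1: Set[State], phi2: Set[State],
           trans: Dict[State, Set[State]]) -> Set[State]:
    """E[phi1 U phi2]: least fixpoint, computed by backward propagation."""
    result = set(phi2)
    changed = True
    while changed:
        changed = False
        for s, succs in trans.items():
            if s in phi1 and s not in result and succs & result:
                result.add(s)
                changed = True
    return result

def sat_eg(phi: Set[State], trans: Dict[State, Set[State]]) -> Set[State]:
    """EG phi: greatest fixpoint, removing states with no successor left in the set."""
    result = set(phi)
    changed = True
    while changed:
        changed = False
        for s in list(result):
            if not (trans.get(s, set()) & result):
                result.discard(s)
                changed = True
    return result

# Toy two-state model: a server alternates between 'idle' and 'serving'.
transitions = {"idle": {"idle", "serving"}, "serving": {"idle"}}
all_states = set(transitions)
print(sat_ex({"serving"}, transitions))              # {'idle'}
print(sat_eu(all_states, {"serving"}, transitions))  # both states
print(sat_eg({"idle"}, transitions))                 # {'idle'}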

    Model Checking Trust-based Multi-Agent Systems

    Trust has been the focus of many research projects, both theoretical and practical, in recent years, particularly in domains where open multi-agent technologies are applied (e.g., internet-based markets, information retrieval, etc.). The importance of trust in such domains arises mainly because it provides a social control that regulates the relationships and interactions among agents. Despite the growing number of multi-agent applications, they still face many challenges in the formal modeling and verification of agents' behaviors. Many formalisms and approaches that facilitate the specification of trust in Multi-Agent Systems (MASs) can be found in the literature. However, most of these approaches focus on the cognitive side of trust, where the trusting entity is normally capable of exhibiting properties about beliefs, desires, and intentions. Hence, trust is considered a belief of an agent (the truster) about the ability and willingness of the trustee to perform some actions for the truster. Nevertheless, in open MASs, entities can join and leave interactions at any time, so MASs provide no guarantee about the behavior of their agents, which makes the capability of reasoning about trust and checking for the existence of untrusted computations highly desirable. This thesis addresses the problem of modeling and verifying trust in MASs at design time by (1) considering a cognitive-independent view of trust in which the ingredients of trust are seen from a non-epistemic angle; (2) introducing a logical language named Trust Computation Tree Logic (TCTL), which extends CTL with preconditional, conditional, and graded trust operators, along with a set of reasoning postulates to explore its capabilities; (3) proposing a new accessibility relation needed to define the semantics of the trust modal operators, defined so that it captures the intuition of trust while remaining easily computable; (4) investigating the most intuitive and efficient algorithm for computing the trust set by developing, implementing, and experimenting with different model checking techniques, and comparing them in terms of memory consumption, efficiency, and scalability with respect to the number of agents considered; and (5) evaluating the performance of the model checking techniques by analyzing their time and space complexity. The approach has been applied to different application domains to evaluate its computational performance and scalability. The obtained results reveal the effectiveness of the proposed approach, making it a promising methodology in practice.
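    The following is a generic, assumption-level Python sketch of how a modal trust operator could be evaluated once an accessibility relation between states is available, in the spirit of contribution (3) above; the names acc and trust_holds are illustrative, and the thesis's actual TCTL accessibility relation and semantics are not reproduced here.

from typing import Dict, Set

State = str

def trust_holds(state: State,
                acc: Dict[State, Set[State]],
                phi_states: Set[State]) -> bool:
    """Necessity-style check: every state accessible from `state` via the
    (truster, trustee) accessibility relation must satisfy phi."""
    accessible = acc.get(state, set())
    # Require at least one accessible state so trust is not vacuously true.
    return bool(accessible) and accessible <= phi_states

# Toy accessibility relation and a property phi that holds in s1 and s2.
acc = {"s0": {"s1", "s2"}, "s1": {"s2"}}
phi = {"s1", "s2"}
print(trust_holds("s0", acc, phi))  # True: all accessible states satisfy phi
print(trust_holds("s2", acc, phi))  # False: no accessible states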