2 research outputs found

    WSTO: A classification-based ontology for managing trust in semantic web services

    No full text
    The aim of this paper is to provide a general ontology that allows the specification of trust requirements in the Semantic Web Services environment. Both client and Web Service can semantically describe their trust policies in two directions: first, each can expose its own guarantees to the environment, such as security certification, execution parameters, etc.; second, each can declare its trust preferences about other communication partners by selecting (or creating) 'trust match criteria'. A reasoning module can evaluate trust promises and chosen criteria in order to select a set of Web Services that fit all trust requirements. We see the trust-based selection problem of Semantic Web Services as a classification task. The class of selected Semantic Web Services (SWSs) represents the set of all SWSs that satisfy the trust requirements exposed by both client and Web Service. We strongly believe that trust perception changes in different contexts and strictly depends on the goal that the requester would like to achieve. For this reason, our ontology emphasizes the first-class entities "goal", "Web Service" and "user", and the relations occurring among them. Our approach implies a centralized trust-based broker, i.e. an agent able to reason on trust requirements and to mediate between goal and Web Service semantic descriptions. We adopt IRS-III as our prototypical trust-based broker.
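    The abstract frames selection as classifying which services satisfy the trust requirements both sides expose. The following minimal Python sketch illustrates that idea only; the `Participant`, `trust_match`, and `select_services` names are hypothetical and not taken from WSTO or IRS-III.

    ```python
    # Hypothetical sketch of trust-based selection as a classification step:
    # a broker keeps only those services whose exposed guarantees cover the
    # client's preferences, and whose preferences the client's guarantees cover.
    from dataclasses import dataclass, field

    @dataclass
    class Participant:
        name: str
        guarantees: set = field(default_factory=set)   # what the party exposes (e.g. a certificate)
        preferences: set = field(default_factory=set)  # what it requires from a partner

    def trust_match(client: Participant, service: Participant) -> bool:
        """A pair matches when each side's preferences are covered by the other's guarantees."""
        return (client.preferences <= service.guarantees and
                service.preferences <= client.guarantees)

    def select_services(client: Participant, candidates: list) -> list:
        """Classify candidates: keep the services that satisfy both trust policies."""
        return [s for s in candidates if trust_match(client, s)]

    if __name__ == "__main__":
        client = Participant("alice", guarantees={"X509-cert"}, preferences={"encrypted-channel"})
        services = [
            Participant("ws1", guarantees={"encrypted-channel"}, preferences={"X509-cert"}),
            Participant("ws2", guarantees={"encrypted-channel"}, preferences={"reputation>0.9"}),
        ]
        print([s.name for s in select_services(client, services)])  # -> ['ws1']
    ```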

    A taxonomy of rational attacks

    No full text
    For peer-to-peer services to be effective, participating nodes must cooperate, but in most scenarios a node represents a self-interested party and cooperation can neither be expected nor enforced. A reasonable assumption is that a large fraction of p2p nodes are rational and will attempt to maximize their consumption of system resources while minimizing the use of their own. If such behavior violates system policy, it constitutes an attack. In this paper we identify rational attacks, organize them into a taxonomy, and identify corresponding solutions where they exist. The most effective solutions directly incentivize cooperative behavior; when this is not feasible, the common alternative is to incentivize evidence of cooperation instead.
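    As a rough illustration of "directly incentivizing cooperative behavior", a peer can refuse service to neighbours that consume far more than they contribute, so free-riding stops being the rational strategy. This is a generic sketch, not a mechanism from the paper; the `PeerLedger` class and its threshold are assumptions for illustration.

    ```python
    # Generic illustration: serve a neighbour only if its observed contribution
    # ratio (bytes it sent us / bytes we sent it) meets a minimum threshold.
    from collections import defaultdict

    class PeerLedger:
        def __init__(self, min_ratio: float = 0.5):
            self.uploaded_to = defaultdict(int)      # bytes we sent to each peer
            self.downloaded_from = defaultdict(int)  # bytes each peer sent to us
            self.min_ratio = min_ratio

        def record(self, peer: str, sent: int = 0, received: int = 0) -> None:
            self.uploaded_to[peer] += sent
            self.downloaded_from[peer] += received

        def should_serve(self, peer: str) -> bool:
            """New peers get the benefit of the doubt; known peers must have
            contributed enough relative to what they have consumed from us."""
            consumed = self.uploaded_to[peer]
            contributed = self.downloaded_from[peer]
            if consumed == 0:
                return True
            return contributed / consumed >= self.min_ratio
    ```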