
    A Formal Framework for Modeling Trust and Reputation in Collective Adaptive Systems

    Trust and reputation models for distributed, collaborative systems have been studied and applied in several domains in order to stimulate cooperation while preventing selfish and malicious behaviors. Nonetheless, such models have received less attention in the formal specification and analysis of the functionalities of these systems. The objective of this paper is to define a process algebraic framework for modeling systems that use (i) trust and reputation to govern the interactions among nodes, and (ii) communication models characterized by a high level of adaptiveness and flexibility. Hence, we propose a formalism for verifying, through model checking techniques, the robustness of these systems with respect to the typical attacks conducted against webs of trust. (Comment: In Proceedings FORECAST 2016, arXiv:1607.0200)
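    As a rough illustration of point (i), here is a minimal, self-contained sketch of a node that accepts interactions only from partners whose reputation exceeds a threshold. The beta-style update rule, the class and method names, and the threshold are assumptions made for this sketch; it is not the paper's process-algebraic formalism.

        # Toy reputation-governed interaction decision (illustrative only, not
        # the paper's formalism). Reputation is the expected value of a beta
        # distribution over observed good/bad outcomes per partner.

        class Node:
            def __init__(self, name, threshold=0.5):
                self.name = name
                self.threshold = threshold
                self.history = {}  # partner -> (positive, negative) outcome counts

            def reputation(self, partner):
                pos, neg = self.history.get(partner, (0, 0))
                return (pos + 1) / (pos + neg + 2)  # uniform prior

            def accepts(self, partner):
                return self.reputation(partner) >= self.threshold

            def record(self, partner, success):
                pos, neg = self.history.get(partner, (0, 0))
                self.history[partner] = (pos + 1, neg) if success else (pos, neg + 1)

        a = Node("a")
        a.record("b", success=True)
        a.record("b", success=True)
        a.record("c", success=False)
        print(a.accepts("b"), a.accepts("c"))  # True False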

    Ignore These At Your Peril: Ten principles for trust design

    Online trust has been discussed for more than 10 years, yet little practical guidance has emerged that has proven to be applicable across contexts or useful in the long run. 'Trustworthy UI design guidelines' created in the late 1990s to address the then-big question of online trust (how to get shoppers online) are now happily employed by people preparing phishing scams. In this paper we summarize, in practical terms, a conceptual framework for online trust that we established in 2005. Because of its abstract nature, it is still useful as a lens through which to view the current big questions of the online trust debate, largely focused on usable security and phishing attacks. We then deduce ten practical rules for providing effective trust support to help practitioners and researchers of usable security.

    Collusion in Peer-to-Peer Systems

    Peer-to-peer systems have reached widespread use, ranging from academic and industrial applications to home entertainment. The key advantage of this paradigm lies in its scalability and flexibility, consequences of the participants sharing their resources for the common welfare. Security in such systems is a desirable goal. For example, when mission-critical operations or bank transactions are involved, their effectiveness strongly depends on the perception that users have of the system's dependability and trustworthiness. A major threat to the security of these systems is the phenomenon of collusion. Peers can be selfish colluders, when they try to fool the system to gain unfair advantages over other peers, or malicious, when their purpose is to subvert the system or disturb other users. The problem, however, has so far received only marginal attention from the research community. While several solutions exist to counter attacks in peer-to-peer systems, very few of them are meant to directly counter colluders and their attacks. Reputation, micro-payments, and concepts of game theory are currently used as the main means to obtain fairness in the usage of the resources. Our goal is to provide an overview of the topic by examining the key issues involved. We measure the relevance of the problem in the current literature and the effectiveness of existing approaches against it, in order to suggest fruitful directions for the further development of the field.
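    To make the threat concrete, here is a minimal sketch of how a ring of colluders can inflate a member's score when reputation is computed as a plain average of peer ratings. The peer names and the averaging rule are assumptions for this sketch, not a scheme taken from the surveyed literature.

        # Naive average-of-ratings reputation, and how collusion defeats it.
        from collections import defaultdict

        ratings = defaultdict(list)  # target -> list of (rater, score in [0, 1])

        def rate(rater, target, score):
            ratings[target].append((rater, score))

        def naive_reputation(target):
            scores = [s for _, s in ratings[target]]
            return sum(scores) / len(scores) if scores else 0.0

        # Honest peers rate the colluder poorly after bad transactions...
        rate("honest1", "colluder1", 0.1)
        rate("honest2", "colluder1", 0.2)
        # ...but the colluder's allies flood the system with perfect ratings.
        for ally in ("colluder2", "colluder3", "colluder4", "colluder5"):
            rate(ally, "colluder1", 1.0)

        print(round(naive_reputation("colluder1"), 2))  # 0.72: collusion pays off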

    Trusted community : a novel multiagent organisation for open distributed systems

    [no abstract]

    Anonymous reputation based reservations in e-commerce (AMNESIC)

    Online reservation systems have grown over recent years to facilitate the purchase of goods and services. Generally, reservation systems require that customers provide some personal data to make a reservation effective. With this data, service providers can check the customer's history and decide whether the user is trustworthy enough to obtain the reservation. Although the reputation of a user is a good metric for implementing the system's access control, providing personal and sensitive data to the system presents high privacy risks, since the interests of a user become fully known and tracked by an external entity. In this paper we design an anonymous reservation protocol that uses reputation to profile users and control their access to the offered services, while at the same time preserving their privacy not only from the seller but also from the service provider.
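    The following sketch shows only the access-control step such a protocol needs: the provider decides on a reservation from a certified reputation score attached to a one-time token, never seeing the customer's identity. The token format, the threshold, and the function names are assumptions; the credential issuance and the cryptography that would make tokens unlinkable are omitted entirely.

        # Reputation-threshold check against an anonymous one-time token
        # (illustrative access-control step only; provides no real anonymity).
        import secrets

        MIN_REPUTATION = 0.6  # assumed provider policy

        def issue_token(certified_reputation):
            # Stand-in for an anonymous credential carrying a reputation claim.
            return {"token": secrets.token_hex(8), "reputation": certified_reputation}

        def reserve(service, credential):
            # The provider sees only the token and the certified score.
            if credential["reputation"] >= MIN_REPUTATION:
                return f"reservation for {service} granted to {credential['token']}"
            return "reservation denied"

        print(reserve("hotel-room", issue_token(0.8)))
        print(reserve("hotel-room", issue_token(0.3)))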

    Resilient networking in wireless sensor networks

    This report deals with security in wireless sensor networks (WSNs), especially at the network layer. Multiple secure routing protocols have been proposed in the literature; however, they often rely on cryptography to secure routing functionalities. Cryptography alone is not enough to defend against the attacks made possible by node compromise, so algorithmic solutions are also needed. In this report we focus on the behavior of routing protocols to determine which properties make them more resilient to attacks. Our aim is to answer the following questions: are there existing protocols, not initially designed for security, that already contain inherently resilient properties against attacks in which some portion of the network nodes is compromised? If so, which specific behaviors make these protocols more resilient? We give an overview of security strategies for WSNs in general, including existing attacks and defensive measures, and then analyze the behavior of four routing protocols at the network layer to determine their inherent resiliency to insider attacks. The protocols considered are Dynamic Source Routing (DSR), Gradient-Based Routing (GBR), Greedy Forwarding (GF) and Random Walk Routing (RWR).
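    As a rough illustration of why the analyzed protocols differ in resiliency, here is a minimal sketch of next-hop selection under Greedy Forwarding and Random Walk Routing. The coordinate model and the two functions are simplified assumptions, not the report's definitions.

        # GF picks the neighbor geographically closest to the sink: deterministic,
        # so one well-placed compromised node can capture most traffic.
        # RWR picks a random neighbor: less efficient, but no single insider is
        # guaranteed to sit on the path.
        import math
        import random

        def distance(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def greedy_next_hop(neighbors, sink, positions):
            return min(neighbors, key=lambda n: distance(positions[n], positions[sink]))

        def random_walk_next_hop(neighbors):
            return random.choice(neighbors)

        positions = {"s": (0, 0), "a": (1, 1), "b": (2, 0), "sink": (5, 0)}
        print(greedy_next_hop(["a", "b"], "sink", positions))  # always 'b'
        print(random_walk_next_hop(["a", "b"]))                # 'a' or 'b'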