Collusion in Peer-to-Peer Systems
Peer-to-peer systems have reached widespread use, ranging from academic and industrial applications to home entertainment. The key advantage of this paradigm lies in its scalability and flexibility, consequences of the participants sharing their resources for the common welfare. Security in such systems is a desirable goal. For example, when mission-critical operations or bank transactions are involved, their effectiveness strongly depends on the perception that users have of the system's dependability and trustworthiness. A major threat to the security of these systems is the phenomenon of collusion. Peers can be selfish colluders, when they try to fool the system to gain unfair advantages over other peers, or malicious, when their purpose is to subvert the system or disturb other users. The problem, however, has so far received only marginal attention from the research community. While several solutions exist to counter attacks in peer-to-peer systems, very few of them are meant to directly counter colluders and their attacks. Reputation, micro-payments, and concepts from game theory are currently the main means used to obtain fairness in the usage of resources. Our goal is to provide an overview of the topic by examining the key issues involved. We measure the relevance of the problem in the current literature and the effectiveness of existing approaches against it, in order to suggest fruitful directions for the further development of the field.
Prevention and trust evaluation scheme based on interpersonal relationships for large-scale peer-to-peer networks
In recent years, complex networks, as a frontier of complex systems research, have received increasing attention. Peer-to-peer (P2P) networks, with their openness, anonymity, and dynamic nature, are vulnerable and easily attacked by peers with malicious behaviors. Building trusted relationships among peers in a large-scale distributed P2P system is a fundamental and challenging research topic. Based on interpersonal relationships among peers of large-scale P2P networks, we present a prevention and trust evaluation scheme, called IRTrust. The framework incorporates a strategy of identity authentication and a global trust of peers to improve the ability to resist malicious behaviors. It uses quality of service (QoS), quality of recommendation (QoR), and a comprehensive risk factor to evaluate the trustworthiness of a peer, which is applicable to large-scale unstructured P2P networks. The proposed IRTrust can defend against several kinds of malicious attacks, such as simple malicious attacks, collusive attacks, strategic attacks, and Sybil attacks. Our simulation results show that the proposed scheme provides greater accuracy and stronger resistance compared with existing global trust schemes. The proposed scheme also has potential applications in secure P2P network coding.
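As a minimal illustration (not the paper's actual formulas), a global trust value of the kind this abstract describes could combine a peer's QoS, QoR, and risk factor into a weighted score; the linear form, the clamping, and the weights below are all assumptions:

```python
# Hedged sketch of a global trust value built from quality of service (QoS),
# quality of recommendation (QoR), and a risk factor. The weighting rule is
# an illustrative assumption, not the IRTrust formula.

def global_trust(qos: float, qor: float, risk: float,
                 w_qos: float = 0.5, w_qor: float = 0.3,
                 w_risk: float = 0.2) -> float:
    """Combine direct QoS experience, recommended QoR, and a risk penalty.

    All inputs are assumed normalized to [0, 1]; higher risk lowers trust.
    The result is clamped back into [0, 1].
    """
    score = w_qos * qos + w_qor * qor - w_risk * risk
    return max(0.0, min(1.0, score))

# A peer with good service, decent recommendations, and low risk
# scores highly; a pure-risk peer is clamped to zero trust.
trust = global_trust(qos=0.9, qor=0.8, risk=0.1)  # roughly 0.67
```

The risk term enters with a negative sign so that a peer cannot offset observed risky behavior purely by accumulating favorable recommendations.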
A Transparent, Reputation-Based Architecture for Semantic Web Annotation
New forms of conceiving the web, such as Web 2.0 and the Semantic Web, have emerged for numerous purposes, ranging from professional activities to leisure. The Semantic Web is based on associating concepts with web pages, rather than only identifying hyperlinks and repeated literals. ITACA is a project whose aim is to add semantic annotations to web pages, where semantic annotations are Wikipedia URLs. Users can write, read, and vote on the semantic annotations of a web page, and the annotations are ranked according to users' votes. Building upon the ITACA project, we propose a transparent, reputation-based architecture. With this proposal, semantic annotations are stored on the users' local machines instead of on web servers, so that web server transparency is preserved. To achieve transparency, an indexing server is added to the architecture to locate semantic annotations. Moreover, users are grouped into reputation domains, providing accurate semantic annotation ranking when retrieving the annotations of a web page. Semantic annotations are also cached in annotation servers to improve the efficiency of the algorithm, reducing the number of messages sent.
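The vote-based ranking described above can be sketched as follows; the function names and the reputation-weighted voting rule are illustrative assumptions, not the ITACA implementation:

```python
# Hedged sketch: rank the Wikipedia-URL annotations of a web page by user
# votes, weighting each vote by the voter's reputation (e.g. within a
# reputation domain). The weighting rule is an assumption for illustration.
from collections import defaultdict

def rank_annotations(votes, reputations):
    """votes: list of (annotation_url, voter_id) pairs.
    reputations: voter_id -> reputation weight; unknown voters count as 1.0.
    Returns annotation URLs sorted by total weighted votes, best first."""
    scores = defaultdict(float)
    for url, voter in votes:
        scores[url] += reputations.get(voter, 1.0)
    return sorted(scores, key=scores.get, reverse=True)

votes = [("https://en.wikipedia.org/wiki/Semantic_Web", "alice"),
         ("https://en.wikipedia.org/wiki/Web_2.0", "bob"),
         ("https://en.wikipedia.org/wiki/Semantic_Web", "carol")]
reps = {"alice": 0.9, "bob": 0.5, "carol": 0.8}
ranking = rank_annotations(votes, reps)
# The Semantic_Web annotation outranks Web_2.0: two weighted votes vs one.
```

Grouping voters into reputation domains would amount to maintaining one such `reputations` map per domain and ranking against the map of the domain the querying user belongs to.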
Towards Secure Blockchain-enabled Internet of Vehicles: Optimizing Consensus Management Using Reputation and Contract Theory
In the Internet of Vehicles (IoV), data sharing among vehicles is essential to improve driving safety and enhance vehicular services. To ensure data sharing security and traceability, a high-efficiency Delegated Proof-of-Stake (DPoS) consensus scheme is utilized as a hard security solution to establish the blockchain-enabled IoV (BIoV). However, as miners are selected from miner candidates by stake-based voting, it is difficult to defend against voting collusion between the candidates and compromised high-stake vehicles, which introduces serious security challenges to the BIoV. To address these challenges, we propose a soft security enhancement solution consisting of two stages: (i) miner selection and (ii) block verification. In the first stage, a reputation-based voting scheme for the blockchain is proposed to ensure secure miner selection. This scheme evaluates candidates' reputation using both historical interactions and recommended opinions from other vehicles. The candidates with high reputation are selected to be active miners and standby miners. In the second stage, to prevent internal collusion among the active miners, a newly generated block is further verified and audited by the standby miners. To incentivize the standby miners to participate in block verification, we formulate the interactions between the active miners and the standby miners using contract theory, which takes block verification security and delay into consideration. Numerical results based on a real-world dataset indicate that our schemes are secure and efficient for data sharing in the BIoV.
Comment: 12 pages, submitted for possible journal publication
A Formal Framework for Modeling Trust and Reputation in Collective Adaptive Systems
Trust and reputation models for distributed, collaborative systems have been studied and applied in several domains in order to stimulate cooperation while preventing selfish and malicious behaviors. Nonetheless, such models have received less attention in the process of formally specifying and analyzing the functionalities of the systems mentioned above. The objective of this paper is to define a process algebraic framework for modeling systems that use (i) trust and reputation to govern the interactions among nodes, and (ii) communication models characterized by a high level of adaptiveness and flexibility. Hence, we propose a formalism for verifying, through model checking techniques, the robustness of these systems with respect to the typical attacks conducted against webs of trust.
Comment: In Proceedings FORECAST 2016, arXiv:1607.0200