213 research outputs found

    Enhancing trustability in MMOGs environments

    Get PDF
    Massively Multiplayer Online Games (MMOGs; e.g., World of Warcraft), virtual worlds (VWs; e.g., Second Life), and social networks (e.g., Facebook) strongly demand autonomic security and trust mechanisms, in a way similar to what humans rely on in real life. As is well known, this is a difficult matter, because trust in humans and organizations depends on the perception and experience of each individual, which is difficult to quantify or measure. In fact, these societal environments lack trust mechanisms similar to those involved in human-to-human interactions. Besides, interactions mediated by computing devices are constantly evolving, requiring trust mechanisms that keep pace with these developments and assess risk situations. In VWs/MMOGs, it is widely recognized that users develop trust relationships from their in-world interactions with others. However, these trust relationships end up not being represented in the data structures (or databases) of such virtual worlds, though they sometimes appear associated with reputation and recommendation systems. In addition, as far as we know, the user is not provided with a personal trust tool to sustain his/her decision making while interacting with other users in the virtual or game world. In order to solve this problem, as well as those mentioned above, we propose herein a formal representation of these personal trust relationships, which are based on avatar-avatar interactions. The leading idea is to provide each avatar-impersonated player with a personal trust tool that follows a distributed trust model, i.e., the trust data is distributed over the societal network of a given VW/MMOG. Representing, manipulating, and inferring trust from the user/player point of view is certainly a grand challenge. When someone meets an unknown individual, the question is "Can I trust him/her or not?". Clearly, this requires the user to have access to a representation of trust about others, but, unless we are using an open-source VW/MMOG, it is difficult, not to say unfeasible, to get access to such data. Even in an open-source system, a number of users may refuse to share information about their friends, acquaintances, or others. Putting together their own data and data gathered from others, the avatar-impersonated player should be able to arrive at a trust assessment of the current trustee. As the trust assessment method used in this thesis, we use subjective logic to represent trust, together with subjective logic operators and graph search algorithms to infer trust about the trustee. The proposed trust inference system has been validated using a number of OpenSimulator (opensimulator.org) scenarios, which showed an increase in accuracy when evaluating the trustability of avatars. Summing up, our proposal aims to introduce a trust theory for virtual worlds, with its trust assessment metrics (e.g., subjective logic) and trust discovery methods (e.g., graph search methods), on an individual basis, rather than relying on the usual centralized reputation systems. In particular, and unlike other trust discovery methods, our methods run at interactive rates.
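    The abstract does not reproduce the operators themselves; as a rough, hypothetical illustration of how subjective-logic trust inference along a chain of acquaintances can be computed, the sketch below applies the standard discounting and cumulative-fusion operators of subjective logic to made-up opinions (all names and numeric values are invented for the example).

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective-logic opinion (belief, disbelief, uncertainty, base rate)."""
    b: float
    d: float
    u: float
    a: float = 0.5

    def expectation(self) -> float:
        # Probability expectation of the opinion.
        return self.b + self.a * self.u

def discount(ab: Opinion, bc: Opinion) -> Opinion:
    """A's derived opinion about C, given A's opinion about advisor B (ab)
    and B's opinion about C (bc): the subjective-logic discounting operator."""
    return Opinion(b=ab.b * bc.b,
                   d=ab.b * bc.d,
                   u=ab.d + ab.u + ab.b * bc.u,
                   a=bc.a)

def fuse(o1: Opinion, o2: Opinion) -> Opinion:
    """Cumulative fusion (consensus) of two independent opinions about the same target."""
    k = o1.u + o2.u - o1.u * o2.u
    if k == 0:  # both opinions are dogmatic (u == 0); average them
        return Opinion((o1.b + o2.b) / 2, (o1.d + o2.d) / 2, 0.0, o1.a)
    return Opinion(b=(o1.b * o2.u + o2.b * o1.u) / k,
                   d=(o1.d * o2.u + o2.d * o1.u) / k,
                   u=(o1.u * o2.u) / k,
                   a=o1.a)

# Hypothetical example: avatar A asks advisors B and C about trustee T.
a_b = Opinion(0.8, 0.1, 0.1)   # A's trust in advisor B
b_t = Opinion(0.7, 0.2, 0.1)   # B's opinion about T
a_c = Opinion(0.6, 0.2, 0.2)   # A's trust in advisor C
c_t = Opinion(0.4, 0.4, 0.2)   # C's opinion about T

a_t = fuse(discount(a_b, b_t), discount(a_c, c_t))
print(f"A's derived trust in T: expectation = {a_t.expectation():.2f}")
```

    In a distributed trust model like the one proposed, each term of such a computation would be gathered from the peers found along trust paths by a graph search, rather than from a central reputation store.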

    Scalable and Distributed Resource Management Protocols for Cloud and Big Data Clusters

    Get PDF
    Cloud data centers require an operating system to manage resources and satisfy operational requirements and management objectives. The growing popularity of cloud services has given rise to a new spectrum of services with sophisticated workload and resource management requirements. Also, data centers are growing through the addition of various types of hardware to accommodate the ever-increasing requests of users. Nowadays, a large percentage of cloud resources execute data-intensive applications with continuously fluctuating workloads and specific resource management needs. To this end, cluster computing frameworks are shifting towards distributed resource management for better scalability and faster decision making. Such systems benefit from the parallelization of control and are resilient to failures. Throughout this thesis we investigate algorithms, protocols, and techniques to address these challenges in large-scale data centers. We introduce a distributed resource management framework which consolidates virtual machines onto as few servers as possible to reduce the energy consumption of the data center and hence decrease the cost for cloud providers. This framework can characterize the workload of virtual machines and hence efficiently handle the trade-off between energy consumption and the Service Level Agreements (SLAs) of customers. The algorithm is highly scalable, requires low maintenance cost under dynamic workloads, and tries to minimize virtual machine migration costs. We also introduce a scalable and distributed probe-based scheduling algorithm for big data analytics frameworks. This algorithm efficiently addresses the problem of job heterogeneity in workloads, which has appeared as the level of parallelism in jobs has increased. The algorithm is massively scalable and can significantly reduce average job completion times in comparison with the state of the art. Finally, we propose a probabilistic fault-tolerance technique as part of the scheduling algorithm.
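    The abstract does not detail the consolidation algorithm; as a generic, hypothetical illustration of the kind of placement decision such a framework makes, the sketch below packs virtual machines onto as few servers as possible with a first-fit-decreasing heuristic (the server capacity and per-VM demands are made-up toy values).

```python
def consolidate(vm_demands, server_capacity):
    """Greedy first-fit-decreasing placement: pack VMs (by CPU demand)
    onto as few servers as possible so idle servers can be powered down."""
    servers = []  # each entry: [remaining_capacity, [vm_ids...]]
    for vm_id, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for server in servers:
            if server[0] >= demand:          # first server with enough room
                server[0] -= demand
                server[1].append(vm_id)
                break
        else:                                 # no fit: power on a new server
            servers.append([server_capacity - demand, [vm_id]])
    return [placed for _, placed in servers]

# Hypothetical workload: CPU demand per VM, servers with 16 cores each.
demands = {"vm1": 8, "vm2": 6, "vm3": 5, "vm4": 4, "vm5": 3, "vm6": 2}
for i, vms in enumerate(consolidate(demands, server_capacity=16)):
    print(f"server {i}: {vms}")
```

    A production framework would additionally weigh SLA risk and migration cost before moving a running VM, which this sketch deliberately omits.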

    A Survey on Consensus Mechanisms and Mining Strategy Management in Blockchain Networks

    Full text link
    © 2013 IEEE. The past decade has witnessed the rapid evolution of blockchain technologies, which have attracted tremendous interest from both the research community and industry. The blockchain network originated in the Internet financial sector as a decentralized, immutable ledger system for ordering transactional data. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and data-driven self-organization in flat, open-access networks. In particular, the appealing characteristics of decentralization, immutability, and self-organization are primarily owed to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this paper, we provide a systematic vision of the organization of blockchain networks. By emphasizing the unique characteristics of decentralized consensus in blockchain networks, our in-depth review of the state-of-the-art consensus protocols focuses on both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review of the strategies adopted for self-organization by the individual nodes in blockchain backbone networks. We then provide a comprehensive survey of the emerging applications of blockchain networks in a broad range of telecommunication areas, highlighting our special interest in how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the protocol design for blockchain consensus and the related potential research directions.
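    The survey covers many consensus families; as a minimal, self-contained illustration of one mechanism in the class it reviews, the sketch below implements the core loop of a Nakamoto-style proof-of-work puzzle (the difficulty and block contents are arbitrary toy values, not taken from the paper).

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose double-SHA256 hash of (header || nonce)
    falls below the target implied by difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = block_header + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Toy example: very low difficulty so the search finishes almost instantly.
header = b"prev_hash|merkle_root|timestamp"
print("found nonce:", mine(header, difficulty_bits=16))
```

    The incentive-design perspective the survey emphasizes concerns why rational nodes keep running such a loop honestly, which no code snippet by itself can capture.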

    The Blockchain Imitation Game

    Full text link
    The use of blockchains for automated and adversarial trading has become commonplace. However, due to the transparent nature of blockchains, an adversary is able to observe any pending, not-yet-mined transactions, along with their execution logic. This transparency further enables a new type of adversary, which copies and front-runs profitable pending transactions in real time, yielding significant financial gains. Shedding light on such "copy-paste" malpractice, this paper introduces the Blockchain Imitation Game and proposes a generalized imitation attack methodology called Ape. Leveraging dynamic program analysis techniques, Ape supports the automatic synthesis of adversarial smart contracts. Over a timeframe of one year (1st of August, 2021 to 31st of July, 2022), Ape could have yielded 148.96M USD in profit on Ethereum, and 42.70M USD on BNB Smart Chain (BSC). Beyond its use as a malicious attack, we further show the potential of transaction and contract imitation as a defensive strategy. Within one year, we find that Ape could have successfully imitated 13 and 22 known Decentralized Finance (DeFi) attacks on Ethereum and BSC, respectively. Our findings suggest that blockchain validators can imitate attacks in real time to prevent intrusions in DeFi.
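    Ape's dynamic analysis and contract synthesis are well beyond a short example, but the transparency the paper builds on is easy to observe. The sketch below, which assumes a reachable Ethereum node at a placeholder URL and uses the web3.py library, simply watches the pending-transaction pool, the raw signal an imitation system would start from; it does not replay or resubmit anything.

```python
import time
from web3 import Web3
from web3.exceptions import TransactionNotFound

# Placeholder endpoint; any node exposing the pending-transaction filter works.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

pending = w3.eth.filter("pending")  # subscribe to hashes of not-yet-mined transactions

while True:
    for tx_hash in pending.get_new_entries():
        try:
            tx = w3.eth.get_transaction(tx_hash)
        except TransactionNotFound:
            continue  # the transaction was dropped or mined before we fetched it
        # A real imitation system would decode tx["input"], replay the call on a
        # local fork, and only act if the replay shows a profit.
        print(f"pending tx to {tx['to']}, value {tx['value']} wei, "
              f"{len(tx['input'])} bytes of calldata")
    time.sleep(1.0)
```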

    Mind the Gap: Trade-Offs between Distributed Ledger Technology Characteristics

    Get PDF
    When developing peer-to-peer applications on Distributed Ledger Technology (DLT), a crucial decision is the selection of a suitable DLT design (e.g., Ethereum), because it is hard to change the underlying DLT design post hoc. To facilitate the selection of suitable DLT designs, we review DLT characteristics and identify trade-offs between them. Furthermore, we assess how DLT designs account for these trade-offs and we develop archetypes for DLT designs that cater to specific quality requirements. The main purpose of our article is to introduce scientific and practical audiences to the intricacies of DLT designs and to support the development of viable applications on DLT.

    Understanding and Hardening Blockchain Network Security Against Denial of Service Attacks

    Get PDF
    This thesis aims to examine the security of a blockchain's communication network. A blockchain relies on a communication network to deliver transactions. Understanding and hardening the security of the communication network against Denial-of-Service (DoS) attacks is thus critical to the well-being of blockchain participants. Existing research has examined blockchain system security in various system components, including mining incentives, consensus protocols, and applications such as smart contracts. However, the security of a blockchain's communication network remains understudied. In practice, a blockchain's communication network typically consists of three services: the RPC service, the P2P network, and the mempool. This thesis examines each service's design and implementation, discovers vulnerabilities that lead to DoS attacks, and uncovers the P2P network topology. Through systematic evaluations and measurements, the thesis confirms that real-world network services in Ethereum are vulnerable to DoS attacks, leading to a potential collapse of the Ethereum ecosystem. In addition, the uncovered P2P network topology of the Ethereum mainnet suggests that critical nodes adopt a biased neighbor-selection strategy. Finally, to fix the discovered vulnerabilities, this thesis proposes practical mitigation solutions to harden the security of Ethereum's communication network.
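    The thesis's concrete mitigations are not spelled out in the abstract; as one generic, hypothetical example of hardening an RPC service against request floods, the sketch below implements a per-client token-bucket rate limiter (the rate and burst parameters are arbitrary and not taken from the thesis).

```python
import time

class TokenBucket:
    """Per-client token bucket: allows short bursts but caps the sustained request rate."""
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, up to the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per client IP; reject RPC requests from clients exceeding 5 req/s sustained.
buckets = {}
def admit(client_ip: str) -> bool:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate_per_sec=5.0, burst=10))
    return bucket.allow()
```

    Rate limiting addresses only the flooding side of DoS; vulnerabilities in P2P neighbor selection or mempool admission policies call for protocol-level fixes instead.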

    Distributed k-ary System: Algorithms for Distributed Hash Tables

    Get PDF
    This dissertation presents algorithms for data structures called distributed hash tables (DHTs) or structured overlay networks, which are used to build scalable, self-managing distributed systems. The provided algorithms guarantee lookup consistency in the presence of dynamism: they guarantee consistent lookup results while nodes join and leave. Similarly, the algorithms guarantee that routing never fails while nodes join and leave. Previous algorithms for lookup consistency either suffer from starvation, do not work in the presence of failures, or lack a proof of correctness. Several group communication algorithms for structured overlay networks are presented. We provide an overlay broadcast algorithm which, unlike previous algorithms, avoids redundant messages, reaching all nodes in O(log n) time while using O(n) messages, where n is the number of nodes in the system. The broadcast algorithm is used to build overlay multicast. We introduce bulk operations, which enable a node to efficiently make multiple lookups or send a message to all nodes in a specified set of identifiers. The algorithm ensures that all specified nodes are reached in O(log n) time, sending at most O(log n) messages per node, regardless of the input size of the bulk operation. Moreover, the algorithm avoids sending redundant messages. Previous approaches required multiple lookups, which consume more messages and can render the initiator a bottleneck. Our algorithms are used in DHT-based storage systems, where nodes can do thousands of lookups to fetch large files. We use the bulk operation algorithm to construct a pseudo-reliable broadcast algorithm. Bulk operations can also be used to implement efficient range queries. Finally, we describe a novel way to place replicas in a DHT, called symmetric replication, that enables parallel recursive lookups. Parallel lookups are known to reduce latencies; however, costly iterative lookups have previously been used to do parallel lookups. Moreover, joins or leaves only require exchanging O(1) messages, while other schemes require at least log(f) messages for a replication degree of f. The algorithms have been implemented in a middleware called the Distributed k-ary System (DKS), which is briefly described.
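    Symmetric replication has a compact description: with an identifier space of size N and a replication degree f that divides N, the key k is associated with f identifiers spaced N/f apart around the ring, so any node can compute the whole replica group locally and issue the f lookups in parallel. The sketch below (toy parameters, no actual overlay) computes that replica group.

```python
def replica_group(key: int, id_space: int, f: int) -> list[int]:
    """Symmetric replication: the f identifiers associated with `key`,
    spaced id_space/f apart around the ring (f must divide id_space)."""
    assert id_space % f == 0, "replication degree must divide the identifier space"
    step = id_space // f
    return [(key + m * step) % id_space for m in range(f)]

# Toy example: a 2^6 identifier space with replication degree 4.
print(replica_group(key=11, id_space=64, f=4))   # -> [11, 27, 43, 59]
```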

    Resilience-Building Technologies: State of Knowledge -- ReSIST NoE Deliverable D12

    Get PDF
    This document is the first product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence.

    Improved Cauchy Reed-Solomon Codes for Cloud Data Retrieval and Secured Data Storage using Role-Based Cryptographic Access and forensic investigation

    Get PDF
    Assigning user permission policies to computer systems presents a significant challenge in guaranteeing proper authorization, especially with the growth of open systems and distributed platforms such as the cloud. Role-Based Access Control (RBAC) has become a widely used strategy in cloud server applications because of its versatility. Granting access to cloud-stored data for investigating potential wrongdoing is crucial in computer forensic investigations. In cases where the cloud service provider's reliability is questionable, maintaining data confidentiality and establishing an efficient procedure for revoking access upon credential expiration is essential. As storage systems expand across vast networks, frequent component failures require stronger fault-tolerance measures. Our secure data-sharing system combines role-based (authorized) access control with AES encryption to provide safe key distribution and data sharing for dynamic groups. Data recovery entails protecting data dispersed over distributed systems by storing redundant data and applying erasure coding. Erasure coding schemes such as Reed-Solomon codes guarantee robustness to disk failures while dramatically reducing data storage costs; however, they also result in longer access times and more expensive repairs. Consequently, the investigation of novel coding strategies for cloud storage systems has attracted a great deal of interest in both academia and industry. The objective of this study is to present a novel coding method that utilizes the Cauchy matrix in order to improve Reed-Solomon coding efficiency and strengthen fault tolerance.
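    The abstract does not give the construction itself; as a generic illustration of the building block it refers to, the sketch below constructs a Cauchy matrix over GF(2^8) (the field polynomial and the choice of x/y elements are conventional, not taken from the paper). Every square submatrix of a Cauchy matrix is invertible, which is what makes it suitable as the coding matrix of a Cauchy Reed-Solomon code.

```python
GF_POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, a common irreducible polynomial for GF(2^8)

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(2^8), reduced modulo GF_POLY."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
        b >>= 1
    return result

def gf_inv(a: int) -> int:
    """Multiplicative inverse by brute force (fine for a 256-element field)."""
    if a == 0:
        raise ZeroDivisionError("0 has no inverse in GF(2^8)")
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def cauchy_matrix(xs: list[int], ys: list[int]) -> list[list[int]]:
    """Cauchy matrix A[i][j] = 1 / (x_i + y_j); in GF(2^8), '+' is XOR.
    All x_i and y_j must be pairwise distinct, so every entry is well defined."""
    return [[gf_inv(x ^ y) for y in ys] for x in xs]

# Toy parameters: 3 parity rows over 4 data columns.
for row in cauchy_matrix(xs=[0, 1, 2], ys=[3, 4, 5, 6]):
    print(row)
```

    Encoding then amounts to multiplying this matrix by the vector of data blocks in GF(2^8); the efficiency improvements the study targets concern how such matrices are chosen and applied, which the sketch does not attempt to reproduce.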

    Semantic search and composition in unstructured peer-to-peer networks

    Get PDF
    This dissertation focuses on several research questions in the area of semantic search and composition in unstructured peer-to-peer (P2P) networks. Going beyond the state of the art, the proposed semantic-based search strategy S2P2P offers a novel path-suggestion-based query routing mechanism, providing a reasonable trade-off between search performance and network traffic overhead. In addition, the first semantic-based data replication scheme, DSDR, is proposed. It enables peers to use semantic information to select replica numbers and target peers to address predicted future demands. With DSDR, k-random search can achieve better precision and recall than with a near-optimal non-semantic replication strategy. Further, this thesis introduces a functional automatic semantic service composition method, SPSC. Distinctively, it enables peers to jointly compose complex workflows with high cumulative recall but low network traffic overhead, using heuristic-based bidirectional chaining and service memorization mechanisms. Its query branching method helps to handle dead ends in a pruned search space. SPSC is proved to be sound, and a lower bound on its completeness is given. Finally, this thesis presents iRep3D for semantic-index-based 3D scene selection in P2P search. Its efficient retrieval scales to answer hybrid queries involving conceptual, functional, and geometric aspects. iRep3D outperforms previous representative efforts in terms of search precision and efficiency.
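    The path-suggestion routing of S2P2P is not specified in the abstract; as the baseline such schemes are typically compared against, the sketch below implements a plain k-random-walk search over an unstructured overlay (the overlay graph, walker count, TTL, and query predicate are all made up for illustration).

```python
import random

def k_random_walk(graph, start, matches, k=3, ttl=4):
    """Baseline search in an unstructured P2P overlay: launch k independent
    random walkers from `start`, each forwarded for at most `ttl` hops,
    and collect every visited peer whose local data satisfies `matches`."""
    hits = set()
    for _ in range(k):
        node = start
        for _ in range(ttl):
            neighbors = graph.get(node, [])
            if not neighbors:
                break
            node = random.choice(neighbors)   # forward the query to a random neighbor
            if matches(node):
                hits.add(node)
    return hits

# Toy overlay (adjacency list) and a toy predicate: peers holding matching data.
overlay = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
           "D": ["B", "C"], "E": ["C"]}
holders = {"D", "E"}
print(k_random_walk(overlay, start="A", matches=lambda n: n in holders))
```

    Semantic replication schemes such as DSDR aim to raise the precision and recall of exactly this kind of blind search by placing replicas where future queries are predicted to land.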