Peer-to-Peer Networks and Computation: Current Trends and Future Perspectives
This paper examines the state of the art in P2P networks and computation. It identifies the challenges confronting the community of P2P researchers and developers that must be addressed before the potential of P2P-based systems can be realized beyond content distribution and file-sharing applications, in building real-world, intelligent and commercial software systems. Future perspectives and some thoughts on the evolution of P2P-based systems are also provided.
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy to various Data Grid systems not only to
validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems to better
understand their goals and their methodology. This would help evaluate their
applicability for solving similar problems. This taxonomy also provides a "gap
analysis" of this area through which researchers can potentially identify new
issues for investigation. Finally, we hope that the proposed taxonomy and
mapping also helps to provide an easy way for new practitioners to understand
this complex area of research.
Comment: 46 pages, 16 figures, Technical Report
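The abstract's core exercise, mapping existing systems onto a multi-aspect taxonomy and flagging gaps, can be sketched as follows. The aspect and category names below are simplified, hypothetical stand-ins for the paper's actual taxonomy, chosen only to illustrate the classification and "gap analysis" idea.

```python
# Sketch of classifying Data Grid systems against a taxonomy.
# Aspect/category names are illustrative, not the paper's exact taxonomy.
TAXONOMY = {
    "architecture": {"hierarchical", "federated", "hybrid"},
    "data_transport": {"gridftp", "http", "custom"},
    "replication": {"static", "dynamic"},
    "scheduling": {"data-aware", "compute-centric"},
}

def classify(system: dict) -> dict:
    """Map a system description onto each taxonomy aspect; anything that
    falls outside the known categories is flagged, which is exactly the
    'gap analysis' use of a taxonomy."""
    result = {}
    for aspect, categories in TAXONOMY.items():
        value = system.get(aspect)
        result[aspect] = value if value in categories else "uncategorised"
    return result

# A hypothetical system description, for illustration only.
example = {"architecture": "federated", "data_transport": "gridftp",
           "replication": "dynamic", "scheduling": "data-aware"}
print(classify(example))
```

A system whose value for some aspect is not in the taxonomy surfaces as "uncategorised", pointing either at a classification error or at a genuinely new design worth adding to the taxonomy.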
Privacy-preserving overgrid: Secure data collection for the smart grid
In this paper, we present a privacy-preserving scheme for Overgrid, a fully distributed peer-to-peer (P2P) architecture designed to automatically control and implement distributed Demand Response (DR) schemes in a community of smart buildings with energy generation and storage capabilities. To monitor the power consumption of the buildings while respecting the privacy of the users, we extend our previous Overgrid algorithms to provide privacy-preserving data aggregation (PP-Overgrid). This new technique combines a distributed data aggregation scheme with the Secure Multi-Party Computation paradigm. First, we use the energy profiles of hundreds of buildings, classifying the amount of “flexible” energy consumption, i.e., the quota which could potentially be exploited for DR programs. Second, we consider renewable energy sources and apply the DR scheme to match the flexible consumption with the available energy. Finally, to show the feasibility of our approach, we validate the PP-Overgrid algorithm in simulation for a large network of smart buildings.
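The Secure Multi-Party Computation step can be illustrated with additive secret sharing, a standard SMPC building block for private sums: each building splits its reading into random shares that sum to the true value modulo a prime, so no single party ever sees another's consumption, yet the total is exact. This is a generic sketch, not the paper's actual PP-Overgrid protocol, and the readings are hypothetical.

```python
import random

PRIME = 2**61 - 1  # field modulus for additive secret sharing

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n_parties random additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(readings: list[int]) -> int:
    """Privacy-preserving sum: party i sends its j-th share to party j;
    each party publishes only the sum of the shares it received, and the
    published partial sums reveal nothing but the grand total."""
    n = len(readings)
    all_shares = [share(r, n) for r in readings]
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % PRIME
                    for j in range(n)]
    return sum(partial_sums) % PRIME

consumptions = [120, 85, 310]  # hypothetical per-building readings (kWh)
print(aggregate(consumptions))  # 515, without exposing individual readings
```

Real deployments layer authenticated channels and robustness against dropouts on top of this core, but the share-split-and-recombine structure is the same.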
Blockchain based Decentralized Applications: Technology Review and Development Guidelines
Blockchain or Distributed Ledger Technology is a disruptive technology that
provides the infrastructure for developing decentralized applications enabling
the implementation of novel business models even in traditionally centralized
domains. In recent years it has drawn strong interest from the academic
community, technology developers, and startups; as a result, many solutions
have been developed to address the limitations of blockchain technology and
the requirements of application software engineering. In this paper, we provide a comprehensive
overview of DLT solutions analyzing the addressed challenges, provided
solutions and their usage for developing decentralized applications. Our study
reviews over 100 blockchain papers and startup initiatives from which we
construct a 3-tier based architecture for decentralized applications and we use
it to systematically classify the technology solutions. Protocol and Network
Tier solutions address digital asset registration, transactions, data
structures, privacy, and the implementation of business rules, as well as the
creation of peer-to-peer networks, ledger replication, and consensus-based
state validation. Scaling Tier solutions address scalability problems in terms of
storage size, transaction throughput, and computational capability. Finally,
the Federated Tier aggregates integrative solutions across multiple
blockchain application deployments. The paper closes with a discussion of
challenges and opportunities in developing decentralized applications,
providing a multi-step guideline for decentralizing the design of traditional
systems and implementing decentralized applications.
Comment: 30 pages, 8 figures, 9 tables, 121 references
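The Protocol and Network Tier's core ledger mechanics — an append-only data structure whose integrity rests on hash links — can be illustrated with a minimal sketch. This is a generic hash-chained ledger, not the format of any particular DLT surveyed in the paper, and the transaction strings are made up.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash over a block's canonical JSON encoding."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def append_block(chain: list[dict], transactions: list[str]) -> list[dict]:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify(chain: list[dict]) -> bool:
    """The ledger is valid iff every block links to its predecessor's hash,
    so rewriting history anywhere breaks every later link."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list[dict] = []
append_block(chain, ["alice->bob: 5"])   # hypothetical transactions
append_block(chain, ["bob->carol: 2"])
print(verify(chain))                      # True
chain[0]["transactions"][0] = "alice->bob: 500"  # tamper with history
print(verify(chain))                      # False: the hash link is broken
```

Consensus protocols then decide which of several candidate chains the replicated peers accept; the hash-link check above is only the per-replica validity condition.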
Contributions to High-Throughput Computing Based on the Peer-to-Peer Paradigm
XII, 116 p.
This dissertation focuses on High Throughput Computing (HTC) systems and how to build a working HTC system using Peer-to-Peer (P2P) technologies. Traditional HTC systems, designed to process the largest possible number of tasks per unit of time, revolve around a central node that implements a queue used to store and manage submitted tasks. This central node limits the scalability and fault tolerance of the HTC system. A usual solution involves replicas of the master node that can replace it; this solution is, however, limited by the number of replicas used. In this thesis, we propose an alternative solution that follows the P2P philosophy: a completely distributed system in which all worker nodes participate in the scheduling of tasks, with a physically distributed task queue implemented on top of a P2P storage system. The fault tolerance and scalability of this proposal are therefore limited only by the number of nodes in the system. The proper operation and scalability of our proposal have been validated through experimentation with a real system.
The data availability provided by Cassandra, the P2P data management framework used in our proposal, is analysed by means of several stochastic models. These models can be used to make predictions about the availability of any Cassandra deployment, as well as to select the best possible configuration of any Cassandra system. To validate the proposed models, experiments with real Cassandra clusters were carried out, showing that our models are good descriptors of Cassandra's availability.
Finally, we propose a set of scheduling policies that address a common problem of HTC systems: re-execution of tasks due to a failure of the node where a task was running, without additional resource misspending. To reduce the number of re-executions, our proposals try to find good fits between the reliability of nodes and the estimated length of each task.
Extensive simulation-based experimentation shows that our policies are capable of reducing the number of re-executions, improving system performance and node utilization.
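The core idea of fitting node reliability to estimated task length can be sketched with a simple greedy policy: model each node's failures as exponential with a mean time between failures, and give the longest tasks to the most reliable nodes so that costly re-executions become less likely. This is an illustrative sketch under those assumptions, not the dissertation's exact policies, and all names and numbers are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    mtbf_hours: float  # mean time between failures (exponential model)

@dataclass
class Task:
    name: str
    est_hours: float   # estimated run length

def completion_prob(node: Node, task: Task) -> float:
    """P(node survives the whole task) under exponential failures."""
    return math.exp(-task.est_hours / node.mtbf_hours)

def assign(nodes: list[Node], tasks: list[Task]) -> dict[str, str]:
    """Greedy fit: longest tasks go to the most reliable nodes, so the
    tasks that are most expensive to re-execute fail least often."""
    nodes = sorted(nodes, key=lambda n: n.mtbf_hours, reverse=True)
    tasks = sorted(tasks, key=lambda t: t.est_hours, reverse=True)
    return {t.name: n.name for t, n in zip(tasks, nodes)}

nodes = [Node("stable", 1000.0), Node("flaky", 10.0)]
tasks = [Task("long", 48.0), Task("short", 0.5)]
print(assign(nodes, tasks))  # {'long': 'stable', 'short': 'flaky'}
```

Placing the 48-hour task on the flaky node instead would give it only about a 1% chance of finishing without a restart, which is exactly the resource misspending the policies aim to avoid.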