
    Large-Scale Distributed Coalition Formation

    The CyberCraft project is an effort to construct a large-scale Distributed Multi-Agent System (DMAS) to provide autonomous Cyberspace defense and mission assurance for the DoD. It employs a small but flexible agent structure that is dynamically reconfigurable to accommodate new tasks and policies. This document describes research into developing protocols and algorithms to ensure continued mission execution in a system of one million or more agents, focusing on protocols for coalition formation and Command and Control. It begins by building large-scale routing algorithms for a Hierarchical Peer-to-Peer structured overlay network, called Resource-Clustered Chord (RC-Chord). RC-Chord introduces the ability to efficiently locate agents by the resources they possess. Combined with a task model defined for CyberCraft, this technology feeds into an algorithm that constructs task coalitions in a large-scale DMAS. Experiments reveal the flexibility and effectiveness of these concepts for achieving maximum work throughput in a simulated CyberCraft environment.
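
    The abstract does not give implementation details, but the core idea of locating agents by resource on a Chord-like ring can be sketched as below. This is a minimal illustration of the general technique, not RC-Chord itself: the ring size, the Ring class, and the agent/resource names are all hypothetical.

```python
# Illustrative sketch (not the dissertation's code): resources are hashed onto
# a Chord-style identifier ring, and the node clockwise from the resource key
# is responsible for indexing the agents that hold that resource.
import hashlib
from bisect import bisect_left

M = 16                      # identifier space of 2**M positions (toy size)

def chord_id(name: str) -> int:
    """Hash a node address or resource name onto the ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

class Ring:
    def __init__(self, nodes):
        # Sorted (id, node) pairs stand in for the real finger-table routing.
        self.slots = sorted((chord_id(n), n) for n in nodes)

    def successor(self, key: int) -> str:
        """First node clockwise from `key`, i.e. the node responsible for it."""
        ids = [i for i, _ in self.slots]
        idx = bisect_left(ids, key) % len(self.slots)
        return self.slots[idx][1]

    def locate_resource(self, resource: str) -> str:
        # Agents advertising `resource` register under chord_id(resource),
        # so a single lookup reaches the node indexing that resource.
        return self.successor(chord_id(resource))

if __name__ == "__main__":
    ring = Ring([f"agent-{i}" for i in range(8)])
    print(ring.locate_resource("packet-capture"))   # node indexing that resource
```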

    An Efficient Holistic Data Distribution and Storage Solution for Online Social Networks

    In the past few years, Online Social Networks (OSNs) have dramatically spread over the world. Facebook [4], one of the largest worldwide OSNs, has 1.35 billion users, 82.2% of whom are outside the US [36]. The browsing and posting interactions (text content) between OSN users lead to user data reads (visits) and writes (updates) in OSN datacenters, and Facebook now serves a billion reads and tens of millions of writes per second [37]. Besides that, Facebook has become one of the top Internet traffic sources [36] by sharing a tremendous number of large multimedia files, including photos and videos. The servers in datacenters have limited resources (e.g., bandwidth) to supply latency-efficient service for multimedia file sharing among the rapidly growing number of users worldwide. Most online applications operate under soft real-time constraints (e.g., ≤ 300 ms latency) for good user experience, and higher service latency directly reduces income. Thus, service latency is a very important Quality of Service (QoS) requirement for the OSN as a web service, since it affects the OSN's revenue and user experience. Also, to increase OSN revenue, OSN service providers need to constrain capital investment, operation costs, and resource (bandwidth) usage costs. Therefore, it is critical for the OSN to supply a guaranteed QoS for both text and multimedia content to users while minimizing its costs. To achieve this goal, in this dissertation, we address three problems: i) data distribution among datacenters: how to allocate data (text content) among data servers with low service latency and minimized inter-datacenter network load; ii) efficient multimedia file sharing: how to facilitate the servers in datacenters to efficiently share multimedia files among users; and iii) cost-minimized data allocation among cloud storages: how to save the infrastructure (datacenter) capital investment and operation costs by leveraging commercial cloud storage services.

    Data distribution among datacenters. To serve the text content, the new OSN model, which deploys datacenters globally, helps reduce service latency for worldwide distributed users and relieves the load on the existing datacenters. However, it causes higher inter-datacenter communication load: each datacenter has a full copy of all data, and the master datacenter updates all other datacenters, generating tremendous load in this new model. Distributed data storage, which stores a user's data only in his/her geographically closest datacenter, mitigates this problem, but frequent interactions between distant users then lead to frequent inter-datacenter communication and hence long service latencies. Therefore, OSNs need a data allocation algorithm among datacenters with minimized network load and low service latency.

    Efficient multimedia file sharing. To serve multimedia file sharing to a rapidly growing user population, the file distribution method should be scalable and cost efficient, e.g., minimizing the bandwidth usage of the centralized servers. P2P networks have been widely used for file sharing among large numbers of users [58, 131], and meet both the scalability and cost-efficiency requirements. However, without fully utilizing the altruism and trust among friends in OSNs, current P2P-assisted file sharing systems depend on strangers or anonymous users to distribute files, which degrades their performance due to users' selfish and malicious behaviors. Therefore, OSNs need a cost-efficient and trustworthy P2P-assisted file sharing system to serve multimedia content distribution.

    Cost-minimized data allocation among cloud storages. The new trend of OSNs requires building worldwide datacenters, which introduces a large amount of capital investment and maintenance cost. In order to save the capital expenditure of building and maintaining the hardware infrastructure, OSNs can leverage the storage services of multiple Cloud Service Providers (CSPs) with existing worldwide distributed datacenters [30, 125, 126]. These datacenters provide different Get/Put latencies and unit prices for resource utilization and reservation. Thus, when selecting different CSPs' datacenters, an OSN, as a cloud customer of a globally distributed application, faces two challenges: i) how to allocate data to worldwide datacenters to satisfy application SLA (service level agreement) requirements including both data retrieval latency and availability, and ii) how to allocate data and reserve resources in datacenters belonging to different CSPs to minimize the payment cost. Therefore, OSNs need a data allocation system distributing data among CSPs' datacenters with cost minimization and an SLA guarantee.

    In all, the OSN needs an efficient holistic data distribution and storage solution to minimize its network load and cost while supplying a guaranteed QoS for both text and multimedia content. In this dissertation, we propose methods to solve each of the aforementioned challenges in OSNs. Firstly, we verify the benefits of the new trend of OSNs and present typical OSN properties that lay the basis of our design. We then propose a Selective Data replication mechanism in Distributed Datacenters (SD3) to allocate user data among geographically distributed datacenters. In SD3, a datacenter jointly considers update rate and visit rate to select user data for replication, and further atomizes a user's different types of data (e.g., status updates, friend posts) for replication, making sure that a replica always reduces inter-datacenter communication. Secondly, we analyze a BitTorrent file sharing trace, which proves the necessity of proximity- and interest-aware clustering. Based on the trace study and OSN properties, to address the second problem, we propose a SoCial Network integrated P2P file sharing system for enhanced Efficiency and Trustworthiness (SOCNET) to fully and cooperatively leverage the common-interest, geographic-closeness, and trust properties of OSN friends. SOCNET uses a hierarchical distributed hash table (DHT) to cluster common-interest nodes, further clusters geographically close nodes into subclusters, and connects the nodes in a subcluster with social links. Thus, when queries travel along trustable social links, they also gain a higher probability of being successfully resolved by proximity-close nodes, simultaneously enhancing efficiency and trustworthiness. Thirdly, to handle the third problem, we model the cost minimization problem under SLA constraints using integer programming. Based on the system model, we propose an Economical and SLA-guaranteed cloud Storage Service (ES3), which finds a data allocation and resource reservation schedule with cost minimization and an SLA guarantee. ES3 incorporates (1) a data allocation and reservation algorithm, which allocates each data item to a datacenter and determines the reservation amount on datacenters by leveraging all the pricing policies; (2) a genetic algorithm based data allocation adjustment approach, which makes data Get/Put rates stable in each datacenter to maximize the reservation benefit; and (3) a dynamic request redirection algorithm, which dynamically redirects a data request from an over-utilized datacenter to an under-utilized datacenter with sufficient reserved resources when the request rate varies greatly, in order to further reduce the payment.

    Finally, we conducted trace-driven experiments on a distributed testbed (PlanetLab) and on real commercial cloud storage services (Amazon S3, Windows Azure Storage, and Google Cloud Storage) to demonstrate the efficiency and effectiveness of our proposed systems in comparison with other systems. The results show that our systems outperform existing systems in network savings and data distribution efficiency.
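
    As a rough illustration of the replication trade-off SD3 weighs (not the dissertation's actual algorithm), the sketch below decides, per data type, whether a remote replica saves more inter-datacenter traffic through local reads than it adds through pushed updates. All rates, sizes, names, and the threshold are made-up example values.

```python
# A minimal sketch of the replication test described for SD3: replicate a
# user's data type at a remote datacenter only when the inter-datacenter
# traffic saved by serving reads locally exceeds the traffic added by
# pushing updates to the replica. Values below are hypothetical.
from dataclasses import dataclass

@dataclass
class DataType:
    name: str
    visit_rate: float    # reads/sec from the remote datacenter's local users
    update_rate: float   # writes/sec originating at the master datacenter
    size_kb: float       # average object size transferred per read or update

def replication_benefit(d: DataType) -> float:
    """Inter-datacenter KB/s saved (positive) or added (negative) by a replica."""
    saved = d.visit_rate * d.size_kb       # reads that no longer cross datacenters
    added = d.update_rate * d.size_kb      # updates now pushed to the replica
    return saved - added

def select_for_replication(data_types, threshold_kbps=0.0):
    # SD3 "atomizes" a user's data: each type is decided independently.
    return [d.name for d in data_types if replication_benefit(d) > threshold_kbps]

if __name__ == "__main__":
    user_data = [
        DataType("status_updates", visit_rate=5.0, update_rate=0.2, size_kb=2.0),
        DataType("friend_posts",   visit_rate=0.3, update_rate=1.5, size_kb=2.0),
    ]
    print(select_for_replication(user_data))   # -> ['status_updates']
```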

    A Content-Addressable Network for Similarity Search in Metric Spaces

    Because of the ongoing digital data explosion, more advanced search paradigms than traditional exact match are needed for content-based retrieval in huge and ever-growing collections of data produced in application areas such as multimedia, molecular biology, marketing, computer-aided design, and purchasing assistance. As the variety of data types rapidly grows toward databases used directly by people, computer systems must be able to model fundamental human reasoning paradigms, which are naturally based on similarity. The ability to perceive similarities is crucial for recognition, classification, and learning, and it plays an important role in scientific discovery and creativity. Recently, the mathematical notion of metric space has become a useful abstraction of similarity, and many similarity search indexes have been developed. In this thesis, we adopt the metric space similarity paradigm and concentrate on scalability issues. By exploiting computer networks and applying Peer-to-Peer communication paradigms, we build a structured network of computers able to process similarity queries in parallel. Since no centralized entities are used, such architectures are fully scalable. Specifically, we propose a Peer-to-Peer system for similarity search in metric spaces called the Metric Content-Addressable Network (MCAN), an extension of the well-known Content-Addressable Network (CAN) used for hash lookup. A prototype implementation of MCAN was tested on real-life datasets of image features, protein symbols, and text, and the observed results are reported. We also compare the performance of MCAN with three other recently proposed distributed data structures for similarity search in metric spaces.
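
    MCAN's exact mapping of metric objects to CAN's coordinate space is not given in the abstract; the sketch below shows the generic pivot-mapping and triangle-inequality pruning technique commonly used when indexing metric data, with an edit-distance metric and pivot choices that are purely illustrative.

```python
# A generic sketch of the pivot-mapping idea behind metric-space indexes:
# objects are mapped to vectors of distances to fixed pivots, and the triangle
# inequality prunes candidates in a range query without computing the real
# distance. The pivots and the string metric here are illustrative assumptions.
from typing import Callable, List

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance, one example of a metric on strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def pivot_map(obj: str, pivots: List[str], d: Callable) -> List[int]:
    """Map an object into the pivots' distance space (CAN-like coordinates)."""
    return [d(obj, p) for p in pivots]

def range_query(query: str, radius: int, dataset, pivots, d=edit_distance):
    q_coords = pivot_map(query, pivots, d)
    results = []
    for obj, coords in dataset:
        # Triangle inequality: |d(q,p) - d(o,p)| <= d(q,o); skip the object if
        # this lower bound already exceeds the radius, saving a distance call.
        if max(abs(qc - oc) for qc, oc in zip(q_coords, coords)) > radius:
            continue
        if d(query, obj) <= radius:
            results.append(obj)
    return results

if __name__ == "__main__":
    pivots = ["protein", "network"]
    words = ["proteins", "preteen", "chord", "metric", "pattern"]
    data = [(w, pivot_map(w, pivots, edit_distance)) for w in words]
    print(range_query("protein", 2, data, pivots))
```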

    Flexible Application-Layer Multicast in Heterogeneous Networks

    This work develops a set of peer-to-peer-based protocols and extensions in order to provide Internet-wide group communication. The focus is on the question of how different access technologies can be integrated in order to cope with the growing traffic load. To this end, protocols are developed that allow, on the one hand, autonomous adaptation to the current network situation and, on the other hand, the integration of WiFi domains where applicable.

    Distributed Multidimensional Indexing for Scientific Data Analysis Applications

    Scientific data analysis applications require large-scale computing power to effectively service client queries and also require large storage repositories for datasets that are generated continually from sensors and simulations. These scientific datasets are growing in size every day and are becoming truly enormous. The goal of this dissertation is to provide efficient multidimensional indexing techniques that aid in navigating distributed scientific datasets. In this dissertation, we show significant improvements in accessing distributed large scientific datasets. The first approach we took to improve access to subsets of large multidimensional scientific datasets was data chunking. The contents of scientific data files typically are a collection of multidimensional arrays, along with the corresponding metadata. Data chunking groups data elements into small chunks of a fixed, but data-specific, size to take advantage of spatio-temporal locality, since it is not efficient to index individual data elements of large scientific datasets. The second approach was the design of an efficient multidimensional index for scientific datasets. This work investigates how existing multidimensional indexing structures perform on chunked scientific datasets, and compares their performance with that of our own indexing structure, SH-trees. Since the R-tree was introduced, various multidimensional indexing structures have been proposed. However, there are relatively few studies focused on improving the performance of indexing geographically distributed datasets, especially across heterogeneous machines. As a third approach, in an attempt to accelerate indexing performance for distributed datasets, we propose several distributed multidimensional indexing schemes: replicated centralized indexing, hierarchical two-level indexing, and decentralized two-level indexing. Our experimental results show that great performance improvements are gained from distributing the multidimensional index. However, the design choices for distributed indexing, such as replication, partitioning, and decentralization, must be carefully considered, since they may decrease overall performance in certain situations. Therefore, this work provides performance guidelines to aid in selecting the best distributed multidimensional indexing scheme for various systems and applications. Finally, we describe how a distributed multidimensional indexing scheme can be used by a distributed multiple query optimization middleware, as a case-study application, to generate better query plans by leveraging information about the contents of remote caches.
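
    The chunking idea described above can be illustrated with a short sketch: a multidimensional array is cut into fixed-size chunks, and a range query then touches only the chunks overlapping the query box rather than individual elements. The chunk shape, array contents, and helper names are illustrative assumptions, not the dissertation's code.

```python
# A small sketch of data chunking for multidimensional arrays: the array is
# partitioned into fixed-size chunks indexed by chunk coordinates, and a
# subset (range) query maps to the set of chunks overlapping the query box.
import itertools
import numpy as np

def chunk_grid(shape, chunk_shape):
    """Yield (chunk_index, slices) covering an array of `shape`."""
    counts = [-(-s // c) for s, c in zip(shape, chunk_shape)]   # ceil division
    for idx in itertools.product(*(range(n) for n in counts)):
        slices = tuple(slice(i * c, min((i + 1) * c, s))
                       for i, c, s in zip(idx, chunk_shape, shape))
        yield idx, slices

def chunks_for_query(query_box, chunk_shape):
    """Chunk indices overlapping a query box given as (lo, hi) per dimension, hi exclusive."""
    ranges = [range(lo // c, (hi - 1) // c + 1)
              for (lo, hi), c in zip(query_box, chunk_shape)]
    return list(itertools.product(*ranges))

if __name__ == "__main__":
    data = np.arange(100 * 100).reshape(100, 100)      # toy 2-D dataset
    chunk_shape = (20, 20)
    # The chunk store: chunk coordinates -> the array block they cover.
    store = {idx: data[sl] for idx, sl in chunk_grid(data.shape, chunk_shape)}
    # A query for rows 35..54, cols 10..24 touches only 4 of the 25 chunks.
    print(chunks_for_query(((35, 55), (10, 25)), chunk_shape))
```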

    Top-k aggregation queries in large-scale distributed systems

    Distributed top-k query processing has recently become an essential functionality in a large number of emerging application classes, such as Internet traffic monitoring and Peer-to-Peer Web search. This work addresses efficient algorithms for distributed top-k queries in wide-area networks where the index lists for the attribute values (or text terms) of a query are distributed across a number of data peers. More precisely, this thesis makes the following contributions: We present the family of KLEE algorithms, a fundamental building block for efficient top-k query processing in distributed systems. We present means to model score distributions and show how these score models can be used to reason about parameter values that play an important role in the overall performance of KLEE. We present GRASS, a family of novel algorithms based on three optimization techniques that significantly increase the overall performance of KLEE and related algorithms. We present probabilistic guarantees for the result quality. Moreover, we present Minerva, a distributed search engine. Minerva offers a highly distributed (in both the data dimension and the computational dimension), scalable, and efficient solution toward the development of Internet-scale search engines.
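
    For readers unfamiliar with the baseline that KLEE-style algorithms improve upon, the sketch below implements Fagin's classic threshold algorithm over per-peer sorted index lists; it is not KLEE itself, which avoids expensive random accesses and adds approximate, probabilistic pruning. Data and function names are illustrative.

```python
# Threshold-algorithm sketch: each peer holds an index list sorted by score
# descending; sorted accesses proceed round-robin, random accesses complete
# each newly seen document's aggregate, and scanning stops once the sum of
# the last scores seen per list cannot beat the current k-th best aggregate.
import heapq

def threshold_top_k(index_lists, k):
    """index_lists: per-peer lists of (doc_id, score), each sorted by score desc."""
    lookup = [dict(lst) for lst in index_lists]     # random access by doc_id
    totals = {}                                     # full aggregates of seen docs
    last = [lst[0][1] if lst else 0.0 for lst in index_lists]
    best, depth = [], 0
    while any(depth < len(lst) for lst in index_lists):
        for i, lst in enumerate(index_lists):       # one sorted access per list per round
            if depth >= len(lst):
                continue
            doc, score = lst[depth]
            last[i] = score
            if doc not in totals:                   # random accesses complete the score
                totals[doc] = sum(tbl.get(doc, 0.0) for tbl in lookup)
        depth += 1
        best = heapq.nlargest(k, totals.items(), key=lambda kv: kv[1])
        # An unseen document scores at most sum(last); stop when it can no
        # longer displace the current k-th best aggregate.
        if len(best) == k and best[-1][1] >= sum(last):
            break
    return best

if __name__ == "__main__":
    peer_a = [("d3", 0.9), ("d1", 0.7), ("d2", 0.2)]   # (doc, local score), sorted desc
    peer_b = [("d1", 0.8), ("d2", 0.6), ("d3", 0.1)]
    print(threshold_top_k([peer_a, peer_b], k=2))      # -> [('d1', 1.5), ('d3', 1.0)]
```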

    Algorithms for nonuniform networks

    In this thesis, observations on structural properties of natural networks are taken as a starting point for developing efficient algorithms for natural instances of different graph problems. The key areas discussed are sampling, clustering, routing, and pattern mining for large, nonuniform graphs. The results include observations on structural effects together with algorithms that aim to reveal structural properties or exploit their presence in solving an interesting graph problem. Traditionally, networks were modeled with uniform random graphs, assuming that each vertex was equally important and each edge equally likely to be present. Within the last decade, this approach has drastically changed due to numerous observations on structural complexity in natural networks, many of which proved the uniform model to be inadequate for some contexts. This quickly led to various models and measures that aim to characterize topological properties of different kinds of real-world networks beyond the uniform ones. The goal of this thesis is to utilize such observations in algorithm design, in addition to empowering the process of network analysis. Knowing that a graph exhibits certain characteristics allows for more efficient storage, processing, analysis, and feature extraction. Our emphasis is on local methods that avoid resorting to information on the graph structure that is not relevant to the answer sought. For example, when seeking the cluster of a single vertex, we compute it without using any global knowledge of the graph, iteratively examining the vicinity of the seed vertex. Similarly, we propose methods for sampling and spanning-tree construction that meet certain criteria on the outcome without requiring knowledge of the graph as a whole. Our motivation for concentrating on local methods is two-fold: one driving factor is the ever-increasing size of real-world problems, but an equally important fact is the nonuniformity present in many natural graph instances; properties that hold for the entire graph are often lost when only a small subgraph is examined.
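
    The seed-based local clustering described above can be sketched generically: grow a cluster from a seed vertex by greedily adding frontier vertices while a local quality measure improves, without ever consulting the rest of the graph. The conductance measure, stopping rule, and toy graph below are our own illustrative choices, not the thesis's exact method.

```python
# Generic local cluster expansion: only the current cluster and its frontier
# are ever examined, in the spirit of the local methods the abstract describes.
from collections import defaultdict

def conductance(cluster, adj):
    internal = boundary = 0
    for u in cluster:
        for v in adj[u]:
            if v in cluster:
                internal += 1        # counted twice, once per endpoint
            else:
                boundary += 1
    volume = internal + boundary
    return boundary / volume if volume else 1.0

def local_cluster(seed, adj, max_size=10):
    cluster = {seed}
    while len(cluster) < max_size:
        frontier = {v for u in cluster for v in adj[u]} - cluster
        if not frontier:
            break
        best = min(frontier, key=lambda v: conductance(cluster | {v}, adj))
        if conductance(cluster | {best}, adj) >= conductance(cluster, adj):
            break                    # no neighbour improves the cut any further
        cluster.add(best)
    return cluster

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    print(local_cluster(seed=0, adj=adj))     # -> the tight triangle around vertex 0
```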

    Distributed Query Processing and Reasoning for the RDF Model in Peer-to-Peer Networks

    With the interest in Semantic Web applications rising rapidly, the Resource Description Framework (RDF) and its accompanying vocabulary description language, RDF Schema (RDFS), have become one of the most widely used data models for representing and integrating structured information on the Web. With the number of available RDF data sources on the Web increasing rapidly, there is an urgent need for RDF data management. In this thesis, we focus on distributed RDF data management in peer-to-peer (P2P) networks. More specifically, we present results that advance the state of the art in the research area of distributed RDF query processing and reasoning in P2P networks. We design and implement a P2P system, called Atlas, for the distributed query processing and reasoning of RDF and RDFS data. Atlas is built on top of distributed hash tables (DHTs), a commonly used class of P2P networks. Initially, we study RDFS reasoning algorithms on top of DHTs. We design and develop distributed forward and backward chaining algorithms, as well as an algorithm that works in a bottom-up fashion using the magic sets transformation technique. We study the correctness of our reasoning algorithms theoretically and prove that they are sound and complete. We also provide a comparative study of our algorithms, both analytically and experimentally. In the experimental part of our study, we obtain measurements in the realistic large-scale distributed environment of PlanetLab as well as in the more controlled environment of a local cluster. Moreover, we propose algorithms for SPARQL query processing and optimization over RDF(S) databases stored on top of distributed hash tables.
    We fully implement and evaluate a DHT-based optimizer. The goal of the optimizer is to minimize the time for answering a query as well as the bandwidth consumed during query evaluation. The optimization algorithms use selectivity estimates to choose the query plan. Our algorithms and techniques have been extensively evaluated in a local cluster.
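
    A selectivity-driven planner of the kind described can be sketched as follows; the per-predicate statistics and the cost formula are simplified assumptions for illustration and are not Atlas's actual estimator.

```python
# Illustrative sketch of selectivity-based ordering of SPARQL triple patterns:
# the most selective pattern (fewest estimated matches) is evaluated first, so
# later patterns join against smaller intermediate results.

def estimate_selectivity(pattern, stats):
    """Estimated number of matching triples for an (s, p, o) pattern.
    Variables start with '?'; bound terms narrow the estimate using
    per-predicate counts and distinct-subject/object statistics."""
    s, p, o = pattern
    if p.startswith("?"):
        return stats["total_triples"]
    est = stats["pred_count"].get(p, 1)
    if not s.startswith("?"):
        est /= max(stats["distinct_subjects"].get(p, 1), 1)
    if not o.startswith("?"):
        est /= max(stats["distinct_objects"].get(p, 1), 1)
    return max(est, 1)

def order_patterns(patterns, stats):
    """Greedy plan: cheapest pattern first (a full optimizer would also
    prefer patterns that share variables with those already placed)."""
    return sorted(patterns, key=lambda pat: estimate_selectivity(pat, stats))

if __name__ == "__main__":
    stats = {
        "total_triples": 1_000_000,
        "pred_count": {"rdf:type": 400_000, "foaf:name": 200_000, "dc:creator": 50_000},
        "distinct_subjects": {"rdf:type": 400_000, "foaf:name": 200_000, "dc:creator": 50_000},
        "distinct_objects": {"rdf:type": 50, "foaf:name": 190_000, "dc:creator": 30_000},
    }
    query = [
        ("?doc", "rdf:type", "ex:Article"),
        ("?doc", "dc:creator", "?person"),
        ("?person", "foaf:name", "?name"),
    ]
    for pat in order_patterns(query, stats):
        print(pat)          # rdf:type pattern first, foaf:name pattern last
```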