
    Large-Scale Storage and Reasoning for Semantic Data Using Swarms

    Scalable, adaptive and robust approaches to storing and analyzing the massive amounts of data expected from Semantic Web applications are needed to bring the Web of Data to its full potential. The solution at hand is to distribute both data and requests onto multiple computers. Apart from storage, the annotation of data with machine-processable semantics is essential for realizing the vision of the Semantic Web. Reasoning on web-scale data faces the same requirements as storage. Swarm-based approaches have been shown to produce near-optimal solutions for hard problems in a completely decentralized way. We propose a novel concept for reasoning within a fully distributed and self-organized storage system that is based on the collective behavior of swarm individuals and does not require any schema replication. We show the general feasibility and efficiency of our approach with a proof-of-concept experiment measuring storage and reasoning performance. We thereby positively answer the research question of whether swarm-based approaches are useful in creating a large-scale distributed storage and reasoning system. © 2012 IEEE
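
    The abstract does not spell out the mechanism, but a common way to realize swarm-based storage is ant-style clustering: lightweight "ants" shuttle data items between neighboring nodes and preferentially drop an item where similar items already sit, so related triples cluster without any global schema or index. The sketch below illustrates that idea only; the node count, the predicate-similarity heuristic, and the function names are assumptions, not the authors' algorithm.

```python
import random
from collections import defaultdict

NODES = 8  # small illustrative ring of storage nodes (assumed value)

def similarity(triple, node_store):
    """Fraction of triples on a node sharing this triple's predicate."""
    if not node_store:
        return 0.0
    pred = triple[1]
    return sum(1 for t in node_store if t[1] == pred) / len(node_store)

def swarm_store(triples, steps=10_000):
    """Ant-style clustering: move triples toward similar neighbors."""
    stores = defaultdict(list)
    for t in triples:                       # scatter randomly at first
        stores[random.randrange(NODES)].append(t)
    for _ in range(steps):
        src = random.randrange(NODES)
        if not stores[src]:
            continue
        t = random.choice(stores[src])
        dst = (src + random.choice((-1, 1))) % NODES  # hop to a ring neighbor
        # Drop probability grows with local similarity at the destination;
        # a small floor keeps triples from getting stuck on empty nodes.
        if random.random() < max(0.05, similarity(t, stores[dst])):
            stores[src].remove(t)
            stores[dst].append(t)
    return stores
```

    After enough steps, triples sharing a predicate tend to co-locate, which is what lets a reasoner find related facts locally instead of replicating schema information across the network.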

    Small degree BitTorrent

    It is well known that the BitTorrent file-sharing protocol is responsible for a significant portion of Internet traffic. A large amount of work has been devoted to reducing the protocol's footprint in terms of traffic volume; however, its flow-level footprint has not been studied in depth. We argue in this paper that the large number of flows a BitTorrent client maintains will not scale beyond a certain point. To address this problem, we first examine the flow structure through realistic simulations. We find that only a few TCP connections are used frequently for data transfer, while most connections are used mostly for signaling. This makes it possible to separate the data and signaling paths. Since the signaling traffic introduces little overhead, we propose transferring it on a separate, dedicated small-degree overlay, while the data traffic uses temporary TCP sockets active only during data transfer. Through simulation we show that this separation has no significant effect on the performance of the BitTorrent protocol, while drastically reducing the number of actual flows.
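
    As a rough illustration of the proposed split, the hypothetical connection manager below keeps a handful of persistent signaling links and opens data sockets only while a transfer is in flight, closing them once idle. The class name, the overlay degree of 4, and the 30-second idle timeout are illustrative assumptions, not values from the paper.

```python
import socket
import time

class PeerLinks:
    SIGNALING_DEGREE = 4       # small fixed overlay degree (assumed value)
    DATA_IDLE_TIMEOUT = 30.0   # close data sockets idle longer than this

    def __init__(self, signaling_peers):
        # Persistent, low-volume connections for HAVE/REQUEST-style messages.
        self.signaling = {p: socket.create_connection(p)
                          for p in signaling_peers[:self.SIGNALING_DEGREE]}
        self.data = {}  # peer -> (socket, last_used), opened on demand

    def send_piece(self, peer, payload):
        # Temporary data path: create the socket lazily, reuse while hot.
        sock, _ = self.data.get(peer, (None, None))
        if sock is None:
            sock = socket.create_connection(peer)
        sock.sendall(payload)
        self.data[peer] = (sock, time.monotonic())

    def reap_idle(self):
        # Tear down data sockets that have gone quiet, shrinking the
        # number of live flows without touching the signaling overlay.
        now = time.monotonic()
        for peer, (sock, last) in list(self.data.items()):
            if now - last > self.DATA_IDLE_TIMEOUT:
                sock.close()
                del self.data[peer]
```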

    Experimental comparison of neighborhood filtering strategies in unstructured P2P-TV systems

    The performance of P2P-TV systems is driven by the overlay topology that peers form. Several proposals have been made in the past to optimize it, yet few experimental studies have corroborated the results. The aim of this work is to provide a comprehensive experimental comparison of different strategies for the construction and maintenance of the overlay topology in P2P-TV systems. To this end, we have implemented different fully-distributed strategies in a P2P-TV application, called PeerStreamer, which we use to run extensive experimental campaigns in a completely controlled set-up involving thousands of peers and spanning very different networking scenarios. Results show that the topological properties of the overlay have a deep impact on both user quality of experience and network load. Strategies based solely on random peer selection are greatly outperformed by smart yet simple strategies that can be implemented with negligible overhead. Even in varied and complex scenarios, the best-performing neighborhood filtering strategy we devised delivers almost all chunks to all peers with a play-out delay as low as 6 s, even at system loads close to 1.0. The results are confirmed by experiments on PlanetLab. PeerStreamer is open source to make the results reproducible and to allow further research by the community.
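
    The abstract contrasts random peer selection with smarter neighborhood filtering. A minimal sketch of that contrast, assuming RTT as the filtering metric (the paper compares several strategies; these function names and constants are invented for illustration):

```python
import random

NEIGHBORHOOD_SIZE = 10  # illustrative target degree

def random_strategy(candidates):
    # Baseline: pick neighbors uniformly at random.
    return random.sample(candidates, min(NEIGHBORHOOD_SIZE, len(candidates)))

def filtering_strategy(candidates, rtt):
    # Smart-yet-simple variant: oversample, then keep the lowest-latency
    # peers; the only extra cost is measuring RTT to the sampled candidates.
    sample = random.sample(candidates,
                           min(3 * NEIGHBORHOOD_SIZE, len(candidates)))
    return sorted(sample, key=lambda p: rtt[p])[:NEIGHBORHOOD_SIZE]
```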

    The Cooperative Defense Overlay Network: A Collaborative Automated Threat Information Sharing Framework for a Safer Internet

    With the ever-growing proliferation of hardware- and software-based computer security exploits and the increasing power and prominence of distributed attacks, network and system administrators are often forced to make a difficult decision: expend tremendous resources on defense against sophisticated and continually evolving attacks from an increasingly dangerous Internet, with varying levels of success; or expend fewer resources on defending against common attacks on "low-hanging fruit," hoping to avoid the less common but incredibly devastating zero-day worm or botnet attack. Home networks and small organizations are usually forced to choose the latter option and in so doing are left vulnerable to all but the simplest of attacks. While automated tools exist for sharing information about network-based attacks, this sharing is typically limited to administrators of large networks and dedicated security-conscious users, to the exclusion of smaller organizations and novice home users. In this thesis we propose a framework for a cooperative defense overlay network (CODON) in which participants with varying technical abilities and resources can contribute to the security and health of the Internet via automated crowdsourcing, rapid information sharing, and the principle of collateral defense.
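
    To make the collateral-defense principle concrete, the hypothetical sketch below lets any participant report an attacking source; once reports from enough distinct reporters accumulate, every member blocks it, so an attack observed by a few protects all. The threshold, data structures, and names are assumptions, not the protocol defined in the thesis.

```python
REPORT_THRESHOLD = 3  # distinct reporters needed before blocking (assumed)

class CodonNode:
    def __init__(self):
        self.reporters = {}    # attacker IP -> set of distinct reporter IDs
        self.blocklist = set()

    def receive_report(self, attacker_ip, reporter_id):
        # Count each reporter once per attacker, so a single dishonest
        # participant cannot push a victim onto everyone's blocklist.
        seen = self.reporters.setdefault(attacker_ip, set())
        seen.add(reporter_id)
        if len(seen) >= REPORT_THRESHOLD:
            self.blocklist.add(attacker_ip)
```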