
    CLOSER: A Collaborative Locality-aware Overlay SERvice

    Current Peer-to-Peer (P2P) file sharing systems consume a considerable share of Internet Service Providers' (ISPs) bandwidth. This paper presents the Collaborative Locality-aware Overlay SERvice (CLOSER), an architecture that aims to reduce the use of expensive international links by exploiting traffic locality (i.e., a resource is downloaded from inside the ISP whenever possible). The paper demonstrates the effectiveness of CLOSER through analysis and simulation, and compares this architecture with existing solutions for traffic locality in P2P systems. While savings on international links are attractive for ISPs, it is also necessary to offer features that appeal to users in order to favor wide adoption of the application. For this reason, CLOSER also introduces a privacy module that may arouse users' interest and encourage them to switch to the new architecture.
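    The locality principle described above can be illustrated with a short, hypothetical peer-selection sketch: candidate peers announcing the same autonomous system (AS) as the local ISP are preferred, and peers outside the ISP are used only to fill remaining slots. All names and fields below are assumptions chosen for illustration; they do not come from the CLOSER implementation.

```python
# Minimal sketch of locality-aware peer selection (illustrative, not CLOSER's code).
from dataclasses import dataclass
from typing import List

@dataclass
class Peer:
    address: str
    asn: int          # autonomous system number announced by the peer
    upload_kbps: int  # advertised upload capacity

def select_peers(candidates: List[Peer], local_asn: int, limit: int) -> List[Peer]:
    """Return up to `limit` peers, preferring those inside the local ISP."""
    local = [p for p in candidates if p.asn == local_asn]
    remote = [p for p in candidates if p.asn != local_asn]
    # Within each group, favour peers with higher advertised upload capacity.
    local.sort(key=lambda p: p.upload_kbps, reverse=True)
    remote.sort(key=lambda p: p.upload_kbps, reverse=True)
    # Remote peers are used only if the local supply is insufficient.
    return (local + remote)[:limit]

# Example: with local_asn=65001, intra-ISP peers are chosen first.
peers = [Peer("10.0.0.1", 65001, 800), Peer("192.0.2.7", 64999, 2000),
         Peer("10.0.0.9", 65001, 300)]
print([p.address for p in select_peers(peers, local_asn=65001, limit=2)])
```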

    Content-Centric Networking at Internet Scale through The Integration of Name Resolution and Routing

    We introduce CCN-RAMP (Routing to Anchors Matching Prefixes), a new approach to content-centric networking. CCN-RAMP offers all the advantages of Named Data Networking (NDN) and Content-Centric Networking (CCNx), but eliminates the need to either use Pending Interest Tables (PITs) or look up large Forwarding Information Bases (FIBs) listing name prefixes in order to forward Interests. CCN-RAMP uses small forwarding tables listing anonymous sources of Interests and the locations of name prefixes. Such tables are immune to Interest-flooding attacks and are smaller than the FIBs used to list IP address ranges in the Internet. We show that no forwarding loops can occur with CCN-RAMP, and that Interests flow over the same routes that NDN and CCNx would maintain using large FIBs. Simulation experiments comparing NDN with CCN-RAMP in ndnSIM show that CCN-RAMP requires forwarding state that is orders of magnitude smaller than what NDN requires, and attains even better performance.
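    The core idea of forwarding an Interest toward the anchor that advertises the longest matching name prefix, instead of keeping a full per-prefix FIB at every router, can be sketched roughly as follows. The table layout, router identifiers and function names are assumptions for illustration only; they are not the CCN-RAMP specification.

```python
# Illustrative sketch: name -> anchor -> next hop, using small tables.
from typing import Optional

# Small table: name prefix -> anchor router advertising that prefix.
anchor_by_prefix = {
    "/video": "R7",
    "/video/news": "R3",
}
# Next hop toward each anchor, as given by the ordinary unicast routing table.
next_hop_by_anchor = {"R3": "ifaceA", "R7": "ifaceB"}

def longest_prefix_anchor(name: str) -> Optional[str]:
    """Return the anchor for the longest name prefix matching `name`."""
    parts = name.strip("/").split("/")
    while parts:
        prefix = "/" + "/".join(parts)
        if prefix in anchor_by_prefix:
            return anchor_by_prefix[prefix]
        parts.pop()
    return None

def forward_interest(name: str) -> Optional[str]:
    """Map an Interest name to an outgoing interface via its anchor."""
    anchor = longest_prefix_anchor(name)
    return next_hop_by_anchor.get(anchor) if anchor else None

print(forward_interest("/video/news/today"))  # -> ifaceA (anchor R3)
print(forward_interest("/video/music"))       # -> ifaceB (anchor R7)
```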

    Enhancing P2P File-Sharing with an Internet-Scale Query Processor


    Confidential Data-Outsourcing and Self-Optimizing P2P-Networks: Coping with the Challenges of Multi-Party Systems

    This work addresses the inherent lack of control and trust in Multi-Party Systems using two examples: the Database-as-a-Service (DaaS) scenario and public Distributed Hash Tables (DHTs). In the DaaS setting, it is shown how confidential information in a database can be protected while still allowing the external storage provider to process incoming queries. For public DHTs, it is shown how these highly dynamic systems can be managed by facilitating monitoring, simulation, and self-adaptation.
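    One classic building block for letting an untrusted storage provider answer queries over protected data, shown below as a minimal sketch, is to replace sensitive values with deterministic keyed tokens so that equality queries can be evaluated server-side without revealing plaintext. This is only an illustrative technique under that assumption, not necessarily the scheme developed in this work; key, column and function names are hypothetical.

```python
# Sketch: deterministic keyed tokens enabling server-side equality queries.
import hmac
import hashlib

SECRET_KEY = b"client-only-key"   # never leaves the client

def tokenize(value: str) -> str:
    """Deterministic keyed token: equal plaintexts yield equal tokens."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

# Client side: outsource rows with sensitive columns tokenized.
outsourced = [{"id": 1, "diagnosis": tokenize("flu")},
              {"id": 2, "diagnosis": tokenize("asthma")}]

# Provider side: process an equality query over tokens only.
def query(rows, column, token):
    return [r["id"] for r in rows if r[column] == token]

# The client issues a query by tokenizing the search value locally.
print(query(outsourced, "diagnosis", tokenize("flu")))   # -> [1]
```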

    JANUS: A Framework for Distributed Management of Wireless Mesh Networks

    Wireless Mesh Networks (WMNs) are emerging as a potentially attractive access architecture for metropolitan-scale networks. While research on WMNs has to a large extent been confined to the study of efficient routing protocols, there is a clear need to envision new network management tools able to exploit the peculiarities of WMNs. In particular, a new generation of middleware tools for network monitoring and profiling must be introduced in order to speed up the development and testing of novel protocol architectures. Currently, management functionalities are developed using conventional centralized approaches. The distributed and self-organizing nature of WMNs suggests a transition from network monitoring to network sensing. In this work, we propose JANUS, a novel framework for distributed monitoring of WMNs. We describe the JANUS architecture, present a possible implementation based on open-source software, and report experimental measurements carried out on a small-scale testbed. Index Terms: wireless mesh networks, network management, distributed hash table, overlay networks, publish-subscribe systems.
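    The general pattern named in the index terms, publish-subscribe on top of a DHT, can be sketched for mesh monitoring as follows: each measurement topic is hashed onto a responsible node, which stores reports and pushes them to subscribers. All identifiers here are hypothetical; this is not the JANUS API.

```python
# Sketch of pub-sub over a DHT-style topic-to-node mapping (illustrative only).
import hashlib
from collections import defaultdict

NODES = ["mesh-gw-1", "mesh-gw-2", "mesh-gw-3"]   # participating mesh routers
subscriptions = defaultdict(list)                 # topic -> subscriber callbacks
store = defaultdict(list)                         # (owner node, topic) -> reports

def responsible_node(topic: str) -> str:
    """Consistent-hash the topic onto one of the participating nodes."""
    h = int(hashlib.sha1(topic.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

def publish(topic: str, report: dict) -> None:
    owner = responsible_node(topic)
    store[(owner, topic)].append(report)          # stored at the responsible node
    for callback in subscriptions[topic]:         # pushed to subscribers
        callback(report)

def subscribe(topic: str, callback) -> None:
    subscriptions[topic].append(callback)

# A manager subscribes to link-quality reports; a mesh node publishes one.
subscribe("link-quality", lambda r: print("alert:", r))
publish("link-quality", {"node": "mesh-ap-17", "etx": 3.8})
```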

    The End of the Canonical IoT Botnet: A Measurement Study of Mirai's Descendants

    Since the burgeoning days of IoT, Mirai has been established as the canonical IoT botnet. Not long after the public release of its code, researchers found that many Mirai variants compete with one another for many of the same vulnerable hosts. Over time, the myriad Mirai variants evolved to incorporate unique vulnerabilities, defenses, and regional concentrations. In this paper, we ask: have Mirai variants evolved to the point that they are fundamentally distinct? We answer this question by measuring two of the most popular Mirai descendants: Hajime and Mozi. To actively scan both botnets simultaneously, we developed a robust measurement infrastructure, BMS, and ran it for more than eight months. The resulting datasets show that these two popular botnets have diverged from their common ancestor in multiple ways: they have virtually no overlapping IP addresses, they respond differently to network events such as diurnal rate limiting in China, and more. Collectively, our results show that there is no longer one canonical IoT botnet. We discuss the implications of this finding for researchers and practitioners.
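    The kind of divergence claim made above, "virtually no overlapping IP addresses", can be quantified with a simple set comparison over the hosts observed for each botnet. The sample addresses below are invented placeholders, not measurement data from this study.

```python
# Sketch: measuring population overlap between two observed botnets.
hajime_ips = {"198.51.100.4", "203.0.113.9", "192.0.2.55"}
mozi_ips   = {"203.0.113.77", "198.51.100.200", "192.0.2.12"}

shared = hajime_ips & mozi_ips                       # hosts seen in both botnets
jaccard = len(shared) / len(hajime_ips | mozi_ips)   # overlap relative to union

print(f"shared addresses: {len(shared)}, Jaccard overlap: {jaccard:.3f}")
```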

    Using current uptime to improve failure detection in peer-to-peer networks

    Peer-to-Peer (P2P) networks share computer resources or services through the exchange of information between participating nodes. These nodes form a virtual network overlay by creating a number of connections with one another. Due to the transient nature of nodes within these systems, any connection formed should be monitored and maintained to ensure the routing table is kept up to date. Typically, P2P networks predefine a fixed keep-alive period: a maximum interval within which connected nodes must exchange messages. If no other message has been sent within this interval, keep-alive messages are exchanged to ensure the corresponding node has not left the system. A fixed periodic interval can be viewed as a centralised, static and deterministic mechanism, maintaining overlays in a predictable, reliable and non-adaptive fashion. Several studies have shown that older peers are more likely to remain in the network longer than their short-lived counterparts. Therefore, using the distribution of peer session times and the current age of peers as key attributes, we propose three algorithms which allow connections to extend the interval between successive keep-alive messages based upon the likelihood that a corresponding node will remain in the system. By prioritising keep-alive messages to nodes that are more likely to fail, our algorithms reduce the expected delay between failures occurring and their subsequent detection. Using extensive empirical analysis, we analyse the properties of these algorithms and compare them to the standard periodic approach in unstructured and structured network topologies, using trace-driven simulations based upon measured network data. Furthermore, we also investigate the effect of nodes that misreport their age upon our adaptive algorithms and detail an efficient keep-alive algorithm that can adapt to the limitations of network address translation devices.
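    The adaptive idea described above, stretching the keep-alive interval for peers whose current uptime suggests they are likely to stay, can be sketched as follows. The Pareto session-time model, the failure-probability threshold and all parameter values are assumptions made for illustration; they are not the paper's three algorithms.

```python
# Sketch: uptime-dependent keep-alive interval under an assumed Pareto session model.

def survival_beyond(age: float, extra: float, alpha: float = 1.1,
                    x_min: float = 60.0) -> float:
    """P(session > age + extra | session > age) under a Pareto(alpha, x_min) model."""
    age = max(age, x_min)
    return (age / (age + extra)) ** alpha

def keepalive_interval(age: float, base: float = 30.0, max_interval: float = 900.0,
                       max_failure_prob: float = 0.05) -> float:
    """Longest interval (up to max_interval) whose expected failure probability stays small."""
    interval = base
    while interval * 2 <= max_interval and \
            1.0 - survival_beyond(age, interval * 2) <= max_failure_prob:
        interval *= 2
    return interval

# Older peers earn longer gaps between keep-alive probes than newly joined ones.
for uptime in (120, 3600, 86400):   # seconds of observed uptime
    print(uptime, "->", keepalive_interval(uptime), "s between keep-alives")
```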

    Tracking and Mitigation of Malicious Remote Control Networks

    Attacks against end-users are one of the negative side effects of today’s networks. The goal of the attacker is to compromise the victim’s machine and obtain control over it. This machine is then used to carry out denial-of-service attacks, to send out spam mails, or for other nefarious purposes. From an attacker’s point of view, this kind of attack is even more efficient if she manages to compromise a large number of machines in parallel. In order to control all these machines, she establishes a "malicious remote control network", i.e., a mechanism that gives an attacker control over a large number of compromised machines for illicit activities. The most common type of these networks observed so far are so-called "botnets". Since these networks are one of the main factors behind current abuses on the Internet, we need to find novel approaches to stop them in an automated and efficient way. In this thesis we focus on this open problem and propose a general root cause methodology to stop malicious remote control networks. The basic idea of our method consists of three steps. In the first step, we use "honeypots" to collect information. A honeypot is an information system resource whose value lies in unauthorized or illicit use of that resource. This technique enables us to study current attacks on the Internet and, for example, to capture samples of autonomous spreading malware ("malicious software") in an automated way. We analyze the collected data to extract information about the remote control mechanism in an automated fashion. For example, we utilize an automated binary analysis tool to find the Command & Control (C&C) server that is used to send commands to the infected machines. In the second step, we use the extracted information to infiltrate the malicious remote control networks. This can, for example, be implemented by impersonating a bot and infiltrating the remote control channel. Finally, in the third step we use the information collected during the infiltration phase to mitigate the network, e.g., by shutting down the remote control channel so that the attacker can no longer send commands to the compromised machines. In this thesis we show the practical feasibility of this method. We examine different kinds of malicious remote control networks and discuss how we can track all of them in an automated way. As a first example, we study botnets that use a central C&C server: we illustrate how the three steps can be implemented in practice and present empirical measurement results obtained on the Internet. Second, we investigate botnets that use a peer-to-peer based communication channel. Mitigating these botnets is harder since no central C&C server exists that could be taken offline. Nevertheless, our methodology can also be applied to this kind of network, and we present empirical measurement results substantiating our method. Third, we study fast-flux service networks. The idea behind these networks is that the attacker does not directly abuse the compromised machines, but uses them to establish a proxy network on top of these machines to enable a robust hosting infrastructure. Our method can be applied to this novel kind of malicious remote control network, and we present empirical results supporting this claim. We anticipate that the methodology proposed in this thesis can also be used to track and mitigate other kinds of malicious remote control networks.