
    The Two-Step P2P Simulation Approach

    In this article a framework is introduced that can be used to analyse the effects and requirements of P2P applications on both the application and the network layer. Because P2P applications are complex and deployed on a large scale, pure packet-level simulations do not scale well enough to analyse P2P applications in a large network with thousands of peers, and it is difficult to assess the effect of application-level behaviour on the communication system. We therefore propose an approach that starts with a more abstract, and therefore scalable, application-level simulation; for the application layer, a specific simulation framework was developed. In a second step, the results of the application-layer simulations, plus some estimated background traffic, are fed into a packet-level simulator such as NS2 (or our lab testbed) to perform detailed packet-level analyses such as loss and delay measurements. This can be done for a subnetwork of the original network to avoid scalability problems.
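    A hedged sketch of the second step this abstract describes: translating abstract application-level simulation output into an ns-2 OTcl scenario for packet-level loss and delay analysis. The flow-record layout, link parameters, and node naming below are illustrative assumptions, not the paper's actual interface.

    ```python
    # Convert flow records from a (hypothetical) application-level simulator
    # into an ns-2 Tcl scenario that can be run for packet-level analysis.
    flows = [
        # (src_peer, dst_peer, start_s, stop_s) -- assumed record format
        (0, 3, 1.0, 20.0),
        (2, 1, 1.5, 12.0),
    ]

    def to_ns2_tcl(flows, out="scenario.tcl"):
        lines = ["set ns [new Simulator]"]
        peers = sorted({p for f in flows for p in f[:2]})
        for p in peers:
            lines.append(f"set n{p} [$ns node]")
        links = {tuple(sorted(f[:2])) for f in flows}
        for a, b in sorted(links):
            # Assumed capacities; real runs would use the measured topology.
            lines.append(f"$ns duplex-link $n{a} $n{b} 10Mb 20ms DropTail")
        for i, (src, dst, start, stop) in enumerate(flows):
            lines += [
                f"set tcp{i} [new Agent/TCP]",
                f"set sink{i} [new Agent/TCPSink]",
                f"$ns attach-agent $n{src} $tcp{i}",
                f"$ns attach-agent $n{dst} $sink{i}",
                f"$ns connect $tcp{i} $sink{i}",
                f"set ftp{i} [new Application/FTP]",
                f"$ftp{i} attach-agent $tcp{i}",
                f'$ns at {start} "$ftp{i} start"',
                f'$ns at {stop} "$ftp{i} stop"',
            ]
        lines += ['$ns at 30.0 "exit 0"', "$ns run"]
        with open(out, "w") as fh:
            fh.write("\n".join(lines))

    to_ns2_tcl(flows)
    ```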

    Using GENI for experimental evaluation of Software Defined Networking in smart grids

    The North American Electric Reliability Corporation (NERC) envisions a smart grid that aggressively explores advanced communication network solutions to facilitate real-time monitoring and dynamic control of the bulk electric power system. At the distribution level, the smart grid integrates renewable generation and energy storage mechanisms to improve the reliability of the grid. Furthermore, dynamic pricing and demand management provide customers an avenue to interact with the power system and determine the electricity usage that best satisfies their lifestyle. At the transmission level, efficient communication and a highly automated architecture provide visibility into the power system; as a result, faults are mitigated faster than they can propagate. However, such higher levels of reliability and efficiency rest on the supporting communication infrastructure. To date, utility companies are moving towards Multiprotocol Label Switching (MPLS) because it supports traffic engineering and virtual private networks (VPNs). Furthermore, it provides Quality of Service (QoS) guarantees and fail-over mechanisms, in addition to meeting the requirement of non-routability stipulated by NERC. However, these benefits come at a cost for the infrastructure that supports the full MPLS specification. With this realization, and given a two-week implementation and deployment window in GENI, we explore the modularity and flexibility provided by the low-cost OpenFlow Software Defined Networking (SDN) solution. In particular, we use OpenFlow to provide (1) automatic fail-over mechanisms, (2) load balancing, and (3) Quality of Service guarantees: all essential mechanisms for smart grid networks.
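    A minimal sketch, not the authors' GENI code, of the first listed mechanism: automatic fail-over via an OpenFlow 1.3 fast-failover group, written against the Ryu controller framework. The port numbers, group id, and match fields are illustrative assumptions.

    ```python
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class FailoverApp(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_features(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Bucket order encodes preference: use port 1 while it is live,
            # fall back to port 2 automatically when port 1 goes down. The
            # switch handles the switchover without contacting the controller.
            buckets = [
                parser.OFPBucket(watch_port=1,
                                 actions=[parser.OFPActionOutput(1)]),
                parser.OFPBucket(watch_port=2,
                                 actions=[parser.OFPActionOutput(2)]),
            ]
            dp.send_msg(parser.OFPGroupMod(datapath=dp, command=ofp.OFPGC_ADD,
                                           type_=ofp.OFPGT_FF, group_id=1,
                                           buckets=buckets))
            # Steer matching traffic through the fast-failover group.
            match = parser.OFPMatch(in_port=3)
            actions = [parser.OFPActionGroup(group_id=1)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))
    ```

    The same group-table machinery can be repurposed for load balancing by using a select-type group (OFPGT_SELECT) with weighted buckets instead of the fast-failover type shown here.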

    Doctor of Philosophy

    Network emulation has become an indispensable tool for the conduct of research in networking and distributed systems. It offers more realism than simulation and more control and repeatability than experimentation on a live network. However, emulation testbeds face a number of challenges, most prominently realism and scale. Because emulation allows the creation of arbitrary networks exhibiting a wide range of conditions, there is no guarantee that emulated topologies reflect real networks; the burden of selecting parameters to create a realistic environment is on the experimenter. While there are a number of techniques for measuring the end-to-end properties of real networks, directly importing such properties into an emulation has been a challenge. Similarly, while there exist numerous models for creating realistic network topologies, the lack of addresses on these generated topologies has been a barrier to using them in emulators. Once an experimenter obtains a suitable topology, that topology must be mapped onto the physical resources of the testbed so that it can be instantiated. A number of restrictions make this an interesting problem: testbeds typically have heterogeneous hardware, scarce resources that must be conserved, and bottlenecks that must not be overused. User requests for particular types of nodes or links must also be met. In light of these constraints, the network testbed mapping problem is NP-hard. Though the complexity of the problem increases rapidly with the size of the experimenter's topology and the size of the physical network, the runtime of the mapper must not; long mapping times can hinder the usability of the testbed. This dissertation makes three contributions towards improving realism and scale in emulation testbeds. First, it meets the need for realistic network conditions by creating Flexlab, a hybrid environment that couples an emulation testbed with a live-network testbed, inheriting strengths from each. Second, it attends to the need for realistic topologies by presenting a set of algorithms for automatically annotating generated topologies with realistic IP addresses. Third, it presents a mapper, assign, that is capable of assigning experimenters' requested topologies to testbeds' physical resources in a manner that scales well enough to handle large environments.
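    A toy sketch of the testbed mapping problem the dissertation's assign solves: place each virtual node on a physical node without exceeding node capacity. The published assign uses a simulated-annealing search and also handles link bandwidth, node types, and features; the greedy first-fit below is an illustration of the problem shape only, with made-up node names.

    ```python
    def greedy_map(virtual_nodes, physical_capacity):
        """virtual_nodes: {vnode: slots_needed}; physical_capacity: {pnode: slots}."""
        remaining = dict(physical_capacity)
        placement = {}
        # Place the largest requests first to reduce fragmentation.
        for v, need in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
            for p, free in remaining.items():
                if free >= need:
                    placement[v] = p
                    remaining[p] = free - need
                    break
            else:
                raise RuntimeError(f"cannot place {v}: resources exhausted")
        return placement

    # e.g. three virtual nodes onto two physical hosts with 4 and 3 slots
    print(greedy_map({"a": 2, "b": 1, "c": 3}, {"pc1": 4, "pc2": 3}))
    ```

    Greedy placement can fail or produce poor mappings where an annealing search would succeed, which is one reason the dissertation treats mapper quality and runtime as first-class concerns.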

    Serving Embedded Content via Web Applications: Model, Design and Experimentation

    Embedded systems such as smart cards or sensors are now widespread, but are often closed systems, accessed only via dedicated terminals. A new trend consists in embedding Web servers in small devices, making both access and application development easier. In this paper, we propose a TCP performance model in the context of embedded Web servers, and we introduce a taxonomy of the contents possibly served by Web applications. The main idea of this paper is to adapt the communication stack's behaviour to the properties of the application contents. We propose a set of strategies fitting each type of content, and the model allows us to evaluate the benefits of our strategies in terms of time and memory footprint. By implementing a real use case on a smart card, we measure the benefits of our proposals and validate our model. Our prototype, called Smews, clearly outperforms state-of-the-art solutions in terms of both performance and memory footprint.
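    A hedged sketch of one strategy such a content taxonomy enables: for fully static content, per-segment TCP payload checksums can be computed offline at build time instead of at serving time, saving cycles on the constrained device. The segment size and helper names are assumptions for illustration, not Smews internals.

    ```python
    MSS = 1460  # assumed maximum segment size

    def internet_checksum(data: bytes) -> int:
        """Ones'-complement sum over 16-bit words (RFC 1071 style),
        returned uncomplemented so header words can still be folded in."""
        if len(data) % 2:
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold carries
        return total

    def precompute_segment_sums(content: bytes):
        """Partial payload checksums for each MSS-sized segment of a static
        file; at run time the server only adds the pseudo-header and TCP
        header words and complements the result."""
        return [internet_checksum(content[off:off + MSS])
                for off in range(0, len(content), MSS)]

    sums = precompute_segment_sums(b"<html><body>hello</body></html>")
    ```

    Dynamic content, by contrast, cannot be pre-checksummed, which is exactly why a taxonomy of content types is needed before choosing a per-content strategy.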

    Enabling stream processing for people-centric IoT based on the fog computing paradigm

    The world of machine-to-machine (M2M) communication is gradually moving from vertical, single-purpose solutions to multi-purpose, collaborative applications interacting across industry verticals, organizations, and people: a world of the Internet of Things (IoT). The dominant approach for delivering IoT applications relies on cloud-based IoT platforms that collect all the data generated by the sensing elements and centrally process the information to create real business value. In this paper, we present a system that follows the Fog Computing paradigm, in which the sensor resources, as well as the intermediate layers between embedded devices and cloud computing datacenters, participate by providing computational, storage, and control capabilities. We discuss the design aspects of our system and present a pilot deployment for evaluating its performance in a real-world environment. Our findings indicate that Fog Computing can address the ever-increasing amount of data inherent in an IoT world through effective communication among all elements of the architecture.
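    A minimal sketch of the fog idea described above: an intermediate node aggregates raw sensor readings over a time window and forwards only the summary to the cloud, cutting upstream data volume. The window size, record format, and publish_to_cloud target are illustrative assumptions, not this paper's system.

    ```python
    import statistics

    WINDOW_S = 10  # assumed aggregation window

    def publish_to_cloud(summary):      # placeholder for an MQTT/HTTP uplink
        print("uplink:", summary)

    def fog_aggregate(readings):
        """readings: iterable of (timestamp_s, value) from local sensors."""
        window, window_start = [], None
        for ts, value in readings:
            if window_start is None:
                window_start = ts
            window.append(value)
            if ts - window_start >= WINDOW_S:
                publish_to_cloud({"n": len(window),
                                  "mean": statistics.fmean(window),
                                  "max": max(window)})
                window, window_start = [], None

    # e.g. 25 one-second readings -> two 10 s summaries forwarded upstream
    fog_aggregate((t, 20.0 + 0.1 * t) for t in range(25))
    ```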

    Leveraging Conventional Internet Routing Protocol Behavior to Defeat DDoS and Adverse Networking Conditions

    The Internet is a cornerstone of modern society. Yet increasingly devastating attacks against the Internet threaten to undermine the Internet's success at connecting the unconnected. Of all the adversarial campaigns waged against the Internet and the organizations that rely on it, distributed denial of service, or DDoS, tops the list of the most volatile attacks. In recent years, DDoS attacks have been responsible for large swaths of the Internet blacking out, while other attacks have completely overwhelmed key Internet services and websites. Core to the Internet's functionality is the way in which traffic on the Internet gets from one destination to another. The set of rules, or protocol, that defines the way traffic travels the Internet is known as the Border Gateway Protocol, or BGP, the de facto routing protocol on the Internet. Advanced adversaries often target the most used portions of the Internet by flooding the routes benign traffic takes with malicious traffic designed to cause widespread traffic loss to targeted end users and regions. This dissertation focuses on examining the following thesis statement: rather than seek to redefine the way the Internet works to combat advanced DDoS attacks, we can leverage conventional Internet routing behavior to mitigate modern distributed denial of service attacks. The research in this work breaks down into a single arc with three independent but connected thrusts, which demonstrate that the aforementioned thesis is possible, practical, and useful. The first thrust demonstrates that this thesis is possible by building and evaluating Nyx, a system that can protect Internet networks from DDoS using BGP, without an Internet redesign and without cooperation from other networks. This work reveals that Nyx is effective in simulation for protecting Internet networks and end users from the impact of devastating DDoS. The second thrust examines the real-world practicality of Nyx, as well as other systems which rely on real-world BGP behavior. Through a comprehensive set of real-world Internet routing experiments, this second thrust confirms that Nyx works effectively in practice beyond simulation, as well as revealing novel insights about the effectiveness of other Internet security defensive and offensive systems. We then follow these experiments by re-evaluating Nyx under the real-world routing constraints we discovered. The third thrust explores the usefulness of Nyx for mitigating DDoS against a crucial industry sector, power generation, by exposing the latent vulnerability of the U.S. power grid to DDoS and how a system such as Nyx can protect electric power utilities. This final thrust finds that the current set of exposed U.S. power facilities are widely vulnerable to DDoS that could induce blackouts, and that Nyx can be leveraged to reduce the impact of these targeted DDoS attacks.
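    A toy sketch of the routing idea behind Nyx: given an AS-level graph and a link congested by DDoS, find an alternate path from the deploying network to a critical peer that avoids the congested link. Nyx actually steers inbound traffic with BGP mechanisms such as selective advertisement and path poisoning; the tiny graph and networkx usage here are illustrative assumptions only.

    ```python
    import networkx as nx

    # Hypothetical AS-level topology around the deploying network.
    g = nx.Graph()
    g.add_edges_from([("deployer", "as1"), ("as1", "as2"),
                      ("as2", "critical"), ("as1", "as3"),
                      ("as3", "critical")])

    congested = ("as2", "critical")     # link under DDoS-induced congestion

    # Remove the congested link and search for a surviving path; in Nyx this
    # corresponds to coaxing BGP into selecting a route around the attack.
    alt = g.copy()
    alt.remove_edge(*congested)
    path = nx.shortest_path(alt, "deployer", "critical")
    print("steer traffic along:", path)  # ['deployer', 'as1', 'as3', 'critical']
    ```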

    Foundations of Infrastructure CPS

    Infrastructures have been around as long as urban centers, supporting a society's needs for planning, operation, and safety. As we move deeper into the 21st century, these infrastructures are becoming smart: they monitor themselves, communicate, and, most importantly, self-govern, which we denote as Infrastructure CPS. Cyber-physical systems are now becoming increasingly prevalent and possibly even mainstream. With the basics of CPS in place, such as stability, robustness, and reliability properties at a systems level, and hybrid, switched, and event-triggered properties at a network level, we believe that the time is right to go to the next step, Infrastructure CPS, which forms the focus of the proposed tutorial. We discuss three different foundations: (i) Human Empowerment, (ii) Transactive Control, and (iii) Resilience. This is followed by two examples, one on the nexus between the power and communication infrastructures, and the other between natural gas and electricity, both of which have been investigated extensively of late and are emerging as apt illustrations of Infrastructure CPS.
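    A hedged sketch of the transactive-control foundation mentioned above: a coordinator adjusts a price signal until price-responsive demand matches available supply, a simple dual-ascent loop. The linear demand model, gain, and constants are illustrative assumptions, not the tutorial's formulation.

    ```python
    SUPPLY = 90.0        # available generation (kW), assumed
    STEP = 0.01          # price update gain, assumed

    def demand(price):   # assumed linear price-responsive aggregate demand
        return max(0.0, 120.0 - 8.0 * price)

    price = 1.0
    for _ in range(200):
        imbalance = demand(price) - SUPPLY
        price += STEP * imbalance    # raise price when demand exceeds supply
        if abs(imbalance) < 0.1:
            break
    print(f"clearing price {price:.2f}, demand {demand(price):.1f} kW")
    ```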