407 research outputs found

    Achieving end-to-end fairness in 802.11e based wireless multi-hop mesh networks

    To mitigate the damaging impact of interference and hidden terminals, it has been proposed to use orthogonal channels in multi-hop wireless mesh networks. We demonstrate, however, that even if these issues are completely eliminated with perfectly assigned channels, gross unfairness can still exist amongst competing flows which traverse multiple hops. We propose the use of 802.11e's TXOP mechanism to restore/enforce fairness. The proposed scheme is simple, implementable using off-the-shelf devices, and fully decentralised (requires no message passing).
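The fairness problem the abstract describes can be illustrated with a minimal numeric sketch. The model below is an assumption for illustration, not the paper's analysis: it assumes equal-sized packets and per-station fair channel access, and models the TXOP fix as each station bursting one packet per flow it carries on every channel access.

```python
def per_flow_share(flows_per_station, txop_enabled):
    """Per-flow throughput shares; one inner list per station.

    flows_per_station[i] = number of flows station i transmits
    (sources and relays alike). Illustrative model only.
    """
    stations = len(flows_per_station)
    total_flows = sum(flows_per_station)
    shares = []
    for n in flows_per_station:
        if txop_enabled:
            # Station bursts n packets per channel access, so its
            # airtime share is n/total_flows; each flow gets 1/total.
            shares.append([1 / total_flows] * n)
        else:
            # Plain per-station access: every station wins the channel
            # equally often and sends one packet, split among n flows.
            shares.append([1 / stations / n] * n)
    return shares
```

With a relay carrying 3 flows competing against a source carrying 1, plain access gives the relayed flows 1/6 each while the single-hop flow gets 1/2; with the flow-proportional burst, every flow gets 1/4.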

    Improving Performance for CSMA/CA Based Wireless Networks

    Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) based wireless networks are becoming increasingly ubiquitous. With the aim of supporting rich multimedia applications such as high-definition television (HDTV, 20 Mbps) and DVD (9.8 Mbps), one of the technology trends is towards increasingly higher bandwidth. Some recent IEEE 802.11n proposals seek to provide PHY rates of up to 600 Mbps. In addition to increasing bandwidth, there is also strong interest in extending the coverage of CSMA/CA based wireless networks. One solution is to relay traffic via multiple intermediate stations if the sender and the receiver are far apart. The so-called “mesh” networks based on this relay approach, if properly designed, may feature both “high speed” and “large coverage” at the same time. This thesis focuses on MAC layer performance enhancements in CSMA/CA based networks in this context. Firstly, we observe that higher PHY rates do not necessarily translate into corresponding increases in MAC layer throughput, due to the overhead of the CSMA/CA based MAC/PHY layers. To mitigate this overhead, we propose a novel MAC scheme whereby transmitted information is partially acknowledged and retransmitted. Theoretical analysis and extensive simulations show that the proposed MAC approach can achieve high efficiency (low MAC overhead) for a wide range of channel variations and realistic traffic types. Secondly, we investigate the close interaction between the MAC layer and the buffer above it to improve performance for real-world traffic such as TCP. Surprisingly, the issue of buffer sizing in 802.11 wireless networks has received little attention in the literature, yet it poses fundamentally new challenges compared to buffer sizing in wired networks. We propose a new adaptive buffer sizing approach for 802.11e WLANs that maintains a high level of link utilisation while minimising queueing delay.
Thirdly, we highlight that gross unfairness can exist between competing flows in multi-hop mesh networks even if orthogonal channels are used on neighbouring hops. That is, even without inter-channel interference and hidden terminals, multi-hop mesh networks fail to deliver both “high speed” and “large coverage” fairly across flows. We propose the use of 802.11e’s TXOP mechanism to restore/enforce fairness. The proposed approach is implementable using off-the-shelf devices and fully decentralised (requires no message passing).
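The adaptive buffer sizing idea can be sketched with a simple control law. This is an assumed illustration, not the thesis algorithm: it sizes the buffer to hold roughly one target drain time's worth of packets at the currently measured service rate, which bounds queueing delay while leaving enough backlog to keep the link busy. The bounds and parameter names are hypothetical.

```python
def adapt_buffer_limit(service_rate_pps, target_delay_s,
                       min_pkts=2, max_pkts=400):
    """Return a buffer limit (in packets) keeping queueing delay near
    target_delay_s at the measured service rate, within fixed bounds.

    Illustrative control law; rates and bounds are assumptions.
    """
    limit = int(service_rate_pps * target_delay_s)
    # Clamp: never starve the link, never let delay grow unbounded.
    return max(min_pkts, min(max_pkts, limit))
```

For example, at a measured service rate of 500 packets/s and a 50 ms delay target, the limit settles at 25 packets; as the wireless service rate drops, the limit shrinks with it.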

    Tailbench: a benchmark suite and evaluation methodology for latency-critical applications

    Latency-critical applications, common in datacenters, must achieve small and predictable tail (e.g., 95th or 99th percentile) latencies. Their strict performance requirements limit utilization and efficiency in current datacenters. These problems have sparked research in hardware and software techniques that target tail latency. However, research in this area is hampered by the lack of a comprehensive suite of latency-critical benchmarks. We present TailBench, a benchmark suite and evaluation methodology that makes latency-critical workloads as easy to run and characterize as conventional, throughput-oriented ones. TailBench includes eight applications that span a wide range of latency requirements and domains, and a harness that implements a robust and statistically sound load-testing methodology. The modular design of the TailBench harness facilitates multiple load-testing scenarios, ranging from multi-node configurations that capture network overheads, to simplified single-node configurations that allow measuring tail latency in simulation. Validation results show that the simplified configurations are accurate for most applications. This flexibility enables rapid prototyping of hardware and software techniques for latency-critical workloads. (Funding: National Science Foundation (U.S.) (CCF-1318384); Qatar Computing Research Institute; Google (Firm) (Google Research Award).)
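The tail metric the abstract centers on is easy to make concrete. The sketch below computes a 95th or 99th percentile latency from a list of per-request samples using the nearest-rank method; it illustrates the metric only, not TailBench's actual harness code.

```python
import math

def tail_latency(samples_ms, pct):
    """pct-th percentile latency (nearest-rank method) over a list of
    per-request latencies in milliseconds."""
    ordered = sorted(samples_ms)
    # Nearest rank: smallest index covering pct percent of samples.
    rank = max(1, math.ceil(len(ordered) * pct / 100))
    return ordered[rank - 1]
```

The point of reporting the 99th percentile rather than the mean is that a handful of slow requests, invisible in the average, dominate user-perceived latency when each user action fans out to many service requests.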

    Distributed services across the network from edge to core

    The current internet architecture is evolving from a simple carrier of bits to a platform able to provide multiple complex services running across the entire Network Service Provider (NSP) infrastructure. This calls for increased flexibility in resource management and allocation to provide dedicated, on-demand network services, leveraging a distributed infrastructure consisting of heterogeneous devices. More specifically, NSPs rely on a plethora of low-cost Customer Premise Equipment (CPE), as well as more powerful appliances at the edge of the network and in dedicated data centers. Currently, a great deal of research effort is devoted to providing this flexibility through Fog computing, Network Functions Virtualization (NFV), and data plane programmability. Fog computing, or Edge computing, extends compute and storage capabilities to the edge of the network, closer to the rapidly growing number of connected devices and applications that consume cloud services and generate massive amounts of data. A complementary technology is NFV, a network architecture concept targeting the execution of software Network Functions (NFs) in isolated Virtual Machines (VMs), potentially sharing a pool of general-purpose hosts, rather than running on dedicated hardware (i.e., appliances). Such a solution enables virtual network appliances (i.e., VMs executing network functions) to be provisioned, allocated a different amount of resources, and possibly moved across data centers in little time, which is key in ensuring that the network can keep up with the flexibility in the provisioning and deployment of virtual hosts in today’s virtualized data centers. Moreover, recent advances in networking hardware have introduced new programmable network devices that can efficiently execute complex operations at line rate. As a result, NFs can be (partially or entirely) folded into the network, speeding up the execution of distributed services. The work described in this Ph.D.
thesis aims at showing how various network services can be deployed throughout the NSP infrastructure, accommodating the different hardware capabilities of various appliances, by applying and extending the above-mentioned solutions. First, we consider a data center environment and the deployment of (virtualized) NFs. In this scenario, we introduce a novel methodology for modelling different NFs, aimed at estimating their performance on different execution platforms. Moreover, we propose to extend the traditional NFV deployment outside of the data center to leverage the entire NSP infrastructure. This can be achieved by integrating native NFs, commonly available in low-cost CPEs, with an existing NFV framework, which facilitates the provision of services that require NFs close to the end user (e.g., an IPsec terminator). On the other hand, resource-hungry virtualized NFs run in the NSP data center, where they can take advantage of the superior computing and storage capabilities. As an application, we also present a novel technique to deploy a distributed service, specifically a web filter, that leverages both the low latency of a CPE and the computational power of a data center. We then show that the core network, today dedicated solely to packet routing, can also be exploited to provide useful services. In particular, we propose a novel method to provide distributed network services in core network devices by means of task distribution and seamless coordination among the peers involved. The aim is to transform existing network nodes (e.g., routers, switches, access points) into a highly distributed data acquisition and processing platform, which will significantly reduce the storage requirements at the Network Operations Center and the packet duplication overhead. Finally, we propose to use new programmable network devices in data center networks to provide much-needed services to distributed applications.
By offloading part of the computation directly to the networking hardware, we show that it is possible to reduce both the network traffic and the overall job completion time.
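The traffic reduction from in-network offload can be illustrated with a minimal sketch. This is a hypothetical example, not the thesis implementation: a programmable switch folds the equal-length partial-result vectors of W workers into a single element-wise sum, so the server receives one message per reduction round instead of W.

```python
def aggregate_at_switch(worker_updates):
    """Combine per-worker partial-result vectors element-wise, as an
    in-network aggregator would, collapsing W server-bound messages
    into one. All vectors are assumed to have equal length."""
    return [sum(col) for col in zip(*worker_updates)]
```

With 3 workers reporting `[1, 2]`, `[3, 4]`, and `[5, 6]`, the switch forwards the single update `[9, 12]`: the server-bound traffic shrinks by a factor of W, which is where the job-completion-time savings come from when the reduction is on the critical path.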

    Deflection Routing Strategies for Optical Burst Switching Networks: Contemporary Affirmation of the Recent Literature

    Optical Burst Switched (OBS) networks, with scalable and effective routing support, are a promising option for handling bursty traffic in network communication. Routing schemes with contention resolution have attracted much interest, because the OBS network is bufferless in nature. Since deflection routing must operate with limited optical buffering, or none at all, the choice of deflection routing technique is critical. In this paper we survey and assess the recent literature on alternate (deflection) routing strategies available for OBS networks.
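The core deflection idea can be sketched in a few lines. This is an illustrative model, not any surveyed scheme: in a bufferless OBS node, a burst whose primary output port is occupied is deflected onto a free alternate port rather than dropped, and is lost only when every candidate port is busy. Port names are hypothetical.

```python
def route_burst(primary, alternates, busy_ports):
    """Choose an output port for a burst at a bufferless OBS node.

    Returns the primary port if free, otherwise the first free
    alternate (deflection), otherwise None (burst dropped).
    """
    if primary not in busy_ports:
        return primary
    for port in alternates:
        if port not in busy_ports:
            return port  # deflect instead of dropping
    return None  # contention on every port: burst lost
```

The trade-off the literature studies follows directly: deflection lowers the loss rate without optical buffers, at the cost of longer paths and possible out-of-order burst arrival.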