
    Energy Efficiency of P2P and Distributed Clouds Networks

    Since its inception, the Internet has witnessed two major approaches to communicating digital content to end users: peer-to-peer (P2P) and client/server (C/S) networks. Both approaches require high-bandwidth, low-latency physical underlying networks to meet users' escalating demands. Network operators typically have to overprovision their systems to guarantee acceptable quality of service (QoS) and availability while delivering content; however, more physical devices have led to higher ICT power consumption over the years. An effective approach to confronting these challenges is to jointly optimise the energy consumption of content providers and transport networks. This thesis proposes a number of energy-efficient mechanisms to optimise BitTorrent-based P2P networks and cloud-based C/S content distribution over IP/WDM core optical networks.

    For P2P systems, a mixed integer linear programming (MILP) optimisation, two heuristics and an experimental testbed are developed to minimise the power consumption of IP/WDM networks that deliver traffic generated by an overlay layer of homogeneous BitTorrent users. The approach optimises peer selection with the goal of minimising IP/WDM network power consumption while maximising peers' download rates. The results are compared to typical C/S systems. We also considered heterogeneous BitTorrent peers and developed models that optimise P2P systems to compensate for the different behaviour of peers after they finish downloading. We investigated the impact of the core network physical topology on the energy efficiency of BitTorrent systems. We also investigated the power consumption of Video on Demand (VoD) services using CDN, P2P and hybrid CDN-P2P architectures over IP/WDM networks, and addressed content providers' efforts to balance the load among their data centres.

    For cloud systems, a MILP and a heuristic were developed to minimise the content-delivery-induced power consumption of both the clouds and the IP/WDM network. This was done by optimally determining the number, location and internal capability, in terms of servers, LAN and storage, of each cloud, subject to daily traffic variation. Different replication schemes were studied, revealing that replicating content into multiple clouds based on content popularity is the optimal approach with respect to energy. The model was extended to study Storage as a Service (StaaS). We also studied the problem of virtual machine placement in IP/WDM networks and showed that VM slicing is the best approach, compared to migration and replication schemes, for minimising energy. Finally, we investigated the utilisation of renewable energy sources, represented by solar cells and wind farms, in BitTorrent networks and content delivery clouds, respectively. Comprehensive modelling and simulation as well as experimental demonstrations were developed, leading to key contributions in the field of energy-efficient telecommunications.
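
    As a rough illustration of the kind of optimisation involved, the sketch below solves the LP relaxation of a toy energy-aware peer-selection problem with PuLP. The topology, power figures and rates are invented for illustration and do not come from the thesis.

    import pulp  # PuLP: open-source LP/MILP modeller (solved with the bundled CBC)

    # Toy instance: two peers download from two seeders; all figures invented.
    peers = ["p1", "p2"]
    seeders = ["s1", "s2"]
    power_cost = {"s1": 10.0, "s2": 25.0}  # W per unit of traffic from each seeder
    rate = {"s1": 4.0, "s2": 8.0}          # upload rate each seeder offers (Mb/s)
    demand = 6.0                           # target download rate per peer (Mb/s)

    prob = pulp.LpProblem("energy_aware_peer_selection", pulp.LpMinimize)
    # x[p][s]: fraction of seeder s's upload rate assigned to peer p
    x = pulp.LpVariable.dicts("x", (peers, seeders), lowBound=0, upBound=1)

    # Objective: minimise the network power induced by the chosen assignments
    prob += pulp.lpSum(power_cost[s] * rate[s] * x[p][s]
                       for p in peers for s in seeders)

    for p in peers:    # each peer must reach its target download rate
        prob += pulp.lpSum(rate[s] * x[p][s] for s in seeders) >= demand
    for s in seeders:  # a seeder cannot hand out more than its upload rate
        prob += pulp.lpSum(x[p][s] for p in peers) <= 1

    prob.solve()
    for p in peers:
        print(p, {s: x[p][s].value() for s in seeders})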

    Green Vehicular Content Distribution Network

    With environmental awareness becoming a global concern, content distribution in the modern city raises obvious concerns about ICT power consumption. The business world demands huge amounts of information exchange for advertising and connectivity, an integral part of a smart city. In this thesis, a number of energy-saving and performance-improvement techniques are proposed for the content delivery scenario: content cache location optimisation techniques that save energy, and transceiver load-adaptive techniques that save energy while maintaining acceptable piece delay. Following recent advances in fog computing, nano-servers are introduced in the later part of the thesis for content delivery and the processing of user demands. Two techniques, random sleep cycles and rate adaptation, are proposed to save transmission energy. The quality of service, in terms of piece delay and dropping probability, is optimised by deploying renewable and non-renewable energy powered nano-servers (NS). Finally, mixed integer linear programming (MILP) models were developed alongside other optimisation methods, such as bisection, greedy and genetic algorithms, which judiciously distribute renewable energy to the fog servers in order to minimise piece delay and dropping probability in heavily loaded regions of the city.
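
    To make the sleep-cycle idea concrete, the following toy simulation contrasts transmission energy against piece delay for a transceiver that sleeps at random. All parameters and the queueing model are illustrative assumptions, not values or methods from the thesis.

    import random

    # Illustrative parameters only; not taken from the thesis.
    P_ACTIVE, P_SLEEP = 2.0, 0.1  # transceiver power draw (W)
    SLEEP_PROB = 0.3              # probability of sleeping in a given slot
    SLOT = 0.1                    # slot length (s)

    def simulate(slots, arrival_prob=0.5):
        """Random sleep cycles: queue pieces while asleep, serve when awake."""
        energy = total_delay = 0.0
        queued = served = 0
        for _ in range(slots):
            if random.random() < arrival_prob:
                queued += 1                   # a piece request arrives
            asleep = random.random() < SLEEP_PROB
            energy += (P_SLEEP if asleep else P_ACTIVE) * SLOT
            if not asleep and queued:
                queued -= 1                   # serve one piece per active slot
                served += 1
            total_delay += queued * SLOT      # waiting pieces accrue delay
        return energy, total_delay / max(served, 1)

    energy, mean_delay = simulate(10_000)
    print(f"energy={energy:.1f} J, mean piece delay={mean_delay:.3f} s")

    Raising SLEEP_PROB cuts energy but lengthens the queue, which is the delay/energy trade-off the load-adaptive techniques navigate.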

    The EVIDENCE project: Measure no.21 - Bike sharing

    EVIDENCE project measure review on bike sharing.

    Life science payloads planning study

    Preferred approaches and procedures were defined for integrating the space shuttle life sciences payload, from experiment solicitation through final data dissemination at mission completion. The payload operations plan was refined and expanded to include current information. The NASA-JSC facility accommodations were assessed, and modifications were recommended to improve payload processing capability. Standard-format worksheets were developed to permit rapid location of experiment requirements, and a Spacelab mission handbook was developed to assist potential life sciences investigators at academic, industrial, health research, and NASA centers. Practical, cost-effective methods were determined for accommodating various categories of live specimens during all mission phases.

    Optimising Networks For Ultra-High Definition Video

    The increase in real-time ultra-high definition video services is a challenging issue for current network infrastructures. The high-bitrate traffic generated by ultra-high definition content reduces the effectiveness of current live video distribution systems. Transcoders and application layer multicasting (ALM) can reduce traffic in a video delivery system, but they are limited by the static nature of their implementations. To overcome the restrictions of current static video delivery systems, an OpenFlow-based migration system is proposed. This system enables an almost seamless migration of a transcoder or ALM node while delivering real-time ultra-high definition content. Further to this, a novel heuristic algorithm is presented to optimise control of the migration events and destinations. The combination of the migration system and heuristic algorithm provides an improved video delivery system, capable of migrating resources during operation with minimal disruption to clients. With the rise in popularity of consumer-based live streaming, it is necessary to develop and improve architectures that can support these new types of applications. Current architectures introduce a large delay to video streams, which presents issues for certain applications. To overcome this, an improved infrastructure for delivering real-time streams is also presented. The proposed system uses OpenFlow within a content delivery network (CDN) architecture to improve several aspects of current CDNs. Aside from the reduction in stream delay, other improvements include switch-level multicasting to reduce duplicate traffic and smart load balancing for server resources. Furthermore, a novel max-flow algorithm is also presented. This algorithm aims to optimise traffic within a system such as the proposed OpenFlow CDN, with a focus on distributing traffic across the network in order to reduce the probability of blocking.
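
    For a sense of the underlying primitive, the sketch below computes a maximum flow on a toy CDN-like topology with NetworkX. The graph and capacities are invented, and the thesis algorithm's load-spreading objective is not reproduced here; this is only the standard max-flow computation it builds on.

    import networkx as nx

    # Toy topology standing in for a CDN overlay; capacities are invented.
    G = nx.DiGraph()
    G.add_edge("origin", "edge1", capacity=10)
    G.add_edge("origin", "edge2", capacity=10)
    G.add_edge("edge1", "client", capacity=6)
    G.add_edge("edge2", "client", capacity=8)

    flow_value, flow_dict = nx.maximum_flow(G, "origin", "client")
    print(flow_value)           # 14: bottlenecked by the two edge-to-client links
    print(flow_dict["origin"])  # per-link flow leaving the origin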

    Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results

    Fixed and mobile telecom operators, enterprise network operators and cloud providers strive to meet the challenging demands arising from the evolution of IP networks (e.g. huge bandwidth requirements and the integration of billions of devices and millions of services in the cloud). Proposed in the early 2010s, the Segment Routing (SR) architecture helps meet these demands and is currently being adopted and deployed. The SR architecture is based on the concept of source routing and has interesting scalability properties, as it dramatically reduces the amount of state information that must be configured in core nodes to support complex services. SR was first implemented with the MPLS dataplane and then, more recently, with the IPv6 dataplane (SRv6). SRv6 has been extended from the simple steering of packets across nodes to a general network programming approach, making it well suited to use cases such as Service Function Chaining and Network Function Virtualization. In this paper we present a tutorial and a comprehensive survey of SR technology, analysing standardization efforts, patents, research activities and implementation results. We start with an introduction to the motivations for Segment Routing and an overview of its evolution and standardization. Then we provide a tutorial on Segment Routing technology, with a focus on the novel SRv6 solution. We discuss the standardization efforts and the patents, providing details on the most important documents and mentioning other ongoing activities. We then thoroughly analyse research activities according to a taxonomy. We identified eight main categories in our analysis of the current state of play: Monitoring, Traffic Engineering, Failure Recovery, Centrally Controlled Architectures, Path Encoding, Network Programming, Performance Evaluation and Miscellaneous.
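
    As a minimal illustration of SRv6 source routing, the sketch below builds an SRv6 encap-mode packet with Scapy, assuming its IPv6ExtHdrSegmentRouting layer; the segment addresses are documentation-prefix placeholders, not part of the survey.

    from scapy.layers.inet import UDP
    from scapy.layers.inet6 import IPv6, IPv6ExtHdrSegmentRouting

    # Segments in the order the packet should visit them (placeholders).
    path = ["2001:db8::2", "2001:db8::3", "2001:db8::4"]

    # In the SRH the segment list is encoded in reverse; Segments Left points
    # at the active (first) segment, which also becomes the outer destination.
    srh = IPv6ExtHdrSegmentRouting(addresses=list(reversed(path)),
                                   segleft=len(path) - 1)

    # Encap mode: outer IPv6 header + SRH wrap the original inner packet.
    pkt = IPv6(dst=path[0]) / srh / IPv6(dst="2001:db8:cafe::1") / UDP(dport=4000)
    pkt.show()

    The state lives entirely in the packet's segment list, which is why core nodes need no per-flow configuration.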