178 research outputs found

    Towards Terabit Carrier Ethernet and Energy Efficient Optical Transport Networks

    Get PDF

    Radio Resource Management Optimization For Next Generation Wireless Networks

    Get PDF
    The prominent versatility of today's mobile broadband services and the rapid advancements in the cellular phone industry have led to a tremendous expansion in the wireless market volume. Despite the continuous progress in radio-access technologies to cope with that expansion, many challenges remain that need to be addressed by both the research and industrial sectors. One of them is the efficient allocation and management of wireless network resources when using the latest cellular radio technologies (e.g., 4G). The importance of the problem stems from the scarcity of wireless spectral resources, the large number of users sharing these resources, the dynamic behavior of the generated traffic, and the stochastic nature of wireless channels. These limitations are further tightened by the provider's commitment to high quality-of-service (QoS) levels, especially data rate, delay and delay jitter, as well as the system's spectral and energy efficiency. In this dissertation, we strive to solve this problem by presenting novel cross-layer resource allocation schemes that balance efficient utilization of the available resources against QoS requirements using various optimization techniques. The main objective of this dissertation is to propose a new predictive resource allocation methodology using an agile ray tracing (RT) channel prediction approach. It is divided into two parts. The first part deals with the theoretical and implementation aspects of the ray tracing prediction model and its validation. In the second part, a novel RT-based scheduling system within the evolving cloud radio access network (C-RAN) architecture is proposed. The impact of the proposed model on addressing the limitations of long term evolution (LTE) networks is then rigorously investigated in the form of optimization problems. The main contributions of this dissertation encompass the design of several heuristic solutions based on our novel RT-based scheduling model, developed to meet the aforementioned objectives while considering the co-existing limitations in the context of LTE networks. Both analytical and numerical methods are used within this thesis framework, and theoretical results are validated with numerical simulations. The obtained results demonstrate the effectiveness of our proposed solutions in meeting the objectives, subject to the limitations and constraints, compared to other published work.
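
    The dissertation's own formulations are not reproduced above, but the predictive idea can be illustrated with a minimal sketch: assume the RT channel model supplies a predicted achievable rate per user per LTE resource block, and let a greedy scheduler weight those predictions in a proportional-fair style. The function names, the weighting rule and the numbers are illustrative assumptions, not the thesis's algorithms.

```python
# Minimal sketch (not the dissertation's algorithm): greedy per-resource-block
# allocation driven by predicted channel quality, as an RT-based predictive
# scheduler might be structured. The proportional-fair weighting is an
# illustrative assumption.

def allocate_resource_blocks(predicted_rate, avg_throughput, num_rbs):
    """predicted_rate[u][rb]: rate predicted (e.g., by a ray-tracing channel
    model) for user u on resource block rb; avg_throughput[u]: running average
    used as a proportional-fair weight."""
    allocation = {}
    for rb in range(num_rbs):
        # Pick the user with the best predicted rate relative to past service.
        best_user = max(predicted_rate,
                        key=lambda u: predicted_rate[u][rb] / max(avg_throughput[u], 1e-9))
        allocation[rb] = best_user
        # Update the moving average so one user does not monopolize the band.
        avg_throughput[best_user] = 0.9 * avg_throughput[best_user] \
            + 0.1 * predicted_rate[best_user][rb]
    return allocation

# Example: two users, three resource blocks.
rates = {"u1": [2.0, 1.0, 3.0], "u2": [1.5, 2.5, 1.0]}
print(allocate_resource_blocks(rates, {"u1": 1.0, "u2": 1.0}, 3))
```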

    System architecture and hardware implementations for a reconfigurable MPLS router

    Get PDF
    With extremely wide bandwidth and good channel properties, optical fibers have brought fast and reliable data transmission to today's data communications. However, to handle the heavy traffic flowing through optical physical links, much faster processing speed is required, or congestion can take place at network nodes. Also, to provide people with voice, data and all categories of multimedia services, distinguishing between different data flows is a requirement. To address these router performance, Quality of Service/Class of Service and traffic engineering issues, Multi-Protocol Label Switching (MPLS) was proposed for IP-based internetworks. In addition, routers whose hardware architecture is flexible enough to support ever-evolving protocols and services without major infrastructure modification or replacement are also desirable. Therefore, a reconfigurable hardware implementation of MPLS was proposed in this project to obtain fast overall processing speed at network nodes. The long-term goal of this project is to develop a reconfigurable MPLS router, which uniquely integrates the best features of operations conducted in software and in run-time-reconfigurable hardware. The scope of this thesis includes system architecture and service algorithm considerations, and Verilog coding and testing for an actual device. A hardware/software co-design technique was used to partition and schedule the protocol code for execution on both a general-purpose processor and stream-based hardware. A novel RPS scheme, practically easy to build and able to realize pipelined packet-by-packet data transfer at each output, was proposed to replace traditional crossbar switching. In RPS, packets with variable lengths can be switched intelligently without performing packet segmentation and reassembly. A preliminary theoretical analysis of queuing issues was discussed, and an improved multiple-queue service scheduling policy, UD-WRR, was proposed, which can reduce packet waiting time without sacrificing performance. In order to carry out the tests appropriately, dedicated circuitry for the MPLS functional block to interface a specific MAC chip was implemented as well. The hardware designs for all functions were realized with a single Field Programmable Gate Array (FPGA) device in this project. The main result presented in this thesis is the MPLS function implementation realizing a major part of layer-three routing at the reconfigurable hardware level, a significant step towards the goal of building a router that is both fast and flexible.
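
    The label-switching operation the abstract relies on is the standard MPLS forwarding step: look up the top label in an incoming label map, then swap or pop it. The following minimal sketch illustrates only that generic step; the table contents are hypothetical, and the thesis's datapath is of course implemented in Verilog on an FPGA, not in software.

```python
# Minimal sketch of the label-swapping step an MPLS node performs at each hop;
# the table contents are illustrative, not the thesis's FPGA design.

ILM = {  # Incoming Label Map: in_label -> (operation, out_label, out_port)
    100: ("swap", 200, 1),
    101: ("pop",  None, 2),   # penultimate-hop pop
}

def forward(label_stack, ilm):
    """Apply the ILM entry for the top label and return (new_stack, out_port)."""
    top = label_stack[0]
    op, out_label, port = ilm[top]
    if op == "swap":
        return [out_label] + label_stack[1:], port
    if op == "pop":
        return label_stack[1:], port
    raise ValueError("unknown operation")

print(forward([100, 7], ILM))   # ([200, 7], 1)
print(forward([101, 7], ILM))   # ([7], 2)
```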

    Towards Internet QoS Provisioning Based on Generic Distributed QoS Adaptive Routing Engine

    Get PDF
    The increasing efficiency and quality demands of modern Internet technologies drive today's network engineers to provide quality of service (QoS). Internet QoS provisioning gives rise to several challenging issues. This paper introduces a generic distributed QoS adaptive routing engine (DQARE) architecture based on OSPFxQoS. The innovation of the proposed work is its independence from the underlying QoS architectures and, moreover, its separation of the control strategy from the data forwarding mechanisms, which guarantees a set of stable mechanisms on top of which Internet QoS can be built. The DQARE architecture is furnished with three relevant traffic control schemes, namely service differentiation, QoS routing, and traffic engineering. The main objectives of this paper are to (i) provide a general configuration guideline for service differentiation, (ii) formalize the theoretical properties of different QoS routing algorithms and then introduce a QoS routing algorithm (QOPRA) based on a dynamic programming technique, and (iii) propose a QoS multipath forwarding (QMPF) model for exploiting path diversity. NS2-based simulations demonstrate DQARE's superiority in terms of delay, packet delivery ratio, throughput, and control overhead. Moreover, extensive simulations are used to compare the proposed QOPRA algorithm and QMPF model with their counterparts in the literature.
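
    QOPRA itself is not specified in the abstract, so the sketch below only illustrates the general class of technique it names: a dynamic-programming (Bellman-Ford style) search for a least-cost path under a delay budget. The link costs, delays and budget are hypothetical.

```python
# Minimal sketch of a dynamic-programming QoS routing scheme (delay-constrained
# least-cost path). This illustrates the general technique only, not QOPRA's
# exact formulation; link weights are hypothetical.
import math

def dc_least_cost(nodes, links, src, dst, delay_budget):
    """links: dict (u, v) -> (cost, delay) with integer delays.
    best[v][d] = least cost to reach v with total delay at most d."""
    best = {v: [math.inf] * (delay_budget + 1) for v in nodes}
    best[src] = [0] * (delay_budget + 1)
    # Relax every edge repeatedly (Bellman-Ford style DP over the delay budget).
    for _ in range(len(nodes) - 1):
        for (u, v), (cost, delay) in links.items():
            for d in range(delay, delay_budget + 1):
                if best[u][d - delay] + cost < best[v][d]:
                    best[v][d] = best[u][d - delay] + cost
    return best[dst][delay_budget]

links = {("s", "a"): (1, 2), ("a", "t"): (1, 2), ("s", "t"): (5, 1)}
print(dc_least_cost(["s", "a", "t"], links, "s", "t", 4))  # 2: cheap path via a fits the budget
print(dc_least_cost(["s", "a", "t"], links, "s", "t", 3))  # 5: budget forces the direct link
```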

    Simulation and Performance Evaluation of Hadoop Capacity Scheduler

    Get PDF
    MapReduce is a parallel programming paradigm used for processing huge datasets on certain classes of distributable problems using a cluster. Budgetary constraints and the need for better usage of resources in a MapReduce cluster often make organizations rent or share hardware resources for their main data processing and analysis tasks. Thus, there may be many competing jobs from different clients making simultaneous requests to the MapReduce framework on a particular cluster. Schedulers like Fair Share and Capacity have been specially designed for such purposes. Administrators and users run into performance problems, however, because they do not know the exact meaning of different task scheduler settings and what impact these can have on the resource allocation scheme across organizations sharing a MapReduce cluster. In this work, the Capacity Scheduler is integrated into the existing MRPerf simulator to predict the performance of MapReduce jobs in a shared cluster under different Capacity Scheduler settings. A few case studies on the behaviour of the Capacity Scheduler across different job patterns are also conducted using the integrated simulator.
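
    As a rough illustration of what capacity-style scheduler settings control, the sketch below models the core sharing rule: each queue owns a guaranteed share of the cluster's task slots and may borrow idle capacity up to an elasticity cap. This is a simplified, assumption-laden model, not the MRPerf integration described above; queue names and numbers are invented.

```python
# Minimal sketch of the resource-sharing rule a capacity-style scheduler
# enforces: each queue gets its guaranteed share, and spare guaranteed
# capacity is lent to busy queues up to an elasticity cap.

def assign_slots(total_slots, queues):
    """queues: name -> {"capacity": guaranteed fraction,
                        "max": elasticity cap fraction,
                        "demand": tasks currently wanting slots}"""
    grant = {q: min(int(cfg["capacity"] * total_slots), cfg["demand"])
             for q, cfg in queues.items()}
    spare = total_slots - sum(grant.values())
    # Lend spare slots to queues that still have demand, within their max cap.
    for q, cfg in queues.items():
        cap = int(cfg["max"] * total_slots)
        extra = min(spare, cfg["demand"] - grant[q], cap - grant[q])
        if extra > 0:
            grant[q] += extra
            spare -= extra
    return grant

queues = {"research": {"capacity": 0.6, "max": 0.8, "demand": 90},
          "prod":     {"capacity": 0.4, "max": 1.0, "demand": 10}}
# {'research': 80, 'prod': 10}: research borrows prod's idle capacity but is
# held at its 80% elasticity cap.
print(assign_slots(100, queues))
```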

    Systems and Methods for Measuring and Improving End-User Application Performance on Mobile Devices

    Full text link
    In today's rapidly growing smartphone society, the time users spend on their smartphones continues to grow, and mobile applications are becoming the primary medium for providing services and content to users. With such fast-paced growth in smartphone usage, cellular carriers and internet service providers continuously upgrade their infrastructure to the latest technologies and expand their capacities to improve the performance and reliability of their networks and to satisfy exploding user demand for mobile data. On the other side of the spectrum, content providers and e-commerce companies adopt the latest protocols and techniques to provide smooth and feature-rich user experiences in their applications. To ensure a good quality of experience, monitoring how applications perform on users' devices is necessary. Often, network and content providers lack such visibility into end-user application performance. In this dissertation, we demonstrate that having visibility into end-user perceived performance, through system design for efficient and coordinated active and passive measurements of end-user application and network performance, is crucial for detecting, diagnosing, and addressing performance problems on mobile devices. My dissertation consists of three projects to support this statement. First, to provide such continuous monitoring on smartphones with constrained resources that operate in a highly dynamic mobile environment, we devise efficient, adaptive, and coordinated systems, as a platform, for active and passive measurements of end-user performance. Second, using this platform and other passive data collection techniques, we conduct an in-depth user trial of mobile multipath to understand how Multipath TCP (MPTCP) performs in practice. Our measurement study reveals several limitations of MPTCP. Based on the insights gained from our measurement study, we propose two different schemes to address the identified limitations of MPTCP. Last, we show how to provide visibility into end-user application performance for internet providers, and in particular home WiFi routers, by passively monitoring users' traffic and utilizing per-app models that map various network quality of service (QoS) metrics to application performance.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/146014/1/ashnik_1.pd

    Dynamic service chain composition in virtualised environment

    Get PDF
    Network Function Virtualisation (NFV) has contributed to improving the flexibility of network service provisioning and reducing the time to market of new services. NFV leverages virtualisation technology to decouple the software implementation of network appliances from the physical devices on which they run. However, with the emergence of this paradigm, providing data centre applications with adequate network performance becomes challenging. For instance, virtualised environments cause network congestion, decrease the throughput and hurt the end-user experience. Moreover, applications usually communicate through multiple sequences of virtual network functions (VNFs), aka service chains, for policy enforcement and for performance and security enhancement, which increases the management complexity at the network level. To address this situation, existing studies have proposed high-level approaches to VNF chaining and placement that improve service chain performance. They consider the VNFs as homogeneous entities, regardless of their specific characteristics, and overlook their distinct behaviour under traffic load and how their underlying implementation shapes resource usage. Our research aims to fill this gap by identifying particular patterns in production, widely used VNFs and proposing a categorisation that helps reduce latency across the chains. Based on experimental evaluation, we have classified firewalls, NATs, IDS/IPS and flow monitors into I/O-bound and CPU-bound functions. The former category is mainly sensitive to the throughput in packets per second, while the performance of the latter is primarily affected by the network bandwidth in bits per second. By doing so, we correlate the VNF category with the characteristics of the traversing traffic, which dictates how the service chains are composed. We propose a heuristic called Natif, for a VNF-Aware VNF insTantIation and traFfic distribution scheme, to reconcile the discrepancy in VNF requirements based on the category they belong to and to eventually reduce network latency. We have deployed Natif in an OpenStack-based environment and compared it to a network-aware VNF composition approach. Our results show a decrease in latency of around 188% on average without sacrificing throughput.
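
    The sketch below only illustrates the category-aware sizing idea described above: I/O-bound functions are dimensioned against packet rate, CPU-bound functions against bit rate. The assignment of specific functions to categories, the per-instance limits and the traffic figures are illustrative assumptions, not the thesis's measured classification or the Natif heuristic itself.

```python
# Minimal sketch of category-aware VNF sizing: scale each VNF on the traffic
# metric its category is sensitive to. All mappings and limits below are
# hypothetical placeholders.
import math

VNF_CATEGORY = {"firewall": "io", "nat": "io", "ids": "cpu", "flow_monitor": "cpu"}

PPS_LIMIT = 500_000          # hypothetical packets/s one I/O-bound instance sustains
BPS_LIMIT = 2_000_000_000    # hypothetical bits/s one CPU-bound instance sustains

def instances_needed(vnf, traffic_pps, traffic_bps):
    """I/O-bound VNFs scale with packet rate, CPU-bound VNFs with bit rate."""
    if VNF_CATEGORY[vnf] == "io":
        return math.ceil(traffic_pps / PPS_LIMIT)
    return math.ceil(traffic_bps / BPS_LIMIT)

# Example: small packets -> high pps stresses the I/O-bound firewall more
# than the CPU-bound IDS for the same bit rate.
print(instances_needed("firewall", traffic_pps=1_200_000, traffic_bps=1_000_000_000))  # 3
print(instances_needed("ids",      traffic_pps=1_200_000, traffic_bps=1_000_000_000))  # 1
```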

    Security and QoS Analysis in Networks with Temporal Constraints

    Get PDF
    QoS and security are two precious objectives for network systems, especially for critical networks with temporal constraints. Unfortunately, they often conflict: while QoS tries to minimise processing delay, strong security protection requires more processing time and therefore causes traffic delay and QoS degradation. Moreover, real-time systems, QoS and security have often been studied separately, by different communities. In the context of avionic data networks, various domains and heterogeneous applications with different levels of criticality exchange information, often through gateways, and this information clearly has different levels of sensitivity in terms of security and QoS constraints. Given this context, the major goal of this thesis is to increase the robustness of the next-generation e-enabled avionic data network with respect to security threats and ruptures in traffic characteristics. From this perspective, we surveyed the literature to establish the state of the art in network security, QoS and applications with time constraints. We then studied the next-generation e-enabled avionic data network, which allowed us to draw a map of the field and to understand the security threats. Based on this study, we identified both the security and the QoS requirements of the next-generation e-enabled avionic data network. To satisfy these requirements, we proposed the architecture of a QoS-capable integrated security gateway to protect the next-generation e-enabled avionic data network and ensure the availability of critical traffic. To provide a true integration between the different gateway components, we built an integrated session table that stores all the needed session information and speeds up packet processing (stateful firewall inspection, NAT mapping, QoS classification and routing). This required studying existing session table structures and proposing a new structure that fulfils our objective; we also present the processing algorithms needed to access the new integrated session table. In the IPSec VPN component, we identified the problem that IPSec ESP-encrypted traffic cannot be classified appropriately by QoS edge routers. To overcome this problem, we developed the Q-ESP protocol, which allows the classification of encrypted traffic and combines the security services provided by IPSec ESP and AH. To manage network traffic wisely, a variety of bandwidth management techniques have been developed; to assess their performance and identify which technique is the most suitable in our context, we performed a delay-based comparison using experimental tests. In the final stage, we benchmarked our implemented security gateway against three commercially available software gateways, with the goal of evaluating performance and identifying problems for future research work. The dissertation is divided into two parts, in French and in English respectively; both parts follow the same structure, the first being an extended summary of the second.
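
    The integrated session table can be pictured as a single flow-keyed structure whose entry carries the firewall verdict, the NAT mapping, the QoS class and the cached routing decision, so one lookup serves all four functions. The sketch below is a minimal illustration under that assumption; field names and values are invented, and the thesis's actual table structure and access algorithms may differ.

```python
# Minimal sketch of an integrated session table: one lookup keyed by the flow
# 5-tuple returns everything the gateway needs for the fast path. Field names
# and values are illustrative.
from dataclasses import dataclass

@dataclass
class SessionEntry:
    allowed: bool            # stateful-firewall decision for this flow
    nat_src: tuple           # translated (address, port) chosen by NAT
    qos_class: str           # class used by the QoS scheduler, e.g. a DiffServ class
    next_hop: str            # routing decision cached for the session

sessions = {}  # 5-tuple -> SessionEntry

def lookup(pkt):
    """pkt: (src_ip, src_port, dst_ip, dst_port, proto). A hit means the packet
    skips separate firewall, NAT, classification and routing lookups."""
    return sessions.get(pkt)

# The first packet of a flow would go through the slow path, which installs:
flow = ("10.0.0.5", 4321, "192.168.1.9", 80, "TCP")
sessions[flow] = SessionEntry(True, ("172.16.0.1", 40001), "AF41", "eth1")

print(lookup(flow))                                  # fast-path hit: all per-flow state in one entry
print(lookup(("1.2.3.4", 1, "5.6.7.8", 2, "UDP")))   # miss -> slow path
```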

    Fair, responsive scheduling of engineering workflows on computing grids

    Get PDF
    This thesis considers scheduling in the context of a grid computing system used in engineering design. Users desire responsiveness and fairness in the treatment of the workflows they submit. Submissions outstrip the available computing capacity during the work day, and the queue is only caught up overnight and at weekends. The observed execution times span a wide range, from 10^0 to 10^7 core-minutes. The Projected Schedule Length Ratio (P-SLR) list scheduling policy is designed to use execution time estimates and the structure of the dependency graph to improve on the existing industrial FairShare policy. P-SLR aims to minimise the worst-case SLR of jobs and keep SLR fair across the space of job execution times. P-SLR is shown to equal or surpass all other evaluated policies in responsiveness and fairness across the spectra of load and networking delays. P-SLR is also dominant where execution time estimates are within an order of magnitude of the real value; such estimates are considered achievable using user knowledge or automated profiling. Outside this range, the Shortest Remaining Time First (SRTF) policy achieved better responsiveness and fairness. The Projected Value Remaining (PVR) policy considers the case where a curve specifying the value of a job over time is given. PVR aims to maximise total workload value, even under overload, by maximising the worst-case job value in a workload. PVR is shown to be dominant across the load and networking spectra. Where execution time estimates are coarser than the nearest power of 2, SRTF delivers higher value than PVR. SRTF is also shown to have responsiveness, fairness and value close behind P-SLR and PVR throughout the range of load and network delays considered. However, the kinds of starvation under overload incurred by SRTF would almost certainly be undesirable if implemented in a production system.
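
    As a rough illustration of the projected-SLR idea, the sketch below ranks ready workflows by the Schedule Length Ratio they would have if their remaining critical path started executing now, and serves the worst one first. The exact definition used in the thesis may differ; the fields and numbers are illustrative.

```python
# Minimal sketch of projected-SLR ordering: prioritise the workflow whose
# projected Schedule Length Ratio is currently the worst, which keeps SLR
# fair across small and large jobs. Field names and numbers are illustrative.

def projected_slr(workflow, now):
    """SLR = projected flow time / critical-path execution time, where the
    projection assumes the remaining critical path starts running now."""
    projected_finish = now + workflow["remaining_critical_path"]
    return (projected_finish - workflow["submit_time"]) / workflow["critical_path"]

def pick_next(ready_workflows, now):
    # The job projecting the worst SLR is served first.
    return max(ready_workflows, key=lambda w: projected_slr(w, now))

ready = [
    {"name": "small", "submit_time": 0, "critical_path": 10, "remaining_critical_path": 10},
    {"name": "large", "submit_time": 0, "critical_path": 1000, "remaining_critical_path": 1000},
]
# Waiting 50 time units hurts the small job far more relative to its size.
print(pick_next(ready, now=50)["name"])  # "small"
```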

    Sharing with Live Migration Energy Optimization Task Scheduler for Cloud Computing Datacentres

    Get PDF
    The use of cloud computing is expanding, and it is becoming the driver for innovation in all companies that serve their customers around the world. Much attention has recently been drawn to the huge amount of energy consumed within datacentres, while the energy consumption of the remaining cloud components is neglected. Energy consumption should therefore be reduced so as to minimize performance losses, achieve the target battery lifetime, satisfy performance requirements, minimize power consumption, minimize CO2 emissions, maximize the profit, and maximize resource utilization. Reducing power consumption in cloud computing datacentres can be achieved in many ways, such as managing or utilizing the resources, controlling redundancy, relocating datacentres, improving applications, or dynamic voltage and frequency scaling. One of the most efficient ways to reduce power is to use a scheduling technique that finds the best task execution order based on the users' demands, with the minimum execution time and cloud resources. Designing an effective and efficient task scheduling technique driven by user requirements is quite a challenge in a cloud environment. The scheduling process is not an easy task, because a datacentre contains dissimilar hardware with different capacities; to improve resource utilization, an efficient scheduling algorithm must be applied to the incoming tasks to achieve efficient computing resource allocation and power optimization. The scheduler must maintain the balance between quality of service and fairness among the jobs so that efficiency may be increased. The aim of this project is to propose a novel method for optimizing energy usage in cloud computing environments that satisfies the Quality of Service (QoS) requirements and the regulations of the Service Level Agreement (SLA). Applying a power- and resource-optimised scheduling algorithm will help control and improve the mapping between the datacentre servers and the incoming tasks and achieve the optimal deployment of the datacentre resources, leading to good computing efficiency, network load minimization and reduced energy consumption in the datacentre. This thesis explores energy-aware cloud computing datacentre structures with diverse scheduling heuristics and proposes a novel job scheduling technique with sharing and live migration based on file locality (SLM), aiming to maximize efficiency and save the power consumed in the datacentre through bandwidth usage utilization, minimizing the processing time and the system's total makespan. The proposed SLM energy-efficient scheduling strategy has four basic algorithms: 1) job classifier, 2) SLM job scheduler, 3) dual-fold VM virtualization, and 4) VM threshold margins and consolidation. The SLM job classifier categorises the incoming set of user requests to the datacentre into two different queues based on the request type and the source file needed to process them; the processing time of each job fluctuates with the job type and the number of instructions in each job. The second algorithm, the SLM scheduler, dispatches jobs from both queues according to job arrival time and controls the allocation process to the most appropriate and available VM based on job similarity, according to a predefined synchronized job characteristic (SJC) table. The SLM scheduler uses a replicated-hosts infrastructure to save the energy wasted by idle hosts, maximizing the utilization of the basic hosts as long as the system can handle the workload while keeping the replicated hosts switched off. The third SLM algorithm, the dual-fold VM algorithm, divides the active VMs into top-level and low-level slots to allocate similar jobs concurrently, which maximizes host utilization at high workload and reduces the total makespan. The VM threshold margins and consolidation algorithm sets upper and lower threshold margins as triggers for VM consolidation and load balancing among the running VMs, and deploys a continuous detection scheme for overloaded and underutilized VMs to maintain and control the system's workload balance. Consolidation and load balancing are achieved by performing a series of dynamic live migrations, which provide auto-scaling for the servers within the datacentres. This thesis begins with a cloud computing overview, then reviews conceptual cloud resource management strategies with a classification of scheduling heuristics. Following this, a comparative analysis of energy-efficient scheduling algorithms and related work is presented. The novel SLM algorithm is proposed and evaluated using the CloudSim toolkit under a number of scenarios; the results, compared to the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms, show a significant improvement in energy usage and in the total makespan, the total time needed to finish processing all the tasks.
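
    The threshold-margin trigger in the fourth algorithm can be illustrated with a minimal sketch: hosts above an upper utilisation margin shed a VM by live migration, hosts below a lower margin are drained and powered off, and hosts in between are left untouched. The margins and host utilisations below are invented, not the values evaluated in CloudSim.

```python
# Minimal sketch of threshold-margin consolidation: overloaded hosts shed a VM,
# underutilised hosts are drained so they can be switched off. Thresholds and
# the host list are illustrative.

UPPER, LOWER = 0.85, 0.25   # hypothetical utilisation margins

def consolidation_actions(hosts):
    """hosts: name -> CPU utilisation in [0, 1]. Returns migration decisions."""
    actions = []
    for host, util in hosts.items():
        if util > UPPER:
            actions.append((host, "migrate one VM away (overloaded)"))
        elif util < LOWER:
            actions.append((host, "migrate all VMs away, then power off"))
        # Hosts between the margins are left alone to avoid needless migrations.
    return actions

print(consolidation_actions({"h1": 0.92, "h2": 0.60, "h3": 0.10}))
```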