
    Design and optimization of optical grids and clouds


    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology at which to install server infrastructure; and 2) determines the amount of both network and server capacity to cater for both the failure-free scenario and failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we adopt relocation (especially for a high number of server sites): without exploiting relocation, the potential savings of FD versus FID are not meaningful.
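
    For readers who want to see the shape of such a formulation, below is a minimal PuLP sketch of the failure-free core of this kind of ILP: pick K server sites and assign each origin's demand to an open site, minimizing server capacity plus hop-weighted network capacity. The topology, demands, hop counts, and the omission of link/node failures and relocation are illustrative simplifications, not the paper's full model.

```python
import pulp

# Toy inputs: a 4-node topology, server demand per origin node, and
# shortest-path hop counts between origins and candidate server sites.
nodes = ["A", "B", "C", "D"]
sites = ["B", "C", "D"]                      # candidate server locations (assumed)
demand = {"A": 3, "B": 1, "C": 2, "D": 2}    # server units requested per origin
hops = {("A", "B"): 1, ("A", "C"): 2, ("A", "D"): 3,
        ("B", "B"): 0, ("B", "C"): 1, ("B", "D"): 2,
        ("C", "B"): 1, ("C", "C"): 0, ("C", "D"): 1,
        ("D", "B"): 2, ("D", "C"): 1, ("D", "D"): 0}
K = 2                                        # number of server sites to open

prob = pulp.LpProblem("grid_cloud_dimensioning", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
assign = pulp.LpVariable.dicts("assign",
                               [(o, s) for o in nodes for s in sites],
                               cat="Binary")
cap = pulp.LpVariable.dicts("cap", sites, lowBound=0, cat="Integer")

# Objective: total server capacity plus hop-weighted network capacity.
# (Without failure scenarios the server term is fixed by total demand; in the
# paper it is the failure cases plus relocation that create real savings.)
prob += (pulp.lpSum(cap[s] for s in sites)
         + pulp.lpSum(demand[o] * hops[(o, s)] * assign[(o, s)]
                      for o in nodes for s in sites))

prob += pulp.lpSum(open_site[s] for s in sites) == K
for o in nodes:
    prob += pulp.lpSum(assign[(o, s)] for s in sites) == 1  # each origin served once
for s in sites:
    prob += cap[s] >= pulp.lpSum(demand[o] * assign[(o, s)] for o in nodes)
    for o in nodes:
        prob += assign[(o, s)] <= open_site[s]               # only open sites serve

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("open sites:", [s for s in sites if open_site[s].value() == 1])
```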

    Joint Power-Efficient Traffic Shaping and Service Provisioning for Metro Elastic Optical Networks

    Considering the time-averaged behavior of a metro elastic optical network, we develop a joint procedure for resource allocation and traffic shaping that exploits the inherent service diversity among requests for power-efficient network operation. To support quality-of-service diversity, we consider the minimum transmission rate, average transmission rate, maximum burst size, and average transmission delay as the adjustable parameters of a general service profile. The work evolves from a stochastic optimization problem, which minimizes the power consumption subject to stability, physical, and service constraints. The optimal solution of the problem is obtained using a complex dynamic programming method. To provide a fast, near-optimal solution, we propose a sequential heuristic with a scalable and causal software implementation, built on the basic Lyapunov iterations of an integer linear program. The heuristic method has a negligible optimality gap and a considerably shorter runtime compared to the optimal dynamic programming approach, and reduces the consumed power by 72% for offered traffic with a unit coefficient of variation. The adjustable trade-offs of the proposed scheme offer a typical 10% power saving for an acceptable amount of excess transmission delay or drop rate.
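
    For readers unfamiliar with Lyapunov-style scheduling, the sketch below shows the basic drift-plus-penalty iteration that such heuristics build on: in every slot, the controller trades queue backlog against a power penalty weighted by a knob V. The rate set, the quadratic power model, and the single-queue setting are assumptions for illustration; the paper's per-slot subproblem is an ILP over full service profiles.

```python
import random

RATES = [0, 1, 2, 3, 4]      # admissible transmission rates (illustrative)
V = 10.0                     # power/backlog trade-off: larger V favors lower power

def power(rate):
    return rate ** 2         # assumed convex power model, not from the paper

def lyapunov_schedule(arrivals, V=V):
    """Run the drift-plus-penalty rule over a trace of per-slot arrivals."""
    q, rates = 0.0, []
    for a in arrivals:
        # Per-slot decision: minimize V*power(r) - Q(t)*r over admissible rates.
        r = min(RATES, key=lambda r: V * power(r) - q * r)
        q = max(q - r, 0.0) + a      # queue update keeps the backlog stable
        rates.append(r)
    return rates

if __name__ == "__main__":
    traffic = [random.choice([0, 1, 2, 3]) for _ in range(20)]  # bursty arrivals
    print(lyapunov_schedule(traffic))
```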

    Auto-scaling techniques for cloud-based Complex Event Processing

    One key topic in cloud computing is elasticity, the ability of the cloud environment to adapt resource assignment to the workload demand in a timely manner. According to the cloud on-demand model, the infrastructure should be able to scale up and down in response to unpredictable workloads, in order to achieve both a guaranteed service level and cost efficiency. This work addresses the cloud elasticity problem, with particular reference to Complex Event Processing (CEP) systems. CEP systems are designed to process large volumes of event-driven data streams and continuously provide results with low latency and in real time. They need to adapt to changing query and event loads. Because of their high computational requirements and varying loads, CEP systems are distributed and typically run on cloud infrastructures. In this work we review cloud computing auto-scaling solutions and study their suitability for the CEP model. We implement some of these solutions in a CEP prototype and evaluate the experimental results.
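
    As an illustration of the simplest class of auto-scaling policies reviewed in this kind of work, the following sketch implements a threshold rule with a cooldown, driven by observed event-processing latency. The thresholds, cooldown length, and replica bounds are illustrative assumptions rather than values from the evaluated solutions or the prototype.

```python
from dataclasses import dataclass

@dataclass
class ScalerState:
    replicas: int = 2
    cooldown: int = 0        # monitoring intervals to wait after a scaling action

def autoscale(state, latency_ms, upper_ms=200.0, lower_ms=50.0,
              min_replicas=1, max_replicas=16, cooldown_slots=3):
    """Return the new replica count for one monitoring interval."""
    if state.cooldown > 0:
        state.cooldown -= 1
        return state.replicas
    if latency_ms > upper_ms and state.replicas < max_replicas:
        state.replicas += 1                   # scale out: add a CEP operator replica
        state.cooldown = cooldown_slots
    elif latency_ms < lower_ms and state.replicas > min_replicas:
        state.replicas -= 1                   # scale in: release a replica
        state.cooldown = cooldown_slots
    return state.replicas

# Example: a latency spike triggers scale-out, then the pool shrinks again.
state = ScalerState()
for lat in [40, 45, 250, 300, 280, 120, 30, 25]:
    print(lat, "->", autoscale(state, lat))
```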

    A collaborative statistical actor-critic learning approach for 6G network slicing control

    Artificial intelligence (AI)-driven zero-touch massive network slicing is envisioned to be a disruptive technology in beyond 5G (B5G)/6G, where tenancy would be extended to the final consumer in the form of advanced digital use-cases. In this paper, we propose a novel model-free deep reinforcement learning (DRL) framework, called collaborative statistical Actor-Critic (CS-AC), that enables scalable and farsighted slice performance management in a 6G-like RAN scenario built upon mobile edge computing (MEC) and massive multiple-input multiple-output (mMIMO). To this end, the proposed CS-AC targets the optimization of the latency cost under a long-term statistical service-level agreement (SLA). In particular, we consider the Q-th delay percentile SLA metric and enforce slice-specific preset constraints on it. Moreover, to implement distributed learners, we propose a variant of soft Actor-Critic (SAC) with reduced hyperparameter sensitivity. Finally, we present numerical results to showcase the gain of the adopted approach in our OpenAI-based network slicing environment and verify the performance in terms of latency, SLA Q-th percentile, and time efficiency. To the best of our knowledge, this is the first work that studies the feasibility of an AI-driven approach for massive network slicing under statistical SLA. This work has been supported in part by the research projects MonB5G (871780), ZEROTO6G, AGAUR (2017-SGR-891), and PROGRESSUS (876868).
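
    To make the statistical-SLA objective concrete, here is a minimal, self-contained environment-style sketch in which the reward penalizes both resource usage and violations of a Q-th delay percentile bound. The delay model, the constants, and the interface are illustrative assumptions; this is not the CS-AC implementation or its OpenAI-based environment.

```python
import numpy as np

class SliceLatencyEnv:
    """Toy per-slice environment: act on a CPU share, observe load and Q-th delay."""

    def __init__(self, q=95, delay_bound_ms=20.0, window=200, seed=0):
        self.q, self.bound, self.window = q, delay_bound_ms, window
        self.rng = np.random.default_rng(seed)
        self.delays = []

    def reset(self):
        self.delays = []
        return np.zeros(2, dtype=np.float32)          # [load, last Q-th percentile]

    def step(self, cpu_share):
        # More allocated resources -> lower mean delay (toy model, assumed).
        load = self.rng.uniform(0.2, 1.0)
        delay = self.rng.exponential(scale=load / max(cpu_share, 1e-3)) * 10.0
        self.delays = (self.delays + [delay])[-self.window:]
        q_delay = float(np.percentile(self.delays, self.q))
        # Reward: pay for resources, pay more for violating the Q-th percentile SLA.
        reward = -cpu_share - 10.0 * max(q_delay - self.bound, 0.0)
        obs = np.array([load, q_delay], dtype=np.float32)
        return obs, reward, False, {"q_delay_ms": q_delay}

env = SliceLatencyEnv()
obs = env.reset()
for _ in range(5):
    obs, r, done, info = env.step(cpu_share=0.5)     # fixed policy, just to run
    print(round(r, 2), round(info["q_delay_ms"], 2))
```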

    Software Defined Networking: Applicability and Service Possibilities


    Contribution to infrastructure convergence between high-performance computing and large-scale data processing

    The amount of produced data, either in the scientific community or the commercial world, is constantly growing. The field of Big Data has emerged to handle large amounts of data on distributed computing infrastructures. High-Performance Computing (HPC) infrastructures are traditionally used for the execution of compute-intensive workloads. However, the HPC community is also facing an increasing need to process large amounts of data derived from high-definition sensors and large physics apparatus. The convergence of the two fields, HPC and Big Data, is currently taking place. In fact, the HPC community already uses Big Data tools, which are not always integrated correctly, especially at the level of the file system and the Resource and Job Management System (RJMS). In order to understand how we can leverage HPC clusters for Big Data usage, and what the challenges are for HPC infrastructures, we have studied multiple aspects of the convergence. We initially provide a survey of software provisioning methods, with a focus on data-intensive applications. We contribute a new RJMS collaboration technique called BeBiDa, which is based on 50 lines of code whereas similar solutions use at least 1000 times more. We evaluate this mechanism under real conditions and in a simulated environment with our simulator Batsim. Furthermore, we provide extensions to Batsim to support I/O, and showcase the development of a generic file system model along with a Big Data application model. This allows us to complement the BeBiDa real-conditions experiments with simulations while enabling us to study file system dimensioning and trade-offs. All the experiments and analysis in this work have been done with reproducibility in mind. Based on this experience, we propose to integrate the development workflow and data analysis into the reproducibility mindset, and give feedback on our experiences with a list of best practices.
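
    As a rough illustration of the kind of lightweight RJMS collaboration described here, the toy sketch below lets an HPC scheduler's prolog/epilog hooks shrink and regrow the node pool available to a Big Data framework. The hook names, node set, and pool logic are illustrative assumptions, not BeBiDa's actual code.

```python
# Toy model: HPC jobs have priority on the nodes they are allocated, and the
# Big Data framework's resource pool grows and shrinks around them.

ALL_NODES = {f"node{i}" for i in range(8)}
bigdata_pool = set(ALL_NODES)           # Big Data framework starts with every node

def hpc_prolog(job_nodes):
    """Called before an HPC job starts: remove its nodes from the Big Data pool."""
    bigdata_pool.difference_update(job_nodes)

def hpc_epilog(job_nodes):
    """Called after an HPC job ends: hand the nodes back to the Big Data pool."""
    bigdata_pool.update(job_nodes)

# An HPC job takes 3 nodes, runs, then returns them.
job = {"node0", "node1", "node2"}
hpc_prolog(job)
print(sorted(bigdata_pool))   # Big Data framework now runs on the remaining nodes
hpc_epilog(job)
print(sorted(bigdata_pool))   # full pool restored once the HPC job finishes
```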

    Proceedings of the Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015) Krakow, Poland

    Proceedings of: Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015). Krakow (Poland), September 10-11, 2015