
    Component-aware Orchestration of Cloud-based Enterprise Applications, from TOSCA to Docker and Kubernetes

    Enterprise IT is currently facing the challenge of coordinating the management of complex, multi-component applications across heterogeneous cloud platforms. Containers and container orchestrators provide a valuable solution for deploying multi-component applications over cloud platforms, by coupling the lifecycle of each application component to that of its hosting container. We propose a solution for going beyond such coupling, based on the OASIS standard TOSCA and on Docker. Specifically, we propose a novel approach for deploying multi-component applications on top of existing container orchestrators, which makes it possible to manage each component independently of the container used to run it. We also present prototype tools implementing our approach, and we show how we exploited them to carry out a concrete case study.
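    The paper's actual mechanism is defined in TOSCA; as a rough illustration of the underlying idea only (a component's lifecycle driven independently of its hosting container's), here is a minimal sketch using the Docker SDK for Python. The image name and the component's lifecycle commands are hypothetical stand-ins, not taken from the paper.

    # A minimal sketch of decoupling a component's lifecycle from its
    # container's: the container stays up while the component inside it
    # is installed, started, and stopped independently. The image name
    # and the echoed lifecycle commands are hypothetical placeholders.
    import docker

    client = docker.from_env()

    # Keep the hosting container alive regardless of the component's state.
    container = client.containers.run(
        "ubuntu:22.04",            # hypothetical base image
        command="sleep infinity",  # idle process keeps the container up
        detach=True,
    )

    # Manage the component via lifecycle operations exec'd in the container.
    for operation in ("echo install", "echo configure", "echo start"):
        exit_code, output = container.exec_run(operation)
        print(operation, "->", exit_code, output.decode().strip())

    # Stopping the component does not require destroying its container.
    container.exec_run("echo stop")
    container.stop()
    container.remove()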

    Twelve Ways to Build CMS Crossings from ROOT Files

    The simulation of CMS raw data requires the random selection of one hundred and fifty pileup events from a very large set of files, to be superimposed in memory on the signal event. The use of ROOT I/O for this purpose is quite unusual: the events are not read sequentially but pseudo-randomly, they are not processed one by one in memory but in bunches, and they do not contain orthodox ROOT objects but many foreign objects and templates. In this context, we have compared the performance of ROOT containers against STL vectors, and the use of trees against direct storage of containers. The strategy with the best performance is by far the one using clones within trees, but it remains hard to tune and very dependent on the exact use case. STL vectors could more easily deliver similar performance in a future ROOT release.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 8 pages, LaTeX, 1 EPS figure. PSN TUKT00
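    To make the unusual access pattern concrete, here is a minimal PyROOT sketch of reading a random bunch of pileup entries from a ROOT tree, rather than scanning it sequentially. The file name, tree name, and the loop body are hypothetical; the bunch size of 150 comes from the abstract.

    # A sketch of pseudo-random, bunched reading of pileup events via
    # PyROOT. File and tree names are hypothetical placeholders.
    import random
    import ROOT

    pileup_file = ROOT.TFile.Open("pileup.root")  # hypothetical file
    tree = pileup_file.Get("Events")              # hypothetical tree name
    n_entries = tree.GetEntries()

    BUNCH_SIZE = 150  # pileup events superimposed on one signal event

    # Pick a random bunch of entries instead of reading sequentially.
    bunch = random.sample(range(n_entries), BUNCH_SIZE)
    for entry in bunch:
        tree.GetEntry(entry)  # random access: seek and decompress baskets
        # ... superimpose this pileup event on the signal event here ...

    pileup_file.Close()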

    TrIMS: Transparent and Isolated Model Sharing for Low Latency Deep Learning Inference in Function as a Service Environments

    Deep neural networks (DNNs) have become core computational components within low-latency Function as a Service (FaaS) prediction pipelines, including image recognition, object detection, natural language processing, speech synthesis, and personalized recommendation pipelines. Cloud computing, as the de facto backbone of modern computing infrastructure for both enterprise and consumer applications, has to handle user-defined pipelines of diverse DNN inference workloads while maintaining isolation and latency guarantees and minimizing resource waste. The current solution for guaranteeing isolation within FaaS is suboptimal, suffering from "cold start" latency. A major cause of this inefficiency is the need to move large amounts of model data within and across servers. We propose TrIMS as a novel solution to these issues. It consists of a persistent model store across the GPU, CPU, local storage, and cloud storage hierarchy; an efficient resource management layer that provides isolation; and a succinct set of application APIs and container technologies for easy and transparent integration with FaaS, Deep Learning (DL) frameworks, and user code. We demonstrate our solution by interfacing TrIMS with the Apache MXNet framework, observing up to 24x speedup in latency for image classification models and up to 210x speedup for large models, and up to 8x improvement in system throughput.
    Comment: In Proceedings CLOUD 201
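    TrIMS itself manages a multi-tier, isolated model store; as a deliberately simplified illustration of why sharing loaded models avoids cold-start cost, here is a toy in-process model cache in Python. All names are hypothetical, and this sketch omits the GPU/CPU/storage hierarchy and the isolation layer that are the paper's actual contributions.

    # Toy version of a shared model store: load each model's weights once
    # and hand out shared references on later invocations, instead of
    # re-reading the weights on every cold start. Single-tier and
    # in-process, unlike TrIMS; names are hypothetical.
    import threading
    from typing import Any, Callable, Dict

    class ModelStore:
        def __init__(self) -> None:
            self._models: Dict[str, Any] = {}
            self._lock = threading.Lock()

        def get(self, name: str, loader: Callable[[], Any]) -> Any:
            """Return the cached model, loading it on first use only."""
            with self._lock:
                if name not in self._models:
                    self._models[name] = loader()  # pay load cost once
                return self._models[name]

    store = ModelStore()

    def expensive_load() -> dict:
        # Stand-in for deserializing large model weights from storage.
        return {"weights": [0.0] * 1_000_000}

    # First call loads; subsequent calls reuse the shared object.
    m1 = store.get("resnet50", expensive_load)
    m2 = store.get("resnet50", expensive_load)
    assert m1 is m2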