
    SoC-Cluster as an Edge Server: an Application-driven Measurement Study

    High electricity consumption is a severe issue for edge data centers. To this end, we propose a new form of edge server, namely SoC-Cluster, that orchestrates many low-power mobile system-on-chips (SoCs) through an on-chip network. For the first time, we have developed a concrete SoC-Cluster server that consists of 60 Qualcomm Snapdragon 865 SoCs in a 2U rack. Such a server has been commercialized successfully and deployed at large scale on edge clouds. The current dominant workload on those deployed SoC-Clusters is cloud gaming, as mobile SoCs can seamlessly run native mobile games. The primary goal of this work is to demystify whether SoC-Cluster can efficiently serve more general-purpose, edge-typical workloads. Therefore, we built a benchmark suite that leverages state-of-the-art libraries for two killer edge workloads, i.e., video transcoding and deep learning inference. The benchmark comprehensively reports the performance, power consumption, and other application-specific metrics. We then performed a thorough measurement study and directly compared SoC-Cluster with traditional edge servers (with Intel CPUs and NVIDIA GPUs) with respect to physical size, electricity consumption, and billing. The results reveal the advantages of SoC-Cluster, especially its high energy efficiency and its ability to scale energy consumption proportionally with incoming load, as well as its limitations. The results also provide insightful implications and valuable guidance for further improving SoC-Cluster and landing it in broader edge scenarios.
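    The kind of per-workload reporting the abstract describes can be illustrated with a small harness that drives a workload, samples power, and derives throughput, latency, and energy per request. The sketch below is a minimal illustration, not the authors' benchmark suite; read_power_watts and run_request are hypothetical stand-ins for a platform power meter and a transcoding or inference call.

```python
# Minimal sketch of a per-workload reporting loop (not the authors' suite):
# run a workload, sample power, and report throughput, latency, and energy
# per request. read_power_watts is a hypothetical stand-in for whatever power
# meter the server exposes.
import random
import statistics
import time


def read_power_watts() -> float:
    # Hypothetical power probe; replace with the platform's real meter.
    return 4.5 + random.random()          # watts, placeholder value


def run_request() -> None:
    time.sleep(0.005)                     # stand-in for transcoding/inference


def benchmark(num_requests: int = 200) -> dict:
    latencies, power_samples = [], []
    start = time.time()
    for _ in range(num_requests):
        t0 = time.time()
        run_request()
        latencies.append(time.time() - t0)
        power_samples.append(read_power_watts())
    wall = time.time() - start
    avg_power = statistics.mean(power_samples)         # W
    energy_per_req = avg_power * wall / num_requests    # J per request
    return {
        "throughput_rps": num_requests / wall,
        "p50_latency_s": statistics.median(latencies),
        "avg_power_w": avg_power,
        "energy_per_request_j": energy_per_req,
    }


if __name__ == "__main__":
    print(benchmark())
```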

    Guided Transcoding for Next-Generation Video Coding (HEVC)

    Video content is the dominant traffic type on mobile networks today, and this portion is only expected to increase in the future. In this thesis we investigate ways of reducing bit rates for adaptive streaming applications in the latest video coding standard, H.265 / High Efficiency Video Coding (HEVC). The current models for offering different-resolution versions of video content in a dynamic way, so-called adaptive streaming, require either large amounts of storage capacity, where full encodings of the material are kept at all times, or extremely high computational power in order to regenerate content on demand. Guided transcoding aims at finding a middle ground where we can store and transmit less data, at full or near-full quality, while still keeping computational complexity low. This is achieved by shifting the computationally heavy operations to a preprocessing step where so-called side-information is generated. The side-information can then be used to quickly reconstruct sequences on demand, even when running on generic, non-specialized hardware. Two methods for generating side-information, pruning and deflation, are compared on a varying set of standardized HEVC test sequences, and the respective upsides and downsides of each method are discussed.

    By discarding certain information from a compressed video and then reconstructing the sequence in real time, we can reduce the storage needed for adaptive video streaming by 20–30%, with fully preserved picture quality or only minor degradations.

    Adaptive streaming. Streaming is a popular way of delivering video over the internet in which a sequence is split into short segments that are sent continuously to the user. These segments can be sent at varying quality, and a model in which the network load is sensed automatically and the quality adapted dynamically is called adaptive streaming. This is the system used by SVT Play, TV4 Play, and YouTube. HD or UltraHD video must be compressed before it can be sent over a network; it simply takes up too much space otherwise. Video coded with the latest compression standard, HEVC/H.265, becomes up to 700 times smaller with minimal loss of picture quality. A ten-second segment that takes 1.5 GB to send in raw form can then be compressed to just over 2 MB. To offer the viewer a video sequence (a film or a TV program) at varying quality, several encodings of the material are created. In general, the quality of a sequence cannot be changed after the fact (re-encoding even a short HD video takes hours), so for adaptive streaming to work in practice all versions are generated in advance and stored. But this requires large amounts of storage.

    Guided transcoding. Guided transcoding offers a way to reduce the storage requirement by discarding certain information and reconstructing it on demand at a later stage. We do this for every lower-quality sequence, but keep the highest quality as it is. A stripped-down low-quality video together with the highest-quality video can then be used to reconstruct the sequence exactly. This process is very fast compared with ordinary re-encoding, so encodings of varying quality can be generated at short notice. We have examined two methods for removing and reconstructing video information: pruning and deflation. The former introduces small degradations in picture quality and saves close to 30% of the storage space; the latter has no impact on picture quality but saves only just over 20%.
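    To make the storage trade-off concrete, the sketch below compares storing a full bitrate ladder against storing the top rendition plus reduced side-information for the lower renditions, as guided transcoding does. It is a back-of-the-envelope illustration only; the ladder bitrates and the side-information fractions are assumptions, not numbers from the thesis.

```python
# Back-of-the-envelope sketch (not taken from the thesis) of the storage
# trade-off behind guided transcoding: the top-quality encoding is stored in
# full, while each lower rendition is reduced to side-information that is only
# a fraction of its original size and is expanded back on demand.
# Bitrate ladder and fractions below are illustrative assumptions.

SEGMENT_SECONDS = 10
LADDER_KBPS = {"2160p": 16000, "1080p": 5000, "720p": 2500, "480p": 1000}


def segment_mb(kbps: int) -> float:
    # Size in MB of one segment encoded at the given bitrate.
    return kbps * SEGMENT_SECONDS / 8 / 1000


def full_storage_mb() -> float:
    # Conventional adaptive streaming: every rendition is stored in full.
    return sum(segment_mb(kbps) for kbps in LADDER_KBPS.values())


def guided_storage_mb(side_info_fraction: float) -> float:
    # Guided transcoding: keep the top rendition as-is, keep only
    # side-information for the lower renditions.
    top_kbps = max(LADDER_KBPS.values())
    lower = sum(segment_mb(kbps) for kbps in LADDER_KBPS.values() if kbps != top_kbps)
    return segment_mb(top_kbps) + side_info_fraction * lower


if __name__ == "__main__":
    baseline = full_storage_mb()
    for frac in (0.1, 0.25, 0.5):
        saved = 1 - guided_storage_mb(frac) / baseline
        print(f"side-info at {frac:.0%} of rendition size -> {saved:.0%} total storage saved")
```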

    Virtual Multicast


    On the Load Balancing of Edge Computing Resources for On-Line Video Delivery

    Online video broadcasting platforms are distributed, complex, cloud-oriented, scalable, microservice-based systems intended to provide over-the-top and live content to audiences in scattered geographic locations. Due to the nature of cloud VM hosting costs, subscribers are usually served with limited resources in order to minimize the delivery budget. However, operations such as transcoding require high computational capacity, and any disturbance in meeting the requested demand may result in quality of experience (QoE) deterioration. For any online delivery deployment, understanding the user's QoE plays a crucial role in rebalancing cloud resources. In this paper, a methodology for estimating QoE is provided for a scalable cloud-based online video platform. The model provides a provisioning guideline for limited cloud resources, relating the computational capacity, memory, transcoding and throughput capability, and latency of the cloud service to QoE. Scalability and efficiency of the system are optimized by determining the number of VMs and containers sufficient to satisfy user requests, even during peak demand, with a minimum number of VMs. Both horizontal and vertical scaling strategies (including VM migration) are modeled to ensure the availability and reliability of intermediate and edge content delivery network cache nodes.
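    The capacity-planning idea of sizing the VM pool for peak demand and then scaling horizontally can be illustrated with a small sketch. It is not the paper's QoE model; the per-VM session capacity and the utilization threshold are illustrative assumptions.

```python
# Minimal sketch (not the paper's model) of the capacity-planning idea: size
# the VM pool for peak demand, then scale horizontally when the observed load
# pushes utilization past a threshold. Capacity and threshold are assumptions.
import math

TRANSCODE_SESSIONS_PER_VM = 40      # assumed per-VM transcoding capacity
SCALE_OUT_UTILIZATION = 0.8         # scale out before VMs saturate


def vms_for_demand(concurrent_sessions: int) -> int:
    # Minimum number of VMs that can serve the demand at the target utilization.
    effective = TRANSCODE_SESSIONS_PER_VM * SCALE_OUT_UTILIZATION
    return max(1, math.ceil(concurrent_sessions / effective))


def rebalance(current_vms: int, concurrent_sessions: int) -> int:
    # Horizontal scaling decision: grow or shrink toward the minimum pool size.
    target = vms_for_demand(concurrent_sessions)
    if target > current_vms:
        return target                # scale out for peak demand
    if target < current_vms - 1:
        return target                # scale in, keeping one VM of headroom
    return current_vms


if __name__ == "__main__":
    print(vms_for_demand(1000))                              # VMs for 1000 sessions
    print(rebalance(current_vms=20, concurrent_sessions=1000))
```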

    Exploring manycore architectures for next-generation HPC systems through the MANGO approach

    The Horizon 2020 MANGO project aims at exploring deeply heterogeneous accelerators for use in High-Performance Computing systems running multiple applications with different Quality of Service (QoS) levels. The main goal of the project is to exploit customization to adapt computing resources to reach the desired QoS. For this purpose, it explores different but interrelated mechanisms across the architecture and system software. In particular, in this paper we focus on runtime resource management, thermal management, and the support provided for parallel programming, and we introduce three applications on which the project foreground will be validated. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 671668. Flich Cardo, J.; Agosta, G.; Ampletzer, P.; Atienza-Alonso, D.; Brandolese, C.; Cappe, E.; Cilardo, A.... (2018). Exploring manycore architectures for next-generation HPC systems through the MANGO approach. Microprocessors and Microsystems, 61, 154-170. https://doi.org/10.1016/j.micpro.2018.05.011