70 research outputs found

    Leveraging disaggregated accelerators and non-volatile memories to improve the efficiency of modern datacenters

    Traditional data centers consist of computing nodes with all of their resources physically attached. When greater demand had to be met, the solution was either to add more nodes (scaling out) or to increase the capacity of existing ones (scaling up). Workload requirements are traditionally fulfilled by selecting compute platforms from pools that best satisfy their average or maximum resource requirements, depending on the price the user is willing to pay. The processor, memory, storage, and network bandwidth of the selected platform must meet or exceed the workload's requirements; resources beyond those explicitly required are considered stranded (if unused) or bonus (if used). Meanwhile, workloads in all market segments have evolved significantly over the last decades. Today's workloads have a wider variety of platform requirements, including new technologies such as GPUs, FPGAs, and NVMe storage. These technologies are more expensive and therefore more limited in quantity, so it is no longer feasible to provision resources for potential peak demand: doing so significantly raises the total cost of ownership. Software-Defined Infrastructure (SDI), a new concept for data center architecture, is being developed to address these issues. The main SDI proposition is to disaggregate all resources over the fabric to enable the required flexibility. Under SDI, instead of pools of computational nodes, the pools consist of individual units of resources (CPU, memory, FPGA, NVMe, GPU, etc.). When an application needs to be executed, SDI identifies its computational requirements and assembles the required resources into a composite node. Resource disaggregation brings new challenges and opportunities, which this thesis explores. This thesis demonstrates that resource disaggregation creates opportunities to increase the efficiency of modern data centers. First, disaggregation can increase workloads' performance when they share a single resource, so fewer resources are needed to achieve similar results; conversely, disaggregation also allows resources to be aggregated, increasing a single workload's performance. However, to take full advantage of these characteristics and this flexibility, orchestrators must be aware of them. This thesis demonstrates how workload-aware techniques applied at the resource management level improve quality of service by leveraging resource disaggregation: enabling resource disaggregation reduces missed deadlines by up to 49% compared to a traditional schema, and the reduction rises to 100% when workload awareness is enabled. Moreover, this thesis demonstrates that GPU partitioning and disaggregation further enhance data center flexibility, achieving the same results with half the resources: a single physical GPU, partitioned and disaggregated, matches the results of two disaggregated but unpartitioned GPUs. Finally, this thesis demonstrates that resource fragmentation becomes key when the set of heterogeneous resources, namely NVMe and GPU, is limited. For such a heterogeneous set of resources, and specifically when some of them are highly demanded but limited in quantity (that is, when demand for a resource is unexpectedly high), this thesis proposes a fragmentation-minimizing technique that reduces missed deadlines by up to 86% compared to a disaggregation-aware policy.
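    To make the composite-node idea concrete, the following is a minimal Python sketch of an orchestrator assembling a node from disaggregated resource pools. The best-fit scoring loosely mirrors the fragmentation-minimizing idea; every name here (Resource, compose_node, the scoring rule) is an illustrative assumption, not the thesis's implementation.

        from dataclasses import dataclass

        @dataclass
        class Resource:
            kind: str        # e.g. "cpu", "gpu", "nvme"
            capacity: float  # total units on this physical device
            free: float      # units still unallocated

        def compose_node(pools: dict, demand: dict):
            """Pick one device per resource kind, best-fit to limit fragmentation."""
            chosen = {}
            for kind, need in demand.items():
                candidates = [r for r in pools.get(kind, []) if r.free >= need]
                if not candidates:
                    return None  # demand cannot be satisfied right now
                # Best fit: smallest leftover capacity, so scarce devices
                # (GPUs, NVMe) are not left with unusable slivers.
                chosen[kind] = min(candidates, key=lambda r: r.free - need)
            for kind, device in chosen.items():  # commit only if all kinds matched
                device.free -= demand[kind]
            return chosen

        # Example: half of one GPU plus four CPU cores form one composite node.
        pools = {"gpu": [Resource("gpu", 1.0, 1.0)], "cpu": [Resource("cpu", 16, 12)]}
        print(compose_node(pools, {"gpu": 0.5, "cpu": 4}))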

    Real-Time Localization Using Software Defined Radio

    Service providers use cost-effective wireless solutions to identify, localize, and possibly track users via their carried mobile devices (MDs) to support added services, such as geo-advertisement, security, and management. Indoor and outdoor hotspot areas play a significant role in such services; however, GPS does not work in many of these areas. To solve this problem, service providers leverage available indoor radio technologies, such as WiFi, GSM, and LTE, to identify and localize users. We focus our research on passive services provided by third parties, which are responsible for (i) data acquisition and (ii) processing, and on network-based services, where (i) and (ii) are done inside the serving network. To better understand the parameters that affect indoor localization, we investigate several factors that affect indoor signal propagation for both Bluetooth and WiFi technologies. For GSM-based passive services, we first developed a data acquisition module: a GSM receiver that can overhear GSM uplink messages transmitted by MDs while remaining invisible. A set of optimizations was made to the receiver components to support wideband capture of the GSM spectrum while operating in real time. Processing the wide GSM spectrum is made possible by a proposed distributed processing approach over an IP network. Then, to overcome the lack of information about tracked devices' radio settings, we developed two novel localization algorithms that rely on proximity-based solutions to estimate devices' locations in real environments. Given the challenging effects of indoor environments on radio signals, such as NLOS reception and multipath propagation, we developed an original algorithm to detect and remove contaminated radio signals before they are fed to the localization algorithm. To improve localization further, we extended our work with a hybrid approach that uses both WiFi and GSM interfaces to localize users. For network-based services, we used a software implementation of an LTE base station to develop our algorithms, which characterize the indoor environment before applying the localization algorithm. Experiments were conducted without any special hardware, prior knowledge of the indoor layout, or offline calibration of the system.
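    As a concrete illustration of the proximity-based approach, here is a minimal Python sketch of weighted-centroid localization from overheard RSSI readings; the log-distance conversion and all of its constants (tx_power, path-loss exponent n) are generic textbook assumptions, not the dissertation's calibrated algorithms.

        def rssi_to_weight(rssi_dbm, tx_power=-40.0, n=3.0):
            """Log-distance path-loss model: estimate distance, weight by 1/d."""
            distance = 10 ** ((tx_power - rssi_dbm) / (10 * n))
            return 1.0 / max(distance, 1e-6)

        def weighted_centroid(observations):
            """observations: (anchor_x, anchor_y, rssi_dbm) per overheard message."""
            w = [(x, y, rssi_to_weight(rssi)) for x, y, rssi in observations]
            total = sum(wi for _, _, wi in w)
            return (sum(x * wi for x, _, wi in w) / total,
                    sum(y * wi for _, y, wi in w) / total)

        # Three receivers at known positions overhear the same uplink message;
        # the estimate is pulled toward the anchor with the strongest signal.
        print(weighted_centroid([(0, 0, -55.0), (10, 0, -70.0), (0, 10, -72.0)]))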

    Energy aware network coding in wireless networks

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. By Xiaomeng Shi. Includes bibliographical references (p. 97-104). Energy is one of the most important considerations in designing reliable low-power wireless communication networks. We focus on the problem of energy-aware network coding. In particular, we investigate practical, energy-efficient network code design for wireless body area networks (WBANs). We first consider converge-cast in a star-shaped topology, in which a central base station (BS), or hub, manages and communicates directly with a set of nodes. We then consider a wireless relay channel, in which a relay node assists in the transmission of data from a source to a destination. This wireless relay channel can be seen as a simplified extended star network, where nodes have relay capabilities. The objective is to investigate the use of network coding in these scenarios, with the goal of achieving reliability under low-energy and low-power constraints. More specifically, in a star network, we propose a simple network-layer protocol, study the mean energy to complete uploads of given packets from the nodes to the BS using a Markov chain model, and show through numerical examples that when reception energy is taken into account, incorporating network coding reduces energy use. The achievable gains depend on the number of nodes in the network, the degree of asymmetry in the channel conditions experienced by different nodes, and the relative difference between transmitting and receiving power at the nodes. We also demonstrate the compatibility of the proposed scheme with the IEEE 802.15.6 WBAN standard by describing ways of incorporating network coding into standard-compliant systems. For the wireless relay channel, we explore the strategic use of network coding according to both throughput and energy metrics. In the relay channel, a single source communicates with a single sink through the aid of a half-duplex relay. A fluid flow model describes the case where both the source and the relay are coding, and Markov chain models are proposed to describe packet evolution when only the source or only the relay is coding. Although we do not attempt to explicitly categorize the optimal network coding strategies in the relay channel under different system parameters, we provide a framework for deciding whether and where to code, taking into account throughput maximization and energy depletion constraints.
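    The "mean energy to complete uploads" computation rests on a standard absorbing-Markov-chain identity: with Q the transition matrix among transient states and c the per-step energy cost, the expected energy to absorption solves (I - Q)e = c. The Python sketch below applies this identity to a toy two-packet upload over an erasure channel; the chain and all constants are illustrative assumptions, not the thesis's WBAN model.

        import numpy as np

        def mean_energy_to_absorption(Q, c):
            """Solve (I - Q) e = c for the expected energy e from each state."""
            return np.linalg.solve(np.eye(Q.shape[0]) - Q, c)

        eps = 0.2                       # per-slot erasure probability
        E_tx, E_rx = 1.0, 0.6           # energy per transmission / per feedback reception
        # Transient states: [2 packets missing, 1 packet missing].
        Q = np.array([[eps, 1 - eps],   # from "2 missing": stay, or advance to "1"
                      [0.0, eps]])      # from "1 missing": stay until the last success
        c = np.array([E_tx + E_rx] * 2) # energy spent in every slot, in either state
        print(mean_energy_to_absorption(Q, c))  # -> [4.0, 2.0] energy units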

    Edge Intelligence: Empowering Intelligence to the Edge of Network

    Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in proximity to where the data are captured, based on artificial intelligence. Edge intelligence aims to enhance data processing while protecting the privacy and security of the data and users. Although this field of research emerged only recently, spanning the period from 2011 to now, it has shown explosive growth over the past five years. In this article, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, i.e., edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyze the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This article provides a comprehensive survey of edge intelligence and its application areas. In addition, we summarize the development of the emerging research fields and the current state of the art, and discuss important open issues and possible theoretical and technical directions. Peer reviewed
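    Of the four components, edge offloading is the easiest to make concrete: a device runs a task locally or ships it to an edge server, whichever minimizes a weighted latency/energy cost. The Python sketch below shows that decision rule under generic assumptions; the cost model and every constant are illustrative, not taken from the article.

        from dataclasses import dataclass

        @dataclass
        class Task:
            cycles: float      # CPU cycles the task needs
            input_bits: float  # bits to upload if offloaded

        def local_cost(t, f_local=1e9, power_w=2.0):
            latency = t.cycles / f_local
            return latency, latency * power_w               # (seconds, joules)

        def offload_cost(t, f_edge=10e9, uplink_bps=20e6, tx_power_w=0.5):
            tx = t.input_bits / uplink_bps
            return tx + t.cycles / f_edge, tx * tx_power_w  # device pays only for tx

        def should_offload(t, w_lat=0.5, w_en=0.5):
            ll, le = local_cost(t)
            ol, oe = offload_cost(t)
            return w_lat * ol + w_en * oe < w_lat * ll + w_en * le

        # A compute-heavy task with a small input is worth offloading.
        print(should_offload(Task(cycles=5e9, input_bits=8e6)))  # -> True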