
    Composable architecture for rack scale big data computing

    The rapid growth of cloud computing, in both the spectrum and the volume of cloud workloads, necessitates revisiting the traditional datacenter design based on rack-mountable servers. Next-generation datacenters need to offer enhanced support for: (i) fast-changing system configuration requirements driven by workload constraints, (ii) timely adoption of emerging hardware technologies, and (iii) maximal sharing of systems and subsystems in order to lower costs. Disaggregated datacenters, built as collections of individual resources such as CPUs, memory, and disks, and composed into workload execution units on demand, are an interesting new trend that can address these challenges. In this paper, we demonstrate the feasibility of composable systems by building a rack-scale composable system prototype using a PCIe switch. Through empirical approaches, we assess the opportunities and challenges of leveraging the composable architecture for rack-scale cloud datacenters, with a focus on big data and NoSQL workloads. In particular, we compare and contrast the programming models that can be used to access the composable resources, and we develop the implications for network and resource provisioning and management in a rack-scale architecture.
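The core idea of the paper, carving workload execution units out of a shared pool of disaggregated rack resources, can be illustrated with a minimal sketch. All names here (`ResourcePool`, `compose`, `release`) are hypothetical and are not the authors' API:

```python
from dataclasses import dataclass


@dataclass
class ResourcePool:
    """Disaggregated rack-scale resources, composed on demand."""
    cpus: int
    memory_gb: int
    disks: int

    def compose(self, cpus: int, memory_gb: int, disks: int) -> dict:
        # Carve a workload execution unit out of the shared pool.
        if cpus > self.cpus or memory_gb > self.memory_gb or disks > self.disks:
            raise RuntimeError("insufficient free resources in rack")
        self.cpus -= cpus
        self.memory_gb -= memory_gb
        self.disks -= disks
        return {"cpus": cpus, "memory_gb": memory_gb, "disks": disks}

    def release(self, unit: dict) -> None:
        # Return a unit's resources to the pool when the workload finishes.
        self.cpus += unit["cpus"]
        self.memory_gb += unit["memory_gb"]
        self.disks += unit["disks"]


pool = ResourcePool(cpus=64, memory_gb=512, disks=16)
unit = pool.compose(cpus=8, memory_gb=64, disks=2)
print(pool.cpus)  # 56 CPUs left in the pool
```

The point of the sketch is only the lifecycle: resources are pooled at rack scale, allocated into a unit for the lifetime of a workload, and returned afterward, rather than being statically bound to one server.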

    State of the art of hypervisors for FPGAs and their interference in communication

    With the exponential rise in the needs of both users and companies to increase the communication speed of their systems, developers have been forced to use more and more FPGAs. However, FPGAs turn out to be expensive and powerful devices that, on many occasions, are not used to their full potential, so, in order to achieve economic savings (and since the hardware allows it), FPGAs have begun to be virtualized. Virtualization is a well-known technology, commonly used in both desktop computers and servers, which is now being transferred to ARM architectures, making it possible to install hypervisors on FPGAs. In this work, a study of the various hypervisors currently available on the market is carried out, followed by the deployment of the XNG hypervisor on a ZYBO board and an evaluation of its network communication speeds. The first part, chapters 5 to 8, is a state of the art focused on virtualizers, including a classification of them together with a description of the different hypervisors and their corresponding classification. The second part, chapters 9 and 10, contains a brief description of some of the hypervisors that can be used on ARM architectures. The third and last part, chapter 11, describes, firstly, how to install Linux on a ZYBO development board and, secondly, how to install the XNG hypervisor and a Linux partition on the same board. This section also presents the measurements of the communication speeds for each scenario and compares the results obtained.

    PCIe Device Lending

    We have developed a proof of concept for allowing a PCI Express device attached to one computer to be used by another computer without any software intermediary on the data path. The device driver runs on a physically separate machine from the device, but our implementation allows the device driver and device to communicate as if they were in the same machine, without modifying either the driver or the device. The kernel and higher-level software can utilize the device as if it were a local device. A device will not be used by two separate machines at the same time, but a machine can transfer control of a local device to a remote machine. We have named this concept "device lending". We envision that machines will have, in addition to local PCIe devices, access to a pool of remote PCIe devices. When a machine needs more device resources, additional devices can be dynamically borrowed from other machines with devices to spare. These devices can be located in a dedicated external cabinet, or be devices inserted into internal slots in a normal computer. Device lending is implemented using a Non-Transparent Bridge (NTB), a native PCIe interconnect that should offer performance close to that of a locally connected device. Devices that are not currently being lent to another host are not affected in any way. NTBs are available as add-ons for any PCIe-based computer and are included in newer Intel Xeon CPUs. The proof of concept we created was implemented for Linux, on top of the APIs provided by our NTB vendor, Dolphin. The host borrowing a device has a kernel module to provide the necessary software support, and the other host runs a user-space daemon. No additional software modifications or hardware are required, nor is special support needed from the devices. The current implementation works with some devices, but has problems with others. We believe, however, that we have identified the problems and how to improve the situation, and that in a later implementation all the devices we have tested can be made to work correctly and with very high performance.
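The ownership model behind device lending, where exactly one host controls a device at any time and lending transfers control rather than sharing it, can be sketched as a toy model. The class and method names (`Device`, `Host`, `lend`) are invented for illustration and are not the authors' implementation:

```python
class Device:
    """A PCIe device; `owner` is the host currently controlling it."""
    def __init__(self, name: str):
        self.name = name
        self.owner = None


class Host:
    """A machine with a set of local and borrowed PCIe devices."""
    def __init__(self, name: str, local_devices=()):
        self.name = name
        self.devices = []
        for dev in local_devices:
            dev.owner = self
            self.devices.append(dev)

    def lend(self, device: Device, borrower: "Host") -> None:
        # A device is controlled by exactly one host at a time;
        # lending transfers control, it never shares it.
        if device.owner is not self:
            raise RuntimeError(f"{self.name} does not control {device.name}")
        self.devices.remove(device)
        device.owner = borrower
        borrower.devices.append(device)


gpu = Device("gpu0")
lender = Host("host-a", [gpu])
borrower = Host("host-b")
lender.lend(gpu, borrower)
print(gpu.owner.name)  # host-b
```

In the real system the transfer is mediated by the NTB and the lender's user-space daemon, but the invariant is the same: a lent device disappears from the lender's view and appears as a local device to the borrower until control is returned.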