143 research outputs found

    Performance assessment of 40 Gbit/s off-the-shelf network cards for virtual network probes in 5G networks

    Incoming 5G networks will evolve in how they operate due to the use of virtualization technologies. The network functions that are necessary for communication will be virtual and will run on top of commodity servers. Among these functions, it will be essential to deploy monitoring probes, which provide information on how the network is behaving, later analyzed for self-management purposes. However, to date, network probes have needed to be physical to perform at link rates in high-speed networks, and it is challenging to deploy them in virtual environments. Thus, it will be necessary to rely on bare-metal accelerators to deal with existing input/output (I/O) performance problems. To control the costs of implementing these virtual network probes, our approach is to leverage the capabilities that current commercial off-the-shelf network cards provide for virtual environments. Specifically, to this end, we have implemented HPCAP40vf, a GPL-licensed driver, available for download, for network capture in virtual machines. This driver handles the communication with an Intel XL710 40 Gbit/s commercial network card to enable a network monitoring application to run within a virtual machine. To store the captured traffic, we rely on NVMe drives, due to their high transfer rate, as they are directly connected to the PCIe bus. We have assessed the performance of this approach and compared it with DPDK, in terms of both capturing and storing the network traffic, by measuring the achieved data rates. The evaluation has taken into account two virtualization technologies, namely KVM and Docker, and two access methods to the underlying hardware, namely VirtIO and PCI passthrough. With this methodology, we have identified bottlenecks and determined the optimal solution in each case to reduce the overheads due to virtualization. This approach can also be applied to the development of other performance-hungry virtual network functions. The obtained results demonstrate the feasibility of our proposal: when the capabilities that current commercial network cards provide are used correctly, our virtual network probe can monitor at 40 Gbit/s with full packet capture and storage while simultaneously tracking the traffic exchanged among other virtual network functions inside the host and with the external network.
    This work has been partially supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund under project TRÁFICA (MINECO/FEDER TEC2015-69417-C2-1-R), and by the European Commission under project H2020 METRO-HAUL (Project ID: 761727).
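
    The storage argument in this abstract reduces to simple line-rate arithmetic: lossless capture at 40 Gbit/s needs roughly 5 GB/s of sustained write bandwidth (hence NVMe drives directly on the PCIe bus) and, for minimum-size Ethernet frames, close to 60 Mpps of packet processing. A minimal back-of-the-envelope sketch in Python; the figures below are generic line-rate numbers, not parameters taken from the paper:

        # Back-of-the-envelope dimensioning for full packet capture at 40 Gbit/s.
        # Generic line-rate arithmetic only; HPCAP40vf internals are not modeled.

        LINE_RATE_BPS = 40e9  # nominal link rate in bits per second

        def storage_bandwidth_gbytes_per_s(line_rate_bps: float = LINE_RATE_BPS) -> float:
            """Sustained write bandwidth needed to store every captured byte."""
            return line_rate_bps / 8 / 1e9  # 40 Gbit/s -> 5 GB/s

        def packet_rate_mpps(frame_size: int, line_rate_bps: float = LINE_RATE_BPS) -> float:
            """Worst-case packet arrival rate for a given Ethernet frame size.

            Each frame occupies frame_size bytes on the wire plus 20 bytes of
            preamble, start-of-frame delimiter and inter-frame gap.
            """
            bits_per_frame = (frame_size + 20) * 8
            return line_rate_bps / bits_per_frame / 1e6

        if __name__ == "__main__":
            print(f"Sustained storage bandwidth: {storage_bandwidth_gbytes_per_s():.1f} GB/s")
            for size in (64, 512, 1500):
                print(f"{size:>4}-byte frames: {packet_rate_mpps(size):.2f} Mpps")

    For 64-byte frames this works out to roughly 59.5 Mpps, which illustrates why per-packet overheads in the access method (VirtIO versus PCI passthrough) matter as much as raw storage bandwidth.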

    High Temperature Electronics Design for Aero Engine Controls and Health Monitoring

    There is a growing desire to install electronic power and control systems in high-temperature harsh environments to improve the accuracy of critical measurements, reduce the amount of cabling and eliminate cooling systems. Typical target applications include electronics for energy exploration, power generation and control systems. Technical topics presented in this book include:
    • High temperature electronics market
    • High temperature devices, materials and assembly processes
    • Design, manufacture and testing of a multi-sensor data acquisition system for aero-engine control
    • Future applications for high temperature electronics
    High Temperature Electronics Design for Aero Engine Controls and Health Monitoring contains details of state-of-the-art design and manufacture of electronics targeted towards a high-temperature aero-engine application. It is ideal for design, manufacturing and test personnel in the aerospace and other harsh-environment industries, as well as academic staff and master's/research students in electronics engineering, materials science and aerospace engineering.

    3D flash memory cube design utilizing COTS for space

    With the rapid growth in the scope of space missions, the computing capability of on-board spacecraft is becoming a major limiting factor for future missions. This thesis is part of a project that designs a radiation-hardened NAND Flash memory cube for use in space. The main focus of this thesis is on presenting a new NAND Flash memory cube design with features that allow the cube to be used in space. Another goal of the thesis is to show proof of concept of the cube. Thus, a preliminary RTL design was developed for the memory controller portion of the design, and simulated results are shown in the later part of the thesis. M.S.

    The DeepHealth Toolkit: A key European free and open-source software for deep learning and computer vision ready to exploit heterogeneous HPC and cloud architectures

    At the present time, we are immersed in the convergence of Big Data, High-Performance Computing and Artificial Intelligence. Technological progress in these three areas has accelerated in recent years, forcing different players, such as software companies and stakeholders, to move quickly. The European Union is dedicating substantial resources to maintaining its relevant position in this scenario, funding projects to implement large-scale pilot testbeds that combine the latest advances in Artificial Intelligence, High-Performance Computing, Cloud and Big Data technologies. The DeepHealth project is an example focused on the health sector whose main outcome is the DeepHealth toolkit, a European unified framework that offers deep learning and computer vision capabilities, completely adapted to exploit underlying heterogeneous High-Performance Computing, Big Data and cloud architectures, and ready to be integrated into any software platform to facilitate the development and deployment of new applications for specific problems in any sector. This toolkit is intended to be one of the European contributions to the field of AI. This chapter introduces the toolkit with its main components and complementary tools, providing a clear view to facilitate and encourage its adoption and wide use by the European community of developers of AI-based solutions and data scientists working in the healthcare sector and others.
    This chapter describes work undertaken in the context of the DeepHealth project, "Deep-Learning and HPC to Boost Biomedical Applications for Health", which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825111. Peer reviewed. Article signed by 19 authors: Marco Aldinucci, David Atienza, Federico Bolelli, Mónica Caballero, Iacopo Colonnelli, José Flich, Jon A. Gómez, David González, Costantino Grana, Marco Grangetto, Simone Leo, Pedro López, Dana Oniga, Roberto Paredes, Luca Pireddu, Eduardo Quiñones, Tatiana Silva, Enzo Tartaglione & Marina Zapater. Postprint (author's final draft).

    Harnessing low-level tuning in modern architectures for high-performance network monitoring in physical and virtual platforms

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Date of defense: 02-07-201

    Leveraging disaggregated accelerators and non-volatile memories to improve the efficiency of modern datacenters

    Traditional data centers consist of computing nodes that have all their resources physically attached. When there is a need to handle greater demand, the solution has been either to add more nodes (scaling out) or to increase the capacity of existing ones (scaling up). Workload requirements are traditionally fulfilled by selecting compute platforms from pools that best satisfy their average or maximum resource requirements, depending on the price that the user is willing to pay. The amount of processor, memory, storage and network bandwidth of a selected platform needs to meet or exceed the platform requirements of the workload. Beyond those explicitly required by the workload, additional resources are considered stranded resources (if not used) or bonus resources (if used). Meanwhile, workloads in all market segments have evolved significantly during the last decades. Today, workloads have a larger variety of requirements in terms of the characteristics of the computing platforms; these new requirements include technologies such as GPUs, FPGAs, NVMe drives, etc. These technologies are more expensive and therefore scarcer. It is no longer feasible to increase the number of resources according to potential peak demands, as this significantly raises the total cost of ownership.
    Software-Defined Infrastructure (SDI), a new concept for the data center architecture, is being developed to address those issues. The main SDI proposition is to disaggregate all the resources over the fabric to enable the required flexibility. In SDI, instead of pools of computational nodes, the pools consist of individual units of resources (CPU, memory, FPGA, NVMe, GPU, etc.). When an application needs to be executed, SDI identifies its computational requirements and assembles all the resources required, creating a composite node. Resource disaggregation brings new challenges and opportunities that this thesis explores.
    This thesis demonstrates that resource disaggregation brings opportunities to increase the efficiency of modern data centers. It shows that resource disaggregation may increase workloads' performance when sharing a single resource, thus needing fewer resources to achieve similar results, and, conversely, that disaggregation enables the aggregation of resources, increasing a workload's performance. However, to take maximum advantage of these characteristics and this flexibility, orchestrators must be aware of them. This thesis demonstrates how workload-aware techniques applied at the resource management level improve quality of service by leveraging resource disaggregation: enabling resource disaggregation reduces missed deadlines by up to 49% compared to a traditional schema, and this reduction can rise to 100% when workload awareness is enabled. Moreover, this thesis demonstrates that GPU partitioning and disaggregation further enhance data center flexibility; this increased flexibility can achieve the same results with half the resources, that is, a single physical GPU, partitioned and disaggregated, achieves the same results as two disaggregated but unpartitioned GPUs. Finally, this thesis demonstrates that resource fragmentation becomes key when there is a limited set of heterogeneous resources, namely NVMe drives and GPUs. For a heterogeneous set of resources, and specifically when some of those resources are in high demand but limited in quantity, that is, when the demand for a resource is unexpectedly high, this thesis proposes a technique to minimize fragmentation that reduces missed deadlines by up to 86% compared to a disaggregation-aware policy.
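
    To make the composite-node idea concrete, the following toy sketch shows how an SDI-style orchestrator might assemble a node from disaggregated resource pools. It is an illustration only, not the scheduling policy developed in the thesis, and all names and capacities (ResourceUnit, assemble_composite_node, the pool contents) are hypothetical:

        # Toy illustration of SDI-style composite-node assembly from disaggregated
        # resource pools. Hypothetical names and numbers; the thesis' workload-aware
        # and fragmentation-aware policies are not reproduced here.
        from dataclasses import dataclass, field

        @dataclass
        class ResourceUnit:
            kind: str          # "cpu", "memory", "gpu", "nvme", ...
            capacity: float    # cores, GiB, devices, ...
            in_use: bool = False

        @dataclass
        class CompositeNode:
            units: list = field(default_factory=list)

        def assemble_composite_node(pools: dict, requirements: dict) -> CompositeNode:
            """Claim free units from each pool until the workload's demand is met.

            Raises RuntimeError when a pool cannot cover the requested capacity,
            mimicking a scheduler that would otherwise queue or reject the workload.
            """
            node = CompositeNode()
            for kind, demand in requirements.items():
                remaining = demand
                for unit in pools.get(kind, []):
                    if remaining <= 0:
                        break
                    if not unit.in_use:
                        unit.in_use = True
                        node.units.append(unit)
                        remaining -= unit.capacity
                if remaining > 0:
                    raise RuntimeError(f"not enough free '{kind}' capacity")
            return node

        if __name__ == "__main__":
            pools = {
                "cpu":  [ResourceUnit("cpu", 16) for _ in range(4)],
                "gpu":  [ResourceUnit("gpu", 1) for _ in range(2)],
                "nvme": [ResourceUnit("nvme", 1) for _ in range(2)],
            }
            node = assemble_composite_node(pools, {"cpu": 32, "gpu": 1, "nvme": 1})
            print(f"composite node built from {len(node.units)} disaggregated units")

    A workload-aware or fragmentation-aware policy, as studied in the thesis, would differ mainly in how units are chosen (for example, preferring partially used GPUs when partitioning is available) rather than in this basic assembly loop.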

    Proceedings of the Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015) Krakow, Poland

    Proceedings of: Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015). Krakow (Poland), September 10-11, 2015