
    Challenges in real-time virtualization and predictable cloud computing

    Cloud computing and virtualization technology have revolutionized general-purpose computing applications in the past decade. The cloud paradigm offers advantages through reduced operating costs, server consolidation, flexible system configuration and elastic resource provisioning. However, despite the success of cloud computing for general-purpose computing, existing cloud computing and virtualization technology face tremendous challenges in supporting emerging soft real-time applications such as online video streaming, cloud-based gaming, and telecommunication management. These applications demand real-time performance in open, shared and virtualized computing environments. This paper identifies the technical challenges in supporting real-time applications in the cloud, surveys recent advances in real-time virtualization and cloud computing technology, and offers research directions to enable cloud-based real-time applications in the future.

    Improving Performance and Energy Efficiency of Heterogeneous Systems with rCUDA

    In the last decade the use of GPGPU (General Purpose computing in Graphics Processing Units) has become extremely popular in data centers around the world. GPUs (Graphics Processing Units) have been established as computational accelerators that are used alongside CPUs to form heterogeneous systems. The massively parallel nature of GPUs, traditionally intended for graphics computing, makes it possible to perform numerical operations on data arrays at high speed, thanks to the large number of cores GPUs integrate and their large memory access bandwidth. Consequently, applications from all kinds of fields, such as chemistry, physics, engineering, artificial intelligence and materials science, that present this type of computational pattern benefit from drastically reduced execution times. In general, the computing acceleration provided by GPUs has meant a step forward and a revolution, but it is not without problems, such as energy efficiency issues, low GPU utilization, and high acquisition and maintenance costs. In this PhD thesis we aim to analyze the main shortcomings of these heterogeneous systems and propose solutions based on the use of remote GPU virtualization. To that end, we have used the rCUDA middleware, developed at Universitat Politècnica de València; many publications support rCUDA as the most advanced remote GPU virtualization framework available today. The results obtained in this PhD thesis show that the use of rCUDA in Cloud Computing environments increases the degree of freedom of the system, as it allows virtual instances of the physical GPUs to be created and fully tailored to the needs of each of the virtual machines. In HPC (High Performance Computing) environments, rCUDA also provides a greater degree of flexibility in the use of GPUs throughout the computing cluster, as it allows the CPU part of applications to be completely decoupled from the GPU part. In addition, GPUs can be on any node in the cluster, regardless of the node on which the CPU part of the application is running. In general, both for Cloud Computing and for HPC, this greater degree of flexibility translates into an up to 2x increase in system-wide throughput while reducing energy consumption by approximately 15%. Finally, we have also developed a job migration mechanism for the GPU part of applications that has been integrated within the rCUDA middleware. This migration mechanism has been evaluated and the results clearly show that, in exchange for a small overhead of about 400 milliseconds in the execution time of the applications, it is a powerful tool with which, again, we can increase productivity and reduce the energy footprint of the computing system. In summary, this PhD thesis analyzes the main problems arising from the use of GPUs as computing accelerators, both in HPC and Cloud Computing environments, and demonstrates how, thanks to the rCUDA middleware, these problems can be addressed.
In addition, a powerful GPU job migration mechanism is developed which, integrated within the rCUDA framework, becomes a key tool for future job schedulers in heterogeneous clusters. This work was jointly supported by the Fundación Séneca (Agencia Regional de Ciencia y Tecnología, Región de Murcia) under grants 20524/PDC/18, 20813/PI/18 and 20988/PI/18, and by the Spanish MEC and European Commission FEDER under grants TIN2015-66972-C5-3-R, TIN2016-78799-P and CTQ2017-87974-R (AEI/FEDER, UE). We also thank NVIDIA for hardware donations under the GPU Educational Center 2014-2016 and Research Center 2015-2016 programs. The authors thankfully acknowledge the computer resources at CTE-POWER and the technical support provided by Barcelona Supercomputing Center - Centro Nacional de Supercomputación (RES-BCV-2018-3-0008). Furthermore, researchers from Universitat Politècnica de València are supported by the Generalitat Valenciana under Grant PROMETEO/2017/077. The authors are also grateful for the generous support provided by Mellanox Technologies Inc. Prof. Pradipta Purkayastha, from the Department of Chemical Sciences, Indian Institute of Science Education and Research (IISER) Kolkata, is acknowledged for kindly providing the initial ligand and DNA structures. Prades Gasulla, J. (2021). Improving Performance and Energy Efficiency of Heterogeneous Systems with rCUDA [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/168081
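As a rough illustration of the workflow described above, the sketch below launches an unmodified CUDA binary while pointing it at GPUs served by other nodes. The environment variable names follow the rCUDA user guide, but the host names, port, library path and the vector_add binary are assumptions made up for this example and should be adapted to the actual installation.

```python
import os
import subprocess

# Hedged sketch: run an unmodified CUDA application against remote GPUs under a
# remote-GPU-virtualization middleware such as rCUDA. Hosts, port, paths and the
# application binary are illustrative assumptions.
env = dict(os.environ)
env["RCUDA_DEVICE_COUNT"] = "2"              # two virtual GPUs visible to the application
env["RCUDA_DEVICE_0"] = "gpu-node-01:8308"   # assumed host:port of a remote rCUDA server
env["RCUDA_DEVICE_1"] = "gpu-node-02:8308"   # a second GPU served by another node
# The client library that forwards CUDA calls is picked up through the loader path
# (the path is an assumption); the application binary itself is left unchanged.
env["LD_LIBRARY_PATH"] = "/opt/rCUDA/lib:" + env.get("LD_LIBRARY_PATH", "")

subprocess.run(["./vector_add"], env=env, check=True)  # hypothetical CUDA application
```

Because the CPU part and the GPU part of the application are decoupled in this way, the same launcher can point the application at GPUs located on any node of the cluster.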

    VirIoT: A Cloud of Things that offers IoT infrastructures as a service

    Many cloud providers offer IoT services that simplify the collection and processing of IoT information. However, the IoT infrastructure composed of sensors and actuators that produces this information remains outside the cloud; therefore, application developers must install, connect and manage it themselves. This requirement can be a market barrier, especially for small and medium software companies that cannot afford the associated infrastructure costs and would prefer to focus only on IoT application development. Motivated by the wish to eliminate this barrier, this paper proposes a Cloud of Things platform, called VirIoT, which fully brings the Infrastructure-as-a-Service model typical of cloud computing to the world of the Internet of Things. VirIoT provides users with virtual IoT infrastructures (Virtual Silos) composed of virtual things, with which users interact through dedicated and standardized broker servers whose technology can be chosen from those offered by the platform, such as oneM2M, NGSI and NGSI-LD. VirIoT allows developers to focus their efforts exclusively on IoT applications without worrying about infrastructure management, and allows cloud providers to expand their IoT service portfolios. VirIoT uses external things and cloud/edge computing resources to deliver its IoT virtualization services. Its open-source architecture is microservice-based and runs on top of a distributed Kubernetes platform with nodes in central and edge data centers. The architecture is scalable, efficient and able to support the continuous integration of heterogeneous things and IoT standards, taking care of interoperability issues. Using a VirIoT deployment spanning data centers in Europe and Japan, we conducted a performance evaluation with a two-fold objective: showing the efficiency and scalability of the architecture, and leveraging VirIoT's ability to integrate different IoT standards in order to make a fair comparison of some open-source IoT broker implementations, namely Mobius for oneM2M, Orion for NGSIv2, and Orion-LD and Scorpio for NGSI-LD.
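For a concrete flavor of how a Virtual Silo is consumed, the minimal sketch below reads virtual things from a silo's NGSI-LD broker (e.g. Orion-LD or Scorpio). The /ngsi-ld/v1/entities path is the standard ETSI NGSI-LD API; the broker address and the entity type are assumptions made up for this example.

```python
import requests

# Minimal sketch, assuming a reachable virtual-silo broker: list the virtual things
# exposed by a Virtual Silo through its NGSI-LD broker and print one attribute.
BROKER = "http://virtual-silo.example.org:1026"   # assumed broker endpoint of the silo

resp = requests.get(
    f"{BROKER}/ngsi-ld/v1/entities",
    params={"type": "TemperatureSensor"},          # assumed virtual-thing type
    headers={"Accept": "application/ld+json"},
    timeout=10,
)
resp.raise_for_status()
for entity in resp.json():
    print(entity["id"], entity.get("temperature", {}).get("value"))
```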

    Towards a Cyber-Physical Manufacturing Cloud through Operable Digital Twins and Virtual Production Lines

    In the last decade, the paradigm of Cyber-Physical Systems (CPS) has integrated industrial manufacturing systems with Cloud Computing technologies for Cloud Manufacturing. Up to 2015, many CPS-based manufacturing systems collected real-time machining data to perform remote monitoring, prognostics and health management, and predictive maintenance. However, these CPS-integrated, network-ready machines were not directly connected to the elements of Cloud Manufacturing and required a human in the loop. Addressing this gap, in 2017 we introduced a new paradigm, the Cyber-Physical Manufacturing Cloud (CPMC), that bridges the gap between physical machines and virtual space. CPMC virtualizes machine tools in the cloud through web services for direct monitoring and operation over the Internet. Fundamentally, CPMC differs from contemporary manufacturing paradigms. For instance, CPMC virtualizes machining tools in the cloud using remote services and establishes direct Internet-based communication, which is overlooked in existing Cloud Manufacturing systems. Another contemporary paradigm, cyber-physical production systems, enables networked access to machining tools; CPMC, in contrast, virtualizes manufacturing resources in the cloud and monitors and operates them over the Internet. This dissertation defines the fundamental concepts of CPMC and expands its horizon to different aspects of cloud-based virtual manufacturing, such as Digital Twins and Virtual Production Lines. The Digital Twin (DT) is another concept, evolving since 2002, that creates as-is replicas of machining tools in cyber space. Up to 2018, many researchers proposed state-of-the-art DTs that focused only on monitoring production lifecycle management through simulations and data-driven analytics, but overlooked executing manufacturing processes through DTs from virtual space. This dissertation identifies that DTs can be made more productive if, besides monitoring, they engage directly in the execution of manufacturing operations. Towards this novel approach, this dissertation proposes a new operable DT model of CPMC that inherits the features of direct monitoring and operation from the cloud. This research envisages, and opens the door for, future manufacturing systems where resources are developed as cloud-based DTs for remote and distributed manufacturing. The proposed concepts and visions of DTs have spawned the following fundamental research directions. In 2019, this dissertation proposed a novel concept of DT-based Virtual Production Lines (VPL) in CPMC, together with a design of a service-oriented architecture of DTs that virtualizes physical manufacturing resources in CPMC and offers a more compact and integral service-oriented virtual representation of those resources. To re-configure a VPL, one requirement is to establish DT-to-DT collaborations in manufacturing clouds, which mirror concurrent resource-to-resource collaborations on shop floors. Satisfying these requirements, this research designs a novel framework to easily re-configure, monitor and operate VPLs using the DTs of CPMC. CPMC publishes individual web services for machining tools, which is a traditional approach in the domain of service computing, but this approach overcrowds service registry databases. In 2020, this dissertation introduced OpenDT, a novel service publication and discovery approach that publishes DTs with collections of services. Experimental results show easier discovery and remote access of DTs while re-configuring VPLs. The research proposed in this dissertation has received numerous citations from both industry and academia, clearly demonstrating the impact of its contributions.
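The sketch below is a toy, in-memory illustration of the operable-DT idea discussed above: each digital twin bundles monitoring and operation services, and the registry publishes whole DTs (in the spirit of OpenDT) rather than one entry per web service. All class names, services and values are assumptions made up for this example, not the dissertation's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class DigitalTwin:
    """A twin that exposes both monitoring and operation services."""
    machine_id: str
    services: Dict[str, Callable[..., object]] = field(default_factory=dict)

    def invoke(self, service: str, **kwargs):
        return self.services[service](**kwargs)

registry: Dict[str, DigitalTwin] = {}   # stand-in for a service registry database

def publish(dt: DigitalTwin) -> None:
    registry[dt.machine_id] = dt        # one entry per DT, not one per service

# A DT of a milling machine exposing monitoring and direct operation.
mill_dt = DigitalTwin("mill-01", {
    "spindle_speed": lambda: 4200,                        # monitoring (rpm, dummy value)
    "start_job": lambda gcode: f"started {len(gcode)}B",  # operation executed from the cloud
})
publish(mill_dt)

# Discovery plus remote operation, e.g. while re-configuring a virtual production line.
dt = registry["mill-01"]
print(dt.invoke("spindle_speed"))
print(dt.invoke("start_job", gcode="G0 X10 Y10"))
```

Publishing one entry per DT keeps the registry compact and lets a virtual production line be re-configured by discovering and invoking twins directly.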

    Green IS in Infrastructure Software

    As the world becomes a more connected place, organizations become more dependent on infrastructure software such as operating systems and middleware. Infrastructure software, and the hardware it runs on, consumes a lot of electricity, and in a world where the climate threat is increasingly imminent, aspects of Green IS are more relevant than ever. There is a lot of research on the characteristics of Green IS, but not as much on what is adopted in practice, especially within organizations whose main industry is not IT. In this study, we examine to what extent retail and manufacturing organizations adopt aspects of Green IS to increase their contribution to environmental sustainability. Four infrastructure software platforms were surveyed, through four group interviews with a total of 25 participants, on each platform's adoption of five Green IS aspects. We found that virtualization and cloud computing, as well as efficiency and optimization, are well-adopted aspects, whereas automation and monitoring and KPIs are not as prominent. The last aspect, data growth management, was in all cases adopted very little or not at all.

    On the Enhancement of Remote GPU Virtualization in High Performance Clusters

    Graphics Processing Units (GPUs) are being adopted in many computing facilities given their extraordinary computing power, which makes it possible to accelerate many general-purpose applications from different domains. However, GPUs also present several side effects, such as increased acquisition costs and larger space requirements. They also require more powerful energy supplies. Furthermore, GPUs still consume some amount of energy while idle, and their utilization is usually low for most workloads. In a similar way to virtual machines, the use of virtual GPUs may address the aforementioned concerns. In this regard, the remote GPU virtualization mechanism allows an application being executed in one node of the cluster to transparently use the GPUs installed at other nodes. Moreover, this technique allows the GPUs present in the computing facility to be shared among the applications being executed in the cluster. In this way, several applications being executed in different (or the same) cluster nodes can share one or more GPUs located in other nodes of the cluster. Sharing GPUs should increase overall GPU utilization, thus reducing the negative impact of the side effects mentioned before; it may also make it possible to reduce the total number of GPUs installed in the cluster. In this dissertation we enhance a framework offering remote GPU virtualization capabilities, referred to as rCUDA, for its use in high-performance clusters. While the initial prototype version of rCUDA demonstrated its functionality, it also revealed concerns with respect to usability, performance, and support for new GPU features, which prevented its use in production environments. These issues motivated this thesis, in which all the research is conducted primarily with the aim of turning rCUDA into a production-ready solution for eventual transfer to industry. The new version of rCUDA resulting from this work reduces the execution time of the applications analyzed by up to 35% with respect to the initial version. Compared to the use of local GPUs, the overhead of this new version of rCUDA is below 5% for the applications studied when using the latest high-performance computing networks available. Reaño González, C. (2017). On the Enhancement of Remote GPU Virtualization in High Performance Clusters [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86219
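The sketch below illustrates, in a very simplified form, the forwarding pattern that remote GPU virtualization frameworks rely on: the client side packs each CUDA-like call and ships it to a daemon on the GPU node, which executes it on a real device and returns the result. The wire format, the dummy in-process server and all names are assumptions for illustration only, not rCUDA's actual protocol.

```python
import json
import socket
import threading

def dummy_gpu_server(sock: socket.socket) -> None:
    # Stand-in for the daemon running on the GPU node: acknowledge each call.
    conn, _ = sock.accept()
    with conn, conn.makefile("rwb") as f:
        for line in f:
            req = json.loads(line)
            f.write(json.dumps({"api": req["api"], "status": 0}).encode() + b"\n")
            f.flush()

srv = socket.create_server(("127.0.0.1", 0))          # loopback server for the demo
threading.Thread(target=dummy_gpu_server, args=(srv,), daemon=True).start()

class RemoteGPU:
    """Client-side stub: the application 'thinks' it talks to a local runtime."""
    def __init__(self, addr):
        self.io = socket.create_connection(addr).makefile("rwb")

    def call(self, api: str, **args):
        self.io.write(json.dumps({"api": api, "args": args}).encode() + b"\n")
        self.io.flush()
        return json.loads(self.io.readline())          # daemon's reply

gpu = RemoteGPU(srv.getsockname())
print(gpu.call("cudaMalloc", size=1 << 20))
print(gpu.call("cudaLaunchKernel", name="vecAdd", grid=256, block=256))
```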

    Virtualization services: scalable methods for virtualizing multicore systems

    Multi-core technology is bringing parallel processing capabilities from servers to laptops and even handheld devices. At the same time, platform support for system virtualization is making it easier to consolidate server and client resources, when and as needed by applications. This consolidation is achieved by dynamically mapping the virtual machines on which applications run to underlying physical machines and their processing cores. Low-cost processor and I/O virtualization methods that scale efficiently to different numbers of processing cores and I/O devices are key enablers of such consolidation. This dissertation develops and evaluates new methods for scaling virtualization functionality to multi-core and future many-core systems. Specifically, it re-architects virtualization functionality to improve scalability and better exploit multi-core system resources. Results from this work include a self-virtualized I/O abstraction, which virtualizes I/O so as to flexibly use different platforms' processing and I/O resources. This flexibility affords improved performance and resource usage and, most importantly, better scalability than that offered by current I/O virtualization solutions. Further, by describing system virtualization as a service provided to virtual machines and the underlying computing platform, this service can be enhanced to provide new and innovative functionality. For example, a virtual device may provide obfuscated data to guest operating systems to maintain data privacy; it could mask differences in device APIs or properties to deal with heterogeneous underlying resources; or it could control access to data based on the "trust" properties of the guest VM. This thesis demonstrates that extended virtualization services are superior to existing operating system or user-level implementations of such functionality, for multiple reasons. First, this solution technique makes more efficient use of the key performance-limiting resources in multi-core systems, namely memory and I/O bandwidth. Second, it better exploits the parallelism inherent in multi-core architectures and exhibits good scalability properties, in part because at the hypervisor level there is greater control over precisely which resources are used, and how, to realize extended virtualization services. Improved control over resource usage makes it possible to provide value-added functionality for both guest VMs and the platform. Specific instances of virtualization services described in this thesis are a network virtualization service that exploits heterogeneous processing cores, a storage virtualization service that provides location-transparent access to block devices by extending the functionality provided by the network virtualization service, a multimedia virtualization service that allows efficient media device sharing based on semantic information, and an object-based storage service with enhanced access control. Ph.D. Committee Chair: Karsten Schwan; Committee Members: Mustaq Ahamad, Richard Fujimoto, Ada Gavrilovska, Henry Owen, Jim Xenidis.
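As a toy illustration of the extended virtualization services mentioned above, the sketch below shows a virtual block device whose read path enforces a policy based on the requesting guest VM's "trust" property, returning obfuscated data to untrusted guests. The backing store, guest labels and obfuscation are assumptions made up for this example, not the dissertation's implementation.

```python
import hashlib

BACKING_STORE = {0: b"customer-record: alice, card=4111-1111"}   # block -> data
GUESTS = {"vm-finance": "trusted", "vm-sandbox": "untrusted"}    # guest VM -> trust label

def obfuscate(data: bytes) -> bytes:
    # Replace the sensitive payload with a deterministic digest (toy obfuscation).
    return hashlib.sha256(data).hexdigest().encode()

def virtual_read(block: int, guest_id: str) -> bytes:
    """Read issued by a guest VM, mediated by the virtualization service."""
    data = BACKING_STORE[block]
    if GUESTS.get(guest_id) != "trusted":        # access-control / privacy policy
        return obfuscate(data)
    return data

print(virtual_read(0, "vm-finance"))   # trusted guest sees the real data
print(virtual_read(0, "vm-sandbox"))   # untrusted guest sees obfuscated data
```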

    Towards a cloud enabler: from an optical network resource provisioning system to a generalized architecture for dynamic infrastructure services provisioning

    This work was developed during a period when most optical management and provisioning systems were manual and proprietary. It contributed to the evolution of the state of the art of optical networks with new architectures and advanced virtual infrastructure services. The evolution of optical networks, and of the Internet globally, has been very promising during the last decade. Mobile technology, grid and cloud computing, HDTV, augmented reality and big data, among many others, have driven the evolution of optical networks towards current service technologies, mostly based on SDN (Software Defined Networking) architectures and NFV (Network Functions Virtualisation). Moreover, the convergence of IP/optical networks and IT services, and the evolution of the Internet and optical infrastructures, have generated novel service orchestrators and open-source frameworks. In fact, technology has evolved so fast that no one could have foreseen how important the Internet would become in our daily lives. In other words, technology was forced to evolve so that network architectures became much more transparent, dynamic and flexible to end users (applications, user interfaces or simple APIs). This thesis presents the work done on defining new architectures for Service Oriented Networks and its contribution to the state of the art. The research work is divided into three topics. It describes the evolution from a Network Resource Provisioning System to an advanced Service Plane, and ends with a new architecture that virtualizes the optical infrastructure in order to provide coordinated, on-demand and dynamic services between the application and the network infrastructure layer, becoming an enabler for the new generation of cloud network infrastructures. The work done on defining a Network Resource Provisioning System established the first bases for future work on network infrastructure virtualization. The UCLP (User Light Path Provisioning) technology was the first attempt at Customer Empowered Networks and Articulated Private Networks: it empowered users and brought virtualization and partitioning functionality into the optical data plane, with new interfaces for dynamic service provisioning. The work done on the development of a new Service Plane allowed the provisioning of on-demand connectivity services from the application, in a multi-domain and multi-technology scenario, based on a virtual network infrastructure composed of resources from different infrastructure providers. This Service Plane, named Harmony, facilitated the deployment of applications consuming large amounts of data under deterministic conditions, allowing networks to behave as a Grid-class resource. It became the first on-demand provisioning system that, at lower levels, allowed the creation of a single virtual domain composed of resources from different providers. The last research topic presents an architecture that consolidated the work done in virtualisation while enhancing the capabilities offered to upper layers, fully integrating the optical network infrastructure into the cloud environment, and thus providing an architecture that enabled cloud services by integrating requests for optical network and IT infrastructure services at the same level. It set a new trend in the research community and evolved towards the technology we use today based on SDN and NFV. Summing up, the work presented focuses on the provisioning of virtual infrastructures from the architectural point of view of optical networks and IT infrastructures, together with the design and definition of novel service layers. That is, architectures that enable the creation of virtual infrastructures composed of optical network and IT resources, isolated and provisioned on demand and in advance, with infrastructure re-planning functionality, and a new set of interfaces that open up those services to applications and third parties.
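The toy sketch below conveys the kind of interface such a Service Plane can expose to applications: requesting a connectivity service between two endpoints, on demand or in advance, over a virtual infrastructure. The class, fields and scheduling logic are hypothetical and stand in for the path computation and multi-domain negotiation a real service plane would perform.

```python
import datetime as dt
import uuid

class ServicePlane:
    """Hypothetical application-facing API for on-demand/advance connectivity."""
    def __init__(self):
        self.reservations = {}

    def request_connectivity(self, src: str, dst: str, gbps: float,
                             start: dt.datetime, end: dt.datetime) -> str:
        res_id = str(uuid.uuid4())
        # A real service plane would run path computation across domains here.
        self.reservations[res_id] = {"src": src, "dst": dst, "gbps": gbps,
                                     "window": (start, end)}
        return res_id

sp = ServicePlane()
now = dt.datetime.now()
rid = sp.request_connectivity("dc-barcelona", "dc-amsterdam", gbps=10,
                              start=now, end=now + dt.timedelta(hours=2))
print("reserved", rid, sp.reservations[rid])
```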

    Dynamic Reconfiguration with Virtual Services

    We present a new architecture (virtual services) and an accompanying implementation for dynamically adapting and reconfiguring the behavior of network services. Virtual services are a compositional middleware system that transparently interposes itself between a service and a client, overlaying new functionality through configurations of modules organized into processing chains. Virtual services allow programmers and system administrators to dynamically extend, modify, and reconfigure the behavior of existing services for which source code, object code, and administrative control are not available. Virtual service module processing chains are instantiated on a per-connection or per-invocation basis, thereby enabling the reconfiguration of individual connections to a service without affecting other connections to the same service. To validate our architecture, we have implemented a virtual services software development toolkit and middleware server. Our experiments demonstrate that virtual services can modularize concerns that cut across network services. We show that we can reconfigure and enhance the security properties of services implemented either as TCP client-server systems, such as an HTTP server, or as remotely invocable objects, such as a Web service. We demonstrate that virtual services can reconfigure the following security properties and abilities: authentication, access control, secrecy/encryption, connection monitoring, security breach detection, adaptive response to security breaches, and concurrent, dynamically mutable implementation of multiple security policies for different clients.
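A minimal sketch of the per-connection interposition idea follows: a chain of modules is instantiated for each connection and placed in front of the unmodified service, so one client's connection can be hardened (for example with monitoring and access control) without affecting others. The module names and the toy service are assumptions made up for this example, not the paper's toolkit.

```python
from typing import Callable, List

Module = Callable[[bytes], bytes]

def audit(payload: bytes) -> bytes:
    print("audit:", payload[:20])             # connection monitoring module
    return payload

def deny_admin(payload: bytes) -> bytes:
    if b"admin" in payload:                   # toy access-control policy
        raise PermissionError("admin access blocked")
    return payload

def service(payload: bytes) -> bytes:
    return b"OK " + payload                   # the unmodified back-end service

def handle_connection(payload: bytes, chain: List[Module]) -> bytes:
    for module in chain:                      # chain built per connection
        payload = module(payload)
    return service(payload)

print(handle_connection(b"GET /index", [audit]))              # plain client
print(handle_connection(b"GET /stats", [audit, deny_admin]))  # hardened client
```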

    Architectural model for Collaboration in the Internet of Things: a Fog Computing-based approach

    Through sensors, actuators and other Internet-connected devices, applications and services are becoming able to perceive and react to the real world. Seamlessly integrating people and devices is no longer a futuristic idea; converging the physical world and the human-made realm into one network is a present and promising approach called the Internet of Things (IoT). A closer look at the phenomenon of IoT reveals many problems. Current trends focus on Cloud-centric approaches to deal with the heterogeneity and scale of this network. The blessing of Cloud computing becomes, however, a burden for latency-sensitive applications, which require processing and storage mechanisms in their proximity to meet low-latency, location-awareness and context-awareness requirements, in addition to mobility support and high geographical distribution. Fog computing is a new concept that focuses on extending the Cloud paradigm to the edge of the Internet of Things by providing communication, computing, and access management support. This research project foresees, and is driven by, the promising opportunities of the concept behind Fog computing. In this thesis, we leverage this new concept by delivering a Collaboration Architecture for Fog computing. This architecture constitutes a reference model for better designing and implementing Fog platforms. It provides the freedom of abstraction needed to make development and deployment at the Fog nodes easier and more efficient. Moreover, it provides a nest where IoT-connected objects can interact and collaborate. To this end, we introduce expressive mechanisms to define and abstract objects, data analytics, and services. To equip Fog nodes with dynamic services and service-based collaboration, we propose the concept of Operation: a formal way to dynamically generate new services through mechanisms such as aggregation, composition, and transformation. Finally, we deliver a comprehensive study and a collaboration-oriented access control model for the proposed architecture.
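The sketch below gives a hedged, made-up illustration of the Operation concept: new Fog-node services generated dynamically from existing ones through aggregation, transformation and composition. The concrete services (two temperature sensors, a unit conversion, a threshold check) are assumptions for this example only, not the thesis' formal model.

```python
from statistics import mean
from typing import Callable, Iterable

Service = Callable[[], float]

sensor_a: Service = lambda: 21.5          # services exposed by connected things
sensor_b: Service = lambda: 23.1

def aggregate(services: Iterable[Service], fn=mean) -> Service:
    services = list(services)
    return lambda: fn(s() for s in services)

def transform(service: Service, fn: Callable[[float], float]) -> Service:
    return lambda: fn(service())

def compose(service: Service, *steps: Callable[[float], float]) -> Service:
    def run() -> float:
        value = service()
        for step in steps:            # pipe the value through each step in order
            value = step(value)
        return value
    return run

avg_c = aggregate([sensor_a, sensor_b])                        # aggregation
avg_f = transform(avg_c, lambda c: c * 9 / 5 + 32)             # transformation
overheating = compose(avg_c, lambda c: c * 9 / 5 + 32,         # composition of steps
                      lambda f: float(f > 100))
print(avg_c(), avg_f(), overheating())
```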