
    Virtualization services: scalable methods for virtualizing multicore systems

    Multi-core technology is bringing parallel processing capabilities from servers to laptops and even handheld devices. At the same time, platform support for system virtualization is making it easier to consolidate server and client resources, when and as needed by applications. This consolidation is achieved by dynamically mapping the virtual machines on which applications run to underlying physical machines and their processing cores. Low-cost processor and I/O virtualization methods that efficiently scale to different numbers of processing cores and I/O devices are key enablers of such consolidation. This dissertation develops and evaluates new methods for scaling virtualization functionality to multi-core and future many-core systems. Specifically, it re-architects virtualization functionality to improve scalability and better exploit multi-core system resources. Results from this work include a self-virtualized I/O abstraction, which virtualizes I/O so as to flexibly use different platforms' processing and I/O resources. This flexibility affords improved performance and resource usage and, most importantly, better scalability than that offered by current I/O virtualization solutions. Further, by describing system virtualization as a service provided to virtual machines and to the underlying computing platform, this service can be enhanced to provide new and innovative functionality. For example, a virtual device may provide obfuscated data to guest operating systems to maintain data privacy; it could mask differences in device APIs or properties to deal with heterogeneous underlying resources; or it could control access to data based on the "trust" properties of the guest VM. This thesis demonstrates that extended virtualization services are superior to existing operating-system or user-level implementations of such functionality, for multiple reasons. First, this approach makes more efficient use of the key performance-limiting resources in multi-core systems, namely memory and I/O bandwidth. Second, it better exploits the parallelism inherent in multi-core architectures and exhibits good scalability properties, in part because the hypervisor level offers greater control over precisely which resources are used, and how, to realize extended virtualization services. Improved control over resource usage makes it possible to provide value-added functionality for both guest VMs and the platform. Specific instances of virtualization services described in this thesis are a network virtualization service that exploits heterogeneous processing cores, a storage virtualization service that provides location-transparent access to block devices by extending the functionality of the network virtualization service, a multimedia virtualization service that allows efficient media-device sharing based on semantic information, and an object-based storage service with enhanced access control.
    Ph.D. Committee Chair: Schwan, Karsten; Committee Members: Ahamad, Mustaq; Fujimoto, Richard; Gavrilovska, Ada; Owen, Henry; Xenidis, Jim
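
To make the idea of an extended virtualization service concrete, the following minimal Python sketch shows a hypervisor-side virtual block device that returns obfuscated data to guests it does not trust, in the spirit of the trust-based access control and data-privacy examples mentioned above. All names (TrustLevel, BlockDeviceService, the toy XOR key) are illustrative assumptions, not the thesis's actual implementation.

# Hypothetical sketch: a hypervisor-level virtualization service that mediates
# block reads for guest VMs and obfuscates data according to each guest's
# trust level. Names and the obfuscation scheme are illustrative only.
from dataclasses import dataclass
from enum import Enum


class TrustLevel(Enum):
    TRUSTED = "trusted"
    UNTRUSTED = "untrusted"


@dataclass
class GuestVM:
    vm_id: str
    trust: TrustLevel


class BlockDeviceService:
    """Virtual block device exposed to guests by the extended virtualization service."""

    def __init__(self, backing_store, xor_key=0x5A):
        self.backing_store = backing_store   # block number -> raw block bytes
        self.xor_key = xor_key               # toy obfuscation key (illustration only)

    def read_block(self, guest, block_no):
        raw = self.backing_store.get(block_no, b"\x00" * 512)
        if guest.trust is TrustLevel.TRUSTED:
            return raw
        # Untrusted guests receive obfuscated data so the raw content stays private.
        return bytes(b ^ self.xor_key for b in raw)


if __name__ == "__main__":
    dev = BlockDeviceService({0: b"confidential-record"})
    print(dev.read_block(GuestVM("vm-a", TrustLevel.TRUSTED), 0))
    print(dev.read_block(GuestVM("vm-b", TrustLevel.UNTRUSTED), 0))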

    Software Defined Application Delivery Networking

    In this thesis we present the architecture, design, and prototype implementation details of AppFabric. AppFabric is a next-generation application delivery platform for easily creating, managing and controlling massively distributed and very dynamic application deployments that may span multiple datacenters. Over the last few years, the need for more flexibility, finer control, and automatic management of large (and messy) datacenters has stimulated technologies for virtualizing the infrastructure components and placing them under software-based management and control, generically called Software-Defined Infrastructure (SDI). However, current applications are not designed to leverage the dynamism and flexibility offered by SDI; they mostly depend on a mix of techniques including manual configuration, specialized appliances (middleboxes), and (mostly) proprietary middleware solutions, together with a team of extremely conscientious and talented system engineers, to get deployed and running. AppFabric 1) automates the whole control and management stack of application deployment and delivery, 2) allows application architects to define logical workflows consisting of application servers, message-level middleboxes, packet-level middleboxes and network services (both local and wide-area) composed over application-level routing policies, and 3) provides the abstraction of an application cloud that allows the application to dynamically (and automatically) expand and shrink its distributed footprint across multiple geographically distributed datacenters operated by different cloud providers. The architecture consists of a hierarchical control plane system called Lighthouse and a fully distributed data plane design (with no special hardware components such as service orchestrators, load balancers, message brokers, etc.) called OpenADN. The current implementation (under active development) consists of ~10000 lines of Python and C code. AppFabric will allow applications to fully leverage the opportunities provided by modern virtualized Software-Defined Infrastructures. It will serve as the platform for deploying massively distributed and extremely dynamic next-generation application use-cases, including:
    Internet-of-Things/Cyber-Physical Systems: through support for managing the distributed gather-aggregate topologies common to most Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) use-cases. By their very nature, IoT and CPS use cases are massively distributed and have different levels of computation and storage requirements at different locations, as well as variable latency requirements across their distributed sites. Some services, such as device controllers, in an IoT/CPS application workflow may need to gather, process and forward data under near-real-time constraints and hence need to be as close to the device as possible. Other services may need more computation to process aggregated data that drives long-term business-intelligence functions. AppFabric has been designed to support such very dynamic, highly diversified and massively distributed application use-cases.
    Network Function Virtualization: through support for heterogeneous workflows, application-aware networking, and network-aware application deployments, AppFabric will enable new partnerships between Application Service Providers (ASPs) and Network Service Providers (NSPs). An application workflow in AppFabric may comprise application services, packet- and message-level middleboxes, and network transport services chained together over an application-level routing substrate. The application-level routing substrate allows policy-based service chaining, where the application may specify policies for routing its traffic over different services based on application-level content or context.
    Virtual worlds/multiplayer games: through support for creating, managing and controlling the dynamic and distributed application clouds needed by these applications. AppFabric allows the application to easily specify policies to dynamically grow and shrink its footprint over different geographical sites, on demand.
    Mobile apps: through support for the extremely diversified and very dynamic application contexts typical of such applications. AppFabric also automatically manages massively distributed service deployments and controls application traffic based on application-level policies, allowing mobile applications to provide the best Quality-of-Experience to their users.
    This thesis is the first to tackle and provide a complete solution for such a complex and relevant architectural problem, one that is expected to touch each of our lives by enabling exciting new application use-cases that are not possible today. AppFabric is also a non-proprietary platform that is expected to spawn many innovations, both in the design of the platform itself and in the features it provides to applications. AppFabric still needs many iterations, both in terms of design and implementation maturity. This thesis is not the end of the journey for AppFabric, but rather just the beginning.
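
As an illustration of the policy-based service chaining described above, the following Python sketch routes messages over different chains of services and middleboxes based on application-level content and context. The RoutingPolicy class, its fields, and the example chains are assumptions made for illustration; they are not the OpenADN or Lighthouse APIs.

# Illustrative sketch of an application-level routing policy for
# AppFabric-style service chaining: map message context to an ordered
# chain of services and middleboxes. All names are assumptions.
from dataclasses import dataclass, field


@dataclass
class Message:
    content_type: str
    region: str
    payload: bytes = b""


@dataclass
class RoutingPolicy:
    # (predicate, service chain) pairs evaluated in order; first match wins.
    rules: list = field(default_factory=list)
    default_chain: list = field(default_factory=lambda: ["app-server"])

    def add_rule(self, predicate, chain):
        self.rules.append((predicate, chain))

    def chain_for(self, msg):
        for predicate, chain in self.rules:
            if predicate(msg):
                return chain
        return self.default_chain


if __name__ == "__main__":
    policy = RoutingPolicy()
    # Video traffic passes through a transcoder middlebox before the app server.
    policy.add_rule(lambda m: m.content_type == "video",
                    ["firewall", "transcoder", "app-server"])
    # EU traffic is pinned to an EU datacenter replica.
    policy.add_rule(lambda m: m.region == "eu", ["firewall", "app-server-eu"])

    print(policy.chain_for(Message("video", "us")))   # ['firewall', 'transcoder', 'app-server']
    print(policy.chain_for(Message("json", "eu")))    # ['firewall', 'app-server-eu']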

    QoS-aware architectures, technologies, and middleware for the cloud continuum

    The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware supports are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, come with a wide range of heterogeneous QoS requirements and call for QoS management systems to guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual-resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-Low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middlewares: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms and the virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
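
The following minimal Python sketch illustrates the kind of QoS-aware placement decision such an architecture has to make across the cloud-edge continuum: choosing the tier that still satisfies an application's latency bound. The tier names, nominal latencies, and the selection rule are assumptions for illustration, not the dissertation's middleware.

# Minimal sketch: place a workload on the least-loaded continuum tier
# that still honours its latency bound. All values are illustrative.
from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    rtt_ms: float        # typical round-trip latency to the device
    capacity_units: int  # free virtual-resource units


@dataclass
class QoSRequest:
    app: str
    max_latency_ms: float
    demand_units: int


def place(request, tiers):
    """Pick the least-loaded tier that can still honour the latency bound."""
    feasible = [t for t in tiers
                if t.rtt_ms <= request.max_latency_ms
                and t.capacity_units >= request.demand_units]
    if not feasible:
        raise RuntimeError(f"no tier satisfies QoS for {request.app}")
    chosen = max(feasible, key=lambda t: t.capacity_units)
    chosen.capacity_units -= request.demand_units
    return chosen.name


if __name__ == "__main__":
    continuum = [Tier("edge", rtt_ms=2, capacity_units=4),
                 Tier("regional", rtt_ms=15, capacity_units=32),
                 Tier("cloud", rtt_ms=60, capacity_units=256)]
    print(place(QoSRequest("plc-control", max_latency_ms=5, demand_units=1), continuum))   # edge
    print(place(QoSRequest("analytics", max_latency_ms=100, demand_units=16), continuum))  # cloud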

    A network QoS management architecture for virtualization environments

    Network quality of service (QoS) and its management are concerned with providing, guaranteeing and reporting properties of data flows within computer networks. For the past two decades, virtualization has been becoming a very popular tool in data centres, yet without network QoS management capabilities. With virtualization, the management focus shifts from physical components and topologies towards virtual infrastructures (VIs) and their purposes. VIs are designed and managed as independent, isolated entities. Without network QoS management capabilities, VIs cannot offer the same services and service levels as physical infrastructures can, leaving VIs at a disadvantage with respect to applicability and efficiency. This thesis closes this gap and develops a management architecture enabling network QoS management in virtualization environments. First, requirements are derived from real-world scenarios, yielding a validation reference for the proposed architecture. After that, a life cycle for VIs and a taxonomy for network links and virtual components are introduced, to position the network QoS management task within the general management of virtualization environments and to enable the creation of technology-specific adaptors for integrating the technologies and sub-services used in virtualization environments. The core aspect shaping the proposed management architecture is a management loop and its corresponding strategy for identifying and ordering sub-tasks. Finally, a prototypical implementation showcases that the presented management approach is suited for network QoS management and enforcement in virtualization environments. The architecture fulfils its purpose and satisfies all identified requirements. Ultimately, network QoS management is one amongst many aspects of managing virtualization environments, and the architecture presented herein exposes interfaces to other management areas, whose integration is left as future work.
    Management tasks for network quality of service comprise providing, guaranteeing and reporting flow properties in computer networks. Over the last two decades, virtualization has developed into a key technology for data centres, so far without capabilities for managing network QoS. The use of virtualization shifts the focus of data-centre operation away from physical components and networks, towards virtual infrastructures (VIs) and their purposes. VIs are developed and managed as independent, mutually isolated entities. Without network QoS, VIs cannot be deployed as versatilely and efficiently as physical installations. This thesis closes this gap by developing a management architecture for network QoS in virtualization environments. First, requirements are derived from real-world scenarios, against which architectures can be evaluated. To position the specific task of network QoS management within the general management problem, a life-cycle model for VIs is then introduced. The development of a taxonomy for links and components enables technology-specific adaptors for integrating the technologies used in virtualization environments. The core idea behind the developed architecture is a feedback loop and its accompanying method for structuring and ordering sub-problems. Finally, a prototypical implementation shows that this approach is suitable for managing and enforcing network QoS in virtualization environments. The architecture fulfils its purpose and meets the stated requirements. Ultimately, network QoS is one of many management areas in the operation of virtualization environments; the architecture exposes interfaces to the other areas, whose integration is left to future work.
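
A minimal Python sketch of the kind of management loop at the core of the architecture is shown below: it monitors virtual links, detects violations of bandwidth guarantees, and triggers an enforcement action. The component names and the enforcement hook are illustrative assumptions rather than the prototype's interfaces.

# Sketch of a monitor/decide/enforce loop for network QoS guarantees on
# virtual links. The probe and the enforcement step are placeholders for
# calls into the virtual switch or hypervisor of a real deployment.
import random
import time


class VirtualLink:
    def __init__(self, name, guaranteed_mbps):
        self.name = name
        self.guaranteed_mbps = guaranteed_mbps
        self.shaped_limit_mbps = None  # set by the enforcement step

    def measure_throughput_mbps(self):
        # Placeholder for a real measurement against the virtual switch.
        return random.uniform(0, 2 * self.guaranteed_mbps)


def management_loop(links, rounds=3, interval_s=0.1):
    for _ in range(rounds):
        for link in links:                       # monitor
            observed = link.measure_throughput_mbps()
            if observed < link.guaranteed_mbps:  # analyse: guarantee violated
                # enforce: e.g., raise the shaping limit or reprioritize the flow
                link.shaped_limit_mbps = link.guaranteed_mbps * 1.2
                print(f"{link.name}: {observed:.1f} Mbit/s below guarantee, re-shaping")
            else:
                print(f"{link.name}: {observed:.1f} Mbit/s OK")
        time.sleep(interval_s)


if __name__ == "__main__":
    management_loop([VirtualLink("vi1-db-web", 100.0), VirtualLink("vi2-backup", 50.0)])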

    Deliverable DJRA1.2. Solutions and protocols proposal for the network control, management and monitoring in a virtualized network context

    This deliverable presents several research proposals for the FEDERICA network, on different subjects such as monitoring, routing, signalling, resource discovery, and isolation. For each topic, one or more possible solutions are elaborated, explaining the background, the functioning and the implications of the proposed solutions. This deliverable goes further into the research aspects within FEDERICA. First of all, the architecture of the control plane for the FEDERICA infrastructure is defined. Several possibilities could be implemented, using the basic FEDERICA infrastructure as a starting point. The focus of this document is on the intra-domain aspects of the control plane and their properties, although some inter-domain aspects are also addressed. The main objective of this deliverable is the creation and implementation of the prototype/tool for the FEDERICA slice-oriented control system using the appropriate framework. This deliverable goes deeply into the definition of the containers between entities and their syntax, preparing this tool for the future implementation of any kind of control-plane algorithm, both to apply UPB policies and to configure it by hand. We opt for an open solution despite the real-time limitations we may face (for instance, opening web-service connections or applying fast recovery mechanisms). The application being developed is the central element in the control plane, and additional features must be added to it. From the functionality point of view, this control plane is composed of several procedures that provide a reliable application and that include mechanisms or algorithms able to discover and assign resources to the user. To achieve this, several topics must be researched in order to propose new protocols for the virtual infrastructure. The topics and necessary features covered in this document include resource discovery, resource allocation, signalling, routing, isolation and monitoring. All of these topics must be researched in order to find a good solution for the FEDERICA network. Some of these algorithms have already begun to be analysed and will be expanded upon in the next deliverable. Current standardization efforts and existing solutions have been investigated in order to find a good solution for FEDERICA. Resource discovery is an important issue within the FEDERICA network, as manual resource discovery is not an option due to scalability requirements. Furthermore, no standardization exists, so knowledge must be obtained from related work. Ideally, the proposed solutions for these topics should not only be adequate for this specific infrastructure, but should also be applicable to other virtualized networks. Postprint (published version)
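
As a purely illustrative sketch of the slice-oriented control-plane operations discussed here (resource discovery followed by resource allocation), the following Python fragment performs a first-fit assignment of a slice request onto discovered substrate nodes; the data structures and names are assumptions, not FEDERICA's actual interfaces.

# Illustrative sketch: discover substrate resources, then allocate a slice
# request onto them with a simple first-fit strategy. All names are assumed.
from dataclasses import dataclass, field


@dataclass
class SubstrateNode:
    name: str
    free_cpus: int
    free_mem_gb: int


@dataclass
class SliceRequest:
    slice_id: str
    cpus: int
    mem_gb: int


@dataclass
class ControlPlane:
    inventory: list = field(default_factory=list)  # filled by resource discovery

    def discover(self, nodes):
        """Resource discovery: in practice this would query each PoP; here it is static."""
        self.inventory = list(nodes)

    def allocate(self, req):
        """Resource allocation: first-fit assignment of the request onto a substrate node."""
        for node in self.inventory:
            if node.free_cpus >= req.cpus and node.free_mem_gb >= req.mem_gb:
                node.free_cpus -= req.cpus
                node.free_mem_gb -= req.mem_gb
                return node.name
        return None


if __name__ == "__main__":
    cp = ControlPlane()
    cp.discover([SubstrateNode("pop-a", 8, 32), SubstrateNode("pop-b", 16, 64)])
    print(cp.allocate(SliceRequest("slice-42", cpus=12, mem_gb=48)))  # pop-b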

    Development of a secure monitoring framework for optical disaggregated data centres

    Data center (DC) infrastructures are a key piece of today's telecom and cloud service delivery, enabling the access to and storage of enormous quantities of information as well as the execution of complex applications and services. This aspect is being accentuated with the advent of 5G and beyond architectures, since a significant portion of network and service functions are being deployed as specialized virtual elements inside dedicated DC infrastructures. As such, the development of new architectures to better exploit DC resources becomes of paramount importance. The mismatch between the variability of resources required by running applications and the fixed amount of resources in server units severely limits resource utilization in today's DCs. The Disaggregated DC (DDC) paradigm was recently introduced to address these limitations. The main idea behind DDCs is to divide the various computational resources into independent hardware modules/blades, which are mounted in racks, bringing greater modularity and allowing operators to optimize their deployments for improved efficiency and performance, thus offering high resource-allocation flexibility. Moreover, to efficiently exploit the hardware blades and establish the connections across them according to upper-layer requirements, a flexible control and management framework is required. In this regard, following current industrial trends, the Software-Defined Networking (SDN) paradigm is one of the leading technologies for the control of DC infrastructures, allowing for the establishment of high-speed, low-latency optical connections between hardware components in DDCs in response to the demands of higher-level services and applications. With these concepts in mind, the primary objective of this thesis is to design and implement the control of a DDC infrastructure layer that is founded on SDN principles and makes use of optical technologies for the intra-DC network fabric, highlighting the importance of quality control and monitoring. Thanks to several SDN agents, it becomes possible to gather statistics and metrics from the multiple infrastructure elements (computational blades and network equipment), allowing DC operators to monitor the infrastructure and make informed decisions on how to utilize its resources to the greatest extent feasible. Indeed, quality-assurance operations are of critical importance in modern DC infrastructures; it therefore becomes essential to guarantee a secure communication channel for gathering infrastructure metrics/statistics and enforcing (re-)configurations, closing the full loop. The security layer addresses this by encrypting the communication channel and providing authentication for both the server and the client.
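
A minimal Python sketch of such a secure monitoring channel is given below: an SDN agent pushes blade and network metrics to a collector over TLS with mutual authentication, so the channel is encrypted and both endpoints are authenticated. The host, port and certificate paths are placeholders, and the code is an assumption about how such an agent could look, not the thesis implementation.

# Sketch: push monitoring metrics over a mutually authenticated TLS channel.
# Host, port and certificate/key paths are placeholders for a real deployment.
import json
import socket
import ssl

COLLECTOR_HOST = "monitoring.dc.example"   # placeholder
COLLECTOR_PORT = 8443                      # placeholder


def push_metrics(metrics):
    # Verify the collector's certificate against our CA and present the
    # agent's own certificate, so both ends are authenticated.
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.pem")
    context.load_cert_chain(certfile="agent.pem", keyfile="agent.key")

    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=COLLECTOR_HOST) as tls_sock:
            tls_sock.sendall(json.dumps(metrics).encode("utf-8"))


if __name__ == "__main__":
    push_metrics({"blade": "cpu-07", "utilization": 0.63, "optical_port_errors": 0})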

    Design and implementation of an object storage system

    Master's; MASTER OF ENGINEERING

    Implementation of a Private Cloud

    The exponential growth of hardware requirements, coupled with the development costs of online services, has created the need for dynamic and resilient systems with networks able to handle high-density traffic. One of the emerging paradigms to achieve this is Cloud Computing: it proposes an elastic and modular computing architecture that allows dynamic allocation of hardware and network resources in order to meet the needs of applications. The creation of a Private Cloud based on the OpenStack platform implements this idea. This solution decentralizes the institution's resources, making it possible to aggregate resources that are physically spread across several areas of the globe, and allows an optimization of computing and network resources. With this in mind, this thesis implements a private cloud system that is capable of elastically leasing and releasing computing resources, allows the creation of public and private networks that connect compute instances, supports the launch of virtual machines that host servers and services, and isolates projects within the same system. System expansion should start with the addition of extra nodes and the modernization of the existing ones; this expansion will also give rise to network problems, which can be overcome by integrating Software-Defined Networking controllers.
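
As a hedged example of the elastic provisioning such a private cloud enables, the following Python sketch uses the openstacksdk client library to create a project network and subnet and boot a virtual machine attached to it. The cloud profile name, image, flavor, and CIDR are placeholders that would come from the local deployment's clouds.yaml and image/flavor catalog.

# Sketch using openstacksdk: create a network, a subnet and a server.
# "private-cloud", "ubuntu-22.04" and "m1.small" are placeholder names.
import openstack


def launch_instance():
    conn = openstack.connect(cloud="private-cloud")   # placeholder cloud profile

    # Project-private network and subnet connecting compute instances.
    network = conn.network.create_network(name="demo-net")
    conn.network.create_subnet(network_id=network.id, ip_version=4,
                               cidr="192.168.50.0/24", name="demo-subnet")

    # Boot a virtual machine attached to that network.
    image = conn.compute.find_image("ubuntu-22.04")    # placeholder image name
    flavor = conn.compute.find_flavor("m1.small")      # placeholder flavor name
    server = conn.compute.create_server(name="demo-vm", image_id=image.id,
                                        flavor_id=flavor.id,
                                        networks=[{"uuid": network.id}])
    conn.compute.wait_for_server(server)
    print("launched", server.name)


if __name__ == "__main__":
    launch_instance()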