
    QoS-aware architectures, technologies, and middleware for the cloud continuum

    Get PDF
    The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware support are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially more suitable to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) scenarios, e.g., in the industrial manufacturing domain, are expected to host a wide range of applications with heterogeneous QoS requirements and call for QoS management systems that guarantee and control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for addressing the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middlewares: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms and a virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
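
    As an illustration of the kind of coordination such a QoS-aware Serverless middleware performs, the following Python sketch wraps a function invocation with a QoS class and checks its latency budget. It is a minimal, hypothetical example under assumed names (QoSClass, invoke_with_qos, the ULL deadline) and is not the dissertation's actual middleware or API.

        # Hypothetical sketch: a QoS class attached to a FaaS-style invocation,
        # with a latency budget check after the call. Names and policy are
        # illustrative assumptions, not the dissertation's implementation.
        import time
        from dataclasses import dataclass
        from typing import Any, Callable

        @dataclass
        class QoSClass:
            name: str
            deadline_ms: float      # end-to-end latency target for this class
            prefer_edge: bool       # hint to place latency-critical calls on edge nodes

        def invoke_with_qos(fn: Callable[..., Any], qos: QoSClass, *args, **kwargs) -> Any:
            """Invoke a function while tracking its end-to-end latency budget."""
            target = "edge" if qos.prefer_edge else "cloud"   # placement hint only
            start = time.perf_counter()
            result = fn(*args, **kwargs)                      # stand-in for the FaaS call
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > qos.deadline_ms:
                # A real middleware would react here: re-route, rescale, or log an SLO violation.
                print(f"QoS budget exceeded on {target}: {elapsed_ms:.1f} ms > {qos.deadline_ms} ms")
            return result

        # Usage: wrap an ordinary handler with an ultra-low-latency QoS class.
        ull = QoSClass(name="ULL", deadline_ms=10.0, prefer_edge=True)
        invoke_with_qos(lambda x: x * 2, ull, 21)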

    Methods and Applications of Synthetic Data Generation

    Get PDF
    The advent of data mining and machine learning has highlighted the value of large and varied data sources, while increasing the demand for synthetic data that captures the structural and statistical characteristics of the original data without revealing the personal or proprietary information contained in the original dataset. In this dissertation, we use examples from original research to show that, using appropriate models and input parameters, synthetic data that mimics the characteristics of real data can be generated at sufficient rate and quality to address the volume, structural complexity, and statistical variation requirements of research and development of digital information processing systems. First, we present a progression of research studies using a variety of tools to generate synthetic network traffic patterns, enabling us to observe relationships between network latency and communication pattern benchmarks at all levels of the network stack. We then present a scalable extraction and synthesis framework for generating large-scale IoT data with complex structural characteristics, and demonstrate the use of the generated data in benchmarking IoT middleware. Finally, we detail research on synthetic image generation for deep learning models using 3D modeling. We find that synthetic images can be an effective technique for augmenting limited sets of real training data, and that, in use cases that benefit from incremental training or model specialization, pretraining on synthetic images provides a usable base model for transfer learning.
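
    The following Python sketch gives a flavor of the general idea: fit a coarse statistical profile (mean, spread, daily periodicity) of a real IoT sensor trace and draw a synthetic trace that mimics it. It is an illustrative example only, with assumed function names and parameters, and is not the dissertation's synthesis framework.

        # Illustrative sketch only: generate a synthetic IoT sensor series that
        # mimics the mean, variance, and daily periodicity estimated from a real
        # trace. All names and parameters are assumptions for this example.
        import numpy as np

        def fit_simple_profile(real: np.ndarray, period: int) -> dict:
            """Estimate a coarse statistical profile of a real sensor trace."""
            return {
                "mean": float(real.mean()),
                "std": float(real.std()),
                "seasonal": np.array([real[i::period].mean() - real.mean()
                                      for i in range(period)]),
            }

        def synthesize(profile: dict, n: int, period: int, seed: int = 0) -> np.ndarray:
            """Draw a synthetic trace with the same mean, spread, and daily shape."""
            rng = np.random.default_rng(seed)
            seasonal = np.tile(profile["seasonal"], n // period + 1)[:n]
            noise = rng.normal(0.0, profile["std"] * 0.5, size=n)   # residual variation
            return profile["mean"] + seasonal + noise

        # Usage: derive a profile from a (here simulated) real trace, then synthesize.
        t = np.arange(7 * 24)                                       # one week of hourly samples
        real_trace = 21 + 3 * np.sin(2 * np.pi * t / 24) \
                     + np.random.default_rng(1).normal(0, 0.4, t.size)
        profile = fit_simple_profile(real_trace, period=24)
        synthetic_trace = synthesize(profile, n=real_trace.size, period=24)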

    Cloud-efficient modelling and simulation of magnetic nano materials

    Get PDF
    Scientific simulations are rarely attempted in a cloud due to the substantial performance costs of virtualization. Considerable communication overheads, intolerable latencies, and inefficient hardware emulation are the main reasons why this emerging technology has not been fully exploited. On the other hand, the progress of computing infrastructure nowadays depends strongly on the development of prospective storage media, where efficient micromagnetic simulations play a vital role in future memory design. This thesis addresses both these topics by merging micromagnetic simulations with the latest OpenStack cloud implementation while providing a time- and cost-effective alternative to expensive computing centers. However, many challenges have to be addressed before a high-performance cloud platform emerges as a solution for problems in micromagnetic research communities. First, the best solver candidate has to be selected and further improved, particularly in the parallelization and process communication domain. Second, a 3-level cloud communication hierarchy needs to be recognized and each segment adequately addressed. The required steps include breaking VM isolation to activate the host's shared memory, tuning and optimizing the cloud network stack, and integrating efficient communication hardware. The project work concludes with practical measurements that confirm the successful integration of the simulation into an open-source cloud environment. As a result, the renewed Magpar solver runs for the first time in the OpenStack cloud, using ivshmem for shared-memory communication. Extensive measurements also proved the effectiveness of our solutions, yielding results from sixty percent to over ten times better than those achieved in the standard cloud.
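
    The appeal of ivshmem is that co-located VMs exchange solver data through a host shared-memory region instead of the virtual network. The host-side Python sketch below illustrates that shared-memory channel concept with the standard library; it is not the Magpar/ivshmem implementation, and the buffer name and sizes are assumptions.

        # Conceptual sketch only: exchange a solver buffer through a named
        # shared-memory region, analogous to what ivshmem exposes to VMs.
        import numpy as np
        from multiprocessing import shared_memory

        # Writer: place a halo/boundary buffer into a named shared-memory region.
        halo = np.arange(1024, dtype=np.float64)
        shm = shared_memory.SharedMemory(create=True, size=halo.nbytes, name="solver_halo")
        np.ndarray(halo.shape, dtype=halo.dtype, buffer=shm.buf)[:] = halo

        # Reader (would normally run in another process attached to the same region).
        shm_ro = shared_memory.SharedMemory(name="solver_halo")
        received = np.ndarray(halo.shape, dtype=np.float64, buffer=shm_ro.buf).copy()
        assert np.array_equal(received, halo)   # data crossed the shared-memory channel

        # Cleanup.
        shm_ro.close()
        shm.close()
        shm.unlink()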

    Conserve and Protect Resources in Software-Defined Networking via the Traffic Engineering Approach

    Get PDF
    Software Defined Networking (SDN) is revolutionizing the architecture and operation of computer networks and promises a more agile and cost-efficient network management. SDN centralizes the network control logic and separates the control plane from the data plane, thus enabling flexible management of networks. A network based on SDN consists of a data plane and a control plane. To assist the management of devices and data flows, a network also has an independent monitoring plane. These coexisting network planes consume various types of resources, such as bandwidth utilized to transmit monitoring data, energy spent to power data forwarding devices, and computational resources to control a network. Unwise management, or even abusive utilization, of these resources leads to the degradation of network performance and increases the Operating Expenditure (Opex) of the network owner. Conserving and protecting limited network resources is thus among the key requirements for efficient networking. However, the heterogeneity of network hardware and network traffic workloads expands the configuration space of SDN, making it challenging to operate a network efficiently. Furthermore, existing approaches usually lack the capability to automatically adapt network configurations to handle network dynamics and diverse optimization requirements. Additionally, a centralized SDN controller has to run in an environment protected against certain attacks. This thesis builds upon the centralized management capability of SDN and uses cross-layer network optimizations to perform joint traffic engineering, e.g., routing and hardware and software configuration. The overall goal is to overcome the management complexities of conserving and protecting resources in multiple functional planes in SDN when facing network heterogeneity and system dynamics. This thesis presents four contributions: (1) resource-efficient network monitoring, (2) resource-efficient data forwarding, (3) self-adaptive algorithms to improve network resource efficiency, and (4) mitigation of abusive usage of resources for network control. The first contribution of this thesis is a resource-efficient network monitoring solution. We consider one specific type of virtual network management function: flow packet inspection. This type of network monitoring application requires duplicating packets of target flows and sending them to packet monitors for in-depth analysis. To avoid competition for resources between the original data and the duplicated data, network operators can transmit the data flows through physically (e.g., different communication media) or virtually (e.g., distinct network slices) separated channels with different resource consumption properties. We propose the REMO solution, namely Resource Efficient distributed Monitoring, to reduce the overall network resource consumption incurred by both types of data by jointly considering the locations of the packet monitors, the selection of devices forking the data packets, and flow path scheduling strategies. In the second contribution of this thesis, we investigate the resource efficiency problem in hybrid, server-centric data center networks equipped with both traditional wired connections (e.g., InfiniBand or Ethernet) and advanced high-data-rate wireless links (e.g., directional 60GHz wireless technology). The configuration space of hybrid SDN equipped with both wired and wireless communication technologies is massively large due to the complexity introduced by device heterogeneity. To tackle this problem, we present the ECAS framework to reduce power consumption while maintaining network performance. Approaches based on optimization models and heuristic algorithms are the traditional way to reduce operational and facility resource consumption in SDN, but they are either difficult to solve directly or specific to a particular problem space. As the third contribution of this thesis, we investigate the use of Deep Reinforcement Learning (DRL) to improve the adaptivity of the management modules for network resource and data flow scheduling. The goal of the DRL agent is to reduce the power consumption of SDN networks without severely degrading network performance. The fourth contribution of this thesis is a protection mechanism based on flow rate limiting to mitigate abusive usage of control plane resources. Due to the centralized architecture of SDN and its handling mechanism for new data flows, the network controller can become a point of failure under crafted cyber-attacks, especially the Control-Plane-Saturation (CPS) attack. We propose an In-Network Flow mAnagement Scheme (INFAS) to effectively reduce the generation of malicious control packets, depending on the parameters configured for the proposed mitigation algorithm. In summary, the contributions of this thesis address various unique challenges in constructing resource-efficient and secure SDN. This is achieved by designing and implementing novel and intelligent models and algorithms to configure networks and perform network traffic engineering within a protected, centralized network controller.
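
    To make the flow-rate-limiting idea behind CPS mitigation concrete, the Python sketch below caps how many new-flow ("packet-in") events per source are forwarded to the controller using a per-source token bucket. It is a generic illustration, not the INFAS scheme itself; the rates and names are assumptions.

        # Generic token bucket used to throttle per-source new-flow events before
        # they reach the SDN controller. Illustrative only; not INFAS.
        import time

        class TokenBucket:
            def __init__(self, rate_per_s: float, burst: float):
                self.rate = rate_per_s          # refill rate (new-flow events per second)
                self.capacity = burst           # maximum burst size
                self.tokens = burst
                self.last = time.monotonic()

            def allow(self) -> bool:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1.0:
                    self.tokens -= 1.0
                    return True                 # forward the event to the controller
                return False                    # drop or handle in the data plane

        # Per-source buckets: a host generating an abnormal number of new flows is
        # throttled before its packet-in events can saturate the controller.
        buckets = {}
        def admit_packet_in(src_ip: str) -> bool:
            bucket = buckets.setdefault(src_ip, TokenBucket(rate_per_s=10.0, burst=20.0))
            return bucket.allow()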

    Aplicación de Big Data al análisis, monitorización y seguridad de redes de comunicaciones (Application of Big Data to the analysis, monitoring, and security of communication networks)

    Full text link
    Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Date of defense: 04-02-202

    Detecting cloud virtual network isolation security for data leakage

    Get PDF
    This thesis considers information leakage in virtually isolated cloud networks. Virtual Network (VN) isolation is a core element of cloud security, yet the research literature shows that no experimental work, to date, has been conducted to test, discover, and evaluate VN isolation data leakage. Consequently, this research focused on that gap. Deep Dives of the cloud infrastructures were performed, followed by penetration tests (using Kali Linux) to detect any leakage. The resulting data was compared to the information gathered in the Deep Dive to determine how much of the cloud network infrastructure was exposed. As a major contribution to research, this is the first empirical work to apply a Deep Dive approach and a penetration testing methodology to both CloudStack and OpenStack to demonstrate cloud network isolation vulnerabilities. The outcomes indicate that cloud vendors need to test their isolation mechanisms more thoroughly and enhance them with available solutions. However, this field needs more industrial data to confirm whether the issues found also apply to non-open-source cloud technologies. If the problems revealed are widespread, then this is a major issue for cloud security. Due to time constraints, only two cloud testbeds were built and analysed, but several directions for future work are listed: analysing more complex VNs, analysing the VN plugins in use, and testing whether system complexity causes more leakage or protects the VN. This research is one of the first empirical building blocks in the field and gives future researchers a basis for building on the presented methodology and results and for proposing more effective solutions.
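
    As a rough illustration of how such leakage can be probed passively, the Python sketch below sniffs a tenant-facing interface and flags traffic whose source address falls outside the tenant's own subnet, which would hint at broken VN isolation. It is a hedged example, not the thesis's test harness; the interface name and subnet are assumptions, and it requires scapy and root privileges.

        # Hedged illustration only: passive probe for traffic that should not be
        # visible inside an isolated tenant network segment.
        import ipaddress
        from scapy.all import sniff, IP

        TENANT_SUBNET = ipaddress.ip_network("10.10.0.0/24")   # assumed tenant range
        IFACE = "eth0"                                          # assumed tenant-side interface

        def flag_foreign(pkt):
            """Report packets that appear to originate from outside the tenant subnet."""
            if pkt.haslayer(IP):
                src = ipaddress.ip_address(pkt[IP].src)
                if src not in TENANT_SUBNET:
                    print(f"possible isolation leak: saw traffic from {src} on {IFACE}")

        # Passive capture for 60 seconds; in a real test this would run alongside
        # traffic generated from the penetration-testing host in another tenant.
        sniff(iface=IFACE, prn=flag_foreign, timeout=60, store=False)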

    Efficient routing and reconfiguration in virtualized HPC environments with vSwitch-enabled lossless networks

    No full text
    To meet the demands of communication-intensive workloads in the cloud, virtual machines (VMs) should utilize low-overhead network communication paradigms. In general, such paradigms enable VMs to communicate directly with the hardware by means of a passthrough technology like Single-Root I/O Virtualization (SR-IOV). However, when passthrough-based virtualization is coupled with lossless interconnection networks, live migrations introduce scalability challenges due to the substantial network reconfiguration overhead. With these challenges in mind, we proposed a virtual switch (vSwitch) SR-IOV architecture for InfiniBand in our previous work titled “Towards the InfiniBand SR-IOV vSwitch Architecture”. In this paper, we first suggest solutions to rectify the space-domain scalability issues that are present in vSwitch-enabled subnets as a result of the VMs using dedicated layer-two addresses. Then, we discuss routing strategies for virtualized environments using vSwitches and present a routing algorithm for Fat-Trees. We also present a reconfiguration method that minimizes the reconfiguration overhead imposed on Fat-Trees. We perform an extensive evaluation of our prototype algorithms; as vSwitch-enabled hardware does not yet exist, we draw on empirical observations obtained by emulating vSwitches with existing hardware, as well as on large-scale simulations. Our results show a significant reduction in reconfiguration times, as route recalculations can be eliminated, and for certain scenarios the number of reconfiguration subnet management packets sent to switches is reduced from several hundred thousand down to a single one without degrading the routing quality.
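
    The intuition for why route recalculations can be avoided is that paths are computed toward vSwitches (one per physical host) rather than toward individual VM addresses, so a live migration only updates the VM-to-vSwitch mapping. The Python sketch below shows that idea conceptually; it is not the paper's routing algorithm, and the topology, names, and path table are illustrative assumptions.

        # Conceptual sketch: precomputed fabric paths between vSwitches stay valid
        # across VM migrations; only the VM-to-vSwitch mapping changes.
        precomputed_paths = {                      # fabric paths between vSwitches
            ("vswitch_A", "vswitch_B"): ["leaf1", "spine1", "leaf2"],
            ("vswitch_A", "vswitch_C"): ["leaf1", "spine2", "leaf3"],
        }
        vm_location = {"vm1": "vswitch_A", "vm2": "vswitch_B"}   # layer-two address mapping

        def path_between_vms(src_vm: str, dst_vm: str):
            """Resolve VMs to their vSwitches, then reuse the precomputed fabric path."""
            src_vsw, dst_vsw = vm_location[src_vm], vm_location[dst_vm]
            return precomputed_paths.get((src_vsw, dst_vsw), [])

        print(path_between_vms("vm1", "vm2"))      # ['leaf1', 'spine1', 'leaf2']

        vm_location["vm2"] = "vswitch_C"           # live migration: only the mapping changes
        print(path_between_vms("vm1", "vm2"))      # ['leaf1', 'spine2', 'leaf3'] -- no recompute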
