89 research outputs found

    Virtual Machine Flow Analysis Using Host Kernel Tracing

    Cloud computing has gained popularity because it offers services at lower cost with the pay-per-use model, unlimited storage through distributed storage systems, and flexible computational power through direct hardware access. Virtualization technology allows a physical server to be shared among several isolated virtualized environments by deploying a hypervisor layer on top of the hardware. As a result, each isolated environment can run its own OS and applications without mutual interference. With the growth of cloud usage and the spread of virtualization, performance understanding and debugging are becoming a serious challenge for cloud providers. Offering good QoS and high availability is expected to be a salient feature of cloud computing. Nonetheless, the possible reasons behind performance degradation in VMs are numerous: a) heavy load of an application inside the VM; b) contention with other applications inside the same VM; c) contention with other co-located VMs; d) cloud platform failures. The first two cases can be managed by the VM owner, while the other cases must be solved by the infrastructure provider. One key requirement for such a complex environment, with different virtualization layers, is a precise, low-overhead analysis tool. In this thesis, we present a host-based, precise method to recover the execution flow of virtualized environments, regardless of the level of nested virtualization. 
To avoid security issues, ease deployment, and reduce execution overhead, our method limits its data collection to the hypervisor level. To analyse the behavior of each VM, we use a lightweight tracing tool called the Linux Trace Toolkit Next Generation (LTTng) [1]. LTTng is optimised for high-throughput tracing with low overhead, thanks to the lock-free synchronization mechanisms it uses to update the trace buffer content.
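The core of the approach above is recovering guest execution flow purely from host-visible hypervisor events. A minimal sketch of that idea, assuming simplified `kvm_entry`/`kvm_exit` event records (the real thesis consumes LTTng kernel traces, not Python tuples):

```python
# Hypothetical sketch: derive per-vCPU guest execution time from host-side
# kvm_entry/kvm_exit transitions, without any agent inside the VM.
# Event names and fields are illustrative, not LTTng's actual trace format.

def vcpu_runtime(events):
    """events: list of (timestamp, name, vcpu_id) tuples.
    Returns total guest-mode execution time per vCPU."""
    entry = {}   # vcpu_id -> timestamp of the last kvm_entry
    total = {}   # vcpu_id -> accumulated guest-mode time
    for ts, name, vcpu in events:
        if name == "kvm_entry":
            entry[vcpu] = ts
        elif name == "kvm_exit" and vcpu in entry:
            total[vcpu] = total.get(vcpu, 0) + (ts - entry.pop(vcpu))
    return total

trace = [
    (100, "kvm_entry", 0), (140, "kvm_exit", 0),  # vCPU 0 runs 40 units
    (150, "kvm_entry", 1), (170, "kvm_exit", 1),  # vCPU 1 runs 20 units
    (180, "kvm_entry", 0), (200, "kvm_exit", 0),  # vCPU 0 runs 20 more
]
runtimes = vcpu_runtime(trace)  # {0: 60, 1: 20}
```

Because only entry/exit transitions are consumed, the same accounting works at any nesting level where the host can observe the transition events.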

    Robust and secure monitoring and attribution of malicious behaviors

    Worldwide, computer systems continue to execute malicious software that degrades the systems' performance and consumes network capacity by generating high volumes of unwanted traffic. Network-based detectors can effectively identify machines participating in ongoing attacks by monitoring the traffic to and from the systems. But network detection alone is not enough; it does not improve the operation of the Internet or the health of other machines connected to the network. We must identify malicious code running on infected systems that participate in global attack networks. This dissertation describes a robust and secure approach that identifies malware present on infected systems based on its undesirable use of the network. Our approach, using virtualization, attributes malicious traffic to the host-level processes responsible for the traffic. The attribution identifies on-host processes, but malware instances often exhibit parasitic behaviors to subvert the execution of benign processes. We therefore augment the attribution software with a host-level monitor that detects parasitic behaviors occurring at the user and kernel levels. User-level parasitic attack detection happens via the system-call interface because it is a non-bypassable interface for user-level processes. Since no such interface exists inside the kernel for drivers, we create a new driver monitoring interface inside the kernel to detect parasitic attacks occurring through it. Our attribution software relies on the guest kernel's data to identify on-host processes. To allow secure attribution, we prevent illegal modifications of critical kernel data by kernel-level malware. 
Together, our contributions produce a unified research outcome -- an improved malicious code identification system for user- and kernel-level malware.
Ph.D. Committee Chair: Giffin, Jonathon; Committee Member: Ahamad, Mustaque; Committee Member: Blough, Douglas; Committee Member: Lee, Wenke; Committee Member: Traynor, Patric
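The attribution step described above maps observed network flows back to owning processes. A toy illustration of that matching (our simplification, not the dissertation's VMI-based implementation; names and the socket-table shape are assumptions):

```python
# Illustrative sketch: attribute observed flows to host-level processes by
# matching each flow's local port against a snapshot of per-process socket
# bindings. Flows with no owning process are flagged, since traffic without
# an attributable owner may indicate hidden or parasitic malware.

def attribute_flows(flows, socket_table):
    """flows: list of (local_port, remote_addr).
    socket_table: dict local_port -> owning process name."""
    attributed, suspicious = {}, []
    for port, remote in flows:
        owner = socket_table.get(port)
        if owner is None:
            suspicious.append((port, remote))
        else:
            attributed.setdefault(owner, []).append(remote)
    return attributed, suspicious

sockets = {443: "browser", 8333: "update-daemon"}
flows = [(443, "10.0.0.5"), (6667, "203.0.113.9")]  # port 6667 has no owner
attributed, suspicious = attribute_flows(flows, sockets)
```

In the dissertation's setting the socket table is read securely from guest kernel data via the hypervisor, which is what makes the attribution robust against on-host tampering.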

    Dynamic reliability and security monitoring: a virtual machine approach

    While one always works to prevent attacks and failures, they are inevitable, and situational awareness is key to taking appropriate action. Monitoring plays an integral role in ensuring the reliability and security of computing systems. Infrastructure as a Service (IaaS) clouds significantly lower the barrier for obtaining scalable computing resources and allow users to focus on what is important to them. Can a similar service be offered to provide on-demand reliability and security monitoring? Cloud computing systems are typically built using virtual machines (VMs). VM monitoring takes advantage of this and uses the hypervisor that runs the VMs for robust reliability and security monitoring. The hypervisor provides an environment that is isolated from failures and attacks inside customers’ VMs. Furthermore, as a low-level manager of computing resources, the hypervisor has full access to the infrastructure running above it. Hypervisor-based VM monitoring leverages that information to observe the VMs for failures and attacks. However, existing VM monitoring techniques fall short of “as-a-service” expectations because they require a priori VM modifications and human interaction to obtain necessary information about the underlying guest system. The research presented in this dissertation closes those gaps by providing a flexible VM monitoring framework and automated analysis to support that framework. We have developed and tested a dynamic VM monitoring framework called Hypervisor Probes (hprobes). The hprobe framework allows us to monitor the execution of both the guest OS and applications from the hypervisor. To supplement this monitoring framework, we use dynamic analysis techniques to investigate the relationship between hardware events visible to the hypervisor and OS constructs common across OS versions. We use the results of this analysis to parametrize the hprobe-based monitors without requiring any user input. 
Combining the dynamic VM monitoring framework and the analysis frameworks allows us to provide on-demand, hypervisor-based monitors for cloud VMs.
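The hprobe idea is essentially a registry of callbacks fired when the hypervisor observes a trap at a monitored guest address. A minimal, hypothetical sketch of that dispatch structure (the real framework hooks hardware breakpoint exits in the hypervisor, not a Python dict; all names here are illustrative):

```python
# Toy model of a dynamic probe framework: register handlers on guest
# addresses; when the (simulated) hypervisor reports a trap at an address,
# every registered handler runs with the captured register state.

class ProbeManager:
    def __init__(self):
        self.probes = {}  # guest virtual address -> list of handler callables

    def register(self, addr, handler):
        """Arm a probe: handler(regs) runs on every trap at addr."""
        self.probes.setdefault(addr, []).append(handler)

    def on_trap(self, addr, regs):
        """Invoked on a breakpoint VM exit; returns handler results."""
        return [handler(regs) for handler in self.probes.get(addr, [])]

mgr = ProbeManager()
# Hypothetical address of the guest's syscall entry point.
mgr.register(0xFFFFFFFF81000000, lambda regs: ("syscall", regs["rax"]))
hits = mgr.on_trap(0xFFFFFFFF81000000, {"rax": 59})
```

The automated analysis described in the abstract is what supplies addresses like the one registered above, so that no user input is needed to arm the monitors.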

    VALE, a switched ethernet for virtual machines

    With the use of virtual machines becoming more and more popular, the need for high-performance communication between them also grows. Past solutions have seen the use of hardware assistance, in the form of “PCI passthrough” (dedicating parts of physical NICs to each virtual machine), and even bouncing traffic through physical switches to handle data forwarding and replication. In this paper we show that, with a proper design, very high speed communication between virtual machines can be implemented completely in ..
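At the heart of a software ethernet switch like VALE is a learning forwarding table. A toy model of that behavior (our illustration only; VALE's actual implementation is built on netmap and batched, lock-light forwarding):

```python
# Sketch of a learning ethernet switch: learn each frame's source MAC on
# its ingress port, forward to the learned port for known destinations,
# and flood to all other ports for unknown ones.

def switch(frames, ports):
    """frames: list of (in_port, src_mac, dst_mac).
    Returns, per frame, (dst_mac, list of output ports)."""
    table, out = {}, []
    for in_port, src, dst in frames:
        table[src] = in_port                    # learn source location
        if dst in table:
            out.append((dst, [table[dst]]))     # unicast to learned port
        else:
            out.append((dst, [p for p in ports if p != in_port]))  # flood
    return out

# First frame floods (dst unknown); the reply is unicast (src was learned).
result = switch([(0, "aa", "bb"), (1, "bb", "aa")], ports=[0, 1, 2])
```

The performance argument in the paper is about doing this forwarding and replication entirely in host memory, avoiding the trip through physical NICs and switches.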

    Live migration of virtual machine and container based mobile core network components: A comprehensive study

    With the increasing demand for openness, flexibility, and monetization, Network Function Virtualization (NFV) of mobile network functions has been embraced by most mobile network operators. Early reported field deployments of the virtualized Evolved Packet Core (EPC) - the core network (CN) component of 4G LTE and 5G non-standalone mobile networks - reflect this growing trend. To best meet the requirements of power management, load balancing, and fault tolerance in the cloud environment, the need for live migration of these virtualized components cannot be avoided. Virtualization platforms of interest include both Virtual Machines (VMs) and Containers, with the latter option offering more lightweight characteristics. This paper's first contribution is the proposal of a framework that enables migration of containerised virtual EPC components using an open-source migration solution that does not yet fully support the mobile network protocol stack. The second contribution is a comprehensive, experiment-based analysis of live migration in two virtualization technologies - VM and Container - with additional scrutiny of the container migration approach. The presented experimental comparison accounts for several system parameters and configurations: flavor (image) size, network characteristics, processor hardware architecture model, and the CPU load of the backhaul network components. The comparison reveals that the live migration completion time, and also the end-user service interruption time, of the virtualized EPC components is reduced by approximately 70% on the container platform when using the proposed framework. 
This work was supported in part by the NSF under Grant CNS-1405405, Grant CNS-1409849, Grant ACI-1541461, and Grant CNS-1531039T; and in part by the EU Commission through the 5GROWTH Project under Grant 856709.
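Why do image size and dirty rate dominate migration completion time? A back-of-the-envelope pre-copy model makes the relationship concrete (this is our illustration, not the paper's framework; all numbers are assumptions):

```python
# Simple pre-copy live-migration model: each round transfers what remains,
# while the workload dirties new pages during the transfer; the final
# stop-and-copy of the residual dirty set is the service downtime.

def precopy_time(image_mb, bw_mbps, dirty_mbps, rounds=10):
    """Returns (total_time_s, downtime_s) for a pre-copy migration."""
    to_send, total = float(image_mb), 0.0
    for _ in range(rounds):
        t = to_send / bw_mbps          # duration of this copy round
        total += t
        to_send = dirty_mbps * t       # pages dirtied during the round
    return total, to_send / bw_mbps    # final stop-and-copy = downtime

# Same workload, VM-sized image vs. container-sized image (illustrative).
vm_total, vm_down = precopy_time(image_mb=4096, bw_mbps=1000, dirty_mbps=100)
ct_total, ct_down = precopy_time(image_mb=512, bw_mbps=1000, dirty_mbps=100)
```

Because both total time and downtime scale with the state that must be copied, the lighter container images translate directly into the shorter completion and interruption times the paper measures.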

    Fog-supported delay-constrained energy-saving live migration of VMs over multiPath TCP/IP 5G connections

    The incoming era of fifth-generation fog computing-supported radio access networks (shortly, 5G FOGRANs) aims at exploiting computing/networking resource virtualization in order to augment the limited resources of wireless devices through the seamless live migration of virtual machines (VMs) toward nearby fog data centers. For this purpose, the bandwidths of the multiple wireless network interface cards of the wireless devices may be aggregated under the control of the emerging MultiPath TCP (MPTCP) protocol. However, due to fading and mobility-induced phenomena, the energy consumption of current state-of-the-art VM migration techniques may still offset their expected benefits. Motivated by these considerations, in this paper we analytically characterize, implement in software, and numerically test the optimal minimum-energy settable-complexity bandwidth manager (SCBM) for the live migration of VMs over 5G FOGRAN MPTCP connections. The key features of the proposed SCBM are that: 1) its implementation complexity is settable online on the basis of the target energy consumption versus implementation complexity tradeoff; 2) it minimizes the network energy consumed by the wireless device for sustaining the migration process under hard constraints on the tolerated migration times and downtimes; and 3) by leveraging a suitably designed adaptive mechanism, it is capable of quickly reacting to (possibly unpredicted) fading and/or mobility-induced abrupt changes of the wireless environment without requiring forecasting. The actual effectiveness of the proposed SCBM is supported by extensive energy versus delay performance comparisons that cover: 1) a number of heterogeneous 3G/4G/WiFi FOGRAN scenarios; 2) synthetic and real-world workloads; and 3) MPTCP and wireless connections.
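The hard-deadline constraint in feature 2) above fixes a floor on aggregate bandwidth. A simplified calculation of that floor and a proportional split across MPTCP subflows (our illustration, not the paper's SCBM optimizer, which also minimizes energy; all figures are assumptions):

```python
# Minimum aggregate rate B so that the VM image plus the state dirtied
# during the deadline window fits within the deadline:
#   B * deadline >= vm_mb + dirty_mbps * deadline
#   =>  B = vm_mb / deadline + dirty_mbps

def min_bandwidth(vm_mb, dirty_mbps, deadline_s):
    """Smallest aggregate rate (Mb/s-equivalent units) meeting the deadline."""
    return vm_mb / deadline_s + dirty_mbps

def split(rate, capacities):
    """Split `rate` across MPTCP subflows in proportion to path capacity."""
    total = sum(capacities)
    return [rate * c / total for c in capacities]

B = min_bandwidth(vm_mb=2048, dirty_mbps=64, deadline_s=16)  # 128 + 64 = 192
shares = split(B, capacities=[300, 100])                     # 3:1 split
```

The real SCBM goes further: among all splits meeting the deadline, it adaptively picks the one minimizing the device's transmission energy as channel conditions change.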

    Strong Temporal Isolation among Containers in OpenStack for NFV Services

    In this paper, the problem of temporal isolation among containerized software components running in shared cloud infrastructures is tackled, proposing an approach based on hierarchical real-time CPU scheduling. This allows for reserving a precise share of the available computing power for each container deployed on a multi-core server, so as to provide it with stable performance, independently of the load of other co-located containers. The proposed technique enables the use of reliable modeling techniques for end-to-end service chains that are effective in controlling application-level performance. An implementation of the technique within the well-known OpenStack cloud orchestration software is presented, focusing on a use case framed in the context of network function virtualization. The modified OpenStack is capable of leveraging the special real-time scheduling features made available in the underlying Linux operating system through a patch to the in-kernel process scheduler. The effectiveness of the technique is validated by gathering performance data from two applications running in a real test-bed with the mentioned modifications to OpenStack and the Linux kernel. A performance model is developed that tightly models the application behavior under a variety of conditions. Extensive experimentation shows that the proposed mechanism is successful in guaranteeing isolation of individual containerized activities on the platform.
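Reserving "a precise share of the available computing power" typically means translating a per-container CPU fraction into the (runtime, period) parameters of a real-time reservation, plus an admission test. A small sketch under that assumption (our simplification; the paper's scheduler is a patched in-kernel hierarchical scheduler, not this Python function):

```python
# Translate per-container CPU shares into (runtime_us, period_us)
# reservations, refusing placements that over-commit the server's cores.
# With such a reservation, a container receives runtime_us of CPU in every
# period_us window regardless of co-located load.

def reservations(shares, period_us=100_000, ncpus=4):
    """shares: dict container -> fraction of one CPU (e.g. 0.25)."""
    if sum(shares.values()) > ncpus:
        raise ValueError("total reservation exceeds available CPUs")
    return {name: (int(frac * period_us), period_us)
            for name, frac in shares.items()}

res = reservations({"vnf-a": 0.5, "vnf-b": 0.25})
# vnf-a is guaranteed 50 ms of CPU every 100 ms, whatever vnf-b does.
```

This per-container guarantee is what makes the end-to-end service-chain models mentioned above tractable: each stage's worst-case service rate is known in advance.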

    QoS-aware architectures, technologies, and middleware for the cloud continuum

    The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware supports are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, present heterogeneous QoS requirements and call for QoS management systems to guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture. 
Additionally, the QoS-aware architecture includes two novel middleware components: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts, and ii) a QoS-aware middleware for Serverless platforms that leverages the coordination of various QoS mechanisms and a virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
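End-to-end QoS management across the continuum ultimately comes down to admission decisions like the one sketched below: does a candidate placement of a service chain fit within the latency budget? This is a hypothetical illustration of that decision; the tier names and latency figures are assumptions, not measurements from the dissertation.

```python
# Check an end-to-end latency budget for a service chain placed across
# cloud-continuum tiers: accept the placement only if the summed per-hop
# latencies stay within the application's QoS budget.

def placement_ok(chain, budget_ms):
    """chain: list of (tier_name, latency_ms) hops."""
    total = sum(latency for _, latency in chain)
    return total <= budget_ms, total

edge_chain = [("device", 1.0), ("edge", 2.5), ("edge", 2.5)]
cloud_chain = [("device", 1.0), ("wan", 20.0), ("cloud", 5.0)]
ok_edge, t_edge = placement_ok(edge_chain, budget_ms=10.0)    # fits
ok_cloud, t_cloud = placement_ok(cloud_chain, budget_ms=10.0)  # WAN too slow
```

The same comparison explains why the continuum model matters: for a 10 ms budget, only the edge placement is admissible, which is precisely the class of decision the proposed QoS-aware middleware automates.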
    • 
