
    Dynamic load balancing based on live migration of virtual machines: Security threats and effects

    Live migration of virtual machines (VMs), the process of transferring a VM from one virtual machine monitor (VMM) to another without halting the guest operating system, often between distinct physical machines, has opened new opportunities in computing. It allows a clean separation between hardware and software, and facilitates fault management, load balancing, and low-level system maintenance. Implemented by several existing virtualization products, live migration also supports high-availability services, transparent mobility and consolidated management. While virtualization and live migration enable important new functionality, the combination introduces novel security challenges. A virtual machine monitor that incorporates a vulnerable implementation of live migration may expose both the guest and host operating systems to attack and result in a compromise of integrity. Given the large and growing market for virtualization technology, a comprehensive understanding of virtual machine migration security is essential. The main idea behind this thesis is therefore to create a test environment suitable for experimenting with and analyzing the security implications of exploiting live migration of virtual machines. Using live VM migration for dynamic load balancing or scheduling, this study identifies workload hotspots in the physical environment and, through an effective live migration process, carries out resource profiling. By profiling effectively, the research is able to determine how much of each resource needs to be allocated to a VM. To understand exactly why process migration would not work in such scenarios, and to better understand live VM migration, the thesis provides the requisite insights into which model is most appropriate for automatic load balancing of a virtual machine infrastructure based on resource consumption. Exploiting the migration process may end in unexpected results, or in results that are not noticeable; the scope of this thesis research is to identify these results and their causes.
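
The load-balancing side of this work can be illustrated with a minimal sketch, assuming hypothetical hosts, VMs and thresholds (none of them taken from the thesis): detect a CPU hotspot on a physical host and pick one VM to live-migrate to the least-loaded host. A real controller would hand the plan to the hypervisor's live-migration interface instead of just printing it.

```python
# Illustrative sketch only: threshold-based hotspot detection and VM selection.
# Host/VM names, metrics and thresholds are hypothetical, not from the thesis.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    cpu: float      # fraction of one host's CPU currently consumed (0.0-1.0)
    mem_gb: float

@dataclass
class Host:
    name: str
    vms: list

    @property
    def cpu_load(self) -> float:
        return sum(vm.cpu for vm in self.vms)

CPU_HOTSPOT = 0.80  # a host above this utilization is treated as a hotspot

def plan_migration(hosts):
    """Return (vm, source, target) for one migration step, or None."""
    hot = [h for h in hosts if h.cpu_load > CPU_HOTSPOT]
    if not hot:
        return None
    source = max(hot, key=lambda h: h.cpu_load)
    target = min(hosts, key=lambda h: h.cpu_load)
    # Move the smallest VM whose removal relieves the hotspot, to keep migration cheap.
    for vm in sorted(source.vms, key=lambda v: v.cpu):
        if source.cpu_load - vm.cpu <= CPU_HOTSPOT and target.cpu_load + vm.cpu <= CPU_HOTSPOT:
            return vm, source, target
    return None

if __name__ == "__main__":
    hosts = [
        Host("host-a", [VM("web-1", 0.50, 4), VM("db-1", 0.40, 8)]),
        Host("host-b", [VM("web-2", 0.20, 4)]),
    ]
    plan = plan_migration(hosts)
    if plan:
        vm, src, dst = plan
        print(f"migrate {vm.name} from {src.name} to {dst.name}")
```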

    Data-Driven Methods for Data Center Operations Support

    During the last decade, cloud technologies have evolved at an impressive pace, such that we now live in a cloud-native era where developers can leverage an unprecedented landscape of (possibly managed) services for orchestration, compute, storage, load balancing, monitoring, etc. The possibility of on-demand access to a diverse set of configurable virtualized resources allows for building more elastic, flexible and highly resilient distributed applications. Behind the scenes, cloud providers sustain the heavy burden of maintaining the underlying infrastructures, consisting of large-scale distributed systems partitioned and replicated among many geographically dispersed data centers to guarantee scalability, robustness to failures, high availability and low latency. The larger the scale, the more cloud providers have to deal with complex interactions among the various components, such that monitoring, diagnosing and troubleshooting issues become incredibly daunting tasks. To keep up with these challenges, development and operations practices have undergone significant transformations, especially in terms of improving the automation that makes releasing new software, and responding to unforeseen issues, faster and sustainable at scale. The resulting paradigm is nowadays referred to as DevOps. However, while such automation can be very sophisticated, traditional DevOps practices fundamentally rely on reactive mechanisms that typically require careful manual tuning and supervision from human experts. To minimize the risk of outages, and the related costs, it is crucial to provide DevOps teams with suitable tools that enable a proactive approach to data center operations. This work presents a comprehensive data-driven framework to address the most relevant problems that can be experienced in large-scale distributed cloud infrastructures. These environments are characterized by a very large availability of diverse data, collected at each level of the stack, such as: time series (e.g., physical host measurements, virtual machine or container metrics, networking component logs, application KPIs); graphs (e.g., network topologies, fault graphs reporting dependencies among hardware and software components, performance-issue propagation networks); and text (e.g., source code, system logs, version control system history, code review feedback). Such data are also typically updated with relatively high frequency and subject to distribution drifts caused by continuous configuration changes to the underlying infrastructure. In such a highly dynamic scenario, traditional model-driven approaches alone may be inadequate for capturing the complexity of the interactions among system components. DevOps teams would certainly benefit from robust data-driven methods to support their decisions based on historical information. For instance, effective anomaly detection capabilities may also help in conducting more precise and efficient root-cause analysis. Likewise, leveraging accurate forecasting and intelligent control strategies would improve resource management. Given their ability to deal with high-dimensional, complex data, Deep Learning-based methods are the most straightforward option for realizing the aforementioned support tools. On the other hand, because of their complexity, such models often require huge processing power, and suitable hardware, to be operated effectively at scale. These aspects must be carefully addressed when applying such methods in the context of data center operations: automated operations approaches must be dependable and cost-efficient, so as not to degrade the services they are built to improve.
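
As a toy stand-in for the anomaly detection capability mentioned above, the sketch below flags points in a metric time series that deviate strongly from a rolling window. The framework described in the work relies on Deep Learning models, not on a z-score rule; the window size, threshold and synthetic trace here are arbitrary assumptions.

```python
# Illustrative sketch only: rolling z-score anomaly detection on a host metric.
# Window size, threshold and the synthetic trace are hypothetical.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) for samples that deviate strongly from the recent past."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

if __name__ == "__main__":
    # Synthetic CPU-utilization trace with an injected spike at index 60.
    cpu = [0.30 + 0.01 * (i % 5) for i in range(100)]
    cpu[60] = 0.95
    print(list(detect_anomalies(cpu)))   # -> [(60, 0.95)]
```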

    Fog Computing

    Everything that is not a computer, in the traditional sense, is being connected to the Internet. These devices, commonly referred to as the Internet of Things (IoT), are putting pressure on the current network infrastructure. Not all devices are intensive data producers, and some of them can be used beyond their original intent by sharing their computational resources. The combination of these two factors can be exploited either to derive insight from the data closer to where it originates, or to extend into new services by making computational resources available, though not exclusively, at the edge of the network. Fog computing is a new computational paradigm that provides these devices with a new form of cloud at a closer distance, to which IoT and other devices with connectivity capabilities can offload computation. In this dissertation, we explore the fog computing paradigm and compare it with other paradigms, namely cloud and edge computing. We then propose a novel architecture that can be used to form, or be part of, this new paradigm. The implementation was tested on two types of applications: the first had the main objective of demonstrating the correctness of the implementation, while the other aimed to validate the characteristics of fog computing.
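
A minimal sketch of the offloading decision that fog computing enables, assuming hypothetical sites, latencies and a crude cost model (none of which come from the dissertation's architecture): the task is executed at whichever site (local device, fog node or cloud) minimizes the estimated transfer-plus-compute latency.

```python
# Illustrative sketch only: latency-based offloading between device, fog and cloud.
# Sites, bandwidths, MIPS figures and the cost model are hypothetical.
from dataclasses import dataclass

@dataclass
class ExecutionSite:
    name: str
    rtt_ms: float          # round-trip latency to the site (0 for the device itself)
    bandwidth_kbps: float  # uplink bandwidth towards the site
    mips: float            # rough compute capacity available to the task

def estimated_latency_ms(site: ExecutionSite, task_mi: float, payload_kb: float) -> float:
    # No transfer cost when the task stays on the device itself.
    transfer = 0.0 if site.rtt_ms == 0 else site.rtt_ms + payload_kb * 8 / site.bandwidth_kbps * 1000
    compute = task_mi / site.mips * 1000
    return transfer + compute

def choose_site(sites, task_mi, payload_kb):
    return min(sites, key=lambda s: estimated_latency_ms(s, task_mi, payload_kb))

if __name__ == "__main__":
    sites = [
        ExecutionSite("local-device", rtt_ms=0,  bandwidth_kbps=float("inf"), mips=500),
        ExecutionSite("fog-node",     rtt_ms=5,  bandwidth_kbps=100_000,      mips=5_000),
        ExecutionSite("cloud",        rtt_ms=80, bandwidth_kbps=10_000,       mips=50_000),
    ]
    best = choose_site(sites, task_mi=2_000, payload_kb=2_000)
    print(f"offload to: {best.name}")   # -> fog-node for this workload
```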

    Holistic Resource Management for Sustainable and Reliable Cloud Computing: An Innovative Solution to Global Challenge

    Minimizing the energy consumption of servers within cloud computing systems is of utmost importance to cloud providers for reducing operational costs and enhancing service sustainability by consolidating services onto fewer active servers. At the same time, providers must provision high levels of availability and reliability, so cloud services are frequently replicated across servers, which in turn increases server energy consumption and resource overhead. These two objectives can present a conflict in cloud resource management decision making, which must balance service consolidation against replication to minimize energy consumption whilst maximizing server availability and reliability, respectively. In this paper, we propose a cuckoo optimization-based energy-reliability aware resource scheduling technique (CRUZE) for holistic management of cloud computing resources, including servers, networks, storage, and cooling systems. CRUZE clusters and executes heterogeneous workloads on provisioned cloud resources, enhancing energy efficiency and reducing the carbon footprint of datacenters without adversely affecting cloud service reliability. We evaluate the effectiveness of CRUZE against existing state-of-the-art solutions using the CloudSim toolkit. Results indicate that our proposed technique reduces energy consumption by 20.1% whilst improving reliability and CPU utilization by 17.1% and 15.7% respectively, without affecting other Quality of Service parameters.
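
As a hedged illustration of the energy-versus-reliability trade-off that CRUZE balances, the sketch below scores a candidate workload placement with a combined cost of estimated energy and survival probability. The linear power model, failure rates and weights are assumptions for illustration only; they are not CRUZE's actual formulation, which applies cuckoo optimization to real cloud resources.

```python
# Illustrative sketch only: scoring a workload-to-server placement by combining
# an assumed linear power model with an exponential server-survival model.
import math

P_IDLE_W, P_PEAK_W = 100.0, 250.0   # assumed server power at 0% / 100% CPU

def server_power(cpu_util: float) -> float:
    """Simple linear power model for one active server."""
    return P_IDLE_W + (P_PEAK_W - P_IDLE_W) * cpu_util

def placement_cost(placement, failure_rate_per_h, hours=1.0,
                   w_energy=0.5, w_reliability=0.5):
    """placement: dict server -> list of CPU demands (fractions of one server)."""
    active = {s: loads for s, loads in placement.items() if loads}
    energy_wh = sum(server_power(min(sum(loads), 1.0)) for loads in active.values()) * hours
    # Probability that every active server survives the interval.
    reliability = math.prod(math.exp(-failure_rate_per_h[s] * hours) for s in active)
    # Lower cost is better: penalize energy, reward reliability.
    return w_energy * energy_wh - w_reliability * 1000.0 * reliability

if __name__ == "__main__":
    rates = {"s1": 1e-4, "s2": 5e-4}
    consolidated = {"s1": [0.4, 0.4], "s2": []}
    spread       = {"s1": [0.4], "s2": [0.4]}
    print("consolidated:", round(placement_cost(consolidated, rates), 2))
    print("spread      :", round(placement_cost(spread, rates), 2))
```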

    Secure Communication in Disaster Scenarios

    During natural disasters or terrorist attacks, the existing communication infrastructure is often overloaded or fails completely. In these situations, mobile devices can be interconnected using wireless ad-hoc and delay-tolerant networking to set up an emergency communication system for civilians and rescue services. Where available, a connection to cloud services on the Internet can be a valuable aid in crisis and disaster management. However, such communication systems carry serious security risks, since attackers may try to steal confidential data, inject fake notifications from emergency services, or carry out denial-of-service (DoS) attacks. This dissertation proposes new approaches to communication in emergency networks of mobile devices, ranging from communication between mobile devices to cloud services running on servers in the Internet. These approaches improve the security of device-to-device communication, the security of emergency apps on mobile devices, and the security of server systems for cloud services.
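
One small piece of the problem, rejecting injected fake notifications, can be sketched with message authentication over a pre-shared key. This is purely illustrative and not the dissertation's actual mechanism; real deployments would also need key distribution, sender identities and replay protection.

```python
# Illustrative sketch only: authenticating emergency notifications forwarded
# hop-by-hop over a delay-tolerant network. The pre-shared key and HMAC scheme
# are hypothetical and do not reproduce the dissertation's mechanisms.
import hmac, hashlib, json, time

def sign_message(key: bytes, sender: str, body: str) -> dict:
    msg = {"sender": sender, "body": body, "ts": int(time.time())}
    payload = json.dumps(msg, sort_keys=True).encode()
    msg["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return msg

def verify_message(key: bytes, msg: dict) -> bool:
    mac = msg.get("mac", "")
    unsigned = {k: v for k, v in msg.items() if k != "mac"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    return hmac.compare_digest(mac, hmac.new(key, payload, hashlib.sha256).hexdigest())

if __name__ == "__main__":
    key = b"pre-shared-emergency-key"     # hypothetical; real systems need key management
    alert = sign_message(key, "rescue-team-7", "Shelter at the school gym is open.")
    print("authentic:", verify_message(key, alert))
    alert["body"] = "Shelter is closed."  # tampering is detected
    print("tampered :", verify_message(key, alert))
```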

    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing a growing number of applications and the related data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms are also expected to be energy-efficient and reliable, and to perform secure computations in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.

    Strategies of development and maintenance in supervision, control, synchronization, data acquisition and processing in light sources

    Particle accelerators and synchrotron light sources are constantly evolving, adopting cutting-edge technologies to push the limits forward and explore new domains. The control systems are a crucial part of these scientific installations and are required to provide flexible solutions for ever more challenging experiments, with different kinds of detectors, setups, sample environments and procedures. Experiment proposals are more and more ambitious at each call and often go a step beyond the capabilities of the instrumentation. Detectors must be faster and more efficient, with higher resolution and more bandwidth, and able to synchronize with other detectors of all kinds, whether scalar, one-dimensional or two-dimensional, taking into account the singularities of each and homogenizing the data acquisition. This work examines the control and data acquisition systems of particle accelerators and X-ray/light sources and explores new requirements and challenges regarding synchronization and data acquisition bandwidth, as well as optimization and cost-efficiency in design, operation, support, maintenance and service management. It also studies different solutions adapted to each environment.
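
As a toy example of homogenizing the acquisition of heterogeneous detectors, the sketch below merges time-stamped readouts from scalar, one-dimensional and two-dimensional detectors into a single ordered stream. Detector names, timestamps and data are hypothetical and unrelated to any specific control system discussed in the thesis.

```python
# Illustrative sketch only: merging already time-ordered detector readout
# streams into one timestamp-ordered sequence of events.
import heapq

def merge_by_timestamp(*streams):
    """Merge time-ordered (timestamp, detector, data) streams into one."""
    yield from heapq.merge(*streams, key=lambda event: event[0])

if __name__ == "__main__":
    scalar = [(0.000, "counter",  1021), (0.010, "counter", 998)]
    line   = [(0.004, "strip-1d", [0] * 8)]
    image  = [(0.007, "area-2d",  "frame-000")]
    for ts, detector, data in merge_by_timestamp(scalar, line, image):
        print(f"{ts:.3f}s  {detector:<8} {data}")
```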