
    Towards effective live cloud migration on public cloud IaaS.

    Cloud computing allows users to access shared, online computing resources. However, providers often offer their own proprietary applications, APIs and infrastructures, resulting in a heterogeneous cloud environment. This environment makes it difficult for users to change cloud service providers and limits the scope for automated migration from one provider to another. Standards bodies (IEEE, NIST, DMTF and SNIA), industry (middleware vendors) and academia have been pursuing standards and approaches to reduce the impact of vendor lock-in. Cloud providers offer their Infrastructure as a Service (IaaS) based on virtualization to provide multi-tenant, isolated environments for users. Because each provider has its own proprietary virtual machine (VM) manager, the hypervisor, VMs are usually tightly coupled to the underlying hardware, which hinders live migration of VMs to different providers. A number of user-centric approaches have been proposed by both academia and industry to address this coupling issue. However, these approaches suffer limitations in terms of flexibility (decoupling VMs from the underlying hardware), performance (migration downtime) and security (securing the live migration). These limitations are identified using our live cloud migration criteria, which comprise flexibility, performance and security. The criteria are used not only to point out the gaps in previous approaches but also to design our live cloud migration approach, LivCloud, which aims to live-migrate VMs across different cloud IaaS with minimal migration downtime, at no extra cost, and without user intervention or awareness. This aim has been achieved by addressing the gaps identified under the three criteria: the flexibility gap is addressed by adopting a virtualization platform that supports a wider range of hardware and operating systems and by taking into account the migrated VMs' hardware specifications and layout; the performance gap is addressed by improving network connectivity, provisioning the extra resources the migrated VMs require during migration, and predicting potential failures so the system can be rolled back to its initial state if required; finally, the security gap is tackled by protecting the migration channel with encryption and authentication. This thesis presents: (i) a clear identification of the key challenges and factors in successfully performing live migration of VMs across different cloud IaaS, resulting in a rigorous comparative analysis of the literature on live VM migration at the cloud IaaS level against our live cloud migration criteria; (ii) a rigorous analysis that distils the limitations of existing live cloud migration approaches and shows how to design efficient live cloud migration using up-to-date technologies, leading to LivCloud, a novel live cloud migration approach that overcomes key limitations of currently available approaches and is designed in two stages, a basic design stage and an enhancement of that basic design; (iii) a systematic approach to assessing LivCloud on different public cloud IaaS, achieved by combining up-to-date technologies to build LivCloud with the interoperability challenge in mind, implementing and discussing the results of the basic design stage on Amazon IaaS, and implementing both stages of the approach on the Packet bare-metal cloud.
To sum up, the thesis introduces a live cloud migration approach that is systematically designed and evaluated in uncontrolled environments, Amazon and Packet bare metal. In contrast to other approaches, it clearly shows how to perform and secure the migration between our local network and these environments.
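
    The three criteria above also translate naturally into checks around a migration run. The following minimal sketch illustrates that structure under stated assumptions: the compatibility fields, helper names (estimate_stop_and_copy_time), thresholds and the TLS channel are hypothetical stand-ins, not LivCloud's actual implementation.

```python
# Hypothetical sketch of a migration driver organized around the three criteria
# discussed above (flexibility, performance, security). Not LivCloud's actual code.
import socket
import ssl


def check_compatibility(vm_spec: dict, target_caps: dict) -> bool:
    """Flexibility: can the target IaaS host this VM's hardware layout and OS?"""
    return (vm_spec["cpu_arch"] in target_caps["cpu_archs"]
            and vm_spec["os"] in target_caps["supported_os"]
            and vm_spec["memory_gb"] <= target_caps["max_memory_gb"])


def open_secure_channel(host: str, port: int, ca_file: str) -> ssl.SSLSocket:
    """Security: authenticate the destination and encrypt the migration stream."""
    ctx = ssl.create_default_context(cafile=ca_file)      # verifies the server certificate
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)     # TLS-protected channel


def estimate_stop_and_copy_time(dirty_pages: int, bandwidth_bps: float,
                                page_bytes: int = 4096) -> float:
    """Hypothetical downtime predictor: time to transfer the remaining dirty pages."""
    return dirty_pages * page_bytes * 8 / bandwidth_bps


def migrate(vm_spec: dict, target: dict, memory_pages, max_downtime_s: float = 1.0) -> bool:
    """Performance: stream memory, then cut over only if predicted downtime stays bounded."""
    if not check_compatibility(vm_spec, target["caps"]):
        raise RuntimeError("target cannot host this VM; aborting before migration starts")
    channel = open_secure_channel(target["host"], target["port"], target["ca_file"])
    try:
        for page in memory_pages:                          # iterative pre-copy
            channel.sendall(page)
        downtime = estimate_stop_and_copy_time(vm_spec["dirty_pages"], target["bandwidth_bps"])
        if downtime > max_downtime_s:
            return False                                   # roll back: keep running at the source
        return True                                        # cut over: resume the VM at the target
    finally:
        channel.close()
```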

    Transport Layer solution for bulk data transfers over Heterogeneous Long Fat Networks in Next Generation Networks

    This compendium thesis focuses its contributions on the learning and innovation of New Generation Networks (NGNs). Contributions are proposed in different areas (Smart Cities, Smart Grids, Smart Campus, Smart Learning, Media, eHealth, Industry 4.0, among others) through the application and combination of different disciplines (Internet of Things, Building Information Modeling, Cloud Storage, Cybersecurity, Big Data, Future Internet, Digital Transformation). Specifically, the sustainable comfort monitoring in the Smart Campus is detailed, which can be considered my most representative contribution within the conceptualization of New Generation Networks. Within this innovative monitoring concept, different disciplines are integrated in order to offer information on people's comfort levels. This research demonstrates the long road ahead in the digital transformation of traditional sectors and NGNs. During this long learning process about NGNs through the different investigations, a problem was observed that affected the different application fields of NGNs in a transversal way and that, depending on the service and its requirements, could have a critical impact on any of these sectors. This problem is the low performance achieved when exchanging large volumes of data over networks with high bandwidth capacity whose endpoints are geographically far apart, known as elephant networks or Long Fat Networks (LFNs). In particular, it critically affects the Cloud Data Sharing use case, the massive exchange of data between Cloud regions. For this reason, this use case and the different alternatives at the transport protocol level were studied: the performance and operational problems suffered by layer 4 protocols are analyzed, and it is observed why these traditional protocols are not capable of achieving optimal performance. Given this situation, it is hypothesized that introducing mechanisms that analyze network metrics and efficiently exploit the network's capacity improves the performance of Transport Layer protocols over heterogeneous Long Fat Networks during bulk data transfers. First, the Adaptive and Aggressive Transport Protocol (AATP) is designed: an adaptive and efficient transport protocol whose aim is to maximize performance over this type of elephant network. The AATP protocol is implemented and tested in a network simulator and on a testbed under different situations and conditions for its validation. Once AATP was designed, implemented and tested successfully, the protocol itself was improved, as Enhanced-AATP, to raise its performance over heterogeneous elephant networks; to this end, a mechanism based on the Jitter Ratio is designed to differentiate between heterogeneous paths. In addition, to improve the behavior of the protocol, its fairness system is enhanced for the fair distribution of resources among concurrent Enhanced-AATP flows. This evolution is implemented in the network simulator and a set of tests is carried out. The thesis concludes that New Generation Networks have a long way to go and many things to improve due to the digital transformation of society and the appearance of brand-new disruptive technology. Furthermore, it is confirmed that the introduction of specific mechanisms in the conception and operation of transport protocols improves their performance over heterogeneous Long Fat Networks.
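
    The hypothesis above (adapt the sending rate from observed network metrics) can be illustrated with a minimal sketch. The jitter-ratio definition, thresholds and gains below are assumptions for illustration only, not the actual parameters of AATP or Enhanced-AATP.

```python
# Minimal sketch of a jitter-ratio-driven rate controller in the spirit of AATP /
# Enhanced-AATP. The jitter-ratio definition, thresholds and gains are assumptions
# for illustration, not the protocol's actual parameters.
from collections import deque


class JitterRatioRateController:
    def __init__(self, rate_bps: float, min_bps: float, max_bps: float):
        self.rate_bps = rate_bps
        self.min_bps, self.max_bps = min_bps, max_bps
        self.gaps = deque(maxlen=64)          # recent packet inter-arrival gaps (seconds)

    def on_ack(self, inter_arrival_gap_s: float, base_gap_s: float) -> float:
        """Update the sending rate from the observed jitter ratio.

        base_gap_s is the gap expected at the current sending rate; the jitter ratio
        compares observed gap variation against it (assumed definition).
        """
        self.gaps.append(inter_arrival_gap_s)
        mean_gap = sum(self.gaps) / len(self.gaps)
        jitter = sum(abs(g - mean_gap) for g in self.gaps) / len(self.gaps)
        jitter_ratio = jitter / base_gap_s if base_gap_s > 0 else 0.0

        if jitter_ratio < 0.1:                # stable path: probe for more bandwidth
            self.rate_bps *= 1.05
        elif jitter_ratio > 0.5:              # queues building up: back off
            self.rate_bps *= 0.85
        self.rate_bps = max(self.min_bps, min(self.max_bps, self.rate_bps))
        return self.rate_bps


# Example: feed the controller simulated ACK timings and read back the adapted rate.
ctrl = JitterRatioRateController(rate_bps=100e6, min_bps=1e6, max_bps=10e9)
for gap in (0.0011, 0.0010, 0.0013, 0.0012):
    rate = ctrl.on_ack(inter_arrival_gap_s=gap, base_gap_s=0.001)
print(f"adapted rate: {rate / 1e6:.1f} Mbps")
```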

    Functional programming languages in computing clouds: practical and theoretical explorations

    Cloud platforms must integrate three pillars: messaging, coordination of workers, and data. This research investigates whether functional programming languages have any special merit when it comes to the implementation of cloud computing platforms. This thesis presents the lightweight message queue CMQ and the DSL CWMWL for the coordination of workers, which we use as artefacts to prove or disprove the special merit of functional programming languages in computing clouds. We have detailed the design and implementation with the broad aim of matching the notions and requirements of computing clouds. Our evaluation is based on criteria derived from a series of comprehensive rationales and specifics that allow the FPL Haskell to be thoroughly analysed. We find that Haskell is excellent for use cases that do not require the distribution of the application across the boundaries of (physical or virtual) systems, but not appropriate as a whole for the development of distributed cloud-based workloads that require communication with the far side and coordination of decoupled workloads. However, Haskell may qualify as a suitable vehicle in the future, given further development of formal mechanisms that embrace non-determinism in the underlying distributed environments, leading to applications that are anti-fragile rather than applications that insist on strict determinism, which can only be guaranteed on the local system or via slow, blocking communication mechanisms.

    Data-Driven Network Management for Next-Generation Wireless Networks

    With the commercialization and maturity of the fifth-generation (5G) wireless networks, the next-generation wireless network (NGWN) is envisioned to provide seamless connectivity for mobile user terminals (MUTs) and to support a wide range of new applications with stringent quality of service (QoS) requirements. In the NGWN, the network architecture will be highly heterogeneous due to the integration of terrestrial networks, satellite networks, and aerial networks formed by unmanned aerial vehicles (UAVs), and the network environment becomes highly dynamic because of the mobility of MUTs and the spatiotemporal variation of service demands. In order to provide high-quality services in such dynamic and heterogeneous networks, flexible, fine-grained, and adaptive network management will be essential. Recent advancements in deep learning (DL) and digital twins (DTs) have made it possible to enable data-driven solutions to support network management in the NGWN. DL methods can solve network management problems by leveraging data instead of explicit mathematical models, and DTs can facilitate DL methods by providing extensive data based on the full digital representations created for individual MUTs. Data-driven solutions that integrate DL and DTs can address complicated network management problems and explore implicit network characteristics to adapt to dynamic network environments in the NGWN. However, the design of data-driven network management solutions in the NGWN faces several technical challenges: 1) how the NGWN can be configured to support multiple services with different spatiotemporal service demands while simultaneously satisfying their different QoS requirements; 2) how multi-dimensional network resources can be proactively reserved to support MUTs with different mobility patterns in a resource-efficient manner; and 3) how the heterogeneous NGWN components, including base stations (BSs), satellites, and UAVs, can jointly coordinate their network resources to support dynamic service demands. In this thesis, we develop efficient data-driven network management strategies in two stages, i.e., long-term network planning and real-time network operation, to address the above challenges in the NGWN. Firstly, we investigate planning-stage network configuration to satisfy different service requirements for communication services. We consider a two-tier network with one macro BS and multiple small BSs, which supports communication services with different spatiotemporal data traffic distributions. The objective is to maximize the energy efficiency of the BSs by jointly configuring the downlink transmission power and communication coverage of each BS. To achieve this objective, we first design a network planning scheme with flexible binary slice zooming, dual time-scale planning, and grid-based network planning. The scheme allows flexibility to differentiate the communication coverage and downlink transmission power of the same BS for different services while improving the temporal and spatial granularity of network planning. We formulate a combinatorial optimization problem in which communication coverage management and power control are mutually dependent. To solve the problem, we propose a data-driven method with two steps: 1) an unsupervised-learning-assisted approach to determine the communication coverage of the BSs; and 2) a closed-form solution for power control.
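
    The two-step method just described can be illustrated with a minimal sketch: an unsupervised step that groups traffic grid cells to decide each small BS's coverage, followed by a simple closed-form power rule under a log-distance path-loss model. Both the use of k-means and the power formula are assumptions for illustration, not the scheme derived in the thesis.

```python
# Illustrative sketch of a two-step, data-driven planning method: (1) cluster traffic
# grid cells (weighted by demand) to assign coverage to BSs, and (2) compute downlink
# power in closed form to hit an SNR target at a cluster's far edge. Clustering choice,
# path-loss model and all parameter values are assumptions.
import numpy as np
from sklearn.cluster import KMeans


def assign_coverage(cell_xy: np.ndarray, cell_demand: np.ndarray, n_bs: int) -> np.ndarray:
    """Cluster grid cells, weighted by demand, so each cluster maps to one BS's coverage."""
    km = KMeans(n_clusters=n_bs, n_init=10, random_state=0)
    return km.fit_predict(cell_xy, sample_weight=cell_demand)   # cluster label per cell


def downlink_power(distance_m: np.ndarray, snr_target_db: float,
                   noise_dbm: float = -104.0, alpha: float = 3.5,
                   p_max_dbm: float = 43.0) -> np.ndarray:
    """Closed-form transmit power (dBm) meeting an SNR target at the given distance,
    assuming a log-distance path-loss exponent alpha."""
    path_loss_db = 10.0 * alpha * np.log10(np.maximum(distance_m, 1.0))
    return np.minimum(p_max_dbm, noise_dbm + snr_target_db + path_loss_db)


# Example: 400 grid cells, 4 small BSs, 10 dB SNR target at each cluster's far edge.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(400, 2))                 # cell centres (m)
demand = rng.exponential(1.0, size=400)                  # relative traffic per cell
labels = assign_coverage(xy, demand, n_bs=4)
for k in range(4):
    members = xy[labels == k]
    radius = np.linalg.norm(members - members.mean(axis=0), axis=1).max()
    power = downlink_power(np.array([radius]), snr_target_db=10.0)[0]
    print(f"BS {k}: coverage radius ~{radius:.0f} m, transmit power {power:.1f} dBm")
```
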
Secondly, we investigate planning-stage resource reservation for a compute-intensive service to support MUTs with different mobility patterns. The MUTs can offload their computing tasks to the computing servers deployed at the core networks, gateways, and BSs. Each computing server requires both computing and storage resources to execute computing tasks. The objective is to optimize long-term resource reservation by jointly minimizing the usage of computing, storage, and communication resources and the cost of re-configuring resource reservation. To this end, we develop a data-driven network planning scheme with two elements, i.e., multi-resource reservation and resource reservation re-configuration. First, DTs are designed for collecting MUT status data, based on which MUTs are grouped according to their mobility patterns. Then, an optimization algorithm is proposed to customize resource reservation for different groups to satisfy their different resource demands. Last, a meta-learning-based approach is proposed to re-configure resource reservation for balancing the network resource usage and the re-configuration cost. Thirdly, we investigate operation-stage computing resource allocation in a space-air-ground integrated network (SAGIN). A UAV is deployed to fly around the MUTs and collect their computing tasks, scheduling the collected tasks to be processed locally at the UAV or offloaded to nearby BSs or the remote satellite. The energy budget of the UAV, intermittent connectivity between the UAV and BSs, and dynamic computing task arrivals pose challenges for computing task scheduling. The objective is to design a real-time computing task scheduling policy that minimizes the delay of computing task offloading and processing in the SAGIN. To achieve this objective, we first formulate online computing task scheduling in the dynamic network environment as a constrained Markov decision process. Then, we develop a risk-sensitive reinforcement learning approach in which a risk value is used to represent energy consumption that exceeds the budget. By balancing the risk value and the reward from delay minimization, the UAV can learn a task scheduling policy that minimizes task offloading and processing delay while satisfying the UAV energy constraint. Extensive simulations have been conducted to demonstrate that the proposed data-driven network management approach for the NGWN can achieve flexible BS configuration for multiple communication services, fine-grained multi-dimensional resource reservation for a compute-intensive service, and adaptive computing resource allocation in the dynamic SAGIN. The schemes developed in this thesis are valuable for data-driven network planning and operation in the NGWN.
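
    The risk-sensitive formulation in the third part can likewise be sketched as a reward that trades task delay against a penalty for exceeding the UAV's energy budget. The reward shape, the risk definition and all coefficients below are illustrative assumptions, not the thesis' exact formulation.

```python
# Minimal sketch of risk-sensitive reward shaping for UAV-side task scheduling,
# in the spirit of the constrained-MDP formulation above. The reward structure,
# the risk definition and all coefficients are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class StepOutcome:
    delay_s: float       # offloading + processing delay of the scheduled task
    energy_j: float      # UAV energy consumed by this scheduling decision


def risk_sensitive_reward(outcome: StepOutcome, energy_used_j: float,
                          energy_budget_j: float, risk_weight: float = 100.0) -> float:
    """Reward = -delay - risk_weight * normalized budget overrun (zero while within budget)."""
    overrun = max(0.0, (energy_used_j + outcome.energy_j) - energy_budget_j)
    risk = overrun / energy_budget_j        # the "risk value" grows once the budget is exceeded
    return -outcome.delay_s - risk_weight * risk


# Example: near the end of the energy budget, offloading becomes preferable even
# though local processing would be faster, because the risk term dominates.
used, budget = 9_500.0, 10_000.0                           # joules
local = StepOutcome(delay_s=0.8, energy_j=700.0)           # fast but exceeds the budget
offload = StepOutcome(delay_s=2.5, energy_j=50.0)          # slower but stays within budget
print(risk_sensitive_reward(local, used, budget))          # -> -2.8
print(risk_sensitive_reward(offload, used, budget))        # -> -2.5 (preferred)
```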

    Efficient Topology Management and Geographic Routing in High-Capacity Continental-Scale Airborne Networks

    Large-scale high-capacity communication networks among mobile airborne platforms are quickly becoming a reality. Today, both Google and Facebook are seeking to form networks among high-flying balloons and drones in an effort to provide Internet connections from the stratosphere to users on the ground. This dissertation proposes an alternative, namely using the cargo and passenger aircraft already in the skies as the principal components of such a network. My work presents the design of a network architecture to overcome the challenges of managing the topology of, and routing data within, these continental-scale, highly dynamic networks. The architecture relies on directional communication links, such as free-space optical (FSO) communication links, to achieve high data rates over long distances. However, these state-of-the-art communication systems present new networking challenges. One such challenge is that of managing the physical topology of the network. Such a topology must be explicitly managed, ensuring that each directional data link is pointed at and connected with an appropriate neighbor (which is also pointing back) to yield an acceptable global topology. To overcome this challenge, a distributed topology management framework and associated topology generation algorithms were designed, implemented, and tested via simulation. The framework is capable of managing the topology of thousands of nodes in a continental-scale airborne network and has no communication overhead beyond that required to exchange position information among nearby nodes. A second component of the work concerns routing data at high rates through a constantly changing network topology. To address this issue, Topology Aware Geographic Routing (TAG), a position-based routing protocol, was developed that strategically uses local topology information to make better local forwarding decisions, decreasing the number of hops required to deliver a packet compared with other geographic routing protocols. In addition, unlike other similar protocols, TAG is able to reliably deliver packets even when the topology changes while the packet is in flight. These protocols are tested and validated in a series of simulations in which nodes trace the trajectories recorded from thousands of actual flights. These simulations indicate that the topology management framework and TAG perform well in large-scale, high-density conditions over long durations, and are able to support tens of thousands of 1 Mbps flows.
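
    A minimal sketch of the kind of topology-aware greedy forwarding TAG is built around is given below: a node picks its next hop not only by its neighbors' distance to the destination but also by the best progress reachable through each neighbor's own neighbors. The function names and the two-hop scoring rule are illustrative assumptions, not TAG's actual forwarding logic.

```python
# Minimal sketch of topology-aware geographic forwarding in the spirit of TAG.
# A node chooses the next hop using its locally known two-hop topology rather
# than pure geographic greediness. The scoring rule is an assumption.
import math
from typing import Dict, List, Optional, Tuple

Position = Tuple[float, float]


def dist(a: Position, b: Position) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])


def next_hop(me: str,
             dest_pos: Position,
             positions: Dict[str, Position],
             neighbors: Dict[str, List[str]]) -> Optional[str]:
    """Pick the neighbor whose own best neighbor makes the most progress toward dest."""
    best, best_score = None, dist(positions[me], dest_pos)   # must improve on our own distance
    for n in neighbors[me]:
        # Progress achievable via n: closest-to-destination node among n and n's neighbors.
        reachable = [n] + neighbors.get(n, [])
        two_hop_best = min(dist(positions[r], dest_pos) for r in reachable)
        if two_hop_best < best_score:
            best, best_score = n, two_hop_best
    return best   # None means no neighbor improves progress (local minimum)


# Example: A forwards toward D; B is geographically closer but is a dead end,
# while C leads directly to D and therefore scores better two-hop progress.
positions = {"A": (0, 0), "B": (40, 5), "C": (30, -20), "D": (100, 0)}
neighbors = {"A": ["B", "C"], "B": [], "C": ["D"], "D": []}
print(next_hop("A", positions["D"], positions, neighbors))   # -> "C"
```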

    Dynamical modelling of the human larynx in phonation

    Producing an accurate model of the human voice has been the goal of researchers for a very long time, but is extremely challenging due to the complexity surrounding the way in which the voice functions. One of the more complicated aspects of modelling the voice is the fluid dynamics of the airflow, by which the process of self-oscillation of the vocal folds is sustained. This airflow also provides the only means by which the ventricular bands (two vocal fold-like structures located a short distance above the vocal folds) are driven into self-oscillation. These have been found to play a significant role in various singing styles and in voice pathologies. This study considers the airflow and flow-structure interaction in an artificial up-scaled model of the human larynx, including self-oscillating vocal folds and fixed ventricular bands. As the majority of any significant fluid-structure interaction takes place between structures found within the larynx, this thesis is limited to examining this component of the voice organ. Particle Image Velocimetry (PIV) has been used to produce full-field measurements of the flow velocity for the jet emerging from the oscillating vocal folds. An important advance in this study is the ability to observe the glottal jet from the point at which it emerges from the vocal folds, thus permitting a more complete view of the overall jet geometry within the laryngeal ventricle than in previous work. Ensemble-averaged PIV results are presented for the experimental model at different phase steps, both with and without ventricular bands, to examine their impact on the dynamics of the human larynx and the glottal jet. Finally, the three-dimensional nature of the glottal jet is considered in order to further understand and test currently held assumptions about this aspect of the jet dynamics. This was achieved by undertaking PIV in a plane perpendicular to that already considered. It is shown that the ventricular bands have an impact on the flow separation point of the glottal jet and on the deflection of the jet centreline. Furthermore, the dynamics of the vocal folds alter when ventricular bands are present, but the glottal jet is found to exhibit similar three-dimensional behaviour whether or not ventricular bands are present.
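
    The ensemble-averaged, phase-stepped PIV analysis mentioned above amounts to binning instantaneous velocity fields by the phase of the oscillation cycle and averaging within each bin. The sketch below illustrates that averaging only; the array layout and phase binning are assumptions, not the processing pipeline used in the study.

```python
# Minimal sketch of phase-locked ensemble averaging of PIV velocity fields, as used
# to present the glottal-jet results described above. The array layout and the way
# phase is assigned to each snapshot are assumptions for illustration.
import numpy as np


def ensemble_average_by_phase(fields: np.ndarray, phases: np.ndarray, n_bins: int = 8):
    """Average instantaneous velocity fields within each phase bin of the oscillation cycle.

    fields: (n_snapshots, ny, nx, 2) array of (u, v) velocity components
    phases: (n_snapshots,) array of cycle phase in [0, 2*pi) for each snapshot
    Returns an (n_bins, ny, nx, 2) array of phase-averaged fields.
    """
    edges = np.linspace(0.0, 2.0 * np.pi, n_bins + 1)
    bins = np.clip(np.digitize(phases, edges) - 1, 0, n_bins - 1)
    averaged = np.zeros((n_bins,) + fields.shape[1:])
    for b in range(n_bins):
        members = fields[bins == b]
        if len(members):
            averaged[b] = members.mean(axis=0)   # ensemble average at this phase step
    return averaged


# Example with synthetic data: 200 snapshots on a 64 x 48 grid.
rng = np.random.default_rng(1)
fields = rng.normal(size=(200, 48, 64, 2))
phases = rng.uniform(0.0, 2.0 * np.pi, size=200)
print(ensemble_average_by_phase(fields, phases).shape)   # (8, 48, 64, 2)
```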

    Underwater Vehicles

    For the last twenty to thirty years, a significant number of AUVs have been created to solve a wide spectrum of scientific and applied tasks in ocean development and research. In that short period, AUVs have demonstrated their efficiency in performing complex search and inspection work and have opened up a number of important new applications. Initially, information about AUVs was mainly of a review or promotional character, but now more attention is paid to practical achievements, problems, and systems technologies. AUVs are losing their prototype status and have become fully operational, reliable, and effective tools; modern multi-purpose AUVs represent a new class of underwater robotic vehicles with their own tasks and practical applications, particular technological features, system structures, and functional properties.

    An ensemble-based computational approach for incremental learning in non-stationary environments related to schema- and scaffolding-based human learning

    The principal dilemma in a learning process, whether human or computer, is adapting to new information, especially in cases where this new information conflicts with what was previously learned. The design of computer models for incremental learning is an emerging topic for the classification and prediction of large-scale data streams undergoing change in their underlying class distributions (definitions) over time; yet such models currently often ignore significant foundational learning theory developed in the domain of human learning. This shortfall leads to many deficiencies in the ability to organize existing knowledge and to retain relevant knowledge for long periods of time. In this work, we introduce a unique computer-learning algorithm for incremental knowledge acquisition using an ensemble of classifiers, Learn++.NSE (Non-Stationary Environments), specifically for the case where the nature of the knowledge to be learned is evolving. Learn++.NSE is a novel approach to evaluating and organizing existing knowledge (classifiers) according to the most recent data environment. Under this architecture, we address the learning problem at both the learner and supervisor ends, discussing and implementing three main approaches: knowledge weighting/organization, forgetting prior knowledge, and change/drift detection. The framework is evaluated on a variety of canonical and real-world data streams (weather prediction, electricity price prediction, and spam detection). This study reveals the catastrophic effect of forgetting prior knowledge, supports the organization technique proposed by Learn++.NSE as the most consistent performer across various drift scenarios, and addresses the sheer difficulty of designing a system that strikes a balance between maintaining all knowledge and making decisions based only on relevant knowledge, especially in the severe, unpredictable environments often encountered in the real world.
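
    The core mechanism described above, re-weighting ensemble members by how well they fit the most recent data environment, can be illustrated with a minimal sketch. The error-based log weighting below follows the general Learn++.NSE idea but simplifies it heavily; the sigmoid-weighted error averaging of the published algorithm is not reproduced here.

```python
# Minimal sketch of an incremental ensemble in the spirit of Learn++.NSE: on each
# new batch, train one new classifier, re-weight all existing classifiers by their
# error on the current batch, and combine them by weighted majority vote. The
# weighting rule is simplified relative to the published algorithm.
import math
from typing import List

import numpy as np
from sklearn.tree import DecisionTreeClassifier


class IncrementalEnsemble:
    def __init__(self):
        self.members: List[DecisionTreeClassifier] = []
        self.weights: List[float] = []

    def partial_fit(self, X: np.ndarray, y: np.ndarray) -> None:
        """Learn the newest batch and re-evaluate all existing knowledge on it."""
        self.members.append(DecisionTreeClassifier(max_depth=5).fit(X, y))
        self.weights = []
        for m in self.members:
            err = float(np.mean(m.predict(X) != y))          # error on the *current* environment
            err = min(max(err, 1e-6), 1 - 1e-6)              # keep the log-odds finite
            self.weights.append(math.log((1 - err) / err))   # low error -> high weight

    def predict(self, X: np.ndarray) -> np.ndarray:
        """Weighted majority vote over all retained classifiers."""
        votes = [dict() for _ in range(len(X))]
        for m, w in zip(self.members, self.weights):
            if w <= 0:
                continue                                     # ignore members worse than chance
            for i, label in enumerate(m.predict(X)):
                votes[i][label] = votes[i].get(label, 0.0) + w
        return np.array([max(v, key=v.get) for v in votes])


# Example: two batches whose class boundary drifts between them.
rng = np.random.default_rng(0)
ens = IncrementalEnsemble()
for shift in (0.0, 1.5):                                     # concept drift between batches
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    ens.partial_fit(X, y)
print(ens.predict(rng.normal(size=(5, 2))))
```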