
    Technical analysis of content placement algorithms for content delivery network in cloud

    Content placement algorithms are an integral part of cloud-based content delivery networks. They are responsible for selecting precisely which content to deposit on the surrogate servers distributed over a geographical region. Although various works have already been carried out in this area, most of them have loopholes that have received little disclosure. It is well known that quality of service, quality of experience, and cost are essential objectives that existing work targets for improvement; still, various other aspects and underlying design considerations are equally important. Therefore, this paper reviews the existing content placement algorithms for cloud-based content delivery networks, aiming to expose open research issues.
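    The greedy frequency-per-size heuristic below is a minimal sketch of the kind of placement decision such algorithms make, assuming a simple capacity model; all names, sizes, and frequencies are illustrative, not taken from the paper.

```python
# Minimal greedy content-placement sketch: rank contents by access
# frequency per unit size and fill each surrogate server up to its
# capacity. All names and numbers are illustrative assumptions.

def place_contents(contents, surrogates):
    """contents: list of (content_id, size, access_frequency);
    surrogates: dict of server_id -> remaining capacity (consumed in place)."""
    placement = {server: [] for server in surrogates}
    # Favor contents with the highest access frequency per unit size.
    ranked = sorted(contents, key=lambda c: c[2] / c[1], reverse=True)
    for content_id, size, _freq in ranked:
        for server, capacity in surrogates.items():
            if size <= capacity:
                placement[server].append(content_id)
                surrogates[server] -= size
                break  # replicate on one surrogate only in this sketch
    return placement

if __name__ == "__main__":
    contents = [("video-a", 4, 900), ("video-b", 2, 500), ("doc-c", 1, 50)]
    surrogates = {"edge-eu": 5, "edge-us": 4}
    print(place_contents(contents, surrogates))
```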

    QoS-aware service continuity in the virtualized edge

    5G systems are envisioned to support numerous delay-sensitive applications such as the tactile Internet, mobile gaming, and augmented reality. Such applications impose new demands on service providers in terms of the quality of service (QoS) provided to end-users. Meeting these demands in mobile 5G-enabled networks represents a technical and administrative challenge. One proposed solution is to provide cloud computing capabilities at the edge of the network. In this vision, services are cloudified and encapsulated within virtual machines or containers placed in cloud hosts at the network access layer. To enable ultrashort processing times and immediate service response, fast instantiation and migration of service instances between edge nodes are mandatory to cope with the consequences of user mobility. This paper surveys the techniques proposed for service migration at the edge of the network. We focus on QoS-aware service instantiation and migration approaches, comparing the mechanisms they follow and emphasizing their advantages and disadvantages. We then highlight the open research challenges that remain unhandled.
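    As a rough illustration of a QoS-aware migration trigger of the kind this survey compares, the following sketch migrates a service instance only when the SLA is violated and the latency gain outweighs an assumed downtime penalty; the threshold and cost model are hypothetical, not from the survey.

```python
# Sketch of a QoS-aware migration trigger: move a service instance when
# latency from the user's current location exceeds the SLA bound and the
# expected gain outweighs the migration downtime. The threshold and the
# penalty model are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    latency_ms: dict  # user location -> measured round-trip latency

def should_migrate(current: EdgeNode, candidate: EdgeNode,
                   user_location: str, sla_ms: float,
                   downtime_penalty_ms: float = 50.0) -> bool:
    now = current.latency_ms[user_location]
    after = candidate.latency_ms[user_location]
    # Migrate only if the SLA is violated and the latency saved is likely
    # to repay the one-off downtime incurred by the migration.
    return now > sla_ms and (now - after) > downtime_penalty_ms

if __name__ == "__main__":
    a = EdgeNode("edge-a", {"cell-1": 80.0})
    b = EdgeNode("edge-b", {"cell-1": 10.0})
    print(should_migrate(a, b, "cell-1", sla_ms=20.0))  # True
```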

    Elastic Highly Available Cloud Computing

    High availability and elasticity are two key technical features of cloud computing services. Elasticity ensures that the provisioning of resources closely tracks runtime demand. High availability ensures that cloud applications are resilient to failures. Existing cloud solutions focus on providing both features at the level of the virtual resource, through virtual machines, by managing their restart, addition, and removal as needed. These existing solutions map applications to a specific design, which is not suitable for many applications, especially virtualized telecommunication applications that are required to meet carrier-grade standards. Carrier-grade applications typically rely on the underlying platform to manage their availability by monitoring heartbeats, executing recoveries, and attempting repairs to bring the system back to normal. Migrating such applications to the cloud can be particularly challenging, especially if the elasticity policies target the application only, without considering the underlying platform contributing to its high availability (HA). In this thesis, a Network Function Virtualization (NFV) framework is introduced, and the challenges and requirements of its use in mobile networks are discussed. In particular, an architecture for NFV framework entities in the virtual environment is proposed. In order to reduce signaling traffic congestion and achieve better performance, a criterion is proposed to bundle multiple functions of a virtualized evolved packet core in a single physical device or a group of adjacent devices. The analysis shows that the proposed grouping can reduce the network control traffic by 70 percent. Moreover, a comprehensive framework for the elasticity of highly available applications is proposed that considers the elastic deployment of the platform and the HA placement of the application's components. The approach is applied to an Internet Protocol Multimedia Subsystem (IMS) application and demonstrates how, within a matter of seconds, the IMS application can be scaled up while maintaining its HA status.
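    A minimal sketch of the heartbeat-based availability management described above, assuming a simple timeout rule and a placeholder recovery action; none of the names or timeouts are from the thesis.

```python
# Minimal heartbeat-monitor sketch in the spirit of carrier-grade
# platforms that watch component heartbeats and trigger recovery.
# The timeout value and the recovery action are illustrative assumptions.

import time

class HeartbeatMonitor:
    def __init__(self, timeout_s: float = 3.0):
        self.timeout_s = timeout_s
        self.last_beat = {}  # component name -> last heartbeat time

    def beat(self, component: str) -> None:
        self.last_beat[component] = time.monotonic()

    def failed_components(self):
        now = time.monotonic()
        return [c for c, t in self.last_beat.items()
                if now - t > self.timeout_s]

def recover(component: str) -> None:
    # Placeholder recovery: a real platform would restart the component
    # or fail over to a standby replica.
    print(f"restarting {component}")

if __name__ == "__main__":
    monitor = HeartbeatMonitor(timeout_s=0.1)
    monitor.beat("ims-scscf")
    time.sleep(0.2)  # simulate a missed heartbeat
    for c in monitor.failed_components():
        recover(c)
```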

    Resource provisioning and scheduling algorithms for hybrid workflows in edge cloud computing

    In recent years, Internet of Things (IoT) technology has been applied in a wide range of application domains to provide real-time monitoring, tracking, and analysis services. The worldwide number of IoT-connected devices is projected to reach 43 billion by 2023, and IoT technologies are expected to be engaged in 25% of the business sector. Latency-sensitive applications such as intelligent video surveillance, smart home, autonomous vehicles, and augmented reality are all emergent research directions in industry and academia. These applications require connecting a large number of sensing devices to attain the desired level of service quality for decision accuracy in a timely manner. Moreover, continuous data streams impose the processing of large amounts of data, which adds a huge overhead on computing and network resources. Thus, latency-sensitive and resource-intensive applications introduce new challenges for the current computing models, i.e., batch and stream. In this thesis, we refer to the integrated application model of stream and batch applications as a hybrid workflow model. The main challenge of the hybrid model is achieving the quality of service (QoS) requirements of the two computation systems. This thesis provides a systematic and detailed modeling of hybrid workflows that describes the internal structure of each application type for the purposes of resource estimation, system tuning, and cost modeling. To optimize the execution of hybrid workflows, this thesis proposes algorithms, techniques, and frameworks for resource provisioning and task scheduling on various computing systems, including cloud, edge cloud, and cooperative edge cloud. Overall, the experimental results in this thesis provide strong evidence for a different understanding and vision of integrating stream and batch applications, and of how edge computing and other emergent technologies like 5G networks and IoT will contribute to more sophisticated and intelligent solutions in many disciplines of life, for a safer, more secure, healthy, smart, and sustainable society.
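    The following sketch illustrates one way a hybrid-workflow scheduler could route tasks, assuming a toy rule that keeps tight-deadline stream tasks at the edge and sends batch work to the cloud; the thesis's actual provisioning and scheduling algorithms are far richer than this.

```python
# Sketch of the hybrid-workflow idea: route latency-sensitive stream
# tasks to edge nodes and throughput-oriented batch tasks to the cloud.
# The classification rule, latencies, and deadlines are illustrative.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    kind: str           # "stream" or "batch"
    deadline_ms: float  # soft deadline for a single item / batch

def assign(tasks, cloud_latency_ms=80.0):
    plan = {}
    for t in tasks:
        # Stream tasks with deadlines tighter than the round trip to the
        # cloud must stay at the edge; everything else tolerates the trip.
        if t.kind == "stream" and t.deadline_ms < cloud_latency_ms:
            plan[t.name] = "edge"
        else:
            plan[t.name] = "cloud"
    return plan

if __name__ == "__main__":
    tasks = [Task("detect-object", "stream", 30.0),
             Task("retrain-model", "batch", 60_000.0)]
    print(assign(tasks))  # {'detect-object': 'edge', 'retrain-model': 'cloud'}
```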

    Data Replication and Its Alignment with Fault Management in the Cloud Environment

    Nowadays, exponential data growth has become one of the major challenges all over the world. It may cause a series of negative impacts such as network overloading, high system complexity, and inadequate data security. Cloud computing has been developed to construct a novel paradigm that alleviates massive data processing challenges with its on-demand services and distributed architecture. Data replication has been proposed to strategically distribute the data access load across multiple cloud data centres by creating multiple copies of the data at multiple cloud data centres. A replica-applied cloud environment not only achieves a decrease in response time, an increase in data availability, and a more balanced resource load, but also protects the cloud environment against upcoming faults. A reactive fault tolerance strategy is also required to handle faults once they have occurred. As a result, data replication strategies should be aligned with reactive fault tolerance strategies to achieve a complete management chain in the cloud environment. In this thesis, a data replication and fault management framework is proposed to establish decentralised overarching management of the cloud environment. First, three data replication strategies are proposed based on this framework. A replica creation strategy is proposed to reduce the total cost by jointly considering data dependency and access frequency in the replica creation decision-making process. Besides, a cloud-map-oriented and cost-efficiency-driven replica creation strategy is proposed to achieve the optimal cost reduction per replica in the cloud environment. The local and remote data relationships are further analysed by introducing two novel data dependency types, Within-DataCentre Data Dependency and Between-DataCentre Data Dependency, according to data location. Furthermore, a network-performance-based replica selection strategy is proposed to avoid potential network overloading problems and, at the same time, to increase the number of concurrently running instances.
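    As a back-of-the-envelope illustration of a cost-driven replica-creation decision, the sketch below replicates an item when the expected saving in remote access cost, including accesses from dependent data, exceeds the storage cost of the copy; the cost model and numbers are assumptions, not the thesis's actual model.

```python
# Sketch of a cost-driven replica-creation test: create a replica of a
# data item at a data centre when the expected saving in remote access
# cost exceeds the storage cost of the copy. Coefficients are illustrative.

def should_replicate(access_freq: float, dependent_accesses: float,
                     remote_cost_per_access: float,
                     storage_cost: float) -> bool:
    # Data dependency matters: accesses to items usually read together
    # with this one also benefit from the local copy.
    expected_saving = (access_freq + dependent_accesses) * remote_cost_per_access
    return expected_saving > storage_cost

if __name__ == "__main__":
    # 120 direct reads + 30 dependent reads per period, 0.02 per remote
    # read, 2.0 to store the replica: saving 3.0 > 2.0, so replicate.
    print(should_replicate(120, 30, 0.02, 2.0))  # True
```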

    On Personal Storage Systems: Architecture and Design Considerations

    Increasingly, end-users demand larger amounts of online storage space to store their personal information. This challenge motivates researchers to devise novel personal storage infrastructures. In this thesis, we focus on two popular personal storage architectures: Personal Clouds (centralized) and social storage systems (decentralized). In our view, despite their growing popularity among users and researchers, there still remain critical aspects to address regarding these systems. In Part I of this dissertation, we examine various aspects of the internal operation and performance of several Personal Clouds. Concretely, we first contribute by unveiling the internal structure of a global-scale Personal Cloud, namely UbuntuOne (U1). Moreover, we provide a back-end analysis of U1 that includes the study of the storage workload, user behavior, and the performance of the U1 metadata store. We also suggest improvements to U1 (storage optimizations, user behavior detection, and security) that can also benefit similar systems. From an external viewpoint, we actively measure various Personal Clouds through their REST APIs to characterize their QoS, such as transfer speed, variability, and failure rate. We also demonstrate that combining open APIs and free accounts may lead to abuse by malicious parties, which motivates us to propose countermeasures to limit the impact of abusive applications in this scenario. In Part II of this thesis, we study the storage QoS of social storage systems in terms of data availability, load balancing, and transfer times. Our main interest is to understand the way intrinsic phenomena, such as the dynamics of users and the structure of their social relationships, limit the storage QoS of these systems, as well as to research novel mechanisms to ameliorate these limitations. Finally, we design and evaluate a hybrid architecture that combines user resources and cloud storage to enhance the QoS achieved by a social storage system, letting users strike the right balance between user control and QoS.
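    A minimal sketch of the hybrid read path such an architecture implies, assuming peers are tried first and the cloud acts as an always-on fallback; all names and data structures are hypothetical.

```python
# Sketch of a hybrid read path: try socially connected peers first and
# fall back to cloud storage only when no peer holding the block is
# online. The peer directory and naming are illustrative assumptions.

import random

def fetch_block(block_id, peer_replicas, online_peers, cloud_store):
    """peer_replicas: block_id -> set of peers holding the block;
    online_peers: currently connected peers;
    cloud_store: block_id -> data, the always-on fallback tier."""
    candidates = peer_replicas.get(block_id, set()) & online_peers
    if candidates:
        peer = random.choice(sorted(candidates))  # naive load balancing
        return f"fetched {block_id} from peer {peer}"
    # Peer availability is limited by user connection dynamics, so the
    # cloud tier trades some user control for guaranteed QoS.
    return f"fetched {block_id} from cloud: {cloud_store[block_id]!r}"

if __name__ == "__main__":
    replicas = {"b1": {"alice", "bob"}}
    print(fetch_block("b1", replicas, {"bob"}, {"b1": b"data"}))
    print(fetch_block("b1", replicas, set(), {"b1": b"data"}))
```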

    MACHS: Mitigating the Achilles Heel of the Cloud through High Availability and Performance-aware Solutions

    Cloud computing continues to grow as a business model for hosting information and communication technology applications. However, many concerns arise regarding the quality of service (QoS) offered by the cloud. One major challenge is the high availability (HA) of cloud-based applications. The key to achieving availability requirements is to develop an approach that is immune to cloud failures while minimizing service level agreement (SLA) violations. To this end, this thesis addresses the HA of cloud-based applications from different perspectives. First, the thesis proposes a component HA-aware scheduler (CHASE) to manage the deployments of carrier-grade cloud applications while maximizing their HA and satisfying their QoS requirements. Second, a Stochastic Petri Net (SPN) model is proposed to capture the stochastic characteristics of cloud services and quantify the expected availability offered by an application deployment. The SPN model is then associated with an extensible policy-driven cloud scoring system that integrates other cloud challenges (i.e., green and cost concerns) with HA objectives. The proposed HA-aware solutions are extended to include a live virtual machine migration model that provides a trade-off between migration time and downtime while maintaining the HA objective. Furthermore, the thesis proposes a generic input template for cloud simulators, GITS, to facilitate the creation of cloud scenarios while ensuring reusability, simplicity, and portability. Finally, an availability-aware CloudSim extension, ACE, is proposed. ACE extends the CloudSim simulator with failure injection, computational paths, repair, failover, load balancing, and other availability-based modules.
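    As a much simpler stand-in for the thesis's SPN-based availability quantification, the sketch below uses standard series/parallel reliability algebra to estimate the availability of a redundant deployment; the component figures are illustrative.

```python
# Back-of-the-envelope availability sketch: redundant replicas of a
# component fail independently (parallel), and an application needs all
# of its components (series). This is classic reliability algebra, a
# far simpler stand-in for a Stochastic Petri Net model.

def replicated(availability: float, replicas: int) -> float:
    """Availability of a component with N independent replicas."""
    return 1.0 - (1.0 - availability) ** replicas

def application(component_availabilities) -> float:
    """Availability of a chain of components that are all required."""
    total = 1.0
    for a in component_availabilities:
        total *= a
    return total

if __name__ == "__main__":
    frontend = replicated(0.99, replicas=2)   # ~0.9999
    backend = replicated(0.995, replicas=2)   # ~0.999975
    print(f"{application([frontend, backend]):.6f}")
```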

    Utility-based Allocation of Resources to Virtual Machines in Cloud Computing

    In recent years, cloud computing has gained widespread use as a new computing model that offers elastic resources on demand, in a pay-as-you-go fashion. One important goal of a cloud provider is the dynamic allocation of Virtual Machines (VMs) according to workload changes, in order to keep application performance at Service Level Agreement (SLA) levels while reducing resource costs. The problem is to find an adequate trade-off between the two conflicting objectives of application performance and resource costs. In this dissertation, resource allocation solutions for this trade-off are proposed by expressing application performance and resource costs in a utility function. The proposed solutions allocate VM resources at the global data center level and at the local physical machine level by optimizing the utility function. The utility function, given as the difference between performance and costs, represents the profit of the cloud provider and captures the performance-cost trade-off in a flexible and natural way. For global-level resource allocation, a two-tier resource management solution is developed. The first tier consists of local node controllers that dynamically allocate resource shares to VMs so as to maximize a local node utility function. The second tier consists of a global controller that makes VM live-migration decisions in order to maximize a global utility function. Experimental results show that optimizing the global utility function by changing the number of physical nodes according to workload maintains performance at acceptable levels while reducing costs. To allocate multiple resources at the local physical machine level, a solution based on feedback control theory and utility function optimization is proposed. It dynamically allocates shares of multiple VM resources such as CPU, memory, disk, and network I/O bandwidth. To address the complex non-linearities that exist in shared virtualized infrastructures between VM performance and resource allocations, a solution is proposed that allocates VM resources to optimize a utility function based on application performance and power modelling. An Artificial Neural Network (ANN) is used to build an online model of the relationships between VM resource allocations and application performance, and another one between VM resource allocations and physical machine power. To cope with long utility optimization times as the number of VMs grows, a distributed resource manager is proposed. It consists of several ANNs, each responsible for modelling and resource allocation of one VM, while exchanging information with the other ANNs to coordinate resource allocations. Experiments, in simulated and realistic environments, show that the distributed ANN resource manager achieves better performance-power trade-offs than a centralized version and a distributed non-coordinated resource manager. To deal with the difficulty of building an accurate online application model and the long model adaptation time, a model-free resource management solution based on fuzzy control is proposed. It optimizes a utility function through a hill-climbing search heuristic implemented as fuzzy rules. To cope with long utility optimization times as the number of VMs grows, a multi-agent fuzzy controller is developed in which each agent, in parallel with the others, optimizes its own local utility function. The fuzzy control approach eliminates the need to build a model beforehand and provides a robust solution even for noisy measurements. Experimental results show that the multi-agent fuzzy controller performs better in terms of utility value than a centralized fuzzy control version and a state-of-the-art adaptive optimal control approach, especially for an increased number of VMs. Finally, to address some of the problems of reactive VM resource allocation approaches, a proactive resource allocation solution is proposed. This approach decides on VM resource allocations based on resource demand prediction, using a machine learning technique called Support Vector Machine (SVM). To deal with interdependencies between VMs of the same multi-tier application, cross-correlation demand prediction over the resource usage time series of all VMs of the multi-tier application is applied. As experiments show, this results in improved prediction accuracy and application performance.
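    The following sketch illustrates the utility-driven idea in miniature: a utility equal to performance revenue minus resource cost, maximized by a hill-climbing search over a single VM's CPU share, echoing the hill-climbing heuristic mentioned above; the performance model and prices are invented for illustration.

```python
# Sketch of utility-driven allocation: utility = performance revenue
# minus resource cost, maximized by hill climbing over a VM's CPU share.
# The revenue curve and the price per share are illustrative assumptions.

import math

def utility(cpu_share: float) -> float:
    # Toy performance model: revenue grows with the CPU share but with
    # diminishing returns; cost grows linearly with the share.
    revenue = 10.0 * (1.0 - math.exp(-3.0 * cpu_share))
    cost = 6.0 * cpu_share
    return revenue - cost

def hill_climb(share: float, step: float = 0.05, iters: int = 100) -> float:
    # Repeatedly move to the best neighbouring allocation until no
    # neighbour improves the utility (a local search, as in the thesis's
    # fuzzy-rule heuristic, though expressed here without fuzzy logic).
    for _ in range(iters):
        candidates = [share, min(1.0, share + step), max(0.05, share - step)]
        share = max(candidates, key=utility)
    return share

if __name__ == "__main__":
    best = hill_climb(0.2)
    print(f"best CPU share {best:.2f}, utility {utility(best):.2f}")
```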