
    Management And Security Of Multi-Cloud Applications

    Single-cloud management platform technology has reached maturity and is quite successful in information technology applications. Enterprises and application service providers are increasingly adopting a multi-cloud strategy to reduce the risk of cloud service provider lock-in and cloud blackouts while gaining benefits such as competitive pricing, flexible resource provisioning, and better points of presence. Another class of applications in which cloud service providers are increasingly interested is carriers' virtualized network services. However, virtualized carrier services require high levels of availability and performance and impose stringent requirements on cloud services, necessitating multi-cloud management and innovative techniques for placement and performance management. We consider two classes of distributed applications – virtual network services and the next generation of healthcare – that would benefit immensely from deployment over multiple clouds. This thesis deals with the design and development of new processes and algorithms to enable these classes of applications. We have developed a method for optimizing multi-cloud platforms that paves the way for optimized placement of both classes of services; the placement approach itself is predictive, cost-optimized, latency-controlled virtual resource placement for both types of applications. To improve the availability of virtual network services, we make innovative use of machine and deep learning to develop a framework for fault detection and localization. Finally, to secure patient data flowing through the wide expanse of sensors, the cloud hierarchy, the virtualized network, and the visualization domain, we have developed hierarchical autoencoder models for data in motion between the IoT domain and the multi-cloud domain and within the multi-cloud hierarchy.
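
    To make the fault-detection idea concrete, below is a minimal sketch of autoencoder-based anomaly detection on telemetry "data in motion". It is not the thesis's hierarchical model: the 16-dimensional features, the Gaussian stand-in data, and the use of scikit-learn's MLPRegressor as an undercomplete autoencoder are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in for "normal" telemetry windows collected on the IoT-to-cloud
# path; each row is one 16-dimensional feature vector.
normal = rng.normal(0.0, 1.0, size=(2000, 16))

# Undercomplete autoencoder: the network is trained to reproduce its input
# through a narrow bottleneck, so it learns the manifold of normal traffic.
ae = MLPRegressor(hidden_layer_sizes=(8, 4, 8), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(normal, normal)

# The reconstruction error on held-out normal data fixes the alarm threshold.
held_out = rng.normal(0.0, 1.0, size=(500, 16))
errors = np.mean((ae.predict(held_out) - held_out) ** 2, axis=1)
threshold = np.percentile(errors, 99)

def is_anomalous(window: np.ndarray) -> bool:
    """Flag a telemetry window whose reconstruction error exceeds the threshold."""
    err = np.mean((ae.predict(window.reshape(1, -1)) - window) ** 2)
    return bool(err > threshold)

# A shifted distribution stands in for faulty or tampered traffic.
print(is_anomalous(rng.normal(3.0, 1.0, size=16)))   # expected: True
```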

    FLA-SLA aware cloud collation formation using fuzzy preference relationship multi-decision approach for federated cloud

    Cloud computing provides a solution for enterprise applications by delivering services at all levels: Software, Platform, and Infrastructure. The resource demands of large enterprises, and their need to address critical service issues for their clients, such as avoiding resource contention, escaping vendor lock-in, and achieving high QoS (Quality of Service), have driven them towards the federated cloud. Reliability has become a challenge for cloud providers, who must supply resources on instant request while satisfying all SLA (Service Level Agreement) requirements of different consumer applications. To achieve better collation among cloud providers, FLAs (Federated Level Agreements) are given much importance for reaching consensus on the various KPIs (Key Performance Indicators) of the individual cloud providers. This paper proposes an FLA-SLA Aware Cloud Collation Formation algorithm (FS-ACCF) that considers both FLA and SLA as the major features affecting collation formation, so that consumer requests can be satisfied instantly. In the FS-ACCF algorithm, a fuzzy preference relationship multi-decision approach is used to validate the preferences among cloud providers for forming collations and gaining maximum profit. Finally, the results of FS-ACCF were compared with the S-ACCF (SLA Aware Collation Formation) algorithm for 6 to 10 consecutive requests of cloud consumers with varied VM configurations and different SLA parameters, such as response time, process time, and availability.
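
    As a rough illustration of the fuzzy preference relationship idea, the sketch below ranks three hypothetical providers from weighted KPI scores via an additive fuzzy preference relation. It is not the FS-ACCF algorithm itself; the providers, KPI values, and weights are invented for the example.

```python
import numpy as np

# Normalized KPI scores for three hypothetical providers (rows) on three
# criteria (columns): availability, response time, process time. Higher is
# better; all values are invented for the example.
scores = np.array([
    [0.90, 0.70, 0.60],   # provider A
    [0.80, 0.85, 0.75],   # provider B
    [0.95, 0.60, 0.80],   # provider C
])
weights = np.array([0.5, 0.3, 0.2])    # assumed relative KPI importance
overall = scores @ weights             # weighted KPI aggregate per provider

# Additive fuzzy preference relation: p[i, j] in [0, 1] is the degree to
# which provider i is preferred over provider j, with p[i, j] + p[j, i] = 1.
span = overall.max() - overall.min()
p = 0.5 + (overall[:, None] - overall[None, :]) / (2 * span)
np.fill_diagonal(p, 0.5)

# Rank providers by their mean degree of preference over all others.
dominance = p.mean(axis=1)
order = np.argsort(dominance)[::-1]
print("preference order:", [chr(ord("A") + i) for i in order])
```

    Providers with a higher mean preference degree would be the first candidates for a profitable collation.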

    Enabling Scalable and Sustainable Softwarized 5G Environments

    The fifth generation of telecommunication systems (5G) is foreseen to play a fundamental role in our socio-economic growth by supporting various and radically new vertical applications (such as Industry 4.0, eHealth, and Smart Cities/Electrical Grids, to name a few), as a one-fits-all technology enabled by emerging softwarization solutions – specifically, the Fog, Multi-access Edge Computing (MEC), Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) paradigms. Notwithstanding the notable potential of these technologies, a number of open issues still need to be addressed to ensure their complete rollout. This thesis addresses the scalability and sustainability issues in softwarized 5G environments through contributions along three research axes: a) Infrastructure Modeling and Analytics, b) Network Slicing and Mobility Management, and c) Network/Services Management and Control. The main contributions include a model-based analytics approach for real-time workload profiling and estimation of network key performance indicators (KPIs) in NFV infrastructures (NFVIs), as well as an SDN-based multi-clustering approach to scale geo-distributed virtual tenant networks (VTNs) and to support seamless user/service mobility; building on these, solutions to the problems of resource consolidation, service migration, and load balancing are also developed in the context of 5G. All in all, this entails the adoption of Stochastic Models, Mathematical Programming, Queueing Theory, Graph Theory and Team Theory principles, in the context of Green Networking, NFV and SDN.
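
    As one concrete flavor of the queueing-theoretic KPI estimation mentioned above, the sketch below sizes a virtual network function with an M/M/c latency model. It is a generic textbook model, not the thesis's analytics approach, and all rates and the 10 ms target are invented.

```python
import math

def mmc_wait(lam: float, mu: float, c: int) -> float:
    """Mean queueing delay of an M/M/c system (Erlang C): arrival rate lam,
    per-server service rate mu, c servers; returns seconds spent waiting."""
    rho = lam / (c * mu)
    if rho >= 1.0:
        return math.inf                      # unstable: load exceeds capacity
    a = lam / mu                             # offered load in Erlangs
    head = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / math.factorial(c) / (1 - rho)
    erlang_c = tail / (head + tail)          # probability an arrival waits
    return erlang_c / (c * mu - lam)

# Size a virtual network function: add vCPUs until the estimated latency KPI
# (service time plus queueing delay) meets a 10 ms target.
lam, mu, target = 900.0, 250.0, 0.010        # req/s, req/s per vCPU, seconds
c = 1
while mmc_wait(lam, mu, c) + 1.0 / mu > target:
    c += 1
latency_ms = (mmc_wait(lam, mu, c) + 1.0 / mu) * 1000
print(f"{c} vCPUs -> estimated latency {latency_ms:.2f} ms")
```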

    Scalable and Highly Available Database Systems in the Cloud

    Cloud computing allows users to tap into a massive pool of shared computing resources such as servers, storage, and network. These resources are provided as a service to the users, allowing them to “plug into the cloud” much as they would a utility grid. The promise of the cloud is to free users from the tedious and often complex task of managing and provisioning computing resources to run applications. At the same time, the cloud brings several additional benefits including: a pay-as-you-go cost model, easier deployment of applications, elastic scalability, high availability, and a more robust and secure infrastructure. One important class of applications that users are increasingly deploying in the cloud is database management systems. Database management systems differ from other types of applications in that they manage large amounts of state that is frequently updated, and that must be kept consistent at all scales and in the presence of failure. This makes it difficult to provide scalability and high availability for database systems in the cloud. In this thesis, we show how we can exploit cloud technologies and relational database systems to provide a highly available and scalable database service in the cloud. The first part of the thesis presents RemusDB, a reliable, cost-effective high availability solution that is implemented as a service provided by the virtualization platform. RemusDB can make any database system highly available with little or no code modifications by exploiting the capabilities of virtualization. In the second part of the thesis, we present two systems that aim to provide elastic scalability for database systems in the cloud using two very different approaches. The three systems presented in this thesis bring us closer to the goal of building a scalable and reliable transactional database service in the cloud.
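
    The following toy sketch illustrates the Remus-style whole-VM checkpointing idea that RemusDB builds on: the primary ships only the pages dirtied in each epoch to the backup and releases externally visible output only after the checkpoint is acknowledged. The class names, the synchronous acknowledgement, and the dict-as-memory model are illustrative assumptions, not RemusDB's implementation.

```python
class Backup:
    """Holds the last acknowledged checkpoint; promoted if the primary fails."""
    def __init__(self):
        self.state = {}

    def receive(self, dirty_pages: dict) -> None:
        self.state.update(dirty_pages)       # apply the incremental checkpoint

class Primary:
    """Buffers externally visible output until the backup acknowledges the
    checkpoint, so a failover never exposes state the backup lacks."""
    def __init__(self, backup: Backup):
        self.backup = backup
        self.state, self.dirty, self.out_buf = {}, {}, []

    def write(self, key, value):
        self.state[key] = value
        self.dirty[key] = value              # track pages touched this epoch

    def send(self, msg):
        self.out_buf.append(msg)             # hold output inside the epoch

    def run_epoch(self) -> list:
        # Ship only the dirty pages, then release the buffered output once
        # the checkpoint is (synchronously, in this toy) acknowledged.
        self.backup.receive(self.dirty)
        released, self.out_buf, self.dirty = self.out_buf, [], {}
        return released                      # now safe to expose to clients

# A transaction's commit acknowledgement becomes visible only after its
# checkpoint is replicated, so failing over to the backup loses nothing
# that a client has seen.
backup = Backup()
primary = Primary(backup)
primary.write("balance", 100)
primary.send("COMMIT ok")
print(primary.run_epoch(), backup.state)     # ['COMMIT ok'] {'balance': 100}
```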

    Utility-based Allocation of Resources to Virtual Machines in Cloud Computing

    In recent years, cloud computing has gained widespread use as a new computing model that offers elastic resources on demand, in a pay-as-you-go fashion. One important goal of a cloud provider is dynamic allocation of Virtual Machines (VMs) according to workload changes, in order to keep application performance at Service Level Agreement (SLA) levels while reducing resource costs. The problem is to find an adequate trade-off between the two conflicting objectives of application performance and resource costs. In this dissertation, resource allocation solutions for this trade-off are proposed by expressing application performance and resource costs in a utility function. The proposed solutions allocate VM resources at the global data center level and at the local physical machine level by optimizing the utility function. The utility function, given as the difference between performance and costs, represents the profit of the cloud provider and captures the performance-cost trade-off in a flexible and natural way. For global resource allocation, a two-tier resource management solution is developed. The first tier consists of local node controllers that dynamically allocate resource shares to VMs so as to maximize a local node utility function; the second tier is a global controller that makes VM live-migration decisions so as to maximize a global utility function. Experimental results show that optimizing the global utility function by changing the number of physical nodes according to workload maintains performance at acceptable levels while reducing costs. To allocate multiple resources at the local physical machine level, a solution based on feedback control theory and utility function optimization is proposed, which dynamically allocates shares of multiple resources, such as CPU, memory, disk, and network I/O bandwidth, to VMs. To address the complex non-linearities that exist in shared virtualized infrastructures between VM performance and resource allocations, a solution is proposed that allocates VM resources to optimize a utility function based on application performance and power modelling. An Artificial Neural Network (ANN) is used to build an online model of the relationships between VM resource allocations and application performance, and another between VM resource allocations and physical machine power. To cope with long utility optimization times when the number of VMs grows, a distributed resource manager is proposed. It consists of several ANNs, each responsible for modelling and resource allocation of one VM, which exchange information with the other ANNs to coordinate resource allocations. Experiments, in simulated and realistic environments, show that the distributed ANN resource manager achieves better performance-power trade-offs than a centralized version and a distributed non-coordinated resource manager. To deal with the difficulty of building an accurate online application model and the long model adaptation time, a model-free resource management solution based on fuzzy control is proposed. It optimizes a utility function through a hill-climbing search heuristic implemented as fuzzy rules. To cope with long utility optimization times as the number of VMs grows, a multi-agent fuzzy controller is developed in which each agent, in parallel with the others, optimizes its own local utility function. The fuzzy control approach eliminates the need to build a model beforehand and provides a robust solution even for noisy measurements. Experimental results show that the multi-agent fuzzy controller performs better in terms of utility value than a centralized fuzzy control version and a state-of-the-art adaptive optimal control approach, especially for larger numbers of VMs. Finally, to address some of the problems of reactive VM resource allocation approaches, a proactive resource allocation solution is proposed. This approach decides on VM resource allocations based on resource demand prediction, using a machine learning technique called Support Vector Machine (SVM). To deal with interdependencies between VMs of the same multi-tier application, cross-correlation demand prediction over the resource usage time series of all the application's VMs is applied. As experiments show, this improves both prediction accuracy and application performance.
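
    As a minimal illustration of the utility-driven, model-free idea, the sketch below hill-climbs the CPU share of a single VM to maximize a utility defined as performance minus cost. The performance and cost models are invented placeholders; the thesis's ANN models, fuzzy rules, and multi-resource controllers are not reproduced here.

```python
def utility(share: float, demand: float = 0.6, price: float = 0.4) -> float:
    """Invented utility: performance (fraction of the VM's CPU demand that is
    satisfied) minus a cost proportional to the allocated share."""
    perf = min(share, demand) / demand
    return perf - price * share

def hill_climb(share: float, step: float = 0.05, iters: int = 50) -> float:
    """Model-free search in the spirit of the hill-climbing heuristic: probe
    one step up and one step down, keep whichever improves the utility."""
    for _ in range(iters):
        candidates = (min(share + step, 1.0), max(share - step, 0.0), share)
        best = max(candidates, key=utility)
        if best == share:
            step /= 2                        # shrink the step near the optimum
        share = best
    return share

share = hill_climb(0.10)
print(f"allocated CPU share {share:.2f}, utility {utility(share):.3f}")
```

    The same probe-and-compare loop generalizes to several resources and to one agent per VM, which is the direction the multi-agent fuzzy controller takes.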

    Computing with Chunks

    Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 145-150).
    Modern computing substrates like general-purpose GPUs, massively multi-core processors, and cloud computing clusters offer practically unlimited resources, at the cost of requiring programmers to manage those resources for correctness, efficiency, and performance. Instead of using generic abstractions to write their programs, programmers of modern computing substrates are forced to structure their programs around available hardware. This thesis argues for a new generic machine abstraction, the Chunk Model, that explicitly exposes program and machine structure, making it easier to program modern computing substrates. In the Chunk Model, fixed-sized chunks replace the flat virtual memories of traditional computing models. Chunks may link to other chunks, preserving the structure of and important relationships within data and programs as an explicit graph of chunks. Since chunks are limited in size, large data structures must be divided into many chunks, exposing the structure of programs and data structures to run-time systems. Those run-time systems, in turn, may optimize run-time execution, both for ease of programming and performance, based on the exposed structure. This thesis describes a full computing stack that implements the Chunk Model. At the bottom layer is a distributed chunk memory that exploits locality of hardware components while still providing programmer-friendly consistency semantics and distributed garbage collection. On top of the distributed chunk memory, we build a virtual machine that stores all run-time state in chunks, enabling computation to be distributed through the distributed chunk memory system. All of these features are aimed at making it easier to program modern computing substrates. This thesis evaluates the Chunk Model through example applications in cloud computing, scientific computing, and shared client/server computing.
    By Justin Mazzola Paluska, Sc.D.
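
    To give a feel for the Chunk Model, here is a small sketch of fixed-size chunks whose slots hold either data or links to other chunks, so a list larger than one chunk becomes an explicit chunk graph. The four-slot size and the link-in-slot-0 encoding are assumptions for illustration, not the thesis's actual layout.

```python
from __future__ import annotations
from dataclasses import dataclass, field

CHUNK_SLOTS = 4   # every chunk has the same fixed number of slots

@dataclass
class Chunk:
    """A fixed-size chunk: each slot holds either a value or a link to
    another chunk, so large structures become explicit chunk graphs."""
    slots: list = field(default_factory=lambda: [None] * CHUNK_SLOTS)

def to_chunks(items: list) -> Chunk:
    """Pack a list into a linked chain of chunks: slot 0 links to the next
    chunk, the remaining slots carry data."""
    head = cur = Chunk()
    for i, item in enumerate(items):
        pos = 1 + i % (CHUNK_SLOTS - 1)
        cur.slots[pos] = item
        if pos == CHUNK_SLOTS - 1 and i + 1 < len(items):
            cur.slots[0] = Chunk()           # the link stays explicit here,
            cur = cur.slots[0]               # visible to the run-time system
    return head

def iterate(chunk: Chunk | None):
    """Walk the chunk graph; a run-time system could prefetch or migrate
    whole chunks here because the links expose the structure."""
    while chunk is not None:
        yield from (s for s in chunk.slots[1:] if s is not None)
        chunk = chunk.slots[0]

print(list(iterate(to_chunks(list(range(7))))))   # [0, 1, 2, 3, 4, 5, 6]
```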

    Ecosystemic Evolution Feeded by Smart Systems

    The Information Society is advancing along a route of ecosystemic evolution. ICT and Internet advancements, together with the progression of the systemic approach to the enhancement and application of Smart Systems, are grounding this evolution. The needed approach is therefore expected to evolve by increasingly meeting the basic requirements of a significant general enhancement of human and social well-being within all spheres of life (public, private, and professional). This implies enhancing and exploiting the net-living virtual space to make it a virtuous and beneficial integration of the real-life space. Meanwhile, the contextual evolution of smart cities aims at strongly empowering that ecosystemic approach by enhancing and diffusing net-living benefits over the lived territory, while also incisively targeting new, stable socio-economic local development in accordance with social, ecological, and economic sustainability requirements. This territorial focus matches a new glocal vision, which enables a more effective diffusion of well-being benefits, thus moderating the current global vision fed primarily by a global-scale market development view. Basic technological advancements have thus to be pursued at the system level. They include system architecting for the virtualization of functions, data integration and sharing, flexible basic service composition, and viable end-service personalization, for the operation and interoperation of smart systems supporting effective net-living advancements in all application fields. Increasing, and essentially mandatory, importance must also be reserved for human–technical and social–technical factors, as well as for the associated need to empower the cross-disciplinary approach in related research and innovation. The prospected ecosystemic impact also implies proactive social participation, as well as coping with possible negative effects of net-living in terms of social exclusion and isolation, which require incisive actions for a conformal socio-cultural development. In this concern, the speed, continuity, and expected long duration of the innovation processes pushed by basic technological advancements make the ecosystemic requirements stricter. This evolution also requires a new approach targeting the development of the basic and vocational education needed for net-living, to be considered as an engine for the development of the related ‘new living know-how’, as well as of the conformal ‘new making know-how’.

    Infrastructure-as-a-Service Usage Determinants in Enterprises

    This thesis addresses the research question of what determines Infrastructure-as-a-Service (IaaS) usage in enterprises. A wide range of IaaS determinants is collected into an IaaS adoption model for enterprises, which is evaluated in a Web survey. As the economic determinants are especially important, they are investigated separately using a cost-optimizing decision support model. This decision support model is then applied to a potential IaaS use case of a large automobile manufacturer.
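
    The following toy computation illustrates the kind of trade-off such a cost-optimizing decision support model weighs: statically provisioned in-house capacity priced for peak demand versus pay-per-hour IaaS. All prices and the demand profile are invented; the thesis's actual model is certainly richer.

```python
HOURS_PER_MONTH = 730

def inhouse_cost(peak_servers: int, monthly_per_server: float = 400.0) -> float:
    """In-house capacity must be provisioned for peak demand and is paid for
    whether or not it is used."""
    return peak_servers * monthly_per_server

def iaas_cost(hourly_demand: list[int], price_per_hour: float = 0.50) -> float:
    """Pay-as-you-go IaaS bills only the instance-hours actually consumed."""
    return sum(hourly_demand) * price_per_hour

# A bursty workload: 2 servers of base load, 10 servers during 40 peak hours.
demand = [2] * (HOURS_PER_MONTH - 40) + [10] * 40
print(f"in-house: {inhouse_cost(max(demand)):7.0f} EUR/month")
print(f"IaaS:     {iaas_cost(demand):7.0f} EUR/month")
# The burstier the demand relative to its peak, the stronger the IaaS case.
```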