
    A cooperative approach for distributed task execution in autonomic clouds

    Virtualization and distributed computing are two key pillars that guarantee the scalability of applications deployed in the Cloud. In Autonomous Cooperative Cloud-based Platforms, autonomous computing nodes cooperate to offer a PaaS Cloud for the deployment of user applications. Each node must allocate the resources needed for customer applications to execute with certain QoS guarantees. If the QoS of an application cannot be guaranteed, a node has essentially two options: allocate more resources (if possible) or rely on the collaboration of other nodes. Making this decision is not trivial, since it involves many factors (e.g. the cost of setting up virtual machines, migrating applications, or discovering collaborators). In this paper we present a model of such scenarios, together with experimental results validating the advantage of cooperative strategies over selfish ones, in which nodes do not help each other. We describe the architecture of the platform of autonomous clouds and the main features of the model, which has been implemented and evaluated in the DEUS discrete-event simulator. From the experimental evaluation, based on workload data from the Google Cloud Backend, we conclude that (modulo our assumptions and simplifications) the performance of a volunteer cloud is comparable to that of a Google Cluster.
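
    A minimal sketch of the kind of per-node decision the abstract describes, assuming a simplified model: the node first tries to serve a task locally, then considers starting a new VM, and only under a cooperative strategy delegates to a peer. All class names, costs, and thresholds here are hypothetical illustrations, not the paper's actual model.

```python
# Illustrative sketch (not the paper's model): a node chooses between serving a
# task locally, provisioning a new VM, or delegating to a cooperating peer.
from dataclasses import dataclass, field

@dataclass
class Task:
    cpu_demand: float   # CPU units needed to meet the task's QoS
    deadline: float     # seconds until the QoS deadline

@dataclass
class Node:
    free_cpu: float            # locally available CPU units
    vm_startup_cost: float     # time penalty for provisioning a new VM
    peers: list = field(default_factory=list)  # nodes willing to cooperate

    def can_serve_locally(self, task: Task) -> bool:
        return self.free_cpu >= task.cpu_demand

    def decide(self, task: Task, cooperative: bool = True) -> str:
        """Selfish strategy: local options only. Cooperative: also ask peers."""
        if self.can_serve_locally(task):
            return "run-locally"
        if self.vm_startup_cost < task.deadline:
            return "start-new-vm"
        if cooperative:
            helpers = [p for p in self.peers if p.can_serve_locally(task)]
            if helpers:
                return "delegate-to-peer"
        return "reject"

# Example: an overloaded node with a capable peer.
peer = Node(free_cpu=8.0, vm_startup_cost=30.0)
node = Node(free_cpu=1.0, vm_startup_cost=30.0, peers=[peer])
print(node.decide(Task(cpu_demand=4.0, deadline=10.0)))  # -> "delegate-to-peer"
```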

    From security to assurance in the cloud: a survey

    The cloud computing paradigm has become a mainstream solution for the deployment of business processes and applications. In the public cloud vision, infrastructure, platform, and software services are provisioned to tenants (i.e., customers and service providers) on a pay-as-you-go basis. Cloud tenants can use cloud resources at lower cost, and with higher performance and flexibility, than traditional on-premises resources, without having to care about infrastructure management. Still, cloud tenants remain concerned with the cloud's level of service and the nonfunctional properties their applications can count on. In the last few years, the research community has been focusing on the nonfunctional aspects of the cloud paradigm, among which cloud security stands out. Several approaches to security have been described and summarized in general surveys on cloud security techniques. The survey in this article focuses on the interface between cloud security and cloud security assurance. First, we provide an overview of the state of the art on cloud security. Then, we introduce the notion of cloud security assurance and analyze its growing impact on cloud security approaches. Finally, we present some recommendations for the development of next-generation cloud security and assurance solutions.

    Modeling cloud resources using machine learning

    Cloud computing is a new Internet infrastructure paradigm in which management optimization has become a challenge to be solved, as current management systems are either human-driven or ad-hoc automatic systems that must be tuned manually by experts. Management of cloud resources requires accurate information about all the elements involved (host machines, resources, offered services, and clients), and some of this information can only be obtained a posteriori. Here we present the cloud, and part of its architecture, as a new scenario where data mining and machine learning can be applied to discover information and improve management through modeling and prediction. As a novel case study, we show the modeling of basic cloud resources using machine learning: predicting resource requirements from context information such as the amount of load and the number of clients, and predicting the quality of service from a given resource plan, in order to feed cloud schedulers. This work is an important part of our ongoing research program, in which accurate models and predictors are essential to optimize autonomic cloud management systems.
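
    A minimal sketch of the idea described above, assuming synthetic data and generic scikit-learn regression rather than the authors' actual models and features: a learned mapping from context (load, number of clients) to expected CPU demand that a scheduler could query. The feature names and data are placeholders.

```python
# Illustrative sketch (not the authors' code): regress resource demand on
# context information, then query the model for scheduling decisions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
load = rng.uniform(0, 100, n)                    # requests per second (synthetic)
clients = rng.integers(1, 50, n)                 # concurrent clients (synthetic)
cpu_needed = 0.5 * load + 2.0 * clients + rng.normal(0, 5, n)  # synthetic target

X = np.column_stack([load, clients])
X_train, X_test, y_train, y_test = train_test_split(X, cpu_needed, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

# A scheduler could then ask for the expected CPU of an incoming workload:
print("Predicted CPU for load=80, clients=30:", model.predict([[80, 30]])[0])
```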

    A Game-Theoretic Based QoS-Aware Capacity Management for Real-Time EdgeIoT Applications

    More and more real-time IoT applications, such as smart cities or autonomous vehicles, require big data analytics with reduced latencies. However, data streams produced by distributed sensing devices may not be adequately processed in the traditional way in a remote cloud, due to (i) long Wide Area Network (WAN) latencies and (ii) the limited resources held by a single cloud. To address this problem, a novel Software-Defined Network (SDN) based InterCloud architecture for mobile edge computing environments, known as EdgeIoT, is presented. An adaptive resource capacity management approach is proposed that employs a policy-based QoS control framework using principles of coalition games with externalities. To optimise the resource capacity policy, the proposed QoS management technique adaptively solves a lexicographic-ordering bi-criteria Coalition Structure Generation (CSG) problem, since it is an onerous task to guarantee deterministically that a real-time EdgeIoT application satisfies the low-latency requirements specified in Service Level Agreements (SLAs). The CloudSim 4.0 toolkit is used to simulate an SDN-based InterCloud scenario, and the empirical results suggest that the proposed approach can adapt, from an operational perspective, to ensure low-latency QoS for real-time EdgeIoT application instances.
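
    A minimal sketch of what a lexicographic bi-criteria coalition structure search can look like, under strong simplifying assumptions: a small set of edge nodes, exhaustive enumeration of coalition structures, and hypothetical latency and cost valuations. This is not the paper's CSG algorithm; it only illustrates the lexicographic ordering of the two criteria.

```python
# Illustrative sketch (not the paper's algorithm): enumerate all coalition
# structures over a few edge nodes and pick the lexicographic best, ordering
# first by worst-case latency and then by coordination cost.

def partitions(items):
    """Yield all set partitions (coalition structures) of a list of items."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        # Put `first` into each existing coalition in turn...
        for i, coalition in enumerate(smaller):
            yield smaller[:i] + [[first] + coalition] + smaller[i + 1:]
        # ...or let it form a singleton coalition.
        yield [[first]] + smaller

def latency(structure):
    # Hypothetical: larger coalitions pool capacity, lowering worst-case latency.
    return max(10.0 / len(coalition) for coalition in structure)

def cost(structure):
    # Hypothetical: each coalition incurs a fixed coordination cost.
    return 3.0 * len(structure)

nodes = ["edge1", "edge2", "edge3", "edge4"]
# Tuple key gives the lexicographic ordering: latency first, cost as tie-breaker.
best = min(partitions(nodes), key=lambda s: (latency(s), cost(s)))
print("Best coalition structure:", best)
```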