
    Enabling modular design of an application-level auto-scaling and orchestration framework using tosca-based application description templates

    This paper presents a novel approach to writing TOSCA templates for application reusability and portability in a modular auto-scaling and orchestration framework (MiCADO). The approach defines cloud resources and application containers in a flexible, generic way, and allows those definitions to be extended with properties specific to the container orchestrator chosen at deployment time. The approach is demonstrated in a proof-of-concept where only a minor change to a previously used application template was required to achieve successful deployment and lifecycle management of the popular web authoring tool WordPress on a new realization of the MiCADO framework featuring a different container orchestrator.
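    The idea of a generic container definition extended with orchestrator-specific properties at deployment time can be sketched as follows. This is an illustrative Python sketch only; the node type and property names are hypothetical and do not reflect MiCADO's actual ADT schema.

```python
# Illustrative sketch: a generic, orchestrator-agnostic container node is
# specialised with orchestrator-specific properties at deployment time.
# All type/property names below are hypothetical, not MiCADO's real schema.

def specialise(generic_node, orchestrator_props):
    """Merge orchestrator-specific properties into a generic container node."""
    node = dict(generic_node)
    node["properties"] = {**generic_node.get("properties", {}), **orchestrator_props}
    return node

# Generic container definition, reusable across orchestrators
wordpress = {
    "type": "tosca.nodes.Container.Application",   # hypothetical type name
    "properties": {"image": "wordpress:latest", "ports": [80]},
}

# Extension applied only when Kubernetes is the chosen orchestrator
k8s_extra = {"replicas": 2, "strategy": "RollingUpdate"}

deployed = specialise(wordpress, k8s_extra)
print(sorted(deployed["properties"]))  # ['image', 'ports', 'replicas', 'strategy']
```

    Swapping the orchestrator then only means swapping the extension dictionary, while the generic template stays untouched, which mirrors the portability claim of the paper.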

    Describing and Processing Topology and Quality of Service Parameters of Applications in the Cloud

    Typical cloud applications require high-level, policy-driven orchestration to achieve efficient resource utilisation and robust security supporting different types of users and user scenarios. However, the efficient and secure utilisation of cloud resources to run applications is not trivial. Although there have been several efforts to support the coordinated deployment, and to a lesser extent the run-time orchestration, of applications in the Cloud, no comprehensive solution has yet emerged that successfully leverages applications in an efficient, secure and seamless way. One of the major challenges is how to specify and manage Quality of Service (QoS) properties governing cloud applications. A solution to these challenges could be a generic and pluggable framework that supports the optimal and secure deployment and run-time orchestration of applications in the Cloud. A specific aspect of such a cloud orchestration framework is the need to describe complex applications incorporating several services. These application descriptions must specify both the structure of the application and its QoS parameters, such as desired performance, economic viability and security. This paper proposes a cloud technology agnostic approach to application descriptions based on existing standards and describes how these application descriptions can be processed to manage applications in the Cloud.

    Flexible, Policy-Oriented and Multi-Cloud Deployment of Social Media Analysis Tools in the COLA Project

    The relationship between companies and customers, and between public authorities and citizens, has changed dramatically with the widespread utilisation of the Internet and Social Networks. To help governments keep abreast of these changes, Inycom has developed Eccobuzz and Magician, a set of web applications for Social Media data mining. The unpredictable load of these applications requires flexible user-defined policies and automated scalability during deployment and execution time. Even more importantly, privacy norms require that data is restricted to certain physical locations. This paper explains how such applications are described with Application Description Templates (ADTs). ADTs define complex topology descriptions and various deployment, scalability and security policies, and these templates are used by a submitter that translates this generic information into an executable format for submission to the reference framework of the COLA European project.
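    The location-restriction policy mentioned above can be illustrated with a small sketch. This is a hypothetical example of how an ADT-style placement policy might be evaluated; the policy type and field names are our invention, not COLA's actual ADT format.

```python
# Hypothetical sketch of an ADT-style placement policy that restricts data
# to certain physical locations (e.g. EU regions for privacy compliance).
# Policy type and field names are illustrative, not COLA's real schema.

ALLOWED_REGIONS = {"eu-west", "eu-central"}   # privacy norms: EU-only data

def placement_allowed(policy, region):
    """Return True if the target cloud region satisfies the location policy."""
    return region in policy["allowed_regions"]

policy = {"type": "placement.location", "allowed_regions": ALLOWED_REGIONS}

print(placement_allowed(policy, "eu-west"))   # True
print(placement_allowed(policy, "us-east"))   # False
```

    A submitter component would evaluate such a check before translating the generic template into the target cloud's executable format, rejecting deployments that would violate the policy.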

    Towards a Cloud Native Big Data Platform using MiCADO

    In the big data era, creating self-managing scalable platforms for running big data applications is a fundamental task. Such self-managing and self-healing platforms involve a proper reaction to hardware (e.g., cluster nodes) and software (e.g., big data tools) failures, as well as dynamic resizing of the allocated resources based on overload and underload situations and scaling policies. The distributed and stateful nature of big data platforms (e.g., Hadoop-based clusters) makes the management of these platforms a challenging task. This paper aims to design and implement a scalable cloud native Hadoop-based big data platform using MiCADO, an open-source and highly customisable multi-cloud orchestration and auto-scaling framework for Docker containers orchestrated by Kubernetes. The proposed MiCADO-based big data platform automates the deployment and enables automatic horizontal scaling (in and out) of the underlying cloud infrastructure. The empirical evaluation of the MiCADO-based big data platform demonstrates how easily, efficiently and quickly Hadoop clusters of different sizes can be deployed and undeployed. Additionally, it shows how the platform can automatically be scaled based on user-defined policies (such as CPU-based scaling).
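    The CPU-based scaling policy referred to above can be sketched as a simple threshold rule. This is a minimal illustration of threshold-based horizontal scaling; the thresholds, bounds and function names are assumptions for the example, not MiCADO's actual policy language.

```python
# Minimal sketch of a CPU-threshold horizontal scaling rule of the kind a
# user-defined policy might express. Thresholds and names are illustrative,
# not MiCADO's actual policy format.

def desired_nodes(current, cpu_load, scale_up_at=0.8, scale_down_at=0.3,
                  min_nodes=1, max_nodes=10):
    """Return the node count after applying simple threshold-based scaling."""
    if cpu_load > scale_up_at:
        return min(current + 1, max_nodes)   # overload: scale out
    if cpu_load < scale_down_at:
        return max(current - 1, min_nodes)   # underload: scale in
    return current                           # within bounds: hold steady

print(desired_nodes(3, 0.9))  # 4 (scale out)
print(desired_nodes(3, 0.1))  # 2 (scale in)
print(desired_nodes(3, 0.5))  # 3 (no change)
```

    An orchestrator loop would evaluate such a rule periodically against cluster metrics and reconcile the actual node count toward the desired one.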

    Towards a Deadline-Based Simulation Experimentation Framework Using Micro-Services Auto-Scaling Approach

    There is a growing number of research efforts in developing auto-scaling algorithms and tools for cloud resources. Traditional performance metrics for scaling resources up or down, such as CPU, memory and bandwidth usage, are not sufficient for all applications. For example, modeling and simulation experimentation is usually expected to yield results within a specific timeframe. To achieve this, the quality of experiments is often compromised either by restricting the parameter space to be explored or by limiting the number of replications required to give statistical confidence. In this paper, we present the early stages of a deadline-based simulation experimentation framework using a micro-services auto-scaling approach. A case study of an agent-based simulation of population physical activity behavior is used to demonstrate our framework.
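    The deadline-driven resource question behind this approach can be illustrated with a back-of-envelope calculation: given a number of replications, an estimated per-run time, and a deadline, how many parallel micro-service workers are needed? This sketch is purely illustrative and is not the paper's actual scheduling algorithm.

```python
import math

# Back-of-envelope sketch of deadline-driven worker sizing: enough parallel
# workers so that all replications finish before the deadline. Illustrative
# only; not the paper's actual algorithm.

def workers_needed(replications, run_minutes, deadline_minutes):
    """Minimum number of workers to finish all replications by the deadline."""
    runs_per_worker = deadline_minutes // run_minutes  # runs one worker can complete
    if runs_per_worker == 0:
        raise ValueError("a single run does not fit within the deadline")
    return math.ceil(replications / runs_per_worker)

# 100 replications of 15 min each, 60 min deadline:
# each worker fits 4 runs, so 25 workers are needed.
print(workers_needed(100, 15, 60))  # 25
```

    A deadline-based auto-scaler would recompute such an estimate as runs complete, scaling the worker pool up when the remaining work no longer fits in the remaining time.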

    Science Gateways with Embedded Ontology-based E-learning Support

    Science gateways are widely utilised in a range of scientific disciplines to provide user-friendly access to complex distributed computing infrastructures. The traditional approach in science gateway development is to concentrate on this simplified resource access and provide scientists with a graphical user interface to conduct their experiments and visualise the results. However, as user communities behind these gateways are growing and opening their doors to less experienced scientists or even to the general public as “citizen scientists”, there is an emerging need to extend these gateways with training and learning support capabilities. This paper describes a novel approach showing how science gateways can be extended with embedded e-learning support using an ontology-based learning environment called Knowledge Repository Exchange and Learning (KREL). The paper also presents a prototype implementation of a science gateway for analysing earthquake data and demonstrates how the KREL can extend this gateway with ontology-based embedded e-learning support.

    Generalization of the interaction between the Haar approximation and polynomial operators to higher order methods

    In applications it is useful to compute the local average of a function f(u) of an input u from empirical statistics on u. A very simple relation exists when the local averages are given by a Haar approximation. The question is whether it holds for higher order approximation methods. To answer this, it is necessary to use approximate product operators defined over linear approximation spaces. These products are characterized by a Strang–Fix-like condition. An explicit construction of these product operators is exhibited for piecewise polynomial functions, using Hermite interpolation. The averaging relation which holds for the Haar approximation is then recovered when the product is defined by a two-point Hermite interpolation.
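    In the Haar (piecewise-constant) case, the local-average relation referred to above can be sketched as follows; the notation here is ours, chosen for illustration, not the paper's.

```latex
% Haar projection P_0 onto piecewise constants over cells I_k:
% each coefficient is exactly the local average over its cell.
\left.(P_0\, g)\right|_{I_k} \;=\; \frac{1}{|I_k|} \int_{I_k} g(x)\,dx .

% Applied to g = f(u), the Haar coefficients of f(u) are therefore the
% local averages of f(u), i.e. empirical statistics of u on each cell:
\left.\bigl(P_0\, f(u)\bigr)\right|_{I_k} \;=\; \frac{1}{|I_k|} \int_{I_k} f\bigl(u(x)\bigr)\,dx .
```

    For higher order (non-constant) approximation spaces this identity no longer holds directly, which is what motivates the approximate product operators constructed in the paper.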

    Implementation of Grover's Quantum Search Algorithm in a Scalable System

    We report the implementation of Grover's quantum search algorithm in the scalable system of trapped atomic ion quantum bits. Any one of four possible states of a two-qubit memory is marked, and following a single query of the search space, the marked element is successfully recovered with an average probability of 60(2)%. This exceeds the performance of any possible classical search algorithm, which can only succeed with a maximum average probability of 50%.
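    The ideal (noiseless) version of the two-qubit search described above can be simulated in a few lines: with four states and a single oracle query, one Grover iteration recovers the marked element with certainty, against the 50% classical single-query bound. This is a textbook state-vector sketch, not the trapped-ion experiment itself.

```python
import numpy as np

# Ideal two-qubit Grover search: 4 basis states, one oracle query, one
# inversion about the mean. In the noiseless case the marked state is
# measured with probability 1 (the experiment reports 60(2)% due to noise).

def grover_2qubit(marked):
    """Return measurement probabilities after one Grover iteration."""
    n = 4
    state = np.full(n, 1 / np.sqrt(n))   # uniform superposition over 4 states
    state[marked] *= -1                  # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: inversion about the mean
    return np.abs(state) ** 2            # measurement probabilities

probs = grover_2qubit(marked=2)
print(round(probs[2], 6))  # 1.0 -- marked state found with certainty
```

    Tracing the amplitudes by hand: (1/2, 1/2, 1/2, 1/2) becomes (1/2, 1/2, -1/2, 1/2) after the oracle, and the inversion about the mean 1/4 maps this to (0, 0, 1, 0), so a single query suffices at this problem size.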