
    Autoscaling Hadoop Clusters

    Cloud computing has been a much-discussed topic in recent years, with views ranging from the claim that it is nothing more than virtualisation under a fancy name to the conviction that the future belongs to cloud computing. For four years now, virtual servers, data stores, databases, and other infrastructure elements have been available as web services. In this work we build a self-scaling MapReduce platform based on the open-source Apache Hadoop project; the platform scales itself, launching new servers according to server load in order to speed up the computation. Cloud computing, specifically the Infrastructure as a Service model, provides us with the facilities to provision new servers at will and increase the computing power of a cluster almost in real time. This provisioning and deprovisioning of servers can happen automatically based on performance metrics of the cluster. We introduce a framework for autoscaling clusters in the private and public cloud ecosystem using the Eucalyptus and AWS software stacks, with MapReduce as the service provided by the cluster.
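    As a rough illustration of the metric-driven scaling loop this abstract describes, the Python sketch below checks the mean worker load and provisions or decommissions nodes against fixed thresholds. The thresholds, the metric source, and the cloud-driver calls (provision_worker, deprovision_worker) are illustrative assumptions, not the framework's actual API.

        # A minimal sketch, assuming a polling loop and a stubbed cloud driver.
        SCALE_OUT_LOAD = 0.85   # provision when mean load exceeds this
        SCALE_IN_LOAD = 0.30    # decommission when mean load falls below this
        MIN_NODES, MAX_NODES = 2, 20

        def average_load(nodes):
            # Mean CPU utilisation (0..1) across worker nodes (stub metric).
            return sum(n["cpu"] for n in nodes) / len(nodes)

        def autoscale_once(cloud, nodes):
            load = average_load(nodes)
            if load > SCALE_OUT_LOAD and len(nodes) < MAX_NODES:
                cloud.provision_worker()             # e.g. a RunInstances request
            elif load < SCALE_IN_LOAD and len(nodes) > MIN_NODES:
                cloud.deprovision_worker(nodes[-1])  # drain the worker, then terminate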

    A study on performance measures for auto-scaling CPU-intensive containerized applications

    Autoscaling of containers can leverage performance measures from the different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure for activating auto-scaling actions aimed at guaranteeing QoS constraints. First, the correlation between absolute and relative usage measures, and how a resource allocation decision can be influenced by them, is analyzed in different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share each container has of the resources used. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses the absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
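    The distinction between the two kinds of measures can be made concrete with a small sketch. The replica rule below follows the Kubernetes HPA-style formula desired = ceil(current * measured / target); the data sources for the usage figures are stubbed assumptions, not the paper's measurement pipeline.

        import math

        def absolute_usage(container_cpu_seconds, host_cores, interval_seconds):
            # Fraction of the host's total CPU capacity the container used.
            return container_cpu_seconds / (host_cores * interval_seconds)

        def relative_usage(container_cpu_seconds, all_cpu_seconds):
            # The container's share of the CPU consumed by all containers.
            return container_cpu_seconds / sum(all_cpu_seconds)

        def desired_replicas(current, measured, target):
            # HPA-style rule: desired = ceil(current * measured / target).
            return math.ceil(current * measured / target)

    With a bursty co-located workload, absolute_usage can stay low while relative_usage spikes (or vice versa), which is why the two measures can drive opposite scaling decisions.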

    Towards a Cloud Native Big Data Platform using MiCADO

    In the big data era, creating self-managing scalable platforms for running big data applications is a fundamental task. Such self-managing and self-healing platforms must react properly to hardware (e.g., cluster node) and software (e.g., big data tool) failures, besides dynamically resizing the allocated resources based on overload and underload situations and on scaling policies. The distributed and stateful nature of big data platforms (e.g., Hadoop-based clusters) makes the management of these platforms a challenging task. This paper aims to design and implement a scalable cloud native Hadoop-based big data platform using MiCADO, an open-source and highly customisable multi-cloud orchestration and auto-scaling framework for Docker containers, orchestrated by Kubernetes. The proposed MiCADO-based big data platform automates the deployment and enables automatic horizontal scaling (in and out) of the underlying cloud infrastructure. The empirical evaluation demonstrates how easy, efficient, and fast it is to deploy and undeploy Hadoop clusters of different sizes. Additionally, it shows how the platform can be scaled automatically based on user-defined policies (such as CPU-based scaling).
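    A user-defined CPU-based policy of the kind mentioned above might be evaluated roughly as in the following sketch; the policy fields and the evaluator are assumptions for illustration, not MiCADO's actual policy language.

        # Hypothetical policy schema; MiCADO expresses policies differently.
        policy = {"metric": "cpu", "scale_out_above": 80.0,
                  "scale_in_below": 20.0, "min_nodes": 1, "max_nodes": 10}

        def evaluate(policy, cpu_percent, node_count):
            # Return the horizontal scaling action implied by the policy.
            if cpu_percent > policy["scale_out_above"] and node_count < policy["max_nodes"]:
                return "scale_out"
            if cpu_percent < policy["scale_in_below"] and node_count > policy["min_nodes"]:
                return "scale_in"
            return "hold"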

    Enabling autoscaling for in-memory storage in cluster computing framework

    IoT-enabled devices and observational instruments continuously generate voluminous data, and a large portion of these datasets is delivered with associated geospatial locations. The increased volume of geospatial data, alongside emerging geospatial services, poses computational challenges for large-scale geospatial analytics. We have designed and implemented STRETCH, an in-memory distributed geospatial storage system that preserves spatial proximity and enables proactive autoscaling for frequently accessed data. STRETCH stores data with a delayed data dispersion scheme that incrementally adds data nodes to the storage system. We have devised an autoscaling feature that proactively repartitions data to alleviate computational hotspots before they occur. We compared the performance of STRETCH with Apache Ignite, and the results show that STRETCH provides up to 3 times the throughput when the system encounters hotspots. STRETCH is built on Apache Spark and Ignite and interacts with them at runtime.
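    The proactive-repartitioning idea can be sketched as watching per-partition access rates and splitting any partition whose projected rate would create a hotspot. The data structures, the naive trend estimate, and the split_and_rebalance call below are illustrative assumptions, not STRETCH's implementation.

        def projected_rate(history):
            # Naive linear extrapolation of the most recent access-rate trend.
            if len(history) < 2:
                return history[-1]
            return history[-1] + (history[-1] - history[-2])

        def repartition_hot(partitions, hotspot_threshold):
            # Act before the hotspot occurs: split/redistribute a partition
            # whose projected access rate would exceed the threshold.
            for part in partitions:
                if projected_rate(part["access_history"]) > hotspot_threshold:
                    part["store"].split_and_rebalance(part["id"])  # hypothetical call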

    Dynamic Load Balancing and Autoscaling in Distributed Stream Processing Systems

    Get PDF
    In the big data world, Hadoop and other batch-processing tools are widely used to analyze data and obtain results in minutes. However, minutes of latency still cannot satisfy the growing need for real-time decisions in many fields, such as live stock and trading feeds in financial services, telecommunications, sensor networks, and online advertisement. Distributed stream processing (DSP) systems aim to process, analyze, and make decisions on the fly over immense quantities of data streams generated dynamically at high rates. As the rates of data streams may vary over time, DSP systems require an architecture that is elastic enough to handle dynamic load. Although many dynamic load balancing and autoscaling techniques for general pull-based distributed systems have been well studied, these solutions cannot be directly applied to DSP systems, because DSP systems are push-based and process data streams with different types of operators, each running on a cluster node. One research problem is to allocate data processing operators to cluster nodes and balance the workload dynamically. Since the data volume and rate can be unpredictable, a static mapping between operators and cluster resources often results in an unbalanced operator load distribution. Furthermore, making a DSP system scalable requires autoscaling at runtime, in which case operators need to be relocated among newly provisioned nodes. The contribution of this thesis is threefold. First, we propose a load-adaptive software layer between a DSP engine and the cluster; the architecture allows an operator to be transferred dynamically to different cluster nodes at runtime while keeping the process transparent to developers. Second, an optimization method that combines the correlation of node resource utilization with cluster capacity is proposed to balance load dynamically. Lastly, we design an autoscaling mechanism and algorithm to detect overload and provision nodes at runtime. We implement our design on S4, an open-source DSP engine first developed by Yahoo!. The implementation is evaluated with a top-N topic list application on Twitter streams, using clusters on Amazon Web Services. The results demonstrate a 75.79% improvement in stream processing throughput and a 294.47% improvement in cluster resource utilization.
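    One way to read the correlation-based balancing idea: move the heaviest operator off the most loaded node to the node whose utilization history correlates least with that operator's load, so their peaks are least likely to coincide. The sketch below illustrates this decision; the data structures are assumptions, not the thesis's actual design, and statistics.correlation requires Python 3.10+ and non-constant series.

        from statistics import correlation  # Python 3.10+

        def relocate_operator(nodes):
            # Source: the most loaded node; candidate: its heaviest operator.
            src = max(nodes, key=lambda n: n["cpu"])
            op = max(src["operators"], key=lambda o: o["load"])
            # Target: the node whose recent utilisation series correlates
            # least with the operator's load series.
            dst = min((n for n in nodes if n is not src),
                      key=lambda n: correlation(n["util_series"], op["load_series"]))
            return op, src, dst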

    A Self-managed Mesos Cluster for Data Analytics with QoS Guarantees

    This article describes the development of an automated configuration of a software platform for Data Analytics that supports horizontal and vertical elasticity to guarantee meeting a specific deadline. It specifies all the components, software dependencies, and configurations required to build up the cluster, and analyses the deployment times of different instances as well as the horizontal and vertical elasticity. The approach followed builds self-managed hybrid clusters that can deal with different workloads and network requirements. The article describes the structure of the recipes, points to public repositories where the code is available, and discusses the limitations of the approach as well as the results of several experiments. The work presented in this article has been partially funded by a research grant from the regional government of the Comunitat Valenciana (Spain), co-funded by the European Union ERDF funds (European Regional Development Fund) of the Comunitat Valenciana 2014-2020, with reference IDIFEDER/2018/032 (High-Performance Algorithms for the Modelling, Simulation and early Detection of diseases in Personalized Medicine). The authors would also like to thank the Spanish "Ministerio de Economía, Industria y Competitividad" for the project "BigCLOE" with reference number TIN2016-79951-R. López-Huguet, S.; Pérez-González, A.M.; Calatrava Arroyo, A.; Alfonso Laguna, C.D.; Caballer Fernández, M.; Moltó, G.; Blanquer Espert, I. (2019). A Self-managed Mesos Cluster for Data Analytics with QoS Guarantees. Future Generation Computer Systems, 96:449-461. https://doi.org/10.1016/j.future.2019.02.047
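    A deadline-driven elasticity rule of the kind described can be sketched by projecting the completion time at the current cluster size and resizing to meet the deadline. The perfect-speedup assumption below is an illustrative simplification, not the article's actual controller.

        import math

        def nodes_for_deadline(remaining_work, rate_per_node, seconds_left, max_nodes):
            # Node count needed to finish remaining_work before the deadline,
            # assuming (unrealistically) perfect speedup across nodes.
            needed = math.ceil(remaining_work / (rate_per_node * seconds_left))
            return max(1, min(needed, max_nodes))

    Comparing the returned count with the current cluster size yields the scale-out or scale-in decision; vertical elasticity would instead adjust rate_per_node by resizing individual instances.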