
    A Survey on Vertical and Horizontal Scaling Platforms for Big Data Analytics

    There is no doubt that we are entering the era of big data. The challenge is how to store, search, and analyze the huge amount of data being generated every second. One of the main obstacles for big data researchers is finding the appropriate big data analysis platform. The basic aim of this work is to present a complete investigation of the available platforms for big data analysis in terms of vertical and horizontal scaling, together with their compatible frameworks and applications in detail. Finally, this article outlines some research trends and other open issues in big data analytics.

    Prediction based scaling in a distributed stream processing cluster

    Proliferation of IoT sensors and applications has enabled us to monitor and analyze scientific and social phenomena with continuously arriving voluminous data. To provide real-time processing capabilities over streaming data, distributed stream processing engines (DSPEs) such as Apache Storm and Apache Flink have been widely deployed. These frameworks support computations over large-scale, high-frequency streaming data. However, the current on-demand auto-scaling features in these systems may result in inefficient resource utilization, which is closely related to cost effectiveness in popular cloud-based computing environments. We propose ARSTREAM, an auto-scaling computing environment that manages fluctuating throughputs for data from sensor networks while ensuring efficient resource utilization. We have built an Artificial Neural Network model for predicting data processing queues; this model captures non-linear relationships between data arrival rates, resource utilization, and the size of the data processing queue. If a bottleneck is predicted, ARSTREAM scales out the current cluster automatically for current jobs without halting them at the user level. In addition, ARSTREAM incorporates threshold-based re-balancing to minimize data loss during extreme peak traffic that could not be predicted by our model. Our empirical benchmarks show that ARSTREAM forecasts data processing queue sizes with an RMSE of 0.0429 when tested on real-time data.
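    The abstract names the model's inputs and output but not its architecture. Below is a minimal sketch of such a predictor and a scale-out trigger, assuming a small feed-forward Keras network and an illustrative queue threshold; neither the layer sizes nor the threshold come from the paper.

```python
import numpy as np
from tensorflow import keras

def build_queue_predictor(n_features: int = 3) -> keras.Model:
    """Small feed-forward net mapping (arrival rate, CPU utilization,
    memory utilization) to a predicted processing-queue size."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(32, activation="relu"),   # non-linear hidden layers
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1),                       # queue size (regression)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Hypothetical scale-out trigger: if the predicted queue exceeds a
# threshold, request extra workers before the backlog materializes.
QUEUE_THRESHOLD = 1000.0  # illustrative value, not from the paper

def should_scale_out(model: keras.Model, arrival_rate: float,
                     cpu_util: float, mem_util: float) -> bool:
    x = np.array([[arrival_rate, cpu_util, mem_util]])
    predicted_queue = float(model.predict(x, verbose=0)[0, 0])
    return predicted_queue > QUEUE_THRESHOLD
```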

    Kafka-ML: Connecting the data stream with ML/AI frameworks

    Machine Learning (ML) and Artificial Intelligence (AI) depend on data sources to train, improve, and make predictions through their algorithms. With the digital revolution and current paradigms like the Internet of Things, this information is turning from static data into continuous data streams. However, most of the ML/AI frameworks in use today are not fully prepared for this revolution. In this paper, we propose Kafka-ML, a novel open-source framework that enables the management of ML/AI pipelines through data streams. Kafka-ML provides an accessible and user-friendly Web user interface where users can easily define ML models and then train, evaluate, and deploy them for inference. Kafka-ML itself and the components it deploys are fully managed through containerization technologies, which ensure their portability and easy distribution as well as features such as fault tolerance and high availability. Finally, a novel approach has been introduced to manage and reuse data streams, which may eliminate the need for data storage or file systems.

    This work is funded by the Spanish projects RT2018-099777-B-100 (“rFOG: Improving Latency and Reliability of Offloaded Computation to the FOG for Critical Services”), PY20_00788 (“IntegraDos: Providing Real-Time Services for the Internet of Things through Cloud Sensor Integration”) and UMA18FEDERJA-215 (“Advanced Monitoring System Based on Deep Learning Services in Fog”). Cristian Martín was supported by a postdoc grant from the Spanish project TIC-1572 (“MIsTIca: Critical Infrastructures Monitoring based on Wireless Technologies”), and his research stay at IHP was funded through a mobility grant from the University of Malaga and IHP funding. Funding for open access charge: Universidad de Malaga/CBUA. We are grateful for the work of all the reviewers who have greatly contributed to improving the quality of this article. We would like to express our gratitude to Kai Wähner for his inspiration and ideas through numerous articles and GitHub repositories on Kafka and its combination with TensorFlow.
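    As a rough illustration of the pattern the paper describes (inference served directly from a data stream rather than from stored files), the sketch below consumes feature records from one Kafka topic, applies a trained TensorFlow model, and publishes predictions to another topic. The broker address, topic names, message schema, and model file are all hypothetical; Kafka-ML's actual pipelines are defined through its Web UI and deployed as containers.

```python
import json

import numpy as np
from kafka import KafkaConsumer, KafkaProducer
from tensorflow import keras

# Assumed pre-trained model file; Kafka-ML users would define and train
# the model through the framework's Web UI instead.
model = keras.models.load_model("model.h5")

consumer = KafkaConsumer(
    "input-features",                       # hypothetical input topic
    bootstrap_servers="localhost:9092",     # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Consume feature records as they arrive, run inference, and publish
# each prediction back onto a stream rather than into storage.
for record in consumer:
    features = np.array([record.value["features"]])  # one sample per message
    prediction = model.predict(features, verbose=0)[0].tolist()
    producer.send("predictions", {"prediction": prediction})
```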

    When Two Choices Are not Enough: Balancing at Scale in Distributed Stream Processing

    Carefully balancing load in distributed stream processing systems has a fundamental impact on execution latency and throughput. Load balancing is challenging because real-world workloads are skewed: some tuples in the stream are associated with keys which are significantly more frequent than others. Skew is remarkably more problematic in large deployments: more workers implies fewer keys per worker, so it becomes harder to "average out" the cost of hot keys with cold keys. We propose a novel load balancing technique that uses a heavy hitter algorithm to efficiently identify the hottest keys in the stream. These hot keys are assigned to d ≥ 2 choices to ensure a balanced load, where d is tuned automatically to minimize the memory and computation cost of operator replication. The technique works online and does not require the use of routing tables. Our extensive evaluation shows that our technique can balance real-world workloads on large deployments, and improve throughput and latency by 150% and 60% respectively over the previous state-of-the-art when deployed on Apache Storm. Comment: 12 pages, 14 figures; this paper is accepted and will be published at ICDE 201
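    A minimal sketch of the balancing idea as the abstract describes it: track heavy hitters with bounded memory (here a simple Space-Saving counter), spread only the hot keys across d ≥ 2 hashed worker choices by picking the least loaded, and route cold keys with a single hash so no routing table is needed. The capacity, hot-key threshold, and fixed d below are illustrative placeholders, not the paper's automatically tuned values.

```python
import hashlib
from collections import Counter

class SpaceSaving:
    """Approximate heavy-hitter tracking with bounded memory."""
    def __init__(self, capacity: int = 64):
        self.capacity = capacity
        self.counts = Counter()

    def offer(self, key) -> None:
        if key in self.counts or len(self.counts) < self.capacity:
            self.counts[key] += 1
        else:  # evict the current minimum; new key inherits its count (+1)
            victim, min_count = min(self.counts.items(), key=lambda kv: kv[1])
            del self.counts[victim]
            self.counts[key] = min_count + 1

def _hash(key, seed: int, n_workers: int) -> int:
    h = hashlib.blake2b(f"{seed}:{key}".encode(), digest_size=8)
    return int.from_bytes(h.digest(), "big") % n_workers

def route(key, workers_load, hh: SpaceSaving,
          hot_threshold: int = 1000, d: int = 2) -> int:
    """Return the worker index for this tuple's key."""
    hh.offer(key)
    n = len(workers_load)
    if hh.counts.get(key, 0) >= hot_threshold:
        # Hot key: pick the least-loaded of d candidate workers.
        candidates = [_hash(key, seed, n) for seed in range(d)]
        target = min(candidates, key=lambda w: workers_load[w])
    else:
        target = _hash(key, 0, n)  # cold key: single deterministic choice
    workers_load[target] += 1
    return target
```

    Because the candidate workers are derived by hashing the key with different seeds, any node can recompute the routing decision locally, which is what lets the scheme avoid routing tables.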

    Technology Selection for Big Data and Analytical Applications

    The term Big Data has become pervasive in recent years, as smartphones, televisions, washing machines, refrigerators, smart meters, diverse sensors, eyeglasses, and even clothes connect to the Internet. However, the data they generate is essentially worthless without appropriate data analytics that utilizes information retrieval, statistics, and various other techniques. As Big Data is commonly too big for a single person or institution to investigate, appropriate tools are being used that go far beyond a traditional data warehouse and that have been developed in recent years. Unfortunately, there is no single solution but a large variety of different tools, each with distinct functionalities, properties, and characteristics. Especially small and medium-sized companies have a hard time keeping track, as this requires time, skills, money, and specific knowledge that, in combination, result in high entrance barriers for Big Data utilization. This paper aims to reduce these barriers by explaining and structuring different classes of technologies and the basic criteria for proper technology selection. It proposes a framework that guides especially small and mid-sized companies through a suitable selection process that can serve as a basis for further advances.