Performance Characterization of In-Memory Data Analytics on a Modern Cloud Server
In the last decade, data analytics has rapidly progressed from traditional disk-based processing to modern in-memory processing. However, little effort has been devoted to enhancing performance at the micro-architecture level. This paper characterizes the performance of in-memory data analytics using the Apache Spark framework. We use a single-node NUMA machine and identify the bottlenecks hampering the scalability of workloads. We also quantify the inefficiencies at the micro-architecture level for various data analysis workloads. Through empirical evaluation, we show that Spark workloads do not scale linearly beyond twelve threads, due to work time inflation and thread-level load imbalance. Further, at the micro-architecture level, we observe memory-bound latency to be the major cause of work time inflation. (Accepted to the 5th IEEE International Conference on Big Data and Cloud Computing, BDCloud 2015.)
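For illustration, the scaling behavior described above can be probed with a minimal thread-scaling harness. The sketch below assumes a local PySpark installation; the workload and thread counts are illustrative stand-ins for the paper's benchmark suite, not the authors' setup.

```python
# Minimal sketch of a thread-scaling experiment: time the same workload at
# increasing local core counts and report speedup over one thread.
import time
from pyspark.sql import SparkSession

def run_workload(threads: int) -> float:
    """Run a simple aggregation with `threads` local cores; return wall time."""
    spark = (SparkSession.builder
             .master(f"local[{threads}]")
             .appName(f"scaling-{threads}")
             .getOrCreate())
    rdd = spark.sparkContext.parallelize(range(10_000_000), numSlices=threads * 4)
    start = time.perf_counter()
    rdd.map(lambda x: (x % 1024, x)).reduceByKey(lambda a, b: a + b).count()
    elapsed = time.perf_counter() - start
    spark.stop()
    return elapsed

if __name__ == "__main__":
    base = run_workload(1)
    for n in (2, 4, 8, 12, 16, 24):
        t = run_workload(n)
        # Speedup well below n hints at work time inflation or load imbalance.
        print(f"{n:>2} threads: {t:6.2f}s  speedup {base / t:4.1f}x")
```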
Design and evaluation of a cloud native data analysis pipeline for cyber physical production systems
Since the birth of the World Wide Web in 1991, the rate of data growth has been accelerating, reaching record levels in the last couple of years. Big companies tackled this growth with expensive and enormous data centres to process this data and extract value from it. Driven by social media, the Internet of Things (IoT), new business processes, monitoring, and multimedia, the capacity of those data centres started to become a problem and required continuous and expensive expansion. Thus, Big Data was something that only a few were able to access. This changed quickly when Amazon launched Amazon Web Services (AWS) around 15 years ago, giving rise to the public cloud. At that time the capabilities were still new and limited, but 10 years later the cloud was a whole new business that changed the Big Data landscape forever. This not only commoditised computing power but was accompanied by a pricing model that gave medium and small players the possibility to access it. In consequence, new problems arose regarding the nature of these distributed systems and the software architectures required for proper data processing. The present work analyses typical Big Data workloads and proposes an architecture for a cloud-native data analysis pipeline. Lastly, it provides a chapter on tools and services that can be used in the architecture, taking advantage of their open-source nature and cloud pricing models.
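As a rough illustration of the pipeline style such an architecture follows, the sketch below shows a decoupled ingest/transform/sink structure. The stage names and the in-memory queue are hypothetical stand-ins for the managed services (object storage, a message broker, a processing framework) a real cloud-native deployment would use.

```python
# A minimal sketch of the decoupled ingest -> transform -> sink structure of a
# cloud-native data analysis pipeline; all names here are illustrative.
import json
import queue

events = queue.Queue()  # stand-in for a managed message broker

def ingest(raw_records):
    """Ingest stage: validate raw records and publish them downstream."""
    for raw in raw_records:
        record = json.loads(raw)
        if "sensor_id" in record:      # drop malformed events early
            events.put(record)

def transform_and_sink(sink):
    """Transform stage: enrich each event, then hand it to a pluggable sink."""
    while not events.empty():
        record = events.get()
        record["temp_f"] = record["temp_c"] * 9 / 5 + 32
        sink(record)

raw = ['{"sensor_id": 1, "temp_c": 21.5}', '{"sensor_id": 2, "temp_c": 19.0}']
ingest(raw)
transform_and_sink(lambda r: print(r))   # the sink could be S3, a DB, etc.
```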
Artificial intelligence driven anomaly detection for big data systems
The main goal of this thesis is to contribute to research on automated performance anomaly detection and interference prediction by implementing Artificial Intelligence (AI) solutions for complex distributed systems, especially Big Data platforms within cloud computing environments. Late detection and manual resolution of performance anomalies and system interference in Big Data systems may lead to performance violations and financial penalties. Motivated by this issue, we propose AI-based methodologies for anomaly detection and interference prediction tailored to Big Data and containerized batch platforms, to better analyze system performance and effectively utilize computing resources within cloud environments. Precise and efficient performance management methods of this kind are key to handling performance anomalies and interference and to improving the efficiency of data center resources.
The first part of this thesis contributes to performance anomaly detection for in-memory Big Data platforms. We examine the performance of Big Data platforms and justify our choice of the in-memory Apache Spark platform. An artificial neural network-driven methodology is proposed to detect and classify performance anomalies for batch workloads, based on RDD characteristics and operating system monitoring metrics. Our method is evaluated against other popular machine learning (ML) algorithms on four different monitoring datasets. The results show that our proposed method outperforms the other ML methods, typically achieving 98–99% F-scores. Moreover, we show that a random start instant, a random duration, and overlapped anomalies do not significantly impact the performance of our proposed methodology.
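The following sketch gives the flavor of such a neural-network anomaly classifier trained on monitoring metrics. The feature names and synthetic data are hypothetical, and scikit-learn's MLPClassifier stands in for the thesis's network; this is an illustration, not the authors' implementation.

```python
# A minimal sketch of a neural-network classifier over OS monitoring metrics.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Hypothetical features: CPU%, memory%, disk I/O, network I/O.
normal = rng.normal([50, 60, 20, 30], 5, size=(1000, 4))
anomalous = rng.normal([90, 95, 70, 10], 5, size=(100, 4))   # e.g., a CPU hog
X = np.vstack([normal, anomalous])
y = np.array([0] * 1000 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("F-score:", f1_score(y_te, clf.predict(X_te)))
```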
The second contribution addresses the challenge of anomaly identification within an in-memory streaming Big Data platform by investigating agile hybrid learning techniques. We develop TRACK (neural neTwoRk Anomaly deteCtion in sparK) and TRACK-Plus, two methods to efficiently train a class of machine learning models for performance anomaly detection using a fixed number of experiments. Our model revolves around using artificial neural networks with Bayesian Optimization (BO) to find the optimal training dataset size and configuration parameters needed to train the anomaly detection model to high accuracy. The objective is to accelerate the search for the training dataset size, optimize neural network configurations, and improve the performance of anomaly classification. A validation based on several datasets from a real Apache Spark Streaming system demonstrates that the proposed methodology can efficiently identify performance anomalies, near-optimal configuration parameters, and a near-optimal training dataset size, while reducing the number of experiments by up to 75% compared with naïve anomaly detection training.
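A Bayesian-Optimization search of this kind can be sketched as follows: treat training-set size as a black-box input, fit a Gaussian-process surrogate to the observed errors, and evaluate the point that minimizes a lower-confidence-bound acquisition. The `objective` stub and all constants below are illustrative assumptions, not the TRACK implementation.

```python
# A minimal sketch of a BO loop over training-dataset size.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(train_size: float) -> float:
    """Stand-in for training the detector and measuring validation error."""
    return (train_size - 0.62) ** 2   # hypothetical error curve

# Seed with a few evaluations, then iterate surrogate -> acquisition -> eval.
X = np.array([[0.1], [0.5], [0.9]])
y = np.array([objective(x[0]) for x in X])
candidates = np.linspace(0.05, 0.95, 200).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor().fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmin(mean - 1.96 * std)]   # LCB acquisition
    X = np.vstack([X, [nxt]])
    y = np.append(y, objective(nxt[0]))

print("best training-set fraction:", X[np.argmin(y)][0])
```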
The last contribution overcomes the challenges of predicting the completion time of containerized batch jobs and proactively avoiding performance interference by introducing an automated prediction solution that estimates interference among colocated batch jobs within the same computing environment. An AI-driven model is implemented to predict the interference among batch jobs before it occurs within the system. Our interference detection model can estimate and alleviate the task slowdown caused by interference, assisting system operators in making accurate job placement decisions. Our model is agnostic to the business logic internal to each job. Instead, it is learned from system performance data by applying artificial neural networks to predict the completion time of batch jobs within cloud environments. We compare our model with three baseline models (a queueing-theoretic model, operational analysis, and an empirical method) on historical measurements of job completion time and CPU run-queue size (i.e., the number of active threads in the system). The proposed model captures multithreading, operating system scheduling, sleeping time, and job priorities. A validation based on 4,500 experiments using the DaCapo benchmark suite confirms the predictive efficiency and capabilities of the proposed model, achieving up to 10% MAPE compared with the other models.
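As an illustration of learning a completion-time predictor from system measurements, the sketch below regresses synthetic completion times on run-queue size and colocation count. The features, synthetic ground truth, and MLPRegressor are assumptions made for demonstration, not the thesis's model.

```python
# A minimal sketch of a learned completion-time predictor.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)
# Features: CPU run-queue size (active threads) and number of colocated jobs.
run_queue = rng.integers(1, 32, size=500)
colocated = rng.integers(0, 8, size=500)
X = np.column_stack([run_queue, colocated])
# Synthetic ground truth: completion time grows with contention.
y = 10 + 0.8 * run_queue + 2.5 * colocated + rng.normal(0, 1, 500)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=1)
model.fit(X[:400], y[:400])
mape = mean_absolute_percentage_error(y[400:], model.predict(X[400:]))
print(f"MAPE: {mape:.1%}")
```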
A Survey on Automatic Parameter Tuning for Big Data Processing Systems
Big data processing systems (e.g., Hadoop, Spark, Storm) contain a vast number of configuration parameters controlling parallelism, I/O behavior, memory settings, and compression. Improper parameter settings can cause significant performance degradation and stability issues. However, regular users and even expert administrators struggle to understand and tune them to achieve good performance. We investigate existing approaches to parameter tuning for both batch and stream data processing systems and classify them into six categories: rule-based, cost modeling, simulation-based, experiment-driven, machine learning, and adaptive tuning. We summarize the pros and cons of each approach and raise some open research problems for automatic parameter tuning.
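To make the experiment-driven category concrete, the sketch below times a workload under each candidate configuration and keeps the fastest. The grid uses real Spark configuration keys, but the `run_job` stub is a hypothetical stand-in for actually submitting a job; this is one tuning style from the survey, not its method.

```python
# A minimal sketch of experiment-driven parameter tuning: exhaustive search
# over a small configuration grid, keeping the fastest observed run.
import itertools
import random
import time

def run_job(conf: dict) -> float:
    """Stub standing in for launching a Spark job with `conf` and timing it."""
    time.sleep(0.01)                       # pretend to run the workload
    return random.uniform(30, 120)         # pretend wall-clock seconds

grid = {
    "spark.executor.memory": ["2g", "4g", "8g"],
    "spark.sql.shuffle.partitions": [64, 200, 400],
    "spark.io.compression.codec": ["lz4", "zstd"],
}
best_conf, best_time = None, float("inf")
for values in itertools.product(*grid.values()):
    conf = dict(zip(grid.keys(), values))
    t = run_job(conf)
    if t < best_time:
        best_conf, best_time = conf, t
print(best_conf, f"{best_time:.1f}s")
```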
Spark versus Flink: Understanding Performance in Big Data Analytics Frameworks
Big Data analytics has recently gained increasing popularity as a tool to process large amounts of data on demand. Spark and Flink are two Apache-hosted data analytics frameworks that facilitate the development of multi-step data pipelines using directed acyclic graph (DAG) patterns. Making the most of these frameworks is challenging because efficient executions strongly rely on complex parameter configurations and on an in-depth understanding of the underlying architectural choices. Although extensive research has been devoted to improving and evaluating the performance of such analytics frameworks, most studies benchmark the platforms against Hadoop as a baseline, a rather unfair comparison considering the fundamentally different design principles. This paper aims to bring some justice in this respect by directly evaluating the performance of Spark and Flink. Our goal is to identify and explain the impact of the different architectural choices and parameter configurations on the perceived end-to-end performance. To this end, we develop a methodology for correlating the parameter settings and the operator execution plan with the resource usage, and we use it to dissect the performance of Spark and Flink with several representative batch and iterative workloads on up to 100 nodes. Our key finding is that neither framework outperforms the other for all data types, sizes, and job patterns. This paper performs a fine-grained characterization of the cases in which each framework is superior and highlights how this performance correlates with operators, resource usage, and the specifics of the internal framework design.
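A measurement methodology of this kind boils down to sampling resource usage while a workload runs so the samples can later be correlated with the configuration in force. The sketch below uses psutil and a toy workload as illustrative assumptions; the paper's actual instrumentation is not specified here.

```python
# A minimal sketch of profiling resource usage alongside a running workload.
import threading
import time
import psutil

def sample(samples: list, stop: threading.Event, period: float = 0.1):
    """Record (cpu%, mem%) tuples until asked to stop."""
    while not stop.is_set():
        samples.append((psutil.cpu_percent(interval=None),
                        psutil.virtual_memory().percent))
        time.sleep(period)

def run_with_profiling(workload):
    samples, stop = [], threading.Event()
    t = threading.Thread(target=sample, args=(samples, stop))
    t.start()
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    stop.set()
    t.join()
    return elapsed, samples

elapsed, samples = run_with_profiling(lambda: sum(i * i for i in range(10_000_000)))
print(f"{elapsed:.2f}s, {len(samples)} resource samples")
```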
Evaluation and optimization of Big Data Processing on High Performance Computing Systems
Nowadays, Big Data technologies are used by many organizations to extract valuable information from large-scale datasets. As the size of these datasets increases, meeting the huge performance requirements of data processing applications becomes more challenging. This Thesis focuses on evaluating and optimizing these applications by proposing two new tools, namely BDEv and Flame-MR. On the one hand, BDEv allows users to thoroughly assess the behavior of widespread Big Data processing frameworks such as Hadoop, Spark and Flink. It manages the configuration and deployment of the frameworks, generating the input datasets and launching the workloads specified by the user. During each workload, it automatically extracts several evaluation metrics that include performance, resource utilization, energy efficiency and microarchitectural behavior. On the other hand, Flame-MR optimizes the performance of existing Hadoop MapReduce applications. Its overall design is based on an event-driven architecture that improves the efficiency of the system resources by pipelining data movements and computation. Moreover, it avoids redundant memory copies present in Hadoop, while also using efficient sort and merge algorithms for data processing. Flame-MR replaces the underlying MapReduce data processing engine in a transparent way, and thus the source code of existing applications does not need to be modified. The performance benefits provided by Flame-MR have been thoroughly evaluated on cluster and cloud systems by using both standard benchmarks and real-world applications, showing reductions in execution time that range from 40% to 90%. This Thesis provides Big Data users with powerful tools to analyze and understand the behavior of data processing frameworks and reduce the execution time of applications without requiring expert knowledge.
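The evaluation loop such a tool automates can be sketched in a few lines: for each framework and each selected workload, deploy, run, and record metrics. The framework and workload names below match those the abstract mentions, but the runner stub and metric set are illustrative assumptions, not BDEv's implementation.

```python
# A minimal sketch of a BDEv-style evaluation loop over frameworks/workloads.
import time

FRAMEWORKS = ["hadoop", "spark", "flink"]
WORKLOADS = ["wordcount", "terasort", "pagerank"]

def run(framework: str, workload: str) -> dict:
    """Launch one workload (stubbed) and collect basic metrics; a real harness
    would invoke the framework's job-submission command here."""
    start = time.perf_counter()
    time.sleep(0.05)   # stand-in for the actual job execution
    return {"framework": framework, "workload": workload,
            "seconds": time.perf_counter() - start}

results = [run(f, w) for f in FRAMEWORKS for w in WORKLOADS]
for r in results:
    print(r)
```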
Resource optimization of edge servers dealing with priority-based workloads by utilizing service level objective-aware virtual rebalancing
IoT enables communication between sensor/actuator devices and the cloud, but slow networks between the edge and the cloud hinder the adoption of real-time analytics. VRebalance addresses the performance of priority-based workloads for stream processing at the edge. It uses Bayesian Optimization (BO) to prioritize workloads and find optimal resource configurations for efficient resource management. The Apache Storm platform was used with the RIoTBench IoT benchmark suite for real-time stream processing to evaluate VRebalance. The study shows that VRebalance is more effective than traditional methods, meeting service level objective (SLO) targets despite system changes. VRebalance decreased SLO violation rates by almost 30% for static priority-based workloads and by 52.2% for dynamic priority-based workloads compared with a hill-climbing algorithm, and by 66.1% compared with Apache Storm's default allocation.
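The SLO-violation metric reported above reduces to the fraction of processed events whose latency exceeds the target, as in the sketch below; the latencies and the 250 ms target are illustrative assumptions.

```python
# A minimal sketch of computing an SLO violation rate over event latencies.
def slo_violation_rate(latencies_ms, slo_ms: float) -> float:
    violations = sum(1 for lat in latencies_ms if lat > slo_ms)
    return violations / len(latencies_ms)

observed = [120, 180, 300, 90, 260, 240, 410, 150]   # per-event latencies (ms)
print(f"SLO violation rate: {slo_violation_rate(observed, slo_ms=250):.0%}")
```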