11 research outputs found

    A methodology for Spark parameter tuning

    Get PDF
    Spark has been established as an attractive platform for big data analysis, since it manages to hide most of the complexities related to parallelism, fault tolerance and cluster setup from developers. However, this comes at the expense of having over 150 configurable parameters, whose impact cannot be examined exhaustively due to the exponential number of their combinations. The default values allow developers to deploy their applications quickly, but leave open the question of whether performance can be improved. In this work, we investigate the impact of the most important tunable Spark parameters regarding shuffling, compression and serialization on application performance, through extensive experimentation using the Spark-enabled MareNostrum III (MN3) computing infrastructure of the Barcelona Supercomputing Center. The overarching aim is to guide developers on how to proceed with changes to the default values. We build upon our previous work, where we mapped our experience to a trial-and-error iterative improvement methodology for tuning parameters in arbitrary applications based on evidence from a very small number of experimental runs. The main contribution of this work is an alternative, systematic methodology for parameter tuning, which can be easily applied to any computing infrastructure and is shown to yield comparable, if not better, results than the initial one when applied to MN3; observed speedups in our validating case studies start from 20%. In addition, the new methodology can rely on runs using samples instead of runs on the complete datasets, which renders it significantly more practical.
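    The shuffle, compression and serialization knobs the study examines are ordinary Spark configuration properties. As a hedged illustration only (the values below are assumptions for a sketch, not the paper's recommended settings), a PySpark session with non-default choices for these parameter families might be built as follows:

        from pyspark.sql import SparkSession

        # Illustrative, non-default values for the parameter families the study
        # examines (shuffle, compression, serialization); values are placeholders
        # to be tuned per workload, not the paper's results.
        spark = (
            SparkSession.builder
            .appName("tuned-job")
            .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .config("spark.shuffle.compress", "true")
            .config("spark.io.compression.codec", "lz4")
            .config("spark.shuffle.file.buffer", "64k")
            .config("spark.reducer.maxSizeInFlight", "96m")
            .getOrCreate()
        )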

    Automatically configuring parallelism for hybrid layouts

    Get PDF
    Distributed processing frameworks process data in parallel by dividing it into multiple partitions, and each partition is processed in a separate task. The number of tasks is always determined by the total file size. However, this can lead to launching more tasks than needed in the case of hybrid layouts, because such layouts allow certain operations (i.e., projection, selection) to read less data. The over-provisioning of tasks may increase job execution time and induce significant waste of computing resources, the latter because each task introduces extra overhead (e.g., initialization, garbage collection). To allow a more efficient use of resources and to reduce job execution time, we propose a cost-based approach that decides the number of tasks based on the data actually being read. The proposed cost model can also be utilized in a multi-objective approach to decide both the number of tasks and the number of machines for execution.
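    As a rough, hypothetical sketch of the underlying idea (not the authors' actual cost model), the task count can be derived from an estimate of the bytes a hybrid layout will actually read rather than from the raw file size; all names and thresholds below are illustrative assumptions:

        import math

        def estimate_tasks(total_file_bytes, column_fraction, selectivity,
                           target_partition_bytes=128 * 1024 * 1024):
            """Sketch: size the task count from the data actually read after
            projection (column_fraction) and selection (selectivity), instead
            of from the total file size. All parameters are illustrative."""
            bytes_read = total_file_bytes * column_fraction * selectivity
            return max(1, math.ceil(bytes_read / target_partition_bytes))

        # Example: 100 GB file, query touches 20% of columns, filters keep 10%
        # of the rows -> far fewer tasks than ceil(100 GB / 128 MB).
        print(estimate_tasks(100 * 1024**3, 0.2, 0.1))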

    Performance modelling, analysis and prediction of Spark jobs in Hadoop cluster : a thesis by publications presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science, School of Mathematical & Computational Sciences, Massey University, Auckland, New Zealand

    Get PDF
    Big Data frameworks have received tremendous attention from industry and academic research over the past decade. Distributed computing frameworks such as Hadoop MapReduce and Spark offer an efficient solution for analysing large-scale datasets running on a Hadoop cluster. Spark has been established as one of the most popular large-scale data processing engines because of its speed, low-latency in-memory computation, and advanced analytics. Spark's computational performance heavily depends on the selection of suitable parameters, and configuring these parameters is a challenging task. Although Spark has default parameters and can deploy applications without much effort, a significant drawback of the default parameter selection is that it does not always yield the best cluster performance. A major limitation of existing models for Spark performance prediction is that they require either large input data or time-consuming system configuration. An analytical model could therefore be a better solution for performance prediction and for establishing appropriate job configurations. This thesis proposes two distinct parallelisation models for performance prediction: the 2D-Plate model and the Fully-Connected Node model. Both models were constructed based on serial boundaries for a certain arrangement of executors and size of the data. To evaluate cluster performance, various HiBench workloads were used, and the workloads' empirical data were fitted to the models for performance prediction analysis. The developed models were benchmarked against existing models such as Amdahl's law, Gustafson's law, ERNEST, and machine learning. Our experimental results show that the two proposed models can quickly and accurately predict performance in terms of runtime, and they can outperform the accuracy of machine learning models when extrapolating predictions.
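    For context, the classical baselines against which the thesis benchmarks its models predict speedup from a serial fraction of the work. A small sketch of those standard formulas (not the thesis's 2D-Plate or Fully-Connected Node models):

        def amdahl_speedup(serial_fraction, workers):
            # Amdahl's law: fixed problem size, the serial part bounds the speedup.
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

        def gustafson_speedup(serial_fraction, workers):
            # Gustafson's law: the problem size grows with the number of workers.
            return workers - serial_fraction * (workers - 1)

        # Example: a job that is 5% serial, run on 16 executors.
        print(amdahl_speedup(0.05, 16), gustafson_speedup(0.05, 16))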

    Hive on Spark and MapReduce: a methodology for parameter tuning

    Get PDF
    Project work presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. As the era of "big data" has arrived, more and more companies are starting to use distributed file systems to manage and process their data streams, such as the Hadoop Distributed File System (HDFS). This software library offers a way to store large files across multiple machines. Large datasets are processed using its inherent programming model, MapReduce. Apache Spark is a relatively new alternative to Hadoop MapReduce and claims to offer a performance boost of up to 10 times for certain applications, while maintaining automatic fault tolerance. To leverage the data warehouse capabilities of Hadoop, Apache Hive was introduced. It is a concept for Big Data analytics that works on top of Hadoop, provides data analysis tools and, most importantly, translates queries into MapReduce and Spark jobs. It thereby exploits the scalability of Hadoop and offers data exploration and mining capabilities to non-developers. However, it is difficult for users to utilize the full potential of the Apache Spark execution engine, which results in very long execution times. This project work therefore gives researchers and companies a tuning methodology that can significantly improve the execution time of queries. Applying this tuning methodology sped up a real-world batch-processing query by a factor of 5. Moreover, it gives insights into the underlying reasons for this large improvement using Apache Spark monitoring tools. The result can be helpful for many practitioners and researchers who would like to optimise the performance of Spark and MapReduce queries executed in Hive on top of an Apache Hadoop cluster.
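    The settings such a methodology iterates over are ordinary Hive/Spark session parameters. A minimal sketch, assuming a reachable HiveServer2 endpoint and the pyhive client, with placeholder values rather than the project's tuned configuration:

        from pyhive import hive  # assumes HiveServer2 is reachable

        conn = hive.connect(host="hiveserver2.example.org", port=10000)  # hypothetical host
        cur = conn.cursor()

        # Illustrative session-level knobs of the kind the methodology explores;
        # the values are placeholders, not the project's results.
        for setting in (
            "SET hive.execution.engine=spark",
            "SET spark.executor.memory=4g",
            "SET spark.executor.cores=4",
            "SET spark.executor.instances=8",
        ):
            cur.execute(setting)

        cur.execute("SELECT col, COUNT(*) FROM example_table GROUP BY col")  # hypothetical query
        print(cur.fetchall())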

    Escalonador distribuído de tarefas para o Apache Spark (Distributed task scheduler for Apache Spark)

    Get PDF
    Data-intensive frameworks play a featured role in industry and academia. They make computation distribution and parallelization transparent to data scientists, giving them greater productivity when developing data analytics applications such as machine learning and deep learning implementations. However, providing this abstraction layer raises several issues, such as obtaining good performance across many processing units, handling communication, and coping with storage devices. One of the most relevant challenges is task scheduling. Performing it in an efficient and scalable manner is very important, since it significantly impacts the performance and utilization of computing resources. This work proposes a hierarchical way of distributing task scheduling in data-intensive frameworks, introducing the scheduler assistant, whose role is to relieve the central scheduler by taking responsibility for a share of the scheduling load. We use Apache Spark to implement a version of the hierarchical distributed task scheduler and to perform comparative experiments that test the scalability of the proposal. Using 32 computational nodes, results show that our proof of concept maintains execution times similar to those of the original version of Apache Spark. Moreover, we show that by deploying scheduler assistants the system can make better use of the nodes' processors during the experiments. Finally, we expose a bottleneck caused by the centralization of scheduling decisions in the Apache Spark execution model.
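    The scheduler assistant is implemented inside Spark's scheduler, but the hierarchy it introduces can be illustrated with a toy sketch: a central scheduler only splits the task set into shares, and each assistant places the tasks of its share on its own workers. All names below are hypothetical:

        from collections import deque

        class SchedulerAssistant:
            """Toy assistant: owns a subset of workers and assigns its share of
            tasks to them, relieving the central scheduler of per-task decisions."""
            def __init__(self, workers):
                self.workers = deque(workers)
                self.assignments = []

            def schedule(self, tasks):
                for task in tasks:
                    self.assignments.append((task, self.workers[0]))
                    self.workers.rotate(-1)  # simple round-robin placement

        class CentralScheduler:
            """Toy central scheduler: only partitions work among assistants."""
            def __init__(self, assistants):
                self.assistants = assistants

            def submit(self, tasks):
                n = len(self.assistants)
                for i, assistant in enumerate(self.assistants):
                    assistant.schedule(tasks[i::n])

        assistants = [SchedulerAssistant([f"node{i}-core{c}" for c in range(4)])
                      for i in range(2)]
        CentralScheduler(assistants).submit([f"task-{t}" for t in range(12)])
        print(assistants[0].assignments[:3])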

    Co-Tuning of Cloud Infrastructure and Distributed Data Processing Platforms

    Full text link
    Distributed Data Processing Platforms (e.g., Hadoop, Spark, and Flink) are widely used to store and process data in a cloud environment. These platforms distribute the storage and processing of data among the computing nodes of a cloud. Efficient use of these platforms requires users to (i) configure the cloud, i.e., determine the number and type of computing nodes, and (ii) tune the configuration parameters (e.g., data replication factor) of the platform. However, both tasks require in-depth knowledge of the cloud infrastructure and of distributed data processing platforms. Therefore, in this paper, we first study the relationship between the configuration of the cloud and the configuration of distributed data processing platforms to determine how cloud configuration impacts platform configuration. After understanding these impacts, we propose a co-tuning approach for recommending an optimal joint configuration of the cloud and the distributed data processing platform. The proposed approach utilizes machine learning and optimization techniques to maximize the performance of the distributed data processing system deployed on the cloud. We evaluated our approach for Hadoop, Spark, and Flink in a cluster deployed on the OpenStack cloud, using three benchmarking workloads (WordCount, Sort, and K-means). Our results reveal that, in comparison to default settings, our co-tuning approach reduces execution time by 17.5% and monetary cost by 14.9% solely via configuration tuning.
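    The co-tuning idea amounts to searching a joint space of cloud-level and platform-level knobs under one performance model. A minimal sketch of that joint search, with a placeholder cost function standing in for the paper's learned model (all knob names, options, and the scoring function are assumptions):

        from itertools import product

        # Hypothetical joint search space: cloud-level and platform-level knobs.
        cloud = {"node_count": [4, 8, 16], "node_type": ["m1.large", "m1.xlarge"]}
        platform = {"dfs.replication": [1, 2, 3], "spark.executor.cores": [2, 4]}

        def predicted_runtime(cfg):
            """Placeholder cost model; a real co-tuner would learn this from
            benchmark runs (e.g., WordCount, Sort, K-means)."""
            return (1000.0 / (cfg["node_count"] * cfg["spark.executor.cores"])
                    + 5.0 * cfg["dfs.replication"])

        keys = list(cloud) + list(platform)
        space = [dict(zip(keys, vals))
                 for vals in product(*cloud.values(), *platform.values())]
        best = min(space, key=predicted_runtime)
        print(best, predicted_runtime(best))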

    Black or White? How to Develop an AutoTuner for Memory-based Analytics [Extended Version]

    Full text link
    There is a lot of interest today in building autonomous (or self-driving) data processing systems. An emerging school of thought is to leverage AI-driven "black box" algorithms for this purpose. In this paper, we present a contrarian view. We study the problem of autotuning the memory allocation for applications running on modern distributed data processing systems. For this problem, we show that an empirically driven "white-box" algorithm we have developed, called RelM, provides close-to-optimal tuning at a fraction of the overhead of state-of-the-art AI-driven "black box" algorithms, namely Bayesian Optimization (BO) and Deep Distributed Policy Gradient (DDPG). The main reason for RelM's superior performance is that memory management in modern memory-based data analytics systems is an interplay of algorithms at multiple levels: (i) at the resource-management level, across the containers allocated by resource managers like Kubernetes and YARN; (ii) at the container level, among the OS, pods, and processes such as the Java Virtual Machine (JVM); (iii) at the application level, for caching, aggregation, data shuffles, and application data structures; and (iv) at the JVM level, across pools such as the Young and Old Generation. RelM understands these interactions and uses them to build an analytical solution that autotunes the memory management knobs. In another contribution, called GBO, we use RelM's analytical models to speed up Bayesian Optimization. Through an evaluation based on Apache Spark, we show that RelM's recommendations are significantly better than what commonly used Spark deployments provide and are close to those obtained by brute-force exploration, while GBO provides optimality guarantees at a cost overhead that is higher but still significantly lower than that of the state-of-the-art AI-driven policies.
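    The memory levels the abstract enumerates correspond to concrete Spark, container and JVM settings. A hedged PySpark sketch of the kind of knobs such an autotuner adjusts (the values are illustrative placeholders, not RelM's output):

        from pyspark.sql import SparkSession

        spark = (
            SparkSession.builder
            # (i) resource-manager level: container memory requested from YARN/Kubernetes
            .config("spark.executor.memory", "6g")
            .config("spark.executor.memoryOverhead", "1g")
            # (iii) application level: unified region shared by caching and shuffles
            .config("spark.memory.fraction", "0.6")
            .config("spark.memory.storageFraction", "0.5")
            # (iv) JVM level: Young/Old generation sizing via extra JVM options
            .config("spark.executor.extraJavaOptions", "-XX:NewRatio=2")
            .getOrCreate()
        )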

    Automatic physical layer tuning of MapReduce-based query processing engines

    Get PDF
    Advisor: Eduardo Cunha de Almeida. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 29/06/2020. Includes references: p. 98-109. Area of concentration: Computer Science. Abstract: The increasing need to process large amounts of semi-structured and non-structured data has led to the development of specialized processing engines like MapReduce. MapReduce is a programming model designed to process large-scale semi-structured data in a distributed and parallel fashion. SQL-on-Hadoop systems are SQL-like interfaces built on top of MapReduce processing engines to query semi-structured data at large scale. However, the number of computing nodes, the number of systems in the software stack, and the controlling mechanisms provided by MapReduce engines increase the complexity and the operational costs of maintaining a large SQL-on-Hadoop cluster. Increasing the performance of such engines is a key goal that can be achieved by delegating the right amount of physical resources to their jobs. Yet, regular users and even expert administrators struggle to understand and tune MapReduce jobs to achieve good performance. This skill gap has given rise to a successful line of research on automatically tuning MapReduce parameters, originating several tuning advisors. Yet, the problem of automatically tuning SQL-on-Hadoop queries remains largely unexplored today, as the current approach of applying MapReduce tuning advisors directly to SQL-on-Hadoop queries entails a number of problems. For instance, the Hive SQL-on-Hadoop engine compiles HiveQL queries into a workflow of MapReduce jobs, and it would be straightforward to assume that by tuning the underlying Hadoop processing engine, HiveQL queries would benefit as well. However, this assumption does not hold when existing tuning advisors are naively applied to HiveQL queries, due to the design choices of Hive, Hadoop, and the tuning advisors. This thesis addresses the question of how to properly tune SQL-on-Hadoop queries. By "properly" we mean that, when tuning SQL-on-Hadoop queries, the generation of tuning setups has to consider several characteristics that are only present in jobs generated by SQL-on-Hadoop systems. These characteristics include: (i) at the level of individual queries, all MapReduce jobs that constitute a query plan are executed with identical configuration settings; (ii) although profiling and search heuristics are performed on a per-job basis to generate tuning setups, only one tuning setup is applied to the query and the remaining tuning setups are simply discarded; (iii) Hadoop tuning advisors treat the MapReduce functions as black boxes and make simplifying modeling assumptions that may hold for classical MapReduce jobs (Sort, Grep) but are not true for SQL-like queries such as HiveQL, where jobs contain multiple relational algebra operators like joins and aggregations. We extended the Hive query processor to tune SQL-on-Hadoop queries. This extension comprises an approach called non-uniform tuning that gives SQL-on-Hadoop systems fine-grained control over query tuning, where each MapReduce job receives a specialized tuning setup. We present a conceptual model, called code signature, that uses static information available before execution to match jobs with similar resource consumption patterns. We also present a tuning cache that stores tuning setups generated by third-party tuning advisors and recycles them between jobs that have similar resource consumption. The extension works as a single solution for the automatic tuning of SQL-on-Hadoop queries. To validate our solution, we conducted an experimental study focused on Hive running over Hadoop, because (i) Hive is a good representative of native SQL-on-Hadoop systems (as System-R was for relational database systems); (ii) both Hive and Hadoop are highly popular for analytical processing; and (iii) Hadoop parameter tuning has been studied extensively in recent years. To populate the tuning cache, we employed Starfish, the first cost-based optimizer for finding (near-)optimal configuration parameter settings and the only tuning advisor publicly available for academic research purposes. In our experiments, we show that queries optimized with our tuning approach always achieved positive speedups of up to 25%, in contrast to the current approach, which degraded performance on several occasions. Specifically, the current tuning approach can cause variations in execution time between -171% and 27% relative to the default configuration. Most importantly, our tuning method leads to considerably better resource utilization, decreasing CPU usage and memory paging by over 40%, and it reduced the total amount of data written to disk by 5×. Our tuning approach uses a tuning cache to avoid re-profiling similar jobs; the cache reduced profiling by 50% for the TPC-H workload, even enabling partial upfront tuning of ad-hoc queries before their execution. Keywords: Physical-layer tuning. MapReduce query processing. SQL-On-Hadoop.
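    The code signature and tuning cache can be illustrated with a toy sketch: jobs are keyed by a signature built from static, pre-execution features, and a cached tuning setup is reused whenever the signature repeats (all feature names and setups are hypothetical; the thesis's signature and advisor are richer):

        def code_signature(job):
            """Toy signature from static, pre-execution job features."""
            return (tuple(sorted(job["operators"])), job["input_format"],
                    round(job["estimated_input_gb"], 1))

        class TuningCache:
            def __init__(self, advisor):
                self.advisor = advisor  # e.g., a cost-based tuning advisor
                self.cache = {}

            def setup_for(self, job):
                sig = code_signature(job)
                if sig not in self.cache:  # profile and search only once per signature
                    self.cache[sig] = self.advisor(job)
                return self.cache[sig]

        # Hypothetical advisor standing in for a tool such as Starfish.
        advisor = lambda job: {"mapreduce.map.memory.mb": 2048,
                               "mapreduce.reduce.memory.mb": 4096}
        cache = TuningCache(advisor)
        job = {"operators": ["scan", "filter", "join"], "input_format": "orc",
               "estimated_input_gb": 12.0}
        print(cache.setup_for(job))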

    Técnicas avanzadas de predicción para big data en el contexto de smart cities (Advanced prediction techniques for big data in the context of smart cities)

    Get PDF
    Doctoral Programme in Biotechnology, Engineering and Chemical Technology. Research line: Computer Engineering. Programme key: DBI. Line code: 19. Every day, more and more information is collected from every sphere of our lives. Steps per minute, pollution levels in the world's major cities, or electricity consumption measured at regular intervals are just a few examples. It is in this setting that Smart Cities, or connected cities, arise: all possible information is gathered from the IoT devices spread across the city in the hope of discovering knowledge in the data and even predicting certain future behaviours. But the time series being created are beginning to exceed the sizes considered so far and can therefore be regarded as Big Data. The machine learning and data mining techniques that until now offered good results could not manage such quantities of information and needed to be revisited. This research work therefore proposes a nearest-neighbour-based prediction algorithm for forecasting Big Data time series. Relying on new data analysis frameworks such as Apache Spark, with distributed computing as their flagship feature, two algorithms are proposed: one based on kWNN for the analysis and prediction of univariate time series, and MV-kWNN, its multivariate version. This work details the steps taken to adapt the algorithm to distributed computing and the results obtained after forecasting the electricity consumption of three buildings of a public university. It also presents the improvements introduced to the algorithm to optimally select its required parameters, namely the number of past values (w) to use in order to predict the next h values, and the number of nearest neighbours k to consider for the prediction. Different prediction horizon sizes h are also evaluated as input to the algorithm. The validity of these improvements is verified by forecasting a time series twice as large as the first one, in this case the electricity demand of Spain collected over 9 years. The low error rates obtained demonstrate the suitability of the algorithm, and its comparison with other methods such as deep learning or regression trees confirms this. Several tests of the algorithm's scalability on clusters with different configurations show how important it is to choose appropriately parameters such as the number of cores per machine, the number of partitions into which the dataset is divided, and the number of machines in the cluster. Finally, a new algorithm is proposed to take into account not just one variable but several exogenous series that could improve the final prediction. Through correlation-based analyses, the minimum degree of correlation the series must exhibit in order to improve the prediction is defined. Experiments on two real series, the electricity demand of Spain and the electricity price over the same period, are carried out, again achieving low error rates. Comparison with other multivariate methods, such as neural networks or random forests, places the proposed method first, ahead of them. A final experiment to confirm the algorithm's suitability for Big Data time series is performed, reporting execution times with the series scaled up to 200 times their original size. Universidad Pablo de Olavide de Sevilla. Departamento de Deporte e Informática.
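    The forecasting idea behind kWNN can be sketched compactly: find the k historical windows of length w most similar to the most recent window and average their continuations to predict the next h values. A plain NumPy illustration, not the distributed Spark implementation:

        import numpy as np

        def kwnn_forecast(series, w, h, k):
            """Predict the next h values by averaging the continuations of the
            k past windows of length w closest to the latest window."""
            series = np.asarray(series, dtype=float)
            query = series[-w:]
            starts = range(len(series) - w - h + 1)
            windows = np.array([series[s:s + w] for s in starts])
            futures = np.array([series[s + w:s + w + h] for s in starts])
            dists = np.linalg.norm(windows - query, axis=1)
            nearest = np.argsort(dists)[:k]
            return futures[nearest].mean(axis=0)

        # Example: synthetic daily-cycle demand, forecast the next 4 points
        # from windows of 24 values using 5 neighbours.
        t = np.arange(500)
        demand = 100 + 10 * np.sin(2 * np.pi * t / 24) + np.random.normal(0, 1, t.size)
        print(kwnn_forecast(demand, w=24, h=4, k=5))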