
    Big Data Testing Techniques: Taxonomy, Challenges and Future Trends

    Big Data is transforming many industrial domains by providing decision support through the analysis of large data volumes. Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining performance and data quality. However, the diversity and complexity of the data make testing Big Data challenging. Although numerous research efforts deal with Big Data testing, a comprehensive review addressing its testing techniques and challenges has not yet been available. We therefore systematically reviewed the evidence on Big Data testing techniques published between 2010 and 2021. This paper discusses the testing of data processing, highlighting the techniques used in every processing phase, and then discusses challenges and future directions. Our findings show that diverse functional, non-functional and combined (functional and non-functional) testing techniques have been used to solve specific problems related to Big Data, and that most testing challenges arise during the MapReduce validation phase. Combinatorial testing is one of the most frequently applied techniques, usually in combination with others (i.e., random testing, mutation testing, input space partitioning and equivalence testing), to find various functional faults.
    Comment: 32 pages
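    As an illustration of the combinatorial technique highlighted above, the following is a minimal Python sketch of greedy pairwise test-suite generation over a Big Data job's configuration space. The parameter names and values are hypothetical, not taken from the survey, and the exhaustive inner search is only practical for small configuration spaces.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy generation of a pairwise-covering test suite.

    params: dict mapping parameter name -> list of values.
    Returns a list of complete configurations such that every pair of
    values across any two parameters appears in at least one of them.
    """
    names = list(params)
    # All (param, value) pairs that must be jointly covered.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    suite = []
    while uncovered:
        # Pick the configuration covering the most uncovered pairs.
        # NB: enumerating the full product is exponential; fine for a sketch.
        best, best_cov = None, -1
        for values in product(*params.values()):
            config = dict(zip(names, values))
            cov = sum(
                1 for (a, va), (b, vb) in uncovered
                if config[a] == va and config[b] == vb
            )
            if cov > best_cov:
                best, best_cov = config, cov
        suite.append(best)
        uncovered = {
            pair for pair in uncovered
            if not (best[pair[0][0]] == pair[0][1]
                    and best[pair[1][0]] == pair[1][1])
        }
    return suite

# Hypothetical MapReduce job parameters to test in combination.
suite = pairwise_suite({
    "input_format": ["text", "avro", "parquet"],
    "compression": ["none", "gzip", "snappy"],
    "reducers": [1, 8, 64],
})
print(len(suite), "configurations instead of", 3 * 3 * 3)
```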

    Semantic Support for Log Analysis of Safety-Critical Embedded Systems

    Testing is a key activity in the development life-cycle of safety-critical embedded systems. In particular, much effort is spent on the analysis and classification of test logs from SCADA subsystems, especially when failures occur. Human expertise is needed to understand the reasons for failures, to trace errors back, and to determine which requirements are affected by errors and which will be affected by possible changes in the system design. Semantic techniques and full-text search are used to support human experts in the analysis and classification of test logs, in order to speed up and improve the diagnosis phase. Moreover, retrieval of tests and requirements that may be related to the current failure is supported, enabling the discovery of available alternatives and solutions for a better and faster investigation of the problem.
    Comment: EDCC-2014, BIG4CIP-2014. Keywords: embedded systems, testing, semantic discovery, ontology, big data
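    As a rough illustration of the full-text-search side of this approach (the ontology-based semantic layer is beyond a short sketch), the code below builds a minimal inverted index over test-log entries and ranks logs by query-term overlap. Log ids and contents are invented for the example.

```python
import re
from collections import defaultdict

def tokenize(text):
    """Lowercase alphanumeric tokens, good enough for log text."""
    return re.findall(r"[a-z0-9_]+", text.lower())

class LogIndex:
    """Minimal inverted index over test-log entries for full-text search."""

    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of log ids
        self.logs = {}                    # log id -> raw text

    def add(self, log_id, text):
        self.logs[log_id] = text
        for term in tokenize(text):
            self.postings[term].add(log_id)

    def search(self, query):
        """Return log ids ranked by how many query terms they contain."""
        scores = defaultdict(int)
        for term in tokenize(query):
            for log_id in self.postings.get(term, ()):
                scores[log_id] += 1
        return sorted(scores, key=scores.get, reverse=True)

# Illustrative use: retrieve logs related to a failure symptom.
index = LogIndex()
index.add("run-017", "ERROR timeout waiting for SCADA heartbeat on channel 3")
index.add("run-018", "test passed, all requirements verified")
for log_id in index.search("heartbeat timeout"):
    print(log_id, "->", index.logs[log_id])
```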

    Adaptive Modelling and Control in Distributed Systems

    Companies have growing amounts of data to store and process. In response to these new processing challenges, Google developed MapReduce, a parallel programming paradigm that has become a major tool for Big Data processing. Although MapReduce is used by most IT companies, ensuring its performance while minimizing costs is a real challenge requiring a high level of expertise. Models and controllers for MapReduce have been developed in recent years; however, many problems remain, caused by the software's high variability. To tackle this issue, this paper proposes an online model estimation algorithm for MapReduce systems. An adaptive control strategy is developed and implemented to guarantee response-time performance under a concurrent workload while minimizing resource use. Results were validated on a 40-node MapReduce cluster under a data-intensive Business Intelligence workload running on Grid5000, a French national cloud. The experiments show that the adaptive control algorithm guarantees performance at low cost even in a highly variable environment.
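    A minimal sketch of how such online model estimation and adaptive control could be structured, assuming a first-order response-time model identified by recursive least squares: the model structure, gains and bounds here are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

class RLSEstimator:
    """Recursive least squares for an assumed first-order model
    y_k = a * y_{k-1} + b * u_k, where y is the measured response
    time and u the number of provisioned nodes."""

    def __init__(self, forgetting=0.98):
        self.theta = np.zeros(2)        # [a, b]
        self.P = np.eye(2) * 1e3        # covariance: large = uncertain
        self.lam = forgetting

    def update(self, y_prev, u, y):
        phi = np.array([y_prev, u])
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

def control(theta, y_prev, y_ref, u_min=1, u_max=40):
    """One-step adaptive control: choose the node count that the
    currently estimated model predicts will hit the reference."""
    a, b = theta
    if abs(b) < 1e-6:                   # model not yet identified
        return u_min
    u = (y_ref - a * y_prev) / b
    return int(min(max(u, u_min), u_max))

# Illustrative loop: re-estimate the model and re-size the cluster
# each control period (response times in seconds are invented).
est = RLSEstimator()
y_prev, nodes = 35.0, 10
for y_meas in [38.0, 41.5, 36.2]:
    theta = est.update(y_prev, nodes, y_meas)
    nodes = control(theta, y_meas, y_ref=30.0)
    y_prev = y_meas
```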

    An Approach to Dependability Testing of MapReduce Systems Based on Representative Fault Cases

    MapReduce systems make it easy to use large numbers of machines to process large amounts of data, and they have been adopted by a wide range of applications, from search tools to commercial and financial systems. One of the main characteristics of MapReduce systems is that they abstract away problems related to the distributed environment, such as the distribution of processing and fault tolerance. It is therefore essential to guarantee the dependability of MapReduce systems, i.e., to guarantee that these systems work correctly even in the presence of faults. On the other hand, the non-determinism of a distributed environment and the unreliability of the physical environment can produce errors in MapReduce systems that are hard to find, understand and fix. This thesis presents the first known approach to dependability testing for MapReduce systems. The work presents a definition of dependability testing, a model of the MapReduce fault-tolerance mechanism, a process for generating representative fault cases from a model, and a test platform that automates the execution of fault cases in a distributed environment. It also presents a new approach to modelling distributed components using Petri nets, which makes it possible to represent the dynamics of the components and the independence of their actions and states. Experimental results show that the fault cases generated from the model are representative for testing Hadoop, the main open-source implementation of MapReduce. Through the experiments, several errors were found in Hadoop, and the results also confirm that the test platform automates the execution of the representative fault cases. In addition, the platform exhibits the properties required of a test platform: controllability, time measurement, non-intrusiveness, repeatability, and effectiveness in identifying faulty systems.
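    To make the idea of automated fault-case execution concrete, here is a hedged Python sketch of a crash-fault injection harness for a Hadoop cluster. The host name, job command and use of SSH/pkill are assumptions about the test environment, not the thesis's actual platform.

```python
import subprocess
import threading
import time

def inject_worker_failure(host, delay_s):
    """Kill the YARN NodeManager process on one worker after a delay,
    emulating a crash fault during job execution (assumes passwordless
    SSH to the cluster hosts)."""
    def crash():
        time.sleep(delay_s)
        subprocess.run(
            ["ssh", host, "pkill", "-9", "-f", "NodeManager"],
            check=False,
        )
    t = threading.Thread(target=crash, daemon=True)
    t.start()
    return t

def run_fault_case(job_cmd, host, delay_s):
    """Execute one fault case: start the MapReduce job, crash a worker
    mid-run, and report whether the job still completes (the framework's
    fault tolerance should mask the failure)."""
    injector = inject_worker_failure(host, delay_s)
    result = subprocess.run(job_cmd, capture_output=True, text=True)
    injector.join()
    return result.returncode == 0

# Illustrative fault case: crash worker-3 ten seconds into a wordcount job
# (jar name and paths are placeholders).
ok = run_fault_case(
    ["hadoop", "jar", "examples.jar", "wordcount", "in/", "out/"],
    host="worker-3", delay_s=10,
)
print("job survived the injected failure:", ok)
```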

    Modelling and Control of Big Data Services: Application to MapReduce Performance and Reliability

    The amount of raw data produced by everything from our mobile phones, tablets and computers to our smart watches brings novel challenges in data storage and analysis. Many solutions have arisen in industry to treat these large quantities of raw data, the most popular being the MapReduce framework. However, while the deployment complexity of such computing systems is steadily increasing, continuous availability and fast response times are still the expected norm. Furthermore, with the advent of virtualization and cloud solutions, the environments where these systems need to run are becoming more and more dynamic. Ensuring the performance and dependability constraints of a MapReduce service therefore still poses significant challenges. In this thesis we address the problem of guaranteeing the performance and availability of MapReduce-based cloud services, taking an approach based on control theory. We develop the first dynamic models of a MapReduce service running a concurrent workload, together with several control laws ensuring different quality-of-service objectives. First, classical feedback and feedforward controllers are developed to guarantee service performance. To further adapt our controllers to the cloud, for example by minimizing the number of reconfigurations and their cost, a novel event-based control architecture is introduced for performance management. Finally, we develop the optimal control architecture MR-Ctrl, the first solution to provide guarantees in terms of both performance and dependability for MapReduce systems while keeping costs to a minimum. All the modelling and control approaches are evaluated both in simulation and experimentally using MRBS, a comprehensive benchmark suite for evaluating the performance and dependability of MapReduce systems. Validation experiments were run on a real 60-node Hadoop MapReduce cluster running a data-intensive Business Intelligence workload. Our experiments show that the proposed techniques can successfully guarantee performance and dependability constraints.
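    As an illustration of the feedback-plus-feedforward idea described above, the following is a minimal sketch of a PI feedback controller on the response-time error combined with a feedforward term driven by measured changes in client load. All gains, bounds and sign conventions are illustrative assumptions, not MR-Ctrl's actual control laws.

```python
class FeedbackFeedforwardController:
    """PI feedback on response-time error plus a feedforward term that
    compensates measured changes in concurrent client load before they
    show up in the measured output."""

    def __init__(self, kp, ki, kff, u_min=1, u_max=60):
        self.kp, self.ki, self.kff = kp, ki, kff
        self.integral = 0.0
        self.u_min, self.u_max = u_min, u_max

    def step(self, y_ref, y_meas, clients_delta):
        # Response time above the reference means we need more nodes.
        error = y_meas - y_ref
        self.integral += error
        feedback = self.kp * error + self.ki * self.integral
        # Feedforward: react to a measured rise in concurrent clients
        # before it degrades the response time.
        feedforward = self.kff * clients_delta
        u = feedback + feedforward
        # Saturate to the admissible cluster size, with simple anti-windup.
        u_sat = min(max(u, self.u_min), self.u_max)
        if u != u_sat:
            self.integral -= error
        return int(round(u_sat))

# Illustrative step: target 30 s response time, currently at 42.5 s,
# while three more concurrent clients have just arrived.
ctrl = FeedbackFeedforwardController(kp=2.0, ki=0.1, kff=0.5)
nodes = ctrl.step(y_ref=30.0, y_meas=42.5, clients_delta=3)
print("requested cluster size:", nodes)
```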

    Towards Quality-Aware Development of Big Data Applications with DICE

    Model-driven engineering (MDE) has been extended in recent years to account for reliability and performance requirements from the early design stages of an application. While this quality-aware MDE exists for both enterprise and cloud applications, it does not yet exist for Big Data systems. DICE is a novel Horizon 2020 project that aims to fill this gap by defining the first quality-driven MDE methodology for Big Data applications. Concrete outputs of the project will include a data-aware UML profile capable of describing Big Data technologies and architecture styles, data-aware quality prediction methods, and continuous delivery tools.