39 research outputs found

    Learning the impact of data pre-processing in data analysis

    Joint doctorate (cotutela) between Universitat Politècnica de Catalunya and Poznan University of Technology.
    There is a clear correlation between data availability and data analytics; hence, as data availability increases (an increase that is unavoidable according to Moore's law), the need for data analytics increases too. This engages many more people, not necessarily experts, in performing analytics tasks. However, the different, challenging, and time-consuming steps of the data analytics process overwhelm non-experts, who require support (e.g., through automation or recommendations). A very important and time-consuming step that stands out from the rest is data pre-processing. Data pre-processing is challenging and at the same time has a heavy impact on the overall analysis. In this regard, previous works have focused on providing user assistance in data pre-processing, but without considering its impact on the analysis. Hence, the goal has generally been to enable analysis through data pre-processing, not to improve it. In contrast, this thesis aims at developing methods that provide assistance in data pre-processing with the sole goal of improving the result of the overall analysis (e.g., increasing the predictive accuracy of a classifier). To this end, we propose a method and define an architecture that leverages ideas from meta-learning to learn the relationship between transformations (i.e., pre-processing operators) and mining algorithms (i.e., classification algorithms). This eventually enables ranking and recommending transformations according to their potential impact on the analysis. To reach this goal, we first study the currently available methods and systems that provide user assistance, either for the individual steps of data analytics or for the whole process altogether. Next, we classify the metadata these different systems use and then focus specifically on the metadata used in meta-learning. We apply a method to study the predictive power of these metadata, and we extract and select the metadata that are most relevant. Finally, we focus on user assistance in the pre-processing step. We devise an architecture and build a tool, PRESISTANT, that, given a classification algorithm, is able to recommend pre-processing operators that, once applied, positively impact the final results (e.g., increase the predictive accuracy). Our results show that providing assistance in data pre-processing with the goal of improving the result of the analysis is feasible and very useful for non-experts. Furthermore, this thesis is a step towards demystifying the non-trivial task of pre-processing, which has so far been an exclusive asset in the hands of experts.
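
    The thesis abstract above describes ranking pre-processing transformations by their expected effect on a downstream classifier, learned from past experiments. The sketch below illustrates that meta-learning idea under simplifying assumptions: the historical meta-dataset, the meta-features, and the candidate transformation names are all made up for illustration and are not the thesis's actual metadata or API.

    # Minimal meta-learning sketch: rank candidate transformations by the
    # accuracy gain a meta-model predicts for a new dataset. Illustrative only.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical history of past experiments: dataset meta-features, the
    # transformation applied, and the accuracy gain it produced.
    history = pd.DataFrame({
        "n_instances":    [150, 1000, 560, 150, 1000, 560],
        "n_features":     [4, 20, 8, 4, 20, 8],
        "pct_missing":    [0.0, 0.1, 0.3, 0.0, 0.1, 0.3],
        "transformation": ["standardize", "standardize", "impute_mean",
                           "discretize", "impute_mean", "discretize"],
        "accuracy_gain":  [0.02, 0.01, 0.05, -0.01, 0.04, 0.00],
    })
    X = pd.get_dummies(history.drop(columns="accuracy_gain"), columns=["transformation"])
    y = history["accuracy_gain"]
    meta_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Rank candidate transformations for a new dataset by predicted gain.
    new_meta = {"n_instances": 300, "n_features": 10, "pct_missing": 0.2}
    candidates = ["standardize", "impute_mean", "discretize"]
    rows = pd.get_dummies(
        pd.DataFrame([{**new_meta, "transformation": t} for t in candidates]),
        columns=["transformation"]).reindex(columns=X.columns, fill_value=0)
    ranking = sorted(zip(candidates, meta_model.predict(rows)), key=lambda p: p[1], reverse=True)
    print(ranking)  # transformations ordered by expected impact on the analysis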

    PRESISTANT: Learning based assistant for data pre-processing

    Data pre-processing is one of the most time-consuming and relevant steps in a data analysis process (e.g., a classification task). A given data pre-processing operator (e.g., a transformation) can have a positive, negative, or zero impact on the final result of the analysis. Expert users have the knowledge required to find the right pre-processing operators. Non-experts, however, are overwhelmed by the number of pre-processing operators, and it is challenging for them to find operators that would positively impact their analysis (e.g., increase the predictive accuracy of a classifier). Existing solutions either assume that users have expert knowledge, or they recommend pre-processing operators that are only "syntactically" applicable to a dataset, without taking into account their impact on the final analysis. In this work, we aim at assisting non-expert users by recommending data pre-processing operators that are ranked according to their impact on the final analysis. We developed a tool, PRESISTANT, that uses Random Forests to learn the impact of pre-processing operators on the performance (e.g., predictive accuracy) of five different classification algorithms: J48, Naive Bayes, PART, Logistic Regression, and Nearest Neighbor. Extensive evaluations of the recommendations provided by our tool show that PRESISTANT can effectively help non-experts achieve improved results in their analytical tasks.
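
    PRESISTANT learns, from past runs, how pre-processing operators change a classifier's performance. A minimal sketch of how one such training signal could be measured, namely the cross-validated accuracy change a single operator produces for a single classifier on a single dataset, is shown below; the dataset, classifier, and operator are stand-ins (scikit-learn is used purely for illustration, whereas the paper evaluates Weka algorithms such as J48 and PART).

    # Sketch: measure the impact of one pre-processing operator on one classifier,
    # i.e., the kind of label a meta-learning tool could be trained on.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    clf = LogisticRegression(max_iter=5000)

    baseline = cross_val_score(clf, X, y, cv=5).mean()  # no pre-processing
    with_op = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()

    impact = with_op - baseline  # positive, negative, or roughly zero impact
    print(f"baseline={baseline:.3f}  with StandardScaler={with_op:.3f}  impact={impact:+.3f}")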

    Big data management

    This article presents an overview of what big data management is, the problems it poses, and how the solutions should be approached. In this respect, the main difficulty is that there is no generic solution, so big data management processes must be tailored to each organization.

    Gestió de dades massives

    This article aims to give an overview of what big data management is, the problems it poses, and how the solutions should be approached. The main difficulty is that there is no generic solution, so it must be built to fit each organization. This work is partially funded by the European Commission under grant agreement 101093164, corresponding to the ExtremeXP project.

    A data-science pipeline to enable the Interpretability of Many-Objective Feature Selection

    Many-Objective Feature Selection (MOFS) approaches use four or more objectives to determine the relevance of a subset of features in a supervised learning task. As a consequence, MOFS typically returns a large set of non-dominated solutions, which have to be assessed by the data scientist in order to proceed with the final choice. Given the multivariate nature of this assessment, which may include criteria (e.g., fairness) not related to predictive accuracy, this step is often not straightforward and suffers from a lack of supporting tools. For instance, it is common to present the solutions in a table, which provides little information about the trade-offs and the relations between criteria over the set of solutions. This paper proposes an original methodology to support data scientists in interpreting and comparing the MOFS outcome by combining post-processing and visualisation of the set of solutions. The methodology supports the data scientist in selecting an optimal feature subset by providing high-level information at three different levels: objectives, solutions, and individual features. The methodology is experimentally assessed on two feature selection tasks adopting a GA-based MOFS with six objectives (number of selected features, balanced accuracy, F1-score, variance inflation factor, statistical parity, and equalised odds). The results show the added value of the methodology in the selection of the final subset of features.
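
    One way to convey the trade-offs that a plain table of MOFS solutions hides is a parallel-coordinates view of the normalised objective values, one line per non-dominated solution. The sketch below uses the six objectives named in the abstract, but the solution values are invented for illustration and the plot is not the paper's actual visualisation pipeline.

    # Sketch: parallel-coordinates view of a small set of MOFS solutions.
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import parallel_coordinates

    solutions = pd.DataFrame({   # made-up non-dominated solutions
        "n_features":         [5, 8, 12, 6],
        "balanced_accuracy":  [0.81, 0.84, 0.86, 0.80],
        "f1_score":           [0.78, 0.83, 0.85, 0.79],
        "vif":                [1.8, 2.4, 3.1, 1.5],
        "statistical_parity": [0.05, 0.09, 0.12, 0.04],
        "equalised_odds":     [0.06, 0.08, 0.11, 0.05],
    })
    solutions["solution"] = [f"s{i}" for i in range(len(solutions))]

    # Normalise each objective to [0, 1] so the axes are comparable.
    objectives = solutions.columns.drop("solution")
    norm = solutions.copy()
    norm[objectives] = (solutions[objectives] - solutions[objectives].min()) / (
        solutions[objectives].max() - solutions[objectives].min())

    parallel_coordinates(norm, class_column="solution")
    plt.ylabel("normalised objective value")
    plt.show()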

    Wrapper methods for multi-objective feature selection

    The ongoing data boom has democratized the use of data for improved decision-making. Beyond gathering voluminous data, preprocessing the data is crucial to ensure that their most relevant aspects are considered during the analysis. Feature Selection (FS) is one integral step in data preprocessing for reducing data dimensionality and preserving the most relevant features of the data. FS can be done by inspecting inherent associations among the features in the data (filter methods) or by using the model performance of a concrete learning algorithm (wrapper methods). In this work, we extensively evaluate a set of FS methods on 32 datasets and measure their effect on model performance, stability, scalability, and memory usage. The results re-establish the superiority of wrapper methods over filter methods in model performance. We further investigate the unique role of wrapper methods in multi-objective FS with a focus on two traditional metrics: accuracy and Area Under the ROC Curve (AUC). On model performance, our experiments showed that optimizing for both metrics simultaneously, rather than using a single metric, led to improvements in the accuracy and AUC trade-off of up to 5% and 10%, respectively. The project leading to this publication has received funding from the European Commission under the European Union's Horizon 2020 research and innovation programme (grant agreement No 955895). Besim Bilalli is partly supported by the Spanish Ministerio de Ciencia e Innovación, as well as the European Union-Next Generation EU, under the project FJC 2021-046606-I/AEI/10.13039/501100011033. Gianluca Bontempi was supported by Service Public de Wallonie Recherche under grant n°2010235–ARIAC by DIGITALWALLONIA4.AI.
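
    The defining trait of a wrapper method is that the learning algorithm itself scores each candidate feature subset. A minimal sketch of that evaluation step, scoring a subset on both objectives discussed in the abstract (accuracy and AUC) via cross-validation, is given below; the dataset, classifier, and candidate subsets are placeholders rather than the paper's experimental setup.

    # Sketch: wrapper-style evaluation of candidate feature subsets on two objectives.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_validate

    X, y = load_breast_cancer(return_X_y=True)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)

    def evaluate_subset(features):
        """Return (accuracy, AUC) for a feature subset via 5-fold cross-validation."""
        scores = cross_validate(clf, X[:, features], y, cv=5, scoring=["accuracy", "roc_auc"])
        return scores["test_accuracy"].mean(), scores["test_roc_auc"].mean()

    # Compare two hypothetical subsets a multi-objective search might propose.
    for subset in ([0, 1, 2, 3], [0, 7, 20, 21, 27]):
        acc, auc = evaluate_subset(subset)
        print(f"features={subset}  accuracy={acc:.3f}  AUC={auc:.3f}")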

    Intelligent assistance for data pre-processing

    A data mining algorithm may perform differently on datasets with different characteristics; e.g., it might perform better on a dataset with continuous attributes rather than with categorical attributes, or the other way around. Typically, a dataset needs to be pre-processed before being mined. Taking into account all the possible pre-processing operators, there exists a staggeringly large number of alternatives, and as a consequence, non-experienced users become overwhelmed by them. In this paper, we show that this problem can be addressed by automating the pre-processing with the support of meta-learning. To this end, we analyzed a wide range of data pre-processing techniques and a set of classification algorithms. For each classification algorithm that we consider and a given dataset, we are able to automatically suggest the transformations that improve the quality of the results of the algorithm on the dataset. Our approach will help non-expert users more effectively identify the transformations appropriate to their applications, and hence achieve improved results.
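
    The abstract's starting observation, that an algorithm's behaviour depends on dataset characteristics, is what dataset meta-features capture for meta-learning. A small sketch of extracting such characteristics is shown below; the particular meta-features and the toy dataset are illustrative assumptions, not the paper's actual feature set.

    # Sketch: extract simple dataset characteristics (meta-features) that a
    # meta-learning approach could use to decide which transformations to suggest.
    import pandas as pd

    def basic_meta_features(df: pd.DataFrame, target: str) -> dict:
        X = df.drop(columns=target)
        numeric = X.select_dtypes(include="number")
        categorical = X.select_dtypes(exclude="number")
        return {
            "n_instances": len(X),
            "n_features": X.shape[1],
            "pct_numeric": numeric.shape[1] / X.shape[1],
            "pct_categorical": categorical.shape[1] / X.shape[1],
            "pct_missing": float(X.isna().mean().mean()),
            "n_classes": df[target].nunique(),
        }

    # Tiny illustrative dataset with one continuous and one categorical attribute.
    df = pd.DataFrame({"age": [25, 32, None, 51], "job": ["a", "b", "a", "c"],
                       "label": [0, 1, 0, 1]})
    print(basic_meta_features(df, target="label"))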

    Resilient store: a heuristic-based data format selector for intermediate results

    The final publication is available at link.springer.com.
    Large-scale data analysis is an important activity in many organizations that typically requires the deployment of data-intensive workflows. As data is processed, these workflows generate large intermediate results, which are typically pipelined from one operator to the next. However, if materialized, these results become reusable; hence, subsequent workflows need not recompute them. There are already many solutions that materialize intermediate results, but all of them assume a fixed data format. A fixed format, however, may not be the optimal one for every situation. For example, it is well known that different data fragmentation strategies (e.g., horizontal and vertical) behave better or worse depending on the access patterns of the subsequent operations. In this paper, we present ResilientStore, which assists in selecting the most appropriate data format for materializing intermediate results. Given a workflow and a set of materialization points, it uses rule-based heuristics to choose the best storage data format based on subsequent access patterns. We have implemented ResilientStore for HDFS and three different data formats: SequenceFile, Parquet, and Avro. Experimental results show that our solution gives 18% better performance than any solution based on a single fixed format.
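
    The selection logic the abstract describes is rule based: given how later operators will access a materialized result, pick the format whose layout suits those accesses. The toy selector below illustrates that style of heuristic with the three formats named in the abstract; the specific rules and thresholds are simplifications, not ResilientStore's actual heuristics.

    # Sketch: a rule-based data format selector driven by expected access patterns.
    def choose_format(n_columns_accessed: int, n_columns_total: int, append_heavy: bool) -> str:
        if append_heavy:
            return "Avro"          # row-oriented and schema-evolution friendly, cheap to append
        if n_columns_accessed / n_columns_total <= 0.3:
            return "Parquet"       # columnar, favours narrow projections over wide tables
        return "SequenceFile"      # simple row-oriented container, fine for full scans

    print(choose_format(n_columns_accessed=2, n_columns_total=20, append_heavy=False))  # Parquet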

    Learning fishing information from AIS data

    The Automatic Identification System (AIS) allows vessels to emit their position, speed, and course while sailing. By international law, all large vessels (e.g., longer than 15 m in Europe) are required to provide such data. The abundance and free availability of AIS data have created huge interest in analyzing them (e.g., to look for patterns in how ships move, to gain detailed knowledge about sailing routes, etc.). In this paper, we use AIS data to classify areas (i.e., spatial cells) of the South Atlantic Ocean as productive or unproductive in terms of the quantity of squid that can be caught. Next, together with daily satellite data about the area, we create a training dataset from which a model is learned to predict whether an area of the ocean is productive or not. Finally, real fishing data are used to evaluate the model. As a result, for blind movements (i.e., with no information about real catches in the previous days), our model trained on data generated from AIS obtains a precision that is 18% higher than that of the model trained on actual fishing data (this is due to AIS data being larger in volume than fishing data), and 36% higher than the precision of the actual decisions of the ships studied. The results show that, despite their simplicity, AIS data have potential value for building training datasets in this domain.
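
    The core data preparation step the abstract relies on is turning raw AIS position reports into labelled spatial cells. The sketch below shows one crude way to do this, gridding positions and flagging cells with repeated low-speed reports as likely fishing activity; the data, cell size, speed threshold, and labelling rule are all illustrative assumptions, not the paper's method for deriving productivity labels.

    # Sketch: aggregate AIS reports into spatial grid cells and derive a rough label.
    import pandas as pd

    ais = pd.DataFrame({   # made-up AIS reports: latitude, longitude, speed over ground
        "lat":   [-45.1, -45.2, -45.15, -44.8, -44.9],
        "lon":   [-60.3, -60.4, -60.35, -59.7, -59.6],
        "speed": [1.2, 0.8, 1.5, 10.4, 11.0],
    })

    cell_size = 0.5  # grid resolution in degrees
    ais["cell"] = list(zip((ais["lat"] // cell_size) * cell_size,
                           (ais["lon"] // cell_size) * cell_size))

    low_speed = ais[ais["speed"] < 3.0]           # low speed as a crude proxy for fishing
    counts = low_speed.groupby("cell").size()
    labels = (counts >= 2).rename("productive")   # arbitrary threshold for the sketch
    print(labels)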

    Operationalizing and automating data governance

    The ability to cross data from multiple sources represents a competitive advantage for organizations. Yet, the governance of the data lifecycle, from the data sources to valuable insights, is largely performed in an ad hoc or manual manner. This is especially concerning in scenarios where tens or hundreds of continuously evolving data sources produce semi-structured data. To overcome this challenge, we develop a framework for operationalizing and automating data governance. For the former, we propose a zoned data lake architecture and a set of data governance processes that allow the systematic ingestion, transformation, and integration of data from heterogeneous sources, in order to make them readily available to business users. For the latter, we propose a set of metadata artifacts that allow the automatic execution of data governance processes, addressing a wide range of data management challenges. We showcase the usefulness of the proposed approach on a real-world use case stemming from a collaborative project with the World Health Organization for the management and analysis of data about Neglected Tropical Diseases. Overall, this work helps organizations adopt data-driven strategies by providing a cohesive framework that operationalizes and automates data governance. This work was partly supported by the DOGO4ML project, funded by the Spanish Ministerio de Ciencia e Innovación under project PID2020-117191RB-I00/AEI/10.13039/501100011033. Sergi Nadal is partly supported by the Spanish Ministerio de Ciencia e Innovación, as well as the European Union - NextGenerationEU, under project FJC2020-045809-I/AEI/10.13039/501100011033.
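
    The automation side of the abstract rests on machine-readable metadata artifacts that describe each source and drive the governance processes applied to it. The sketch below shows what such an artifact and a simple dispatcher could look like; the field names, zones, and steps are illustrative assumptions, not the schema proposed in the paper.

    # Sketch: a metadata artifact for a data source and a planner that derives
    # governance steps from it.
    from dataclasses import dataclass

    @dataclass
    class SourceMetadata:
        name: str
        format: str          # e.g., "json" or "csv"
        refresh: str         # e.g., "daily"
        target_zone: str     # e.g., "landing", "trusted", "exploitation"

    def plan_ingestion(source: SourceMetadata) -> list:
        """Derive the governance steps to run for a source from its metadata."""
        steps = [f"ingest {source.name} ({source.format}) into the {source.target_zone} zone"]
        if source.format == "json":
            steps.append(f"extract the schema of {source.name} for later integration")
        steps.append(f"schedule refresh: {source.refresh}")
        return steps

    src = SourceMetadata(name="ntd_cases", format="json", refresh="daily", target_zone="landing")
    print("\n".join(plan_ingestion(src)))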