20 research outputs found

    ETL Testing Analyzer

    ETL testing techniques are nowadays widely used in the data integration process. These techniques check whether the information loaded into the data warehouse has correctly followed all the transformation steps. Because errors can occur during the extraction, transformation and load stages, there is a need to monitor and handle the errors that can cause severe data quality issues in the data warehouse. This thesis builds on a previous project carried out at UPC, whose main goal was to help the professors teaching the "Database" course at UPC better analyze the performance of their students. To that end, an ETL system was implemented to extract student information from multiple sources and transform and load it into the data warehouse. This information includes the students' personal data, the exercises they complete, and so on. The initial ETL design was based on the creation of the data warehouse schema containing the main dimension and fact tables. The main issue is that it did not provide any monitoring or error-handling functionality, even though the system generated several errors every time the ETL was executed. The steps I followed while working on this thesis project were to model the initial ETL process using a BPMN representation, include error-handling and monitoring functionality, and ultimately redesign the initial ETL processes using a chosen tool. Although the initial processes were modelled using Pentaho Kettle, the new requirements regarding error-handling and monitoring capabilities required a comprehensive ETL tool comparison to determine which tool best meets the requirements of this project.
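
    The abstract only names the missing error-handling and monitoring functionality; a minimal Python sketch of the idea follows. The `transform_row` step, the grade rule, and the quarantine record are hypothetical illustrations, not the thesis's actual design, which was built in a dedicated ETL tool such as Pentaho Kettle rather than hand-written code.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def transform_row(row):
    # Hypothetical transformation: parse the grade and check it is in [0, 10].
    grade = float(row["grade"])
    if not 0 <= grade <= 10:
        raise ValueError(f"grade out of range: {grade}")
    return {"student_id": row["student_id"], "grade": grade}

def run_etl(rows):
    loaded, quarantined = [], []
    for row in rows:
        try:
            loaded.append(transform_row(row))
        except (KeyError, ValueError) as exc:
            # Error handling: divert the bad row instead of aborting the load.
            quarantined.append({"row": row, "error": str(exc),
                                "ts": datetime.now(timezone.utc).isoformat()})
            log.warning("quarantined row %r: %s", row, exc)
    # Monitoring: per-run counters that could be written to an audit table.
    log.info("run finished: %d loaded, %d quarantined",
             len(loaded), len(quarantined))
    return loaded, quarantined

if __name__ == "__main__":
    sample = [{"student_id": 1, "grade": "8.5"},
              {"student_id": 2, "grade": "eleven"},  # triggers ValueError
              {"student_id": 3}]                     # triggers KeyError
    run_etl(sample)
```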

    18th SC@RUG 2020 proceedings 2020-2021

    Parallelizing user-defined functions in the ETL workflow using orchestration style sheets

    Today’s ETL tools provide capabilities to develop custom code as user-defined functions (UDFs) to extend the expressiveness of the standard ETL operators. However, while this makes it easy to add new functionality, it also carries the risk that the custom code is not amenable to optimization, e.g., parallelization, and therefore performs poorly in data-intensive ETL workflows. In this paper we present a novel framework that lets the ETL developer choose a design pattern in order to write parallelizable code, and that generates a configuration for the UDFs to be executed in a distributed environment. This enables ETL developers with minimal expertise in distributed and parallel computing to develop UDFs without having to deal with parallelization configurations and their complexities. We perform experiments on large-scale datasets based on TPC-DS and BigBench. The results show that our approach significantly reduces the effort of ETL developers and at the same time generates efficient parallel configurations to support complex and data-intensive ETL tasks.
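
    The paper's framework and its configuration generator are not reproduced here, but the map-style design pattern the abstract alludes to can be sketched in Python. The `normalize_price` UDF, the exchange rate, and the use of a local process pool are all illustrative assumptions, not the paper's actual API; the paper targets distributed execution environments.

```python
from multiprocessing import Pool

# Hypothetical map-style UDF: stateless, one output row per input row.
# Writing the custom code to this pattern is what makes it trivially
# parallelizable, since rows can be partitioned across workers in any
# order without coordination.
def normalize_price(row):
    return {**row, "price_eur": round(row["price_usd"] * 0.92, 2)}

def run_parallel(udf, rows, workers=4):
    # Stand-in for the generated parallel configuration: here a local
    # process pool, in the paper a distributed engine.
    with Pool(processes=workers) as pool:
        return pool.map(udf, rows, chunksize=256)

if __name__ == "__main__":
    data = [{"item": i, "price_usd": float(i)} for i in range(1000)]
    out = run_parallel(normalize_price, data)
    print(out[:2])
```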
