
    Extraction transformation load (ETL) solution for data integration: a case study of rubber import and export information

    Data integration is important in consolidating all the data inside and outside the organization to provide a unified view of the organization's information. An Extraction Transformation Load (ETL) solution is the back-end process of data integration: it collects data from various data sources, prepares and transforms the data according to business requirements, and loads them into a Data Warehouse (DW). This paper explains the integration of rubber import and export data between the Malaysian Rubber Board (MRB) and the Royal Malaysian Customs Department (Customs) using an ETL solution. Microsoft SQL Server Integration Services (SSIS) and Microsoft SQL Server Agent Jobs have been used as the ETL tool and the ETL scheduler, respectively.
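    The extract, prepare/transform, and load steps described above can be sketched in a few lines. The table names, columns, and data below are hypothetical illustrations, not the actual MRB/Customs schema, and plain Python with SQLite stands in for SSIS, which is a graphical tool rather than a code library.

```python
import sqlite3

# Hypothetical stand-in for an ETL flow: extract rows from a staging
# source, transform them to the warehouse schema, and load them.

def extract(conn):
    # Extract: read raw trade records from the source system.
    return conn.execute("SELECT product, kg, price FROM staging_trades").fetchall()

def transform(rows):
    # Transform: normalise product codes and derive the total trade value.
    return [(p.strip().upper(), kg, kg * price) for p, kg, price in rows]

def load(conn, rows):
    # Load: append the cleaned facts into the warehouse table.
    conn.executemany("INSERT INTO fact_trades VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_trades (product TEXT, kg REAL, price REAL)")
conn.execute("CREATE TABLE fact_trades (product TEXT, kg REAL, value REAL)")
conn.execute("INSERT INTO staging_trades VALUES (' smr20 ', 1000.0, 1.5)")

load(conn, transform(extract(conn)))
result = conn.execute("SELECT * FROM fact_trades").fetchone()
```

    In a production setting the scheduling the abstract mentions (SQL Server Agent Jobs) would trigger such a pipeline on a timetable rather than running it inline.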

    The importance of governance in the implementation of a Business Intelligence project compliant with the requirements of the Sarbanes-Oxley Act


    Business intelligence-centered software as the main driver to migrate from spreadsheet-based analytics

    Internship Report presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence.

    Nowadays, companies handle and manage data in a way that they weren't ten years ago. As a consequence, the data deluge is their constant day-to-day challenge: they must create agile and scalable data solutions to tackle this reality. The main trigger of this project was to support the decision-making process of a customer-centered marketing team (called Customer Voice) in Company X by developing a complete, holistic Business Intelligence solution that goes all the way from ETL processes to data visualizations based on that team's business needs. With this context in mind, the focus of the internship was to use BI and ETL techniques to migrate the team's data out of the spreadsheets where they performed their analysis, and to shift the way they see the data into a more dynamic, sophisticated, and suitable form that helps them make data-driven strategic decisions. To ensure credibility throughout the development of this project and its subsequent solution, an exhaustive literature review was necessary to frame the project in a realistic and logical way. Accordingly, this report draws on scientific literature covering the evolution of ETL workflows, tools, and limitations across different periods and generations; how ETL moved from manual to real-time data tasks together with data warehouses; the importance of data quality; and, finally, the relevance of optimizing ETL processes and of new approaches to data integration using modern cloud architectures.
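    The migration described above, from ad-hoc spreadsheet analysis to a repeatable pipeline, can be illustrated with a minimal sketch. The segment names and scores are invented for illustration and are not the Customer Voice team's actual data; the point is only that an aggregation a team would do by hand in a spreadsheet becomes a script that can be rerun and fed into a dashboard.

```python
import csv
import io
from collections import defaultdict

# Hypothetical CSV export of a spreadsheet the team used to analyse by hand.
csv_export = io.StringIO(
    "segment,score\n"
    "promoter,9\n"
    "promoter,10\n"
    "detractor,3\n"
)

# Group scores by customer segment.
totals = defaultdict(list)
for row in csv.DictReader(csv_export):
    totals[row["segment"]].append(int(row["score"]))

# Average score per segment, ready to load into a BI data model.
averages = {seg: sum(v) / len(v) for seg, v in totals.items()}
```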

    Supply chain business intelligence: model proposal and implementation to support the online sales supply chain end to end operation of a Portuguese electronics retail company

    Project Work presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence.

    In today's highly competitive business environment, the adoption of Supply Chain Management is seen as an advantage: it provides not only effective integration but also cooperation within the supply chain. However, achieving further integration requires other practices. With growing volumes of data, businesses must ensure its appropriate flow, integration, and analysis. This project, named "Supply Chain Business Intelligence ‐ Model proposal and implementation to support the online sales supply chain end to end operation of a Portuguese electronics retail company", had as its main goal the development of a conceptual model of a Business Intelligence system that addresses the needs of an online sales supply chain end-to-end operation. The proposed model does not focus on a specific company; instead, it should provide a solution for other similar problems. The project starts with the definition of the problem, objectives, and methodology. It is followed by the literature review, a thorough search for best practices and previous works that dealt with similar problems, focused on three main topics: Supply Chain Management, the Internet retail industry, and Business Intelligence. A conceptual model is then developed in four main steps: definition of the overall requirements, metrics, the data mart model, and dashboards. For the data mart model, it is important to identify the business process, the appropriate granularity, and the respective dimension and fact tables. A case study follows, consisting of the implementation of the model to solve Company X's problem.
    Both the data mart and the dashboards are considered outputs of the project, since they are part of the artifact needed to meet the business requirements. Finally, the results are discussed and evaluated. Even though the implementation presented some challenges, the final solution still showed improvements for Company X and proved appropriate for the stated business requirements. Limitations and possible improvements are presented in the last chapter of the project.

    Temporal and Contextual Dependencies in Relational Data Modeling

    Although a solid theoretical foundation of relational data modeling has existed for decades, critical reassessment from the perspective of temporal requirements reveals shortcomings in its integrity constraints. We identify the need for this work by discussing how existing relational databases fail to ensure correctness of data when the data to be stored is time-sensitive. The analysis presented in this work becomes particularly important in present times where, because of relational databases' inadequacy to cater to all requirements, new forms of database systems such as temporal databases, active databases, real-time databases, and NoSQL (non-relational) databases have been introduced. In relational databases, temporal requirements have been dealt with either at the application level using scripts or through manual assistance, but no attempts have been made to address them at the design level. These requirements are the ones that need changing metadata as time progresses, which remains unsupported by Relational Database Management Systems (RDBMSs) to date. Starting with shortcomings of data, entity, and referential integrity in relational data modeling, we propose a new form of integrity that works at a more detailed level of granularity. We also present several important concepts including temporal dependency, contextual dependency, and cell-level integrity. We then introduce cellular-constraints to implement the proposed integrity and dependencies, and show how they can be incorporated into the relational data model to enable RDBMSs to handle temporal requirements in the future. Overall, we provide a formal description to address the temporal requirements problem in the relational data model, and design a framework for solving this problem. We supplement our proposition with examples, experiments, and results.
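    One plausible reading of the cell-level temporal integrity the abstract argues for can be sketched as follows: each attribute value carries a validity period, and an update must not create overlapping periods for the same cell. The function and data below are illustrative assumptions, not the paper's actual cellular-constraint formalism.

```python
from datetime import date

def violates_temporal_integrity(periods, new_start, new_end):
    # A new validity period for the same cell must not overlap
    # any existing period (half-open interval comparison).
    return any(new_start < end and start < new_end for start, end in periods)

# Hypothetical history of one cell (e.g. an employee's salary value),
# currently valid throughout 2020.
salary_history = [(date(2020, 1, 1), date(2021, 1, 1))]

# Overlaps the existing period: the constraint should reject it.
overlapping = violates_temporal_integrity(
    salary_history, date(2020, 6, 1), date(2020, 9, 1))

# Starts exactly when the old period ends: no overlap, accepted.
disjoint = violates_temporal_integrity(
    salary_history, date(2021, 1, 1), date(2022, 1, 1))
```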

    Development of a business intelligence solution for academic, financial, and dropout control at a technical institute

    Student dropout is a major problem for any educational institution, especially when there is no control over or follow-up of the student, and even more so when the information is spread across different platforms; the problem takes different forms depending on the level of education. A private technical institute reported loss of revenue, damage to its image, and a decline in its market position. The institution therefore chose to deploy an information system that presents, at the management and operational levels, controls and dashboards showing the status of its students, consolidating into a single data repository (a data warehouse) under a Business Intelligence solution the data from the institution's various platforms required for this control. The solution shows academic and financial performance and a dropout prediction, integrating the information under the Ralph Kimball methodology and presenting it so that action can be taken on each student's dropout risk, identifying early the various factors that may lead to a student's withdrawal. With this analysis, the academic departments provide the corresponding support according to the factors showing the highest dropout risk, achieving the retention of most of the students who were at risk of dropping out.

    Development of business intelligence with a machine learning model for vehicle fleet management

    This research was carried out to verify the outcome of developing business intelligence with a machine learning model for vehicle fleet management, built on data about the operability of the vehicle fleet. The research was of an applied type, with an experimental design, specifically quasi-experimental. The population consisted of fleet operability data collected month by month, gathered through an observation sheet. The results confirm that the development of business intelligence with a machine learning model had a clearly positive effect on fleet management: the fuel-cost ratio fell from 0.93392 to 0.91644, the fine-cost ratio from 0.0023514 to 0.002196, the washing-cost ratio from 0.014362 to 0.014250, the maintenance-cost ratio from 0.97023 to 0.94063, and the tyre-patching-cost ratio from 0.0011252 to 0.0010928, while performance rose from 37.2 to 37.3.
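    The before-and-after cost ratios reported in the abstract can be restated as relative reductions; the figures below are exactly those quoted above, and the metric labels are shorthand for the cost categories named there.

```python
# (before, after) cost ratios as reported in the abstract.
metrics = {
    "fuel":          (0.93392,   0.91644),
    "fines":         (0.0023514, 0.002196),
    "washing":       (0.014362,  0.014250),
    "maintenance":   (0.97023,   0.94063),
    "tyre_patching": (0.0011252, 0.0010928),
}

# Relative change in percent for each cost ratio (negative = reduction).
change_pct = {k: round((after - before) / before * 100, 2)
              for k, (before, after) in metrics.items()}
```

    Expressed this way, the largest relative improvements are in fines and maintenance, while washing barely moved, which is easier to read than the raw ratios.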

    Data Warehouse: an audit and internal control model

    The main purpose of this research is to present a model that effectively assesses the internal control and audit of a Data Warehouse (DW). So that the model is easily accepted by auditors and auditees, we propose building it on the main internal control standards specially conceived for assessing information technology (IT) environments, namely the CobiT® 4.1, ITIL® V3, and ISO/IEC 27002 methodologies. The research therefore rests on three pillars: I. The state of the art of (a) the Data Warehouse, whose architectural components are the primary object of evaluation; (b) internal control and internal audit, the means of achieving optimum performance of the DW; and (c) IT governance, a summary of the most recent internationally accepted best practices for controlling and managing IT environments. II. The development of a framework to assess the internal control of the DW, all its components and stages. III. Validation of the model through a case study comprising an internal audit of the Data Warehouse of a financial institution. The research findings focus on the advantages that organizations may obtain from using a specific model to assess and control the management of the Data Warehouse.