
    Maintaining Internal Consistency of Report for Real-time OLAP with Layer-based View

    Maintaining the internal consistency of reports is an important aspect of real-time data warehousing. OLAP and query tools were initially designed to operate on top of unchanging, static historical data. In a real-time environment, however, the results they produce are usually negatively influenced by data changes concurrent with query execution, which may result in internal report inconsistency. In this paper, we propose a new method, called the layer-based view approach, to maintain report data consistency appropriately and effectively. The core idea is to prevent the data involved in an OLAP query from being changed by using a lock mechanism, and to avoid conflicts between read and write operations with the help of a layer mechanism. Our approach can effectively deal with the report consistency issue while avoiding contention between read and write operations in a real-time OLAP environment.
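    The abstract does not spell out the mechanism in detail; the following minimal Python sketch only illustrates the general idea of a layered store, where a running report reads from a frozen base layer while concurrent writes are staged in a separate layer (all class and field names are illustrative, not from the paper):

```python
import threading

class LayeredStore:
    """Toy illustration of separating reads from concurrent writes.

    Queries read from a snapshot of the base layer while updates
    accumulate in a separate write layer, so a running report never
    sees data that changed mid-query.
    """

    def __init__(self, base):
        self._base = dict(base)      # consolidated historical data
        self._write_layer = {}       # updates arriving during a query
        self._lock = threading.Lock()

    def update(self, key, value):
        # Writers never touch the base layer directly; they stage
        # changes in the write layer, so readers are not blocked.
        with self._lock:
            self._write_layer[key] = value

    def query_snapshot(self):
        # A report works against an immutable copy of the base layer,
        # guaranteeing internal consistency across its sub-queries.
        with self._lock:
            return dict(self._base)

    def merge(self):
        # Between queries, the write layer is folded into the base.
        with self._lock:
            self._base.update(self._write_layer)
            self._write_layer.clear()

store = LayeredStore({"sales_q1": 100})
snap = store.query_snapshot()      # report reads from the snapshot
store.update("sales_q1", 120)      # concurrent write goes to the layer
assert snap["sales_q1"] == 100     # report stays internally consistent
store.merge()                      # changes become visible afterwards
```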

    Managing Metadata in Data Warehouses: Pitfalls and Possibilities

    This paper motivates a comprehensive academic study of metadata and the roles that metadata plays in organizational information systems. While the benefits of metadata and the challenges in implementing metadata solutions are widely addressed in practitioner publications, explicit discussion of metadata in the academic literature is rare. Metadata, when discussed, is perceived primarily as a technology solution. Integrated management of metadata and its business value are not well addressed. This paper discusses both the benefits offered by and the challenges associated with integrating metadata. It also describes solutions for addressing some of these challenges. The inherent complexity of an integrated metadata repository is demonstrated by reviewing the metadata functionality required in a data warehouse: a decision support environment where its importance is acknowledged. Comparing this required functionality with the metadata management functionality offered by data warehousing software products identifies crucial gaps. Based on these analyses, topics for further research on metadata are proposed.

    Survey Report on Real-time Data Warehouses

    This report was prepared by Lin Ziyu in October 2006 while pursuing a doctoral degree in the Database Laboratory at Peking University. Outline: Project introduction; Real-Time Data Warehousing: Challenges and Solutions; Our Research Work; References.

    Business Intelligence and Developing Management Reporting : Action research in a case organization

    Business Intelligence (BI) is a process that refers to all the essential information that organizations systematically gather and analyse to support more accurate decision-making. The goal of Business Intelligence is to collect, analyse, process, store, and deliver real-time information for an organization's needs, as well as to monitor its effectiveness. New technologies enable limitless data collection, and data has become a valuable asset for organizations. This, however, has left organizations with the challenge of identifying which data is relevant to their business and operations and, more importantly, of processing that data into useful information and a source of effective decision-making.
Data is needed for efficient decision-making, and up-to-date data brings more value and new opportunities to the organization. With BI tools, data collected from different sources can be integrated and transformed into readable real-time reports, which supports the management decision-making process. In this action research we studied how management reporting can be automated and digitalized with effective Business Intelligence in a case organization. The research was carried out in a global technology company and piloted in one of its business lines. We sought solutions to the case organization's problematic practices and processes in continuous improvement (CI) and cost of poor quality (COPQ) data integration from a management reporting perspective, while promoting change within the organization. The research was limited to the case organization's internal Business Intelligence and internal data. During the research, a user-friendly, visible, online-data-based reporting procedure was created. The resulting automated and digitalized process uses the BI tool Microsoft Power BI and makes processed COPQ information available through a single channel. The research showed that even when progressive and effective BI tools are in use, data quality remains essential. Even the most powerful reporting tools are useless if the data they draw on is not high-quality and clean. According to this research, bad data does not just produce unreliable reports; it is also an obstacle to data integration. If the data is held in various formats in several systems, it is impossible to integrate. Guaranteeing data quality and exploiting data effectively for the organization's needs requires maintaining and cleaning the data regularly. Data quality is primarily a business issue. According to this study, data visualization can be a valuable tool in quality control, as it immediately reveals errors in the data and in data collection, as well as errors in the IT system.
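    As an illustration of the integration problem the study describes — the same COPQ figures held in incompatible formats across systems — a minimal, hypothetical Python sketch of normalization plus a simple quality gate might look like this (all field names and formats are invented):

```python
from datetime import datetime

# Two source systems exporting the "same" cost-of-poor-quality data
# in incompatible shapes -- the integration blocker described above.
system_a = [{"date": "2020-01-31", "cost_eur": "1200.50"}]
system_b = [{"Period": "31.01.2020", "Cost": 980}]

def normalize_a(row):
    return {"period": datetime.strptime(row["date"], "%Y-%m-%d").date(),
            "cost_eur": float(row["cost_eur"])}

def normalize_b(row):
    return {"period": datetime.strptime(row["Period"], "%d.%m.%Y").date(),
            "cost_eur": float(row["Cost"])}

def validate(row):
    # Basic quality gate: reject negative costs and missing periods
    # before the rows ever reach the reporting layer.
    return row["period"] is not None and row["cost_eur"] >= 0

integrated = [r for r in
              (list(map(normalize_a, system_a)) +
               list(map(normalize_b, system_b)))
              if validate(r)]
print(integrated)   # one consistent schema, ready for a BI report
```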

    The challenges of extract, transform and load (ETL) for data integration in near real-time environment

    For organizations with considerable investment in data warehousing, the influx of various data types and forms requires particular ways of preparing data, and a staging platform that supports fast, efficient handling of volatile data so that it reaches its targeted audiences or users with different business needs. The Extract, Transform and Load (ETL) system has proven to be the standard of choice for managing and sustaining the movement and transactional processing of valued big data assets. However, a traditional ETL system can no longer accommodate or effectively handle streaming or near real-time data, or a demanding environment that requires high availability, low latency, and horizontal scalability. This paper identifies the challenges of implementing an ETL system for streaming or near real-time data, which must evolve and streamline itself to meet these different requirements. Current efforts and solution approaches that address the challenges are presented. The classification of ETL system challenges is organized by near real-time environment features and ETL stages, to encourage different perspectives for future research.
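    As a rough illustration of one common answer to these challenges, the following self-contained Python sketch implements a micro-batch ETL loop: events stream into a queue and are periodically drained, transformed, and loaded, trading a small batching window for low latency (the queue, transform, and target are toy stand-ins, not from the paper):

```python
import queue
import threading
import time

events = queue.Queue()   # stand-in for a streaming source (e.g. a CDC feed)
warehouse = []           # stand-in for the target store

def micro_batch_etl(batch_window=0.5):
    """Drain whatever arrived during the window, transform, and load.

    Small windows keep latency low; running several workers that drain
    the shared queue is one simple route to horizontal scaling.
    """
    while True:
        time.sleep(batch_window)
        batch = []
        while not events.empty():
            batch.append(events.get())
        if batch:
            transformed = [{"value": e * 2} for e in batch]  # toy transform
            warehouse.extend(transformed)                    # load step

threading.Thread(target=micro_batch_etl, daemon=True).start()
for i in range(5):
    events.put(i)        # events trickle in near real time
time.sleep(1.0)
print(warehouse)         # loaded within roughly one batch window
```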

    On-site customer analytics and reporting (OSCAR): a portable clinical data warehouse for the in-house linking of hospital and telehealth data

    This document conveys the results of the On-Site Customer Analytics and Reporting (OSCAR) project. This nine-month project started in January 2014 and was conducted at Philips Research in the Chronic Disease Management group as part of the H2H Analytics Project. Philips has access to telehealth data from its Philips Motiva telemonitoring and other services. Previous projects within Philips Research provided a data warehouse for Motiva data and a proof-of-concept (DACTyL) solution that demonstrated the linking of hospital and Motiva data and subsequent reporting. Severe limitations of the DACTyL solution resulted in the initiation of OSCAR. A very important one was the unwillingness of hospitals to share personal patient data outside their premises due to stringent privacy policies, while at the same time personal patient data is required in order to link the hospital data with the Motiva data. Equally important is the fact that DACTyL considered the use of only Motiva as a telehealth source and only a single input interface for the hospitals. OSCAR was initiated to propose a suitable architecture and develop a prototype solution, in contrast to the proof-of-concept DACTyL, with the twofold aim of overcoming the limitations of DACTyL so it could be deployed in a real-life hospital environment, and of expanding the scope to an extensible solution that can be used in the future for multiple telehealth services and multiple hospital environments. In the course of the project, a software solution was designed and subsequently deployed in the form of a virtual machine. The solution implements a data warehouse that links and hosts the collected hospital and telehealth data. Hospital data are collected with the use of a modular, service-oriented data collection component that exposes web services described in WSDL and accepts configurable XML data messages. ETL processes propagate the data, link it, and load it into the OSCAR data warehouse. Automated reporting is achieved using dashboards that provide insight into the data stored in the data warehouse. Furthermore, the linked data is available for export to Philips Research in de-identified format.
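    The actual OSCAR message schema and de-identification procedure are not given here; a small hypothetical Python sketch shows the general pattern of parsing a configurable XML data message and pseudonymizing the patient identifier before linked data leaves the hospital (element and field names are invented):

```python
import hashlib
import xml.etree.ElementTree as ET

# Hypothetical hospital data message (the real OSCAR XML schema is
# configurable and not reproduced here).
message = """
<observation>
  <patientId>NL-12345</patientId>
  <measure name="weight" value="81.4" unit="kg"/>
</observation>
"""

def deidentify(patient_id: str) -> str:
    # One-way pseudonym: the hash still links records for the same
    # patient across hospital and telehealth sources, but the raw
    # identifier never leaves the premises.
    return hashlib.sha256(patient_id.encode()).hexdigest()[:16]

root = ET.fromstring(message)
record = {
    "patient_key": deidentify(root.findtext("patientId")),
    "measure": root.find("measure").get("name"),
    "value": float(root.find("measure").get("value")),
    "unit": root.find("measure").get("unit"),
}
print(record)   # safe to load into the linked warehouse or export
```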

    Comparative Study Of Implementing The On-Premises and Cloud Business Intelligence On Business Problems In a Multi-National Software Development Company

    Internship report presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Nowadays every enterprise wants to be competitive. In the last decade, data volumes have increased dramatically. As data in the market grows each year, the ability to extract, analyze, and manage it becomes a precondition for the organization to remain competitive. Organizations therefore need to adapt their technologies to the new business reality in order to stay competitive and provide new solutions that meet new demands. Business Intelligence, by its main definition, is the ability to extract, analyze, and manage data through which an organization gains a competitive advantage. Before adopting this approach, it is important to decide which computing system it will be based on, considering the volume of data, the business context of the organization, and the technology requirements of the market. In the last 10 years, the growing popularity of cloud computing has divided computing systems into on-premises and cloud. The cloud's benefits lie in scalability, availability, and lower costs. On the other hand, traditional on-premises systems provide independence of software configuration, control over data, and high security. The final decision as to which computing paradigm the organization should follow is not an easy task, and depends on the business context of the organization and on how the current on-premises systems perform in business processes. Business Intelligence functions therefore require in-depth analysis in order to understand whether cloud computing technologies could perform better in those processes than traditional systems. The objective of this internship is to conduct a comparative study of the two computing systems in routine Business Intelligence functions. The study compares on-premises Business Intelligence based on Oracle architecture with cloud Business Intelligence based on Google Cloud services. The comparative study was conducted through 12 months of participation in activities and projects in the Business Intelligence department of a company that develops digital software solutions for the telecommunications market, as an internship student in the 2nd year of a master's degree in Information Management, with a specialization in Knowledge Management and Business Intelligence, at Nova Information Management School (NOVA IMS).
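    One way to compare routine BI functions across the two systems is to time the same query against both backends. The sketch below assumes the python-oracledb and google-cloud-bigquery client libraries; the query, connection details, and project name are placeholders, not the report's actual setup:

```python
import time
import oracledb                        # pip install oracledb
from google.cloud import bigquery      # pip install google-cloud-bigquery

# Illustrative routine BI query; the real workload is not reproduced here.
SQL = "SELECT region, SUM(revenue) FROM sales GROUP BY region"

def time_oracle(dsn, user, password):
    # On-premises side: run the query over a direct database connection.
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        start = time.perf_counter()
        conn.cursor().execute(SQL).fetchall()
        return time.perf_counter() - start

def time_bigquery(project):
    # Cloud side: submit the same query as a BigQuery job and wait for it.
    client = bigquery.Client(project=project)
    start = time.perf_counter()
    list(client.query(SQL).result())
    return time.perf_counter() - start

# Placeholder credentials -- substitute real ones before running.
print("on-premises:", time_oracle("dbhost/orclpdb", "bi_user", "secret"))
print("cloud      :", time_bigquery("my-gcp-project"))
```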

    Extract, Transform, and Load data from Legacy Systems to Azure Cloud

    Internship report presented as partial requirement for obtaining the Master’s degree in Information Management, with a specialization in Knowledge Management and Business Intelligence. In a world of continuously evolving technologies and hardening competitive markets, organisations need to be continually on guard to grasp the cutting-edge technologies and tools that will help them surpass any competition that arises. Modern data platforms that incorporate cloud technologies help organisations get ahead of their competitors by providing solutions for capturing and optimally using untapped data, scalable storage that adapts to ever-growing data quantities, and data processing and visualisation tools that improve the decision-making process. With many cloud providers available in the market, from small players to major technology corporations, organisations have much flexibility to choose the cloud technology that best aligns with their use cases and overall product and service strategy. This internship came about when one of Accenture’s significant clients in the financial industry decided to migrate from legacy systems to a cloud-based data infrastructure, namely the Microsoft Azure cloud. During this internship, the data lake, a core part of the modern data platform (MDP), was developed in order to better understand the types of challenges that can be faced when migrating data from on-premises legacy systems to a cloud-based infrastructure. This work also provides the main recommendations and guidelines for performing a large-scale data migration.
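    As a hedged illustration of the load step of such a migration, the following Python sketch uploads one extracted legacy file into Azure Data Lake Storage Gen2 using the azure-storage-file-datalake client library; the account, container, and path names are hypothetical:

```python
# pip install azure-storage-file-datalake
from azure.storage.filedatalake import DataLakeServiceClient

def load_extract_to_lake(account_name: str, credential, container: str,
                         lake_path: str, local_file: str) -> None:
    """Upload one extracted legacy file into the Azure data lake."""
    service = DataLakeServiceClient(
        account_url=f"https://{account_name}.dfs.core.windows.net",
        credential=credential,
    )
    fs = service.get_file_system_client(file_system=container)
    file_client = fs.get_file_client(lake_path)
    with open(local_file, "rb") as data:
        # overwrite=True makes reruns of the load step idempotent
        file_client.upload_data(data, overwrite=True)

# Placeholder values -- substitute a real account key or AAD credential.
# load_extract_to_lake("mydatalake", "<account-key>", "raw",
#                      "legacy/customers/2020-06-01.csv",
#                      "extracts/customers.csv")
```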
