9 research outputs found

    A metadata-based platform for view computation in multi-source information systems

    A Multi-Source Information System (MSIS) consists of a set of independent data sources and a set of views or queries that define the users' requirements. Its differences from classical information systems introduce new design activities and motivate the development of new techniques. In this article we study a particular case of an MSIS, a Data Warehouse (DW), and propose a meta-model to represent its metadata from two points of view: the representation of the schemas, and the inter-schema relationships that allow a view to be computed from the source data. The meta-model is the core of a general platform for MSIS development. The platform enables the easy integration of design and maintenance tools through a common data model that centralises the data flow and the integrity-control routines between the tools. Topic: Databases. Red de Universidades con Carreras en Informática (RedUNCI)
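    The central idea of the abstract above, a metadata repository holding source schemas plus inter-schema mappings from which views are computed, can be sketched as follows. This is a minimal illustration with invented names (`MetadataRepository`, `compute_view`), not the paper's actual platform, and it restricts each view to a single source for brevity.

```python
class MetadataRepository:
    """Central store for source schemas and view-to-source mappings (hypothetical sketch)."""

    def __init__(self):
        self.schemas = {}   # source name -> list of attribute names
        self.mappings = {}  # view name -> {view attr: (source, source attr)}

    def register_schema(self, source, attributes):
        self.schemas[source] = list(attributes)

    def register_view(self, view, mapping):
        # Integrity-control routine: every mapped attribute must exist in its source schema
        for view_attr, (source, src_attr) in mapping.items():
            if src_attr not in self.schemas.get(source, []):
                raise ValueError(f"{source}.{src_attr} is not in a registered schema")
        self.mappings[view] = dict(mapping)

    def compute_view(self, view, source_rows):
        """Materialise a view; source_rows maps source name -> list of row dicts."""
        mapping = self.mappings[view]
        sources = {s for (s, _) in mapping.values()}
        # This sketch assumes a single-source view; multi-source joins are omitted
        assert len(sources) == 1, "multi-source joins omitted in this sketch"
        (source,) = sources
        return [{va: row[sa] for va, (_, sa) in mapping.items()}
                for row in source_rows[source]]


repo = MetadataRepository()
repo.register_schema("sales_db", ["cust_id", "total"])
repo.register_view("customer_totals", {"customer": ("sales_db", "cust_id"),
                                       "amount": ("sales_db", "total")})
rows = repo.compute_view("customer_totals",
                         {"sales_db": [{"cust_id": 7, "total": 120.0}]})
print(rows)  # [{'customer': 7, 'amount': 120.0}]
```

    Centralising the mappings in one repository is what lets independent design and maintenance tools share a common data model, as the abstract describes.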

    Handling changes in source quality in data integration systems

    Data Integration Systems (DIS) integrate information from a set of heterogeneous, autonomous data sources and provide that information to a set of user views. We consider a system in which quality properties are taken into account: the sources hold the actual values of the quality properties, while the integrated system holds the required values of those properties. In such a system, given the potentially large number of sources and their autonomy, a new problem appears: changes in source quality. The actual quality values of the source elements can change frequently and unpredictably. We are interested in the consequences that changes in source quality may have on the overall quality of the system, and even on the DIS schema and the way its information is processed. We analyse these consequences based on the existing alternatives for handling source schema changes in systems of this kind. In addition, we study two properties in particular, freshness and accuracy, and define strategies for handling changes in these properties.
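    The freshness check described above, actual values at the sources versus required values at the integrated system, can be sketched as a simple propagation: when a source's actual quality changes, find the user views whose requirement is no longer met. All names and the hours-based freshness measure are invented for illustration.

```python
def affected_views(source_freshness, view_requirements, view_sources):
    """Return views whose freshness requirement is violated by their sources.

    source_freshness:  source -> actual data age in hours (lower is fresher)
    view_requirements: view -> maximum tolerated age in hours (required value)
    view_sources:      view -> list of sources the view is computed from
    """
    violated = []
    for view, max_age in view_requirements.items():
        # A view is only as fresh as its stalest source
        worst = max(source_freshness[s] for s in view_sources[view])
        if worst > max_age:
            violated.append(view)
    return violated


actual = {"crm": 2, "erp": 30}          # the erp source's data just became 30 hours old
required = {"daily_report": 24, "live_dashboard": 1}
uses = {"daily_report": ["crm", "erp"], "live_dashboard": ["crm"]}
print(affected_views(actual, required, uses))  # ['daily_report', 'live_dashboard']
```

    A change-handling strategy of the kind the abstract mentions would then decide, per violated view, whether to drop the offending source, relax the requirement, or rewrite the view.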

    Data Model Pattern for Data Warehouse Web Application of Information Portal (Case Study: Hidayatullah Integrated Islamic Boarding School, Banyuasin Regency)

    A data warehouse is a collection of data that is subject-oriented, integrated, time-variant and non-volatile, and which can be used to produce useful information for management decision making. In an information system there is a great deal of information contributed by both internal and external parties, and over time the amount of information increases, so a way is needed to accommodate large volumes of data in a data warehouse. One way to produce a good data model is to use a data model pattern. In this study, a data model pattern is applied to the web application of the information portal of the Hidayatullah Islamic boarding school in Banyuasin Regency. This software was chosen as research material because its business processes are quite varied and describe the activities of all academicians in the Hidayatullah integrated Islamic boarding school, Banyuasin Regency. With well-structured data, the distribution of information to the general public will be faster and more accurate.

    Model of maintenance of business applications in the conditions of changing business environment

    Responding to changes in the business environment requires the modification and addition of new functional components within business applications. The maintenance priority is preserving the integrity of the data and the structure of all validated reports. When the system architecture is changed, the generated characteristics of the business application must satisfy the functional requirements of the users and the change requirements of the regulator of the higher-level information system on which the lower-level information system relies. The described business application maintenance model is defined by a transition function from the initial to the final state of an automaton, with the condition that consistency of data and reports is maintained by introducing a time component for retrieving data and reports.
    Applying the transition function with a time component results in a directed state graph consisting of time nodes that represent the states of the automaton, with the final node representing the final state of the automaton. The paper shows the states of the automaton obtained by applying the time component of the business application maintenance model under a change of legal fiscal policy: an intervention in the tax policy that adds a new tax rate.
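    The time-stamped transition function described above can be sketched as an append-only sequence of time nodes: each change adds a new node, and data or reports are retrieved against the state that was valid at a given time, so earlier validated reports stay consistent. The class and the tax-rate example are hypothetical illustrations, not the paper's actual model.

```python
class VersionedStates:
    """Append-only directed chain of (time, state) nodes (illustrative sketch)."""

    def __init__(self, initial_state):
        self.nodes = [(0, initial_state)]  # (effective time, state) pairs

    def transition(self, time, new_state):
        # The transition function: append a new time node; old nodes stay intact,
        # which is what preserves the consistency of previously validated reports
        assert time > self.nodes[-1][0], "transitions must move forward in time"
        self.nodes.append((time, new_state))

    def state_at(self, time):
        # Retrieve the state valid at `time`: the latest node not after it
        valid = [s for (t, s) in self.nodes if t <= time]
        return valid[-1]


# Example: a tax-policy intervention at time 100 adds a new reduced rate
# without invalidating reports computed against the earlier state.
tax = VersionedStates({"vat": 0.22})
tax.transition(100, {"vat": 0.22, "vat_reduced": 0.10})
print(tax.state_at(50))   # {'vat': 0.22}
print(tax.state_at(150))  # {'vat': 0.22, 'vat_reduced': 0.1}
```

    Reports dated before the intervention resolve against the old node, reports dated after it against the new one, matching the consistency condition in the abstract.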

    Formal design of data warehouse and OLAP systems : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand

    A data warehouse is a single data store where data from multiple data sources is integrated for online analytical processing (OLAP) across an entire organisation. The rationale for being single and integrated is to ensure a consistent view of organisational business performance, independent of the different angles of business perspective. Due to its wide coverage of subjects, data warehouse design is a highly complex, lengthy and error-prone process. Furthermore, business analytical tasks change over time, which results in changes in the requirements for the OLAP systems. Thus, data warehouse and OLAP systems are rather dynamic and the design process is continuous. In this thesis, we propose a method that is integrated, formal and application-tailored to overcome the complexity problem, deal with the system dynamics, improve the quality of the system and increase the chance of success. Our method comprises three important parts: the general ASM method with types, the application-tailored design framework for data warehouse and OLAP, and the schema integration method with a set of provably correct refinement rules. By using the ASM method, we are able to model both data and operations in a uniform conceptual framework, which enables us to design an integrated approach for data warehouse and OLAP design. The freedom given by the ASM method allows us to model the system at an abstract level that is easy to understand for both users and designers. More specifically, the language allows us to use terms from the user domain, not biased by the terms used in computer systems. The pseudo-code-like transition rules, which give the simplest form of operational semantics in ASMs, provide a closeness to programming languages that helps designers understand them. Furthermore, these rules are rooted in mathematics, which assists in improving the quality of the system design.
    By extending the ASMs with types, the modelling language is tailored for data warehousing with terms that are well developed for data-intensive applications, which makes it easy to model schema evolution as refinements in dynamic data warehouse design. By providing the application-tailored design framework, we break down the design complexity by business processes (also called subjects in data warehousing) and by design concerns. By designing the data warehouse subject by subject, our method resembles Kimball's "bottom-up" approach; however, with the schema integration method, our method resolves the stovepipe issue of that approach. By building up a data warehouse iteratively in an integrated framework, our method not only results in an integrated data warehouse, but also resolves the issues of complexity and delayed ROI (Return On Investment) in Inmon's "top-down" approach. By dealing with user change requests in the same way as new subjects, and by modelling data and operations explicitly in a three-tier architecture, namely the data sources, the data warehouse and the OLAP (Online Analytical Processing) tier, our method facilitates dynamic design with system integrity. By introducing a notion of refinement specific to schema evolution, namely schema refinement, to capture the notion of schema dominance in schema integration, we are able to build a set of correctness-proven refinement rules. By providing this set of refinement rules, we simplify the designers' work of verifying design correctness. Nevertheless, we do not aim for a complete set, both because there are many different ways to perform schema integration and because we do not prescribe a particular way of integrating, so as to allow designer-favoured designs. Furthermore, given its flexibility in the process, our method can easily be extended for newly emerging design issues.

    An ambient agent model for reading companion robot

    Reading is essentially a problem-solving task. Like problem solving, it requires effort, planning, self-monitoring, strategy selection, and reflection. As readers attempt to solve difficult problems, reading materials become more complex, demanding more effort and challenging cognition. To address this issue, companion robots can be deployed to assist readers in difficult reading tasks by making the reading process more enjoyable and meaningful. Such robots require an ambient agent model that monitors a reader's cognitive demand, since reading can involve complex tasks and dynamic interactions between the human and the environment. Current cognitive load models are not developed in a form that supports reasoning and are not integrated into companion robots. This study was therefore conducted to develop an ambient agent model of cognitive load and reading performance to be integrated into a reading companion robot. The research activities were based on the Design Science Research Process, Agent-Based Modelling, and the Ambient Agent Framework. The proposed model was evaluated through a series of verification and validation approaches. The verification process included equilibria evaluation and automated trace analysis to ensure that the model exhibits realistic behaviours in accordance with related empirical data and literature. The validation process, which involved a human experiment, showed that a reading companion robot was able to reduce cognitive load during demanding reading tasks. Moreover, the experimental results indicated that integrating the ambient agent model into a reading companion robot enabled the robot to be perceived as a social, intelligent, useful and motivational digital sidekick.
    As an ambient agent model of cognitive load and reading performance was developed, the study's contribution makes new endeavours feasible that aim at designing ambient applications based on human physical and cognitive processes. Furthermore, it also helps in designing more realistic reading companion robots in the future.

    METADATA REPOSITORY MODEL FOR DATA WAREHOUSE SCHEMA EVOLUTION AND INTEGRATION WITH MASTER DATA MANAGEMENT SYSTEM

    The Data Warehouse (DW) environment nowadays is an extremely dynamic one. On the one hand, there are numerous (heterogeneous) data sources subject to frequent changes of data and structure; on the other, there are frequent changes in the information requirements set by business users. The DW has an extremely complex task: it must at all times be able to adapt to changes from the data sources as well as satisfy users' requests for information. The problem explored here is known and recognised in the literature as the DW evolution problem: tracking and storing the scope and structure changes of data and metadata over a very long time period. The academic community has taken some steps towards solving this problem, but there is always room for improving the existing research, as well as for developing new solutions. The goal of this doctoral thesis was to develop a metadata repository model (MDV) based on the Data Vault (DV) method for database modelling. The metadata repository model thus defined is used to integrate a data warehouse (DW) system and a master data management (MDM) system, and to track and manage changes in the DW/MDM data and metadata, as well as in their schemas. In this way, DW schema evolution is carried out solely by extending the existing schema, without loss of information. Also, the complexity of implementing DW schema evolution is decreased compared to traditional approaches based on the relational model.
    Additionally, the MDV repository serves as an extension of the traditional relational database system catalog. In order to build a practical prototype and test the proposed solution, a permanent and comprehensive metadata repository model for integrating and tracking DW/MDM data and schema changes was developed, a formal finite set of fundamental changes over the DW/MDM schema was systematised, a formal algebra for DW/MDM schema maintenance was developed, an architecture of the integrated DW/MDM was proposed, and a prototype of the dual DW/MDM solution was developed and empirically verified.

    MLED_BI: A Novel Business Intelligence Design Approach to Support Multilingualism

    With emerging markets and expanding international cooperation, there is a requirement to support Business Intelligence (BI) applications in multiple languages, a capability which we refer to as Multilingualism (ML). ML in BI is understood in this research as the ability to store descriptive content (such as descriptions of attributes in BI reports) in more than one language at the Data Warehousing (DWH) level and to use this information at the presentation level to provide reports, queries or dashboards in more than one language. Design strategies for data warehouses are typically based on the assumption of a single-language environment. The motivations for this research are the design and performance challenges encountered when implementing ML in a BI data warehouse environment. These include design issues, slow response times, delays in updating reports and changing languages between reports, the complexity of amending existing reports, and the performance overhead. The literature review identified that the underlying cause of these problems is that existing approaches used to enable ML in BI are primarily ad-hoc workarounds which introduce dependency between elements and lead to excessive redundancy. From the literature review, it was concluded that a satisfactory solution to the challenge of ML in BI requires a design approach based on data independence (the concept of immunity from changes) and that such a solution does not currently exist. This thesis presents MLED_BI (Multilingual Enabled Design for Business Intelligence). MLED_BI is a novel design approach which supports data independence and immunity from changes in the design of ML data warehouses and BI systems. MLED_BI extends existing data warehouse design approaches by revising the role of the star schema and introducing a ML design layer to support the separation of language elements. This also facilitates ML at the presentation level by enabling the use of a ML content management system.
    Compared to existing workarounds for ML, the MLED_BI design approach has a theoretical underpinning which allows languages to be added, amended and deleted without requiring a redesign of the star schema; provides support for the manipulation of ML content; improves performance; and streamlines data warehouse operations such as ETL (Extract, Transform, Load). Minor contributions include the development of a novel BI framework to address the limitations of existing BI frameworks and the development of a tool to evaluate changes to BI reporting solutions. The MLED_BI design approach was developed based on the literature review, and a mixed-methods approach was used for validation. Technical elements were validated experimentally using performance metrics, while end-user acceptance was validated qualitatively with end users and technical users from a number of countries, reflecting the ML basis of the research. MLED_BI requires more resources at the design and initial implementation stage than existing ML workarounds, but this is outweighed by improved performance and by the much greater flexibility in ML made possible by the data independence approach of MLED_BI. The MLED_BI design approach enhances existing BI design approaches for use in ML environments.
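    The separation of language elements that MLED_BI argues for can be illustrated with a toy example: dimension rows hold only language-independent keys and measures, while all descriptive text lives in a separate translation table keyed by element and language. Adding a language then becomes a data change rather than a schema redesign. The table and function names here are invented for illustration and are not the thesis's actual design.

```python
# Star-schema dimension row: no descriptive text, only keys and measures
dim_product = [{"product_key": 1, "unit_price": 9.99}]

# Separate ML layer: (element key, language) -> description
translations = {
    (1, "en"): "Coffee mug",
    (1, "de"): "Kaffeetasse",
}

def describe(product_key, language, fallback="en"):
    """Resolve a product description in the requested language, with a fallback."""
    return translations.get((product_key, language),
                            translations[(product_key, fallback)])

# Adding Spanish needs only a new row; the star schema itself is untouched
translations[(1, "es")] = "Taza de café"

print(describe(1, "de"))  # Kaffeetasse
print(describe(1, "fr"))  # Coffee mug  (falls back to English)
```

    Keeping descriptions out of the dimension rows is one way to realise the data-independence idea: language content can change freely without touching the schema the reports depend on.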

    A Logical Model for Data Warehouse Design and Evolution

    No full text