    Formal design of data warehouse and OLAP systems : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand

    A data warehouse is a single data store in which data from multiple sources is integrated for online analytical processing (OLAP) across an entire organisation. The rationale for a single, integrated store is to ensure a consistent view of organisational business performance, independent of the different angles of business perspectives. Because of its wide coverage of subjects, data warehouse design is a highly complex, lengthy and error-prone process. Furthermore, business analytical tasks change over time, which changes the requirements on the OLAP systems. Data warehouse and OLAP systems are therefore rather dynamic, and their design is a continuous process. In this thesis, we propose a method that is integrated, formal and application-tailored, in order to overcome the complexity problem, deal with system dynamics, and improve both the quality of the system and its chance of success. Our method comprises three parts: the general ASM method with types, an application-tailored design framework for data warehouses and OLAP, and a schema integration method with a set of provably correct refinement rules.

    By using the ASM method, we can model both data and operations in a uniform conceptual framework, which enables an integrated approach to data warehouse and OLAP design. The freedom the ASM method allows lets us model the system at an abstract level that is easy to understand for both users and designers. More specifically, the language lets us use terms from the user domain, not biased by the terms used in computer systems. The pseudo-code-like transition rules, which give the simplest form of operational semantics in ASMs, are close to programming languages and hence easy for designers to understand, and because these rules are rooted in mathematics they help improve the quality of the system design. By extending ASMs with types, the modelling language is tailored to data warehousing, with terms that are well developed for data-intensive applications; this makes it easy to model schema evolution as refinements in dynamic data warehouse design.

    The application-tailored design framework breaks the design complexity down by business processes (also called subjects in data warehousing) and by design concerns. By designing the data warehouse subject by subject, our method resembles Kimball's "bottom-up" approach; with the schema integration method, however, it resolves that approach's stovepipe issue. By building up the data warehouse iteratively in an integrated framework, our method not only yields an integrated data warehouse but also avoids the complexity and delayed ROI (return on investment) of Inmon's "top-down" approach. By handling user change requests in the same way as new subjects, and by modelling data and operations explicitly in a three-tier architecture, namely the data sources, the data warehouse and the OLAP layer, our method supports dynamic design while preserving system integrity. By introducing a notion of refinement specific to schema evolution, namely schema refinement, which captures the notion of schema dominance in schema integration, we are able to build a set of provably correct refinement rules; this set simplifies the designers' work in verifying the correctness of a design. Nevertheless, we do not aim for a complete set of rules, since there are many different ways to integrate schemata, nor do we prescribe a single way of integration, so that designers remain free to follow the designs they favour. Furthermore, given its flexibility, the method can easily be extended to address newly emerging design issues.
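    As a rough illustration of the flavour of such rules, the sketch below renders a single ASM-style transition rule in Python: the state is a set of named locations, a guard selects when the rule fires, and the whole update set is applied atomically. The rule, the state layout and all names are hypothetical examples, not taken from the thesis.

        # Illustrative ASM-style transition rule (hypothetical, not from the thesis).
        # State: a mapping from named locations to values. A rule collects updates
        # under a guard and applies them atomically, mirroring ASM semantics.

        def refresh_fact_table(state: dict) -> dict:
            """if staged_rows != [] then fact_table := fact_table + staged_rows, staged_rows := []"""
            updates = {}
            if state["staged_rows"]:                       # guard
                updates["fact_table"] = state["fact_table"] + state["staged_rows"]
                updates["staged_rows"] = []
            new_state = dict(state)                        # apply the update set atomically
            new_state.update(updates)
            return new_state

        state = {"staged_rows": [("2024-01-01", 42.0)], "fact_table": []}
        state = refresh_fact_table(state)                  # fact_table now holds the staged row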

    Data warehouse stream view update with hash filter.

    A data warehouse usually contains large amounts of information representing an integration of base data from one or more external data sources over a long period of time, in order to provide fast query response times. It stores materialized views that provide aggregations (SUM, MAX, MIN, COUNT and AVG) on measure attributes of interest to data warehouse users. The process of updating materialized views in response to modifications of the base data is called materialized view maintenance. Some data warehouse application domains, such as stock markets, credit cards, automated banking and web logs, depend on data sources that are updated as continuous streams of data. In particular, electronic stock trading markets such as the NASDAQ generate large volumes of data, in bursts of up to 4,200 messages per second. This thesis proposes a new view maintenance algorithm (StreamVup), which improves on semi-join methods by using hash filters. The new algorithm first reduces the number of bytes transported through the network for stream tuples, and second reduces the cost of join operations during view updates by eliminating the recomputation of view updates caused by newly arriving duplicate tuples. (Abstract shortened by UMI.) Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2003 .I85. Source: Masters Abstracts International, Volume: 42-05, page: 1753. Adviser: C. I. Ezeife. Thesis (M.Sc.)--University of Windsor (Canada), 2003
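    The following is a minimal Python sketch of the two ideas the abstract attributes to StreamVup, not the authors' implementation: a hash filter built over the base table's join keys screens out stream tuples that cannot join before they are shipped, and a seen-set drops duplicate arrivals before they trigger any recomputation. All names are invented, and treating an identical (key, measure) pair as a duplicate is a simplification.

        FILTER_BITS = 1 << 16

        def build_hash_filter(join_keys):
            # One bit per hashed join key of the warehouse-side base table.
            bits = bytearray(FILTER_BITS // 8)
            for key in join_keys:
                h = hash(key) % FILTER_BITS
                bits[h // 8] |= 1 << (h % 8)
            return bits

        def passes(bits, key):
            h = hash(key) % FILTER_BITS
            return bits[h // 8] & (1 << (h % 8))

        def maintain_sum_view(stream_tuples, bits, view, seen):
            # view: join_key -> running SUM over the measure attribute.
            for key, measure in stream_tuples:
                if not passes(bits, key):       # cannot join: dropped before shipping
                    continue
                if (key, measure) in seen:      # duplicate arrival: no recomputation
                    continue
                seen.add((key, measure))
                view[key] = view.get(key, 0) + measure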

    Data warehouse stream view update with multiple streaming.

    The main objective of data warehousing is to store information representing an integration of base data from single or multiple data sources over an extended period of time. To provide fast access to the data, regardless of the availability of the data sources, data warehouses often use materialized views, which provide aggregations on some attributes to support decision support systems. Updating materialized views in response to modifications in the base data is called materialized view maintenance. In some applications, for example stock market and banking systems, the source data is updated so frequently that it can be considered a continuous stream of data. Keeping the materialized view updated with respect to changes in the base tables in the traditional way causes query response times to increase. This thesis proposes a new view maintenance algorithm for multiple streams which improves on semi-join and hash-filter methods. The proposed algorithm can update a view that joins two base tables where both base tables are data streams (always changing). By using timestamps, building updategrams in parallel, and optimizing the join cost between the two data sources, it reduces the query response time, or execution time, significantly. Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .A336. Source: Masters Abstracts International, Volume: 44-03, page: 1391. Thesis (M.Sc.)--University of Windsor (Canada), 2005
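    As an illustration of joining two always-changing base tables, the sketch below shows a symmetric, timestamped stream join in Python: each arriving tuple is indexed on its join key and probed against the tuples seen so far on the other stream. This loosely follows the abstract's notions of timestamps and per-stream updategrams; the names and structure are assumptions, not the thesis' algorithm.

        from collections import defaultdict

        r_index = defaultdict(list)   # join key -> R tuples seen so far
        s_index = defaultdict(list)   # join key -> S tuples seen so far
        view = []                     # materialized result of R join S

        def on_r_tuple(ts, key, payload):
            r_index[key].append((ts, payload))
            for s_ts, s_payload in s_index[key]:      # probe only matching S tuples
                view.append((key, payload, s_payload, max(ts, s_ts)))

        def on_s_tuple(ts, key, payload):
            s_index[key].append((ts, payload))
            for r_ts, r_payload in r_index[key]:      # probe only matching R tuples
                view.append((key, r_payload, payload, max(ts, r_ts)))

        on_r_tuple(1, "IBM", 100.5)
        on_s_tuple(2, "IBM", "NYSE")   # view now holds ("IBM", 100.5, "NYSE", 2)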

    Updating Data Warehouses with Temporal Data

    There has been a growing trend to use temporal data in a data warehouse for making strategic and tactical decisions. The key idea of temporal data management is to make data available at the right time, at different time intervals. Storing temporal data enables this by making all the different time slices of the data available to whoever needs them, so users with different data latency needs can all be accommodated. Data can be "frozen" via a view on the proper time slice, and data as of a point in time can be obtained across multiple tables or multiple subject areas, resolving consistency and synchronization issues. This paper discusses implementation topics such as temporal data updates, the coexistence of loads and queries against the same table, the performance of load and report queries, and the maintenance of views against tables with temporal data.
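    Below is a small sketch of the "freeze via a view on the proper time slice" idea, using SQLite from Python. The table, columns and dates are invented for the example; the paper's actual schema is not reproduced.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE account_balance (
            account_id INTEGER,
            balance    REAL,
            valid_from TEXT,   -- start of the interval this row covers
            valid_to   TEXT    -- end of the interval; '9999-12-31' marks the current row
        );
        -- "Freeze" the data as of a point in time: every query against this view
        -- sees one consistent slice, regardless of later loads into the table.
        CREATE VIEW balance_as_of_2024_01_01 AS
        SELECT account_id, balance
        FROM account_balance
        WHERE valid_from <= '2024-01-01' AND valid_to > '2024-01-01';
        """)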

    Materializing baseline views for deviation detection exploratory OLAP

    The final publication is available at link.springer.com. Alert-raising and deviation detection in OLAP and exploratory search concern calling the user's attention to variations and non-uniform data distributions, or directing the user to the most interesting explorations of the data. In this paper, we are interested in the ability of a data warehouse to continuously monitor new data and to update accordingly a particular type of materialized views recording statistics, called baselines. It should be possible to detect deviations at various levels of aggregation, and baselines should be fully integrated into the database. We propose Multi-level Baseline Materialized Views (BMV), including the mechanisms to build and refresh them and to detect deviations. We also propose an incremental approach and formula for refreshing baselines efficiently. An experimental setup proves the concept and shows its efficiency.
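    The sketch below illustrates, in Python, how a baseline can be kept incrementally refreshable: each group at a given aggregation level keeps a count, sum and sum of squares, so the mean and variance used for deviation detection can be refreshed from newly arrived data alone. This follows the general idea of incremental refresh, not the paper's exact formulas; all names are illustrative.

        import math
        from collections import defaultdict

        # group (aggregation-level key) -> [count, sum, sum of squares]
        baseline = defaultdict(lambda: [0, 0.0, 0.0])

        def refresh(group, new_values):
            # Incremental refresh: fold only the new values into the summaries;
            # the history never needs to be rescanned.
            n, s, sq = baseline[group]
            for v in new_values:
                n += 1; s += v; sq += v * v
            baseline[group] = [n, s, sq]

        def is_deviation(group, value, k=3.0):
            # Flag values more than k standard deviations from the baseline mean.
            n, s, sq = baseline[group]
            if n < 2:
                return False
            mean = s / n
            var = max((sq - n * mean * mean) / (n - 1), 0.0)
            return abs(value - mean) > k * math.sqrt(var)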

    Efficient Incremental View Maintenance for Data Warehousing

    Data warehousing and on-line analytical processing (OLAP) are essential elements for decision support applications. Since most OLAP queries are complex and are often executed over huge volumes of data, the solution in practice is to employ materialized views to improve query performance. One important issue in utilizing materialized views is maintaining view consistency upon source changes. However, most prior work focused on simple SQL views with distributive aggregate functions, such as SUM and COUNT. This dissertation considers broader types of views than previous work. First, we study views with complex aggregate functions such as variance and regression; such statistical functions are of great importance in practice. We propose a workarea function model and design a generic framework to tackle incremental view maintenance and answering queries using views for such functions. We have implemented this approach in a prototype system based on IBM DB2, and an extensive performance study shows significant performance gains from our techniques. Second, we consider materialized views with PIVOT and UNPIVOT operators. Such operators are widely used for OLAP applications and for querying sparse datasets. We demonstrate that the efficient maintenance of views with PIVOT and UNPIVOT operators requires more generalized operators, called GPIVOT and GUNPIVOT. We formally define and prove the query rewriting rules and propagation rules for such operators. We also design a novel view maintenance framework for applying these rules to obtain an efficient maintenance plan. Extensive performance evaluations reveal the effectiveness of our techniques. Third, materialized views are often integrated from multiple data sources. Due to source autonomy and dynamicity, concurrency may occur during view maintenance. We propose a generic concurrency control framework to solve such maintenance anomalies. This solution extends previous work in that it solves the anomalies under both source data and schema changes, and thus achieves full source autonomy. We have implemented this technique in a data warehouse prototype developed at WPI. The extensive performance study shows that our techniques put little extra overhead on existing concurrent data update processing techniques while allowing for this new functionality.
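    To illustrate why a workarea helps for non-distributive aggregates, the Python sketch below maintains VARIANCE under both inserted and deleted source rows by keeping a count, sum and sum of squares as the workarea. This is a hedged illustration of the general technique; the dissertation's workarea function model and its DB2 implementation are not reproduced, and all names are invented.

        class VarianceWorkarea:
            """Summaries sufficient to maintain VARIANCE under inserts and deletes."""
            def __init__(self):
                self.n, self.s, self.sq = 0, 0.0, 0.0
            def apply_delta(self, inserted, deleted):
                for v in inserted:
                    self.n += 1; self.s += v; self.sq += v * v
                for v in deleted:                # deletions subtract their contribution
                    self.n -= 1; self.s -= v; self.sq -= v * v
            def value(self):
                # Sample variance recomputed from the workarea, never from base data.
                if self.n < 2:
                    return None
                mean = self.s / self.n
                return (self.sq - self.n * mean * mean) / (self.n - 1)

        w = VarianceWorkarea()
        w.apply_delta(inserted=[3.0, 5.0, 7.0], deleted=[])
        w.apply_delta(inserted=[], deleted=[5.0])
        print(w.value())   # variance of {3.0, 7.0} = 8.0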

    A solution for synchronous incremental maintenance of materialized views based on SQL recursive query

    Materialized views are query execution results stored redundantly in the database. They can be used to answer future queries partially or completely, instead of re-executing those queries from scratch. A large number of published works address the maintenance, especially the incremental update, of materialized views, and query rewriting to make use of them. Some of them support materialized views based on recursive queries in the datalog language. Although most datalog queries can be translated into SQL queries and vice versa, this is not the case for recursive queries: recursive queries in datalog find all possible transitive closures, whereas recursive queries in SQL (Common Table Expressions, CTEs) return direct links rather than transitive closures. In this paper, we propose efficient methods for the incremental update of materialized views based on CTEs, and then propose an algorithm for generating source code in C for any input SQL recursive query. The synthesized source code implements our proposed incremental update algorithms for the inserted/deleted/updated record sets in the base tables. The paper focuses mainly on recursive queries whose execution results are directed tree-structured data. Two cases of tree nodes are considered: in the first, a child node has only one parent node; in the second, a child node can have many parent nodes. These two cases represent the two types of relationships between entities in the real world, one-to-many and many-to-many, respectively. For one-to-many relationships, the relationship data accompanies the records describing the child in some fields, and those fields are set to null when a concrete relationship is deleted. For many-to-many relationships, the relationship is stored in a separate table, and concrete relationships are removed by deleting the describing records from that table. Enforcing referential integrity may help to reduce the search space and therefore improve performance. However, either the set of tree nodes or the set of tree edges can be manipulated, and all these combinations lead to different algorithms. Experimental results are provided and discussed to confirm the effectiveness of our proposed method.
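    As an illustration of the kind of incremental step such generated code performs, the Python sketch below maintains a materialized transitive closure over tree-structured data: when an edge (parent, child) is inserted, the new closure pairs are exactly the ancestors-or-self of the parent combined with the descendants-or-self of the child. The paper's null handling and generated C code are not reproduced; all names are assumptions.

        closure = set()   # materialized view: (ancestor, descendant) pairs
        edges = set()     # base table: direct (parent, child) links

        def insert_edge(parent, child):
            # Incremental maintenance: derive only the closure pairs that the
            # new edge introduces, instead of recomputing the view from scratch.
            edges.add((parent, child))
            ancestors = {parent} | {a for (a, d) in closure if d == parent}
            descendants = {child} | {d for (a, d) in closure if a == child}
            closure.update((a, d) for a in ancestors for d in descendants)

        insert_edge("a", "b")
        insert_edge("b", "c")
        print(sorted(closure))   # [('a', 'b'), ('a', 'c'), ('b', 'c')]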
