41 research outputs found

    Multi-Agent System for Decision Support in Enterprises

    Business decisions must rely not only on an organisation's internal data but also on external data about competitors or relevant events. This information can be obtained from the Web but must be integrated with the data in the organisation's Data Warehouse (DW). In this paper we discuss an agent-based integration approach using ontologies. To enable a common understanding of a domain between people and application systems, we introduce a business-rules approach to ontology management. Because the knowledge in an organisation's ontologies is acquired from business users without technical expertise, a simple user interface based on ontology restrictions and predefined templates is used. Once data from the internal DW, the Web, and business rules have been acquired, agents can deduce new knowledge and thereby facilitate the decision-making process. Tasks such as retrieving information about competitors and creating and reviewing OLAP reports are performed autonomously by agents, while business users retain control over their execution through the knowledge base in the ontology. The approach presented in the paper was verified in a case study from the domain of mobile communications, with emphasis on the supply and demand of mobile phones and their accessories.
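
    To make the agent loop concrete, the following is a minimal sketch of the kind of rule-driven deduction the abstract describes: an agent merges facts from the internal DW and the Web and applies a business rule to derive an action. All names (Fact, Rule, DecisionAgent) are hypothetical illustrations, not the paper's actual system.

        # Minimal sketch of a rule-driven decision agent (hypothetical names,
        # not the paper's actual API).
        from dataclasses import dataclass
        from typing import Callable

        @dataclass(frozen=True)
        class Fact:
            subject: str
            predicate: str
            value: object

        @dataclass
        class Rule:
            condition: Callable[[set], bool]  # tests the current fact base
            conclusion: Fact                  # fact to assert when it holds

        class DecisionAgent:
            def __init__(self, rules: list):
                self.rules = rules
                self.facts: set = set()

            def observe(self, facts):
                self.facts.update(facts)

            def deduce(self):
                # Naive forward chaining: apply rules until no new facts appear.
                changed = True
                while changed:
                    changed = False
                    for rule in self.rules:
                        if rule.condition(self.facts) and rule.conclusion not in self.facts:
                            self.facts.add(rule.conclusion)
                            changed = True
                return self.facts

        # Example business rule: flag a phone model when a competitor undercuts
        # the price recorded in our DW.
        rule = Rule(
            condition=lambda fs: (
                Fact("PhoneX", "our_price", 300) in fs
                and Fact("PhoneX", "competitor_price", 250) in fs
            ),
            conclusion=Fact("PhoneX", "action", "review_pricing"),
        )

        agent = DecisionAgent([rule])
        agent.observe({Fact("PhoneX", "our_price", 300),           # from the internal DW
                       Fact("PhoneX", "competitor_price", 250)})   # scraped from the Web
        print(agent.deduce())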

    A unified view of data-intensive flows in business intelligence systems: a survey

    Data-intensive flows are central processes in today's business intelligence (BI) systems, deploying different technologies to deliver data from a multitude of sources in user-preferred, analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources and the more real-time, operational data flows that integrate source data at runtime. Both academia and industry therefore need a clear understanding of the foundations of data-intensive flows and of the challenges of moving towards next-generation BI environments. In this paper we present a survey of today's research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture for a system that manages the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, identifying the challenges that remain to be addressed and showing how current solutions can be applied to address them.
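
    As an illustration of the combination the survey calls for, here is a minimal sketch (with hypothetical table and helper names, not drawn from the paper) of a batched ETL load and a runtime stream flow feeding the same warehouse table.

        # Minimal sketch: a nightly batch load and an operational stream feed
        # populating one target table (hypothetical names, illustrative only).
        import queue
        import sqlite3

        warehouse = sqlite3.connect(":memory:")
        warehouse.execute("CREATE TABLE sales (region TEXT, amount REAL)")

        def batch_etl(rows):
            """Traditional batched ETL: extract, transform, load in bulk."""
            cleaned = [(r["region"].strip().upper(), float(r["amount"])) for r in rows]
            warehouse.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
            warehouse.commit()

        def stream_flow(events: "queue.Queue"):
            """Operational flow: integrate source events at runtime, one by one."""
            while not events.empty():
                e = events.get()
                warehouse.execute("INSERT INTO sales VALUES (?, ?)",
                                  (e["region"].strip().upper(), float(e["amount"])))
            warehouse.commit()

        batch_etl([{"region": " north ", "amount": "120.5"}])
        q = queue.Queue()
        q.put({"region": "south", "amount": 40})
        stream_flow(q)
        print(warehouse.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall())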

    Incorporation of ontologies in data warehouse/business intelligence systems - A systematic literature review

    Semantic Web (SW) techniques, such as ontologies, are used in Information Systems (IS) to cope with the growing need to share and reuse data and knowledge in various research areas. Despite the increasing emphasis on unstructured data analysis in IS, structured data and its analysis remain critical for organizational performance management. This systematic literature review aims to analyze the incorporation and impact of ontologies in Data Warehouse/Business Intelligence (DW/BI) systems, contributing to the current literature by providing a classification of works based on the field of each case study, the SW techniques used, and the authors' motivations for using them, with a focus on DW/BI design, development, and exploration tasks. A search strategy was developed, including the definition of keywords, inclusion and exclusion criteria, and the selection of search engines. Ontologies are mainly defined using the Web Ontology Language (OWL) standard to support multiple DW/BI tasks, such as Dimensional Modeling, Requirement Analysis, Extract-Transform-Load, and BI Application Design. The reviewed authors present a variety of motivations for ontology-driven solutions in DW/BI, such as eliminating or solving data heterogeneity and semantics problems, increasing interoperability, facilitating integration, or providing semantic content for requirements and data analysis. Finally, implications for practice and a research agenda are indicated.
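
    For readers unfamiliar with the technique, the following is a minimal sketch, using the rdflib library rather than any tool from the reviewed papers, of how an OWL ontology can carry dimensional-model semantics: a Dimension class and a Product dimension with a roll-up link, the kind of semantic content the reviewed approaches attach to DW/BI schemas.

        # Minimal sketch of OWL-encoded dimensional-model semantics
        # (hypothetical namespace; illustrative only).
        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL, RDF, RDFS

        DW = Namespace("http://example.org/dw#")  # hypothetical namespace
        g = Graph()
        g.bind("dw", DW)

        g.add((DW.Dimension, RDF.type, OWL.Class))
        g.add((DW.Product, RDF.type, OWL.Class))
        g.add((DW.Product, RDFS.subClassOf, DW.Dimension))
        g.add((DW.rollsUpTo, RDF.type, OWL.ObjectProperty))  # hierarchy link
        g.add((DW.Product, DW.rollsUpTo, DW.Category))

        print(g.serialize(format="turtle"))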

    Data warehouse stream view update with hash filter.

    A data warehouse usually contains large amounts of information representing an integration of base data from one or more external data sources over a long period of time, organised to provide fast query response times. It stores materialized views that provide aggregations (SUM, MAX, MIN, COUNT, and AVG) over measure attributes of interest to data warehouse users. The process of updating materialized views in response to modifications of the base data is called materialized view maintenance. Some data warehouse application domains, such as stock markets, credit cards, automated banking, and web logs, depend on data sources that are updated as continuous streams of data. In particular, electronic stock trading markets such as the NASDAQ generate large volumes of data in bursts of up to 4,200 messages per second. This thesis proposes a new view maintenance algorithm (StreamVup), which improves on semi-join methods by using hash filters. The new algorithm first reduces the number of bytes transported through the network for stream tuples, and second reduces the cost of join operations during view updates by eliminating the recomputation of view updates caused by newly arriving duplicate tuples. (Abstract shortened by UMI.) Thesis (M.Sc.), University of Windsor (Canada), 2003.
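
    The following is a minimal sketch of the hash-filter idea (illustrative only, not the thesis's actual StreamVup implementation): a compact bit vector of base-table key hashes is shipped instead of whole tuples, filtering out non-joining stream tuples and skipping duplicates that would otherwise force recomputation of the same view update.

        # Minimal sketch of a hash-filter semi-join for stream view updates
        # (illustrative only; not the thesis's actual algorithm).
        NUM_BITS = 1024

        def hash_filter(keys, num_bits=NUM_BITS):
            """Build a bit vector marking which key hashes are present."""
            bits = [False] * num_bits
            for k in keys:
                bits[hash(k) % num_bits] = True
            return bits

        # Source site: stream tuples (key, measure) arriving for the view.
        stream_tuples = [(1, 10.0), (2, 5.5), (2, 5.5), (7, 3.0)]  # duplicate key 2

        # Warehouse site: only the small filter crosses the network,
        # not the base table itself.
        base_table_keys = {1, 2, 3, 4}
        bt_filter = hash_filter(base_table_keys)

        seen = set()       # duplicate elimination for arriving tuples
        view_updates = []
        for key, measure in stream_tuples:
            if (key, measure) in seen:
                continue   # duplicate: skip recomputing its view update
            seen.add((key, measure))
            if bt_filter[hash(key) % NUM_BITS]:  # may pass a few false positives
                view_updates.append((key, measure))

        print(view_updates)  # only tuples that actually need joining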

    Data warehouse stream view update with multiple streaming.

    The main objective of data warehousing is to store information representing an integration of base data from single or multiple data sources over an extended period of time. To provide fast access to the data, regardless of the availability of the data sources, data warehouses often use materialized views. Materialized views provide aggregations over some attributes to support Decision Support Systems. Updating materialized views in response to modifications in the base data is called materialized view maintenance. In some applications, for example stock market and banking systems, the source data is updated so frequently that it can be considered a continuous stream of data. Keeping the materialized view updated with respect to changes in the base tables in the traditional way causes query response times to increase. This thesis proposes a new view maintenance algorithm for multiple streams, which improves on semi-join and hash-filter methods. The proposed algorithm can update a view that joins two base tables where both base tables are data streams (always changing). By using timestamps, building updategrams in parallel, and optimizing the join cost between the two data sources, it reduces the query response or execution time significantly. Thesis (M.Sc.), University of Windsor (Canada), 2005.
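
    The following is a minimal sketch of joining two timestamped streams to keep one view current (illustrative only; the "updategram" here is just a list of delta rows, not the thesis's actual structure). Each stream buffers its tuples and probes the other side's buffer, so the view reflects changes on both sides.

        # Minimal sketch of a timestamped two-stream join (hypothetical names).
        import threading

        buffers = {"R": [], "S": []}  # per-stream tuple buffers
        lock = threading.Lock()
        view = []                     # joined (key, r_val, s_val, ts) rows

        def ingest(side, other, key, value, ts):
            """Buffer a new tuple and emit join deltas against the other stream."""
            with lock:
                buffers[side].append((key, value, ts))
                for (k2, v2, ts2) in buffers[other]:
                    if k2 == key:
                        # Order columns consistently (R values before S values)
                        # and tag the row with the later timestamp.
                        r_val, s_val = (value, v2) if side == "R" else (v2, value)
                        view.append((key, r_val, s_val, max(ts, ts2)))

        # Two streams updating concurrently, e.g. trades (R) and quotes (S).
        t1 = threading.Thread(target=ingest, args=("R", "S", 42, "buy", 1))
        t2 = threading.Thread(target=ingest, args=("S", "R", 42, 99.5, 2))
        t1.start(); t2.start(); t1.join(); t2.join()
        print(view)  # [(42, 'buy', 99.5, 2)]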

    SETL: A programmable semantic extract-transform-load framework for semantic data warehouses

    To support better decisions in business analytics, organizations increasingly use external structured, semi-structured, and unstructured data in addition to their (mostly structured) internal data. Current Extract-Transform-Load (ETL) tools are not suitable for this "open world scenario" because they do not consider semantic issues during integration processing. Current ETL tools support neither processing semantic data nor creating a semantic Data Warehouse (DW), a repository of semantically integrated data. This paper describes our programmable Semantic ETL (SETL) framework. SETL builds on Semantic Web (SW) standards and tools and supports developers by offering a number of powerful modules, classes, and methods for (dimensional and semantic) DW constructs and tasks. It thus supports semantic data sources in addition to traditional data sources, semantic integration, and the creation and publication of a semantic (multidimensional) DW as a knowledge base. A comprehensive experimental evaluation on a concrete use case, comparing SETL to a solution built with traditional tools (requiring much more hand-coding), shows that SETL provides better programmer productivity, knowledge base quality, and performance.
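
    The following is a minimal sketch of a semantic ETL step in the spirit of SETL (the class and namespace names are hypothetical, not SETL's actual API), using rdflib to extract relational rows, transform them into RDF observations, and load them into a graph that could be published as a knowledge base.

        # Minimal sketch of a semantic ETL pipeline (hypothetical vocabulary;
        # illustrative only, not SETL's actual API).
        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, XSD

        EX = Namespace("http://example.org/dw#")  # hypothetical vocabulary

        def extract():
            # Stand-in for reading a traditional (relational) source.
            return [{"sale_id": 1, "product": "PhoneX", "amount": 299.0}]

        def transform(rows):
            # Map each row to RDF triples describing a Sale observation.
            for row in rows:
                obs = EX[f"sale/{row['sale_id']}"]
                yield (obs, RDF.type, EX.Sale)
                yield (obs, EX.product, Literal(row["product"]))
                yield (obs, EX.amount, Literal(row["amount"], datatype=XSD.decimal))

        def load(triples):
            g = Graph()
            g.bind("ex", EX)
            for t in triples:
                g.add(t)
            return g

        kb = load(transform(extract()))
        print(kb.serialize(format="turtle"))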