4,631 research outputs found

    Increasing the Efficiency of Rule-Based Expert Systems Applied on Heterogeneous Data Sources

    Nowadays, heterogeneous data sources provided by different research and innovation projects and initiatives are proliferating more and more, and this presents huge opportunities. These developments increase the number of data sources that could be involved in decision-making for a specific purpose, but their heterogeneity makes this task difficult. Traditionally, expert systems try to integrate all information into a main database, but sometimes this information is not easily available, or its integration with other databases is very problematic. In this case, it is essential to establish procedures that perform a distributed integration of their metadata. This process provides a “mapping” of the available information, but only at the logical level; at the physical level, the data remains distributed across several resources. In this sense, this chapter proposes a distributed rule engine extension (DREE) based on edge computing that integrates the metadata provided by different heterogeneous data sources and then applies a mathematical decomposition over the antecedents of the rules. The proposed rule engine increases the efficiency and capability of rule-based expert systems, making it possible to apply rules over distributed and heterogeneous data sources and increasing the size of the data sets that can be involved in the decision-making process.
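    The sketch below illustrates the general idea of decomposing a rule antecedent across distributed sources; it is not the DREE implementation, and all names (DataSource, split_antecedent, the example fields) are assumptions made for this illustration.

```python
# Illustrative sketch only: decompose a rule's antecedent into per-source
# conjuncts, evaluate each conjunct at the source ("edge") that physically
# holds the field, and fire the rule only when every partial result holds.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Condition:
    field: str                      # e.g. "temperature"
    test: Callable[[float], bool]   # e.g. lambda v: v > 30

@dataclass
class Rule:
    antecedent: List[Condition]     # conjunction of conditions
    action: Callable[[], None]

class DataSource:
    """A hypothetical remote source that only knows its own fields."""
    def __init__(self, name: str, fields: Dict[str, float]):
        self.name, self.fields = name, fields

    def evaluate(self, cond: Condition) -> bool:
        return cond.test(self.fields[cond.field])

def split_antecedent(rule: Rule, sources: List[DataSource]) -> Dict[str, List[Condition]]:
    """Map each condition to the source that holds its field (the metadata 'mapping')."""
    plan: Dict[str, List[Condition]] = {s.name: [] for s in sources}
    for cond in rule.antecedent:
        for s in sources:
            if cond.field in s.fields:
                plan[s.name].append(cond)
                break
    return plan

def fire_if_satisfied(rule: Rule, sources: List[DataSource]) -> bool:
    plan = split_antecedent(rule, sources)
    by_name = {s.name: s for s in sources}
    # Each sub-antecedent is checked where the data lives; only booleans travel.
    if all(by_name[n].evaluate(c) for n, conds in plan.items() for c in conds):
        rule.action()
        return True
    return False

# Usage: a rule whose antecedent spans two physically separate sources.
weather = DataSource("weather", {"temperature": 34.0})
traffic = DataSource("traffic", {"congestion": 0.8})
rule = Rule(
    antecedent=[Condition("temperature", lambda v: v > 30),
                Condition("congestion", lambda v: v > 0.5)],
    action=lambda: print("issue heat-and-traffic advisory"),
)
fire_if_satisfied(rule, [weather, traffic])
```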

    Multi-objective scheduling for real-time data warehouses

    The issue of write-read contention is one of the most prevalent problems when deploying real-time data warehouses. With increasing load, updates are increasingly delayed and previously fast queries tend to be slowed down considerably. However, depending on the user requirements, we can improve the response time or the data quality by scheduling the queries and updates appropriately. If both criteria are to be considered simultaneously, we are faced with a so-called multi-objective optimization problem. We transformed this problem into a knapsack problem with additional inequalities and solved it efficiently. Based on our solution, we developed a scheduling approach that provides the optimal schedule with regard to the user requirements at any given point in time. We evaluated our scheduling approach in an extensive experimental study, comparing it with the optimal scheduling policies for each individual optimization objective.
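    The following is a minimal sketch of the knapsack view of the trade-off, not the paper's actual formulation: it packs pending updates into a time budget before the next query runs, so a weight (here called alpha, an assumption) shifts the balance between response time and data freshness. All cost and benefit numbers are made up for illustration.

```python
# Minimal sketch: trade query latency against data freshness by packing
# pending updates into a time budget (a 0/1 knapsack) before queries run.
from itertools import combinations
from typing import List, Tuple

Update = Tuple[str, float, float]   # (name, cost in ms, freshness benefit)

def pick_updates(pending: List[Update], budget_ms: float) -> List[Update]:
    """Brute-force 0/1 knapsack: maximize freshness benefit within the time budget."""
    best, best_value = [], 0.0
    for r in range(len(pending) + 1):
        for subset in combinations(pending, r):
            cost = sum(u[1] for u in subset)
            value = sum(u[2] for u in subset)
            if cost <= budget_ms and value > best_value:
                best, best_value = list(subset), value
    return best

def schedule(pending: List[Update], slack_ms: float, alpha: float) -> List[Update]:
    # alpha close to 1 favors response time (small update budget),
    # alpha close to 0 favors data quality (large update budget).
    budget = (1.0 - alpha) * slack_ms
    return pick_updates(pending, budget)

pending = [("load_sales", 40.0, 5.0), ("load_clicks", 25.0, 3.0), ("load_fx", 10.0, 4.0)]
print(schedule(pending, slack_ms=60.0, alpha=0.5))   # 30 ms budget -> applies load_fx
```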

    Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis

    Exploring data requires a fast feedback loop from the analyst to the system, with a latency below about 10 seconds because of human cognitive limitations. When data becomes large or analysis becomes complex, sequential computations can no longer be completed in a few seconds and data exploration is severely hampered. This article describes a novel computation paradigm called Progressive Computation for Data Analysis or, more concisely, Progressive Analytics, which brings a low-latency guarantee to the programming-language level by performing computations in a progressive fashion. Moving this progressive computation to the language level relieves programmers of exploratory data analysis systems from implementing the whole analytics pipeline in a progressive way from scratch, streamlining the implementation of scalable exploratory data analysis systems. This article describes the new paradigm through a prototype implementation called ProgressiVis, and explains the requirements it implies through examples.
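    The sketch below conveys the progressive-computation idea in plain Python; it does not use the ProgressiVis API, and the quantum size and function names are assumptions. A long-running aggregate is split into bounded work quanta, and after each quantum a usable partial result is handed back, keeping the feedback loop under the latency target.

```python
# Sketch: a progressive mean that reports a running estimate after each
# time quantum instead of blocking until the whole dataset is processed.
import time
from typing import Iterable, Iterator, Tuple

def progressive_mean(values: Iterable[float], quantum_s: float = 0.1) -> Iterator[Tuple[int, float]]:
    """Yield (values_seen, running_mean) after each time quantum, then the exact final result."""
    total, count = 0.0, 0
    deadline = time.monotonic() + quantum_s
    for v in values:
        total += v
        count += 1
        if time.monotonic() >= deadline:       # quantum exhausted: report and resume
            yield count, total / count
            deadline = time.monotonic() + quantum_s
    if count:
        yield count, total / count             # final, exact result

# Usage: the analyst sees an estimate roughly every 100 ms instead of waiting for the end.
for seen, estimate in progressive_mean(x * 0.001 for x in range(2_000_000)):
    print(f"after {seen:,} values: mean ~ {estimate:.3f}")
```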

    Autonomic Database Management: State of the Art and Future Trends

    In recent years, Database Management Systems (DBMS) have increased significantly in size and complexity, making database administration an increasingly time-consuming and expensive task. Database Administrator (DBA) expenses have become a significant part of the total cost of ownership. This results in the need to develop Autonomous Database Management Systems (ADBMS) that manage themselves without human intervention. Accordingly, this paper evaluates the current state of autonomous database systems and identifies gaps and challenges in the achievement of fully autonomic databases. In addition to highlighting technical challenges and gaps, we identify one human factor, gaining the trust of DBAs, as a major obstacle. Without human acceptance and trust, the goal of achieving fully autonomic databases cannot be realized.