    Optimized Generation and Maintenance of Materialized View using Adaptive Mechanism

    A data warehouse stores an enormous amount of data gathered from multiple data sources and is used mainly by managers for analysis. Making this data available quickly is therefore essential. With a materialized view, a query result can be obtained in far less time than by accessing the same data from the base tables. Materializing all views is not feasible, however, since doing so requires storage space and incurs maintenance cost, so the problem is to select the materialized views that minimize both query response time and maintenance cost. In this paper, an effective approach is proposed for the selection and maintenance of materialized views. DOI: 10.17762/ijritcc2321-8169.15050
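    To make the selection trade-off sketched in this abstract concrete, the following Python fragment greedily picks candidate views by their estimated query-time benefit per unit of storage-plus-maintenance cost under a space budget. It is a generic illustration under assumed numbers, not the adaptive mechanism proposed in the paper; the view names, costs, and benefits are hypothetical.

        # Hypothetical greedy selection of materialized views: keep the views with
        # the best ratio of query-time benefit to (maintenance + storage) cost until
        # a space budget is exhausted. Illustrative only; not the paper's algorithm.

        def select_views(candidates, space_budget):
            """candidates: dicts with 'name', 'benefit', 'maintenance', 'size'."""
            chosen, used = [], 0
            ranked = sorted(
                candidates,
                key=lambda v: v["benefit"] / (v["maintenance"] + v["size"]),
                reverse=True,
            )
            for v in ranked:
                if used + v["size"] <= space_budget and v["benefit"] > v["maintenance"]:
                    chosen.append(v["name"])
                    used += v["size"]
            return chosen

        # Made-up candidates: benefit = saved query time per refresh cycle.
        views = [
            {"name": "sales_by_region", "benefit": 120, "maintenance": 20, "size": 50},
            {"name": "sales_by_day",    "benefit": 300, "maintenance": 80, "size": 200},
            {"name": "returns_by_sku",  "benefit": 15,  "maintenance": 30, "size": 40},
        ]
        print(select_views(views, space_budget=260))  # ['sales_by_region', 'sales_by_day']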

    ViewDF: a Flexible Framework for Incremental View Maintenance in Stream Data Warehouses

    Increasing data sizes and demands for low latency in modern data analysis push traditional data warehousing technologies well beyond their limits. Several stream data warehouse (SDW) systems, warehouses that ingest append-only data feeds and support frequent refresh cycles, have been proposed, along with different methods to improve their responsiveness. Materialized views are critical in large-scale data warehouses because of their ability to speed up queries, so an SDW maintains layers of materialized views. Materialized view maintenance in SDW systems introduces new challenges, yet some existing SDW systems do not address view maintenance at all, while others employ maintenance techniques that are not efficient. This thesis presents ViewDF, a flexible framework for incremental maintenance of materialized views in SDW systems that generalizes existing techniques and enables new optimizations for views defined with operators common in stream analytics. We give a special view definition (ViewDF) that enhances the traditional way of creating views in SQL by allowing any partition of any table to be referenced. We describe a prototype system based on this idea, which lets users write ViewDFs directly and can automatically translate a broad class of queries into ViewDFs. Several optimizations are proposed, and experiments show that the proposed system can improve view maintenance time by a factor of two or more in practical settings.
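    A minimal sketch of the partition-level incremental refresh that such a framework builds on (a generic illustration, not the ViewDF prototype itself): the materialized view keeps one pre-aggregated value per partition of an append-only feed, so a refresh only touches the partitions that received new rows. The table layout and field names below are assumptions.

        # Minimal sketch of partition-level incremental view maintenance over an
        # append-only feed: the view stores one aggregate per partition, so a
        # refresh recomputes only the partitions that received new rows instead of
        # the whole view. Illustrative only; not the ViewDF prototype.
        from collections import defaultdict

        class PartitionedSumView:
            def __init__(self):
                self.per_partition = defaultdict(float)  # partition id -> running SUM

            def refresh(self, partition_id, new_rows):
                """Apply only the newly ingested rows of one partition."""
                self.per_partition[partition_id] += sum(r["amount"] for r in new_rows)

            def result(self):
                # The full view result is assembled from the per-partition aggregates.
                return dict(self.per_partition)

        view = PartitionedSumView()
        view.refresh("2024-01-01", [{"amount": 10.0}, {"amount": 2.5}])
        view.refresh("2024-01-02", [{"amount": 7.0}])
        view.refresh("2024-01-02", [{"amount": 1.0}])  # later micro-batch, same partition
        print(view.result())  # {'2024-01-01': 12.5, '2024-01-02': 8.0}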

    Mining interesting events on large and dynamic data

    Nowadays, almost every human interaction produces some form of data. These data are available either to every user, e.g. images uploaded on Flickr, or to users with specific privileges, e.g. transactions in a bank. The huge amount of data produced can easily overwhelm people trying to make sense of it, so methods are needed that analyse the content of the data, identify emerging topics in it, and present those topics to users. In this work, we focus on identifying emerging topics over large and dynamic data. More specifically, we analyse two types of data: data published in social networks like Twitter and Flickr, and structured data stored in relational databases that are updated through continuous insertion queries. In social networks, users post text, images or videos and annotate each item with a set of tags describing its content. We define sets of co-occurring tags to represent topics and track the correlations of co-occurring tags over time. We split the tags across multiple nodes and make each node responsible for computing the correlations of its assigned tags. We implemented our approach in Storm, a distributed processing engine, and conducted a user study to estimate the quality of our results. In structured data stored in relational databases, top-k group-by queries are defined, and an emerging topic is considered to be a change in the top-k results. We maintain the top-k result sets in the presence of updates while minimising interaction with the underlying database. We implemented and experimentally tested our approach.

    Nowadays, almost every human action and interaction produces data: photos are shared on Flickr, news spreads via Twitter, and contacts are managed on LinkedIn and Facebook, alongside traditional processes such as bank transactions or flight bookings that produce changes in databases. Such a huge amount of data can easily become overwhelming when trying to extract its essence. New methods are needed to analyse the content of the data, identify newly emerging topics, and present the insights gained to the user in a clear way. This work addresses methods for identifying new topics in large and dynamic data sets, considering on the one hand data published in social networks such as Twitter and Flickr, and on the other hand structured data in relational databases that are continuously updated. In social networks, users post texts, images or videos and describe them for other users with keywords, so-called tags. We interpret groups of co-occurring tags as a kind of topic and track the relationship, or correlation, of these tags over time; abrupt increases in correlation are taken as an indication of trends. The core task, counting co-occurring tags in order to compute correlation measures, is distributed across a large number of compute nodes. The developed algorithms were implemented in Storm, a novel distributed data stream management system, and evaluated carefully with respect to load balancing and the network load incurred. A user study further shows that the quality of the obtained trends is higher than that of the results of existing systems. In structured data of relational database systems, top-k result lists are defined by aggregation queries in SQL, and changes occurring in these lists are interpreted as events (trends). This work presents methods for maintaining these result lists as efficiently as possible in order to minimise interactions with the underlying database.
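    The top-k monitoring task described above can be pictured with a small sketch (a toy in-memory version, not the thesis's technique, which additionally minimises round trips to the underlying database): counts per group are maintained under insertions, and an event is reported whenever the membership of the top-k set changes.

        # Toy sketch of maintaining a top-k group-by count under insertions and
        # flagging an "event" whenever the top-k membership changes. The thesis's
        # methods additionally minimise interaction with the underlying database;
        # this version keeps all counts in memory purely to illustrate the idea.
        from collections import Counter

        class TopKMonitor:
            def __init__(self, k):
                self.k = k
                self.counts = Counter()
                self.current_topk = set()

            def insert(self, group):
                self.counts[group] += 1
                new_topk = {g for g, _ in self.counts.most_common(self.k)}
                changed = new_topk != self.current_topk
                self.current_topk = new_topk
                return changed  # True signals an interesting event

        mon = TopKMonitor(k=2)
        for g in ["a", "a", "b", "c", "c", "c"]:
            if mon.insert(g):
                print("top-k changed:", sorted(mon.current_topk))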

    Compilation Techniques for Incremental Collection Processing

    Many map-reduce frameworks as well as NoSQL systems rely on collection programming as their interface of choice due to its rich semantics along with an easily parallelizable set of primitives. Unfortunately, the potential of collection programming is not entirely fulfilled by current systems, as they lack efficient incremental view maintenance (IVM) techniques for queries producing large nested results. This is a consequence of the fact that the nesting of collections does not enjoy the same algebraic properties that underpin the optimization potential of typical collection processing constructs. We propose the first solution for the efficient incrementalization of collection programming in terms of its core constructs, as captured by the positive nested relational calculus (NRC+) on bags (with integer multiplicities). We take an approach based on delta query derivation, whose goal is to generate delta queries which, given a small change in the input, can update the materialized view more efficiently than via recomputation. More precisely, we model the cost of NRC+ operators and classify queries as efficiently incrementalizable if their delta has a strictly lower cost than full re-evaluation. Then, we identify IncNRC+, a large fragment of NRC+ that is efficiently incrementalizable, and we provide a semantics-preserving translation that takes any NRC+ query to a collection of IncNRC+ queries. Furthermore, we prove that incremental maintenance for NRC+ is within the complexity class NC^0, and we showcase how Recursive IVM, a technique that has provided significant speedups over traditional IVM in the case of flat queries, can also be applied to IncNRC+. Existing systems are also limited with respect to the size of inner collections that they can effectively handle before running into severe performance bottlenecks. In particular, in the face of nested collections with skewed cardinalities, developers typically have to undergo a painful process of manual query rewrites to ensure that the largest inner collections in their workloads are not impacted by these limitations. To address these issues we developed SLeNDer, a compilation framework that, given a nested query, generates a set of semantically equivalent (partially) shredded queries that can be efficiently evaluated and incrementalized using state-of-the-art techniques for handling skew and applying delta changes, respectively. The derived queries expose nested collections to the same opportunities for distributing their processing and incrementally updating their contents as those enjoyed by top-level collections, leading on our benchmark to speedups of up to 16.8x and 21.9x in offline and online processing, respectively. To enable efficient IVM for the increasingly common case of collection programming with functional values, as in Links, we also discuss the efficient incrementalization of simply-typed lambda calculi, under the constraint that their primitives are themselves efficiently incrementalizable.
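    The delta idea at the heart of this approach can be illustrated on flat bags with integer multiplicities (a minimal sketch of plain delta-based IVM, not the nested NRC+/IncNRC+ machinery): instead of recomputing a materialized bag, a delta bag carrying positive and negative multiplicities is merged into it.

        # Sketch of delta-based view maintenance on bags with integer multiplicities:
        # the materialized bag is updated by merging in a delta bag (elements with
        # positive or negative multiplicities) rather than being recomputed. Only
        # the flat case is shown, not the nested NRC+/IncNRC+ setting of the paper.
        from collections import Counter

        def apply_delta(view, delta):
            """Merge a delta bag into a materialized bag, dropping zero multiplicities."""
            for elem, mult in delta.items():
                view[elem] += mult
                if view[elem] == 0:
                    del view[elem]
            return view

        # Materialized view: a bag of (customer, product) pairs.
        view = Counter({("alice", "book"): 2, ("bob", "pen"): 1})

        # Delta produced by a small input change: one insertion, one deletion.
        delta = {("alice", "book"): 1, ("bob", "pen"): -1}

        print(apply_delta(view, delta))  # Counter({('alice', 'book'): 3})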

    Incremental Maintenance for Non-Distributive Aggregate Functions

    Incremental view maintenance is a well-known topic that has been addressed in the literature as well as implemented in database products.
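    A standard example of the difficulty with non-distributive aggregates (a generic illustration, not the technique of this work): the old value of AVG alone is not enough to apply an update, but the aggregate can still be maintained incrementally by carrying SUM and COUNT as auxiliary state.

        # Minimal sketch of incrementally maintaining a non-distributive aggregate:
        # AVG cannot be updated from the old average alone, but it can be maintained
        # by also storing SUM and COUNT as auxiliary state. Generic illustration,
        # not the specific technique of the cited work.

        class IncrementalAvg:
            def __init__(self):
                self.total = 0.0   # auxiliary state: running SUM
                self.count = 0     # auxiliary state: running COUNT

            def apply(self, inserted=(), deleted=()):
                """Apply a batch of inserted and deleted values, return the new AVG."""
                self.total += sum(inserted) - sum(deleted)
                self.count += len(inserted) - len(deleted)
                return self.total / self.count if self.count else None

        avg = IncrementalAvg()
        print(avg.apply(inserted=[10, 20, 30]))        # 20.0
        print(avg.apply(inserted=[40], deleted=[10]))  # (20 + 30 + 40) / 3 = 30.0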

    Risk Assessment and Collaborative Information Awareness for Plan Execution

    Joint organizational planning and plan execution in risk-prone environments has seen renewed research interest given its potential for agility and cost reduction. Participants are often asked to quickly plan and execute tasks in partially known or hostile environments. This requires advanced decision support systems for situational response, in which state-of-the-art technologies are used to handle issues such as plan risk assessment, appropriate information exchange, asset localization, and adaptive planning with risk mitigation. Toward this end, this thesis contributes innovative approaches to these issues, focusing on logistic support over a risk-prone transport network, since many organizational plans have key logistic components. Plan risk assessment involves evaluating properties for vehicle risk exposure, cost bounds, and contingency options. Appropriate information exchange involves participant-specific shared information awareness under unreliable communication. Asset localization mandates efficient sensor network management. Adaptive planning with risk mitigation entails replanning under limited risk exposure, factoring in potential vehicle and cargo loss. In this pursuit, the thesis first investigates risk assessment for asset movement and contingency valuation using probabilistic model checking and decision trees, followed by a gossip-based protocol for hierarchy-aware shared information awareness, also assessed via probabilistic model checking. The thesis then proposes an evolutionary learning heuristic for efficiently managing sensor networks constrained in sensor range, capacity, and energy use. Finally, it presents a learning-based heuristic for cost-effective adaptive logistic planning with risk mitigation. Instructive case studies are provided for each contribution, along with benchmark results evaluating the performance of the proposed heuristic techniques.
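    As a rough illustration of the gossip-style dissemination mentioned above (a generic push-gossip simulation, not the hierarchy-aware protocol of the thesis or its model-checked analysis), the sketch below spreads one piece of plan information by having every informed node forward it to a few randomly chosen peers per round; the fan-out and network size are arbitrary assumptions.

        # Rough sketch of push-style gossip dissemination: each round, every node
        # that already holds the information forwards it to a few random peers.
        # Generic illustration only; the thesis's protocol is hierarchy-aware and
        # was analysed with probabilistic model checking, which this toy omits.
        import random

        def gossip_rounds(num_nodes, fanout=2, seed=0):
            random.seed(seed)
            informed = {0}                      # node 0 initially holds the information
            rounds = 0
            while len(informed) < num_nodes:
                reached = set(informed)
                for _ in informed:              # every informed node contacts `fanout` peers
                    reached.update(random.sample(range(num_nodes), fanout))
                informed = reached
                rounds += 1
            return rounds

        print(gossip_rounds(num_nodes=50))  # rounds until all 50 nodes are informed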