399 research outputs found

    CubiST++: Evaluating Ad-Hoc CUBE Queries Using Statistics Trees

    We report on a new, efficient encoding for the data cube, which results in a drastic speed-up of OLAP queries that aggregate along any combination of dimensions over numerical and categorical attributes. We focus on a class of queries called cube queries, which return aggregated values rather than sets of tuples. Our approach, termed CubiST++ (Cubing with Statistics Trees Plus Families), represents a drastic departure from existing relational (ROLAP) and multi-dimensional (MOLAP) approaches in that it does not use the view lattice to compute and materialize new views from existing views in some heuristic fashion. Instead, CubiST++ encodes all possible aggregate views in the leaves of a new data structure called the statistics tree (ST) during a one-time scan of the detailed data. To optimize queries involving constraints on the hierarchy levels of the underlying dimensions, we select and materialize a family of candidate trees, which represent superviews over the different hierarchical levels of the dimensions. Given a query, our query evaluation algorithm selects the smallest tree in the family that can provide the answer. Extensive evaluations of our prototype implementation have demonstrated its superior run-time performance and scalability when compared with existing MOLAP and ROLAP systems.
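
    The core idea, computing every aggregate view in a single pass rather than materializing views from a lattice in some heuristic fashion, can be illustrated with a toy aggregator. The Python sketch below is a hypothetical stand-in for the statistics tree (not the paper's ST structure): it enumerates all dimension subsets per row, using '*' to mark aggregated-out dimensions.

```python
from itertools import combinations
from collections import defaultdict

# Toy illustration: maintain counts for every dimension combination
# in a single scan, in the spirit of encoding all aggregate views at
# once. A real statistics tree stores these in tree leaves instead.
def one_scan_cube(rows, dims):
    cube = defaultdict(int)
    for row in rows:
        # For each subset of dimensions, '*' marks an aggregated-out dim.
        for r in range(len(dims) + 1):
            for subset in combinations(dims, r):
                key = tuple(row[d] if d in subset else '*' for d in dims)
                cube[key] += 1
    return cube

rows = [{'region': 'EU', 'product': 'A'},
        {'region': 'EU', 'product': 'B'},
        {'region': 'US', 'product': 'A'}]
cube = one_scan_cube(rows, ['region', 'product'])
print(cube[('EU', '*')])   # 2: all EU rows, product aggregated out
print(cube[('*', '*')])    # 3: grand total
```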

    An architecture for recycling intermediates in a column-store

    Automatically recycling intermediate results to improve both query response time and throughput is a grand challenge.

    Heap Abstractions for Static Analysis

    Heap data is potentially unbounded and seemingly arbitrary. As a consequence, unlike stack and static memory, heap memory cannot be abstracted directly in terms of a fixed set of source variable names appearing in the program being analysed. This makes it an interesting topic of study and there is an abundance of literature employing heap abstractions. Although most studies have addressed similar concerns, their formulations and formalisms often seem dissimilar and sometimes even unrelated. Thus, the insights gained in one description of heap abstraction may not directly carry over to another. This survey is a result of our quest for a unifying theme in the existing descriptions of heap abstractions. In particular, our interest lies in the abstractions and not in the algorithms that construct them. In our search for a unifying theme, we view a heap abstraction as consisting of two features: a heap model to represent the heap memory and a summarization technique for bounding the heap representation. We classify the models as storeless, store-based, and hybrid. We describe various summarization techniques based on k-limiting, allocation sites, patterns, variables, other generic instrumentation predicates, and higher-order logics. This approach allows us to compare the insights of a large number of seemingly dissimilar heap abstractions and also paves the way for creating new abstractions by mix-and-match of models and summarization techniques.
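
    To make one of the surveyed summarization techniques concrete, here is a minimal, hypothetical Python sketch of allocation-site summarization in a store-based model: all objects allocated at the same program point collapse into one summary node, so field stores must be weak updates. The class and the site labels are illustrative and not drawn from the survey.

```python
# A toy store-based heap model with allocation-site summarization:
# every object created at the same allocation site is merged into
# one summary node, which bounds the abstract heap.
class AbstractHeap:
    def __init__(self):
        self.nodes = {}          # allocation site -> summary node id
        self.points_to = {}      # (node, field) -> set of target nodes

    def allocate(self, site):
        # All allocations at `site` yield the same abstract node.
        return self.nodes.setdefault(site, f"n@{site}")

    def store(self, node, field, target):
        # Weak update: a summary node may stand for many objects.
        self.points_to.setdefault((node, field), set()).add(target)

heap = AbstractHeap()
a = heap.allocate("list.py:3")   # two allocations at the same site ...
b = heap.allocate("list.py:3")   # ... collapse to one summary node
assert a == b
heap.store(a, "next", heap.allocate("list.py:7"))
print(heap.points_to)
```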

    Forecasting in Database Systems

    Time series forecasting is a fundamental prerequisite for decision-making processes and crucial in a number of domains such as production planning and energy load balancing. In the past, forecasting was often performed by statistical experts in dedicated software environments outside of current database systems. However, forecasts are increasingly required by non-expert users or have to be computed fully automatically without any human intervention. Furthermore, we can observe an ever-increasing data volume and the need for accurate and timely forecasts over large multi-dimensional data sets. As most data subject to analysis is stored in database management systems, a rising trend addresses the integration of forecasting inside a DBMS. Yet, many existing approaches follow a black-box style and try to keep changes to the database system as minimal as possible. While such approaches are more general and easier to realize, they miss significant opportunities for improved performance and usability. In this thesis, we introduce a novel approach that seamlessly integrates time series forecasting into a traditional database management system. In contrast to flash-back queries that allow a view on the data in the past, we have developed a Flash-Forward Database System (F2DB) that provides a view on the data in the future. It supports a new query type - a forecast query - that enables forecasting of time series data and is automatically and transparently processed by the core engine of an existing DBMS. We discuss necessary extensions to the parser, optimizer, and executor of a traditional DBMS. We furthermore introduce various optimization techniques for three different types of forecast queries: ad-hoc queries, recurring queries, and continuous queries. First, we ease the expensive model creation step of ad-hoc forecast queries by reducing the amount of processed data with traditional sampling techniques. Second, we decrease the runtime of recurring forecast queries by materializing models in a specialized index structure. However, a large number of time series as well as high model creation and maintenance costs require a careful selection of such models. Therefore, we propose a model configuration advisor that determines a set of forecast models for a given query workload and multi-dimensional data set. Finally, we extend forecast queries with continuous aspects, allowing an application to register a query once at our system. As new time series values arrive, we send notifications to the application based on predefined time and accuracy constraints. All of our optimization approaches aim to increase the efficiency of forecast queries while ensuring high forecast accuracy.
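
    As a rough illustration of the recurring-query optimization, materializing models in an index so that the expensive model-creation step is skipped, consider the following sketch. It uses simple exponential smoothing as a stand-in model; F2DB's actual model classes, index structure, and configuration advisor are not shown, and all names are hypothetical.

```python
# Minimal sketch (not F2DB's engine): materialize forecast models in
# an index so recurring forecast queries reuse them; models are
# incrementally maintained as new time series values arrive.
class ModelIndex:
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.models = {}              # series key -> smoothed level

    def update(self, key, value):
        # Incremental maintenance via simple exponential smoothing.
        prev = self.models.get(key, value)
        self.models[key] = self.alpha * value + (1 - self.alpha) * prev

    def forecast(self, key, horizon):
        # A flat forecast from the materialized model; a real system
        # would pick richer models via a configuration advisor.
        return [self.models[key]] * horizon

idx = ModelIndex()
for v in [100, 110, 105, 120]:        # new values arrive over time
    idx.update("energy_load", v)
print(idx.forecast("energy_load", 3)) # answer a recurring forecast query
```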

    Efficient Online Processing for Advanced Analytics

    With the advent of emerging technologies and the Internet of Things, the importance of online data analytics has become more pronounced. Businesses and companies are adopting approaches that provide responsive analytics to stay competitive in the global marketplace. Online analytics allow data analysts to promptly react to patterns or to gain preliminary insights from early results that aid in research, decision making, and effective strategy planning. The growth of data velocity in a variety of domains, including high-frequency trading, social networks, infrastructure monitoring, and advertising, requires adopting online engines that can efficiently process continuous streams of data. This thesis presents foundations, techniques, and system designs that extend the state of the art in online query processing to efficiently support relational joins with arbitrary join predicates (beyond traditional equi-joins), and to support other data models (beyond relational) that target machine learning and graph computations. The thesis is divided into two parts. We first present a brief overview of Squall, our open-source online query processing engine that supports SQL-like queries on top of streams. Then, we focus on extending Squall to support efficient theta-join processing. Scalable distributed join processing requires a partitioning policy that evenly distributes the processing load while minimizing the size of maintained state and the number of duplicated messages. Efficient load balancing demands a priori statistics, which are not available in the online setting. We propose a novel operator that continuously adjusts itself to the data dynamics through adaptive dataflow routing and state repartitioning. It is also resilient to data skew, maintains high throughput rates, avoids blocking during state repartitioning, and behaves as a black-box dataflow operator with provable performance guarantees. Our evaluation demonstrates that the proposed operator outperforms state-of-the-art static partitioning schemes in resource utilization, throughput, and execution time by up to 7x. In the second part, we present a novel framework that supports the incremental view maintenance (IVM) of workloads expressed as linear algebra programs. Linear algebra represents a concrete substrate for advanced analytical tasks including machine learning, scientific computation, and graph algorithms. Previous work on relational calculus IVM is not applicable to matrix algebra workloads, because a single entry change to an input matrix results in changes all over the intermediate views, rendering IVM useless in comparison to re-evaluation. We present Lago, a unified modular compiler framework that supports the IVM of a broad class of linear algebra programs. Lago automatically derives and optimizes incremental trigger programs of analytical computations, while freeing the user from erroneous manual derivations, low-level implementation details, and performance tuning. We present a novel technique that captures Δ changes as low-rank matrices. Low-rank matrices are representable in a compressed factored form that enables cheaper computations. Lago automatically propagates the factored representation across program statements to derive an efficient trigger program. Moreover, Lago extends its support to other domains that use different semi-ring configurations, e.g., graph applications. Our evaluation results demonstrate orders-of-magnitude speedups.
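
    The low-rank delta idea can be demonstrated in a few lines of NumPy: a rank-1 change to an input matrix induces a rank-1 change to a matrix-product view, and the factored form computes it far more cheaply than re-evaluation. This is a generic sketch of the principle, not Lago's compiler machinery.

```python
import numpy as np

# If an input matrix A changes by a rank-1 delta dA = u @ v.T, the
# change to the view C = A @ B is also rank-1 and can be computed
# without re-evaluating the full product.
n, m, k = 1000, 1000, 1000
A, B = np.random.rand(n, m), np.random.rand(m, k)
C = A @ B                        # materialized view

u = np.random.rand(n, 1)         # factored delta: dA = u @ v.T
v = np.random.rand(m, 1)

# Incremental maintenance: O(nm + mk) work instead of the O(nmk)
# cost of recomputing A @ B from scratch.
dC = u @ (v.T @ B)               # propagate the factored form
C += dC

assert np.allclose(C, (A + u @ v.T) @ B)
```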

    Sampling Algorithms for Evolving Datasets

    Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up the processing of analytic queries and data-mining tasks, to enhance query optimization, and to facilitate information integration. Most of the existing work on database sampling focuses on how to create or exploit a random sample of a static database, that is, a database that does not change over time. The assumption of a static database, however, severely limits the applicability of these techniques in practice, where data is often not static but continuously evolving. In order to maintain the statistical validity of the sample, any changes to the database have to be appropriately reflected in the sample. In this thesis, we study efficient methods for incrementally maintaining a uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions, updates, and deletions. We consider instances of the maintenance problem that arise when sampling from an evolving set, from an evolving multiset, from the distinct items in an evolving multiset, or from a sliding window over a data stream. Our algorithms completely avoid any accesses to the base data and can be several orders of magnitude faster than algorithms that do rely on such expensive accesses. The improved efficiency of our algorithms comes at virtually no cost: the resulting samples are provably uniform and only a small amount of auxiliary information is associated with the sample. We show that the auxiliary information not only facilitates efficient maintenance, but it can also be exploited to derive unbiased, low-variance estimators for counts, sums, averages, and the number of distinct items in the underlying dataset. In addition to sample maintenance, we discuss methods that greatly improve the flexibility of random sampling from a system's point of view. More specifically, we initiate the study of algorithms that resize a random sample upwards or downwards. Our resizing algorithms can be exploited to dynamically control the size of the sample when the dataset grows or shrinks; they facilitate resource management and help to avoid under- or oversized samples. Furthermore, in large-scale databases with data being distributed across several remote locations, it is usually infeasible to reconstruct the entire dataset for the purpose of sampling. To address this problem, we provide efficient algorithms that directly combine the local samples maintained at each location into a sample of the global dataset. We also consider a more general problem, where the global dataset is defined as an arbitrary set or multiset expression involving the local datasets, and provide efficient solutions based on hashing.
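
    For context, the insertion-only baseline that this work generalizes is classic reservoir sampling, which maintains a provably uniform sample without touching the base data. The sketch below shows only that baseline; supporting updates, deletions, resizing, and merging of distributed samples is precisely what the thesis's algorithms add.

```python
import random

# Classic reservoir sampling (Algorithm R): after n items, each item
# is in the size-k sample with probability k/n, and the base data is
# never re-accessed. Insertion-only baseline, for illustration.
def reservoir_sample(stream, k, rng=random.Random(42)):
    sample = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            sample.append(item)          # fill the reservoir
        else:
            j = rng.randrange(n)         # accept with probability k/n
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(1_000_000), k=10))
```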

    Efficient Scalable Accurate Regression Queries in In-DBMS Analytics

    Recent trends aim to incorporate advanced data analytics capabilities within DBMSs. Linear regression queries are fundamental to exploratory analytics and predictive modeling. However, computing their exact answers leaves a lot to be desired in terms of efficiency and scalability. We contribute a novel predictive analytics model and associated regression query processing algorithms, which are efficient, scalable, and accurate. We focus on predicting the answers to two key query types that reveal dependencies between the values of different attributes: (i) mean-value queries and (ii) multivariate linear regression queries, both within specific data subspaces defined based on the values of other attributes. Our algorithms achieve many orders of magnitude improvement in query processing efficiency and near-perfect approximations of the underlying relationships among data attributes.
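
    One generic way to answer such queries without scanning the base data, shown below as a hedged sketch rather than the paper's actual model, is to maintain per-subspace sufficient statistics (counts, sums, and cross-products), from which mean-value and multivariate least-squares answers follow directly.

```python
import numpy as np

# Illustrative sketch: maintain X'X and X'y per data subspace so that
# (i) mean-value and (ii) multivariate linear regression queries are
# answered from the statistics instead of a base-data scan.
class SubspaceStats:
    def __init__(self, d):
        self.n = 0
        self.xtx = np.zeros((d, d))
        self.xty = np.zeros(d)
        self.ysum = 0.0

    def add(self, x, y):                  # one pass over the data
        self.n += 1
        self.xtx += np.outer(x, x)
        self.xty += x * y
        self.ysum += y

    def mean_value(self):                 # (i) mean-value query
        return self.ysum / self.n

    def regression(self):                 # (ii) OLS via normal equations
        return np.linalg.solve(self.xtx, self.xty)

s = SubspaceStats(d=2)
for x1 in np.linspace(0, 1, 100):
    s.add(np.array([1.0, x1]), 3.0 + 2.0 * x1)   # y = 3 + 2*x1
print(s.mean_value(), s.regression())            # ~4.0, [3.0, 2.0]
```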

    Delta-based Storage and Querying for Versioned Datasets

    Data-driven methods and products are becoming increasingly common in a variety of communities, leading to a huge diversity of datasets being continuously generated, modified, and analyzed. An increasingly important consideration for the underlying data management systems is that all of these datasets and their versions over time need to be stored and queried for a variety of reasons including, but not limited to, reproducibility, collaboration, provenance, auditing, introspective analysis, and backups. However, most solutions today resort to highly ad hoc and manual version management and sharing techniques, which leads to friction when managing collaborative data science workflows, while also introducing inefficiencies. In this dissertation, we introduce a framework for dataset version management, and address the systems building, operator design, and optimization challenges involved in building a dataset version control system. We describe the various challenges and solutions in the context of our system, called DEX, that we have developed to support increasingly complex version management tasks. We show how to use delta encoding, a key component in managing redundancy, to provide efficient storage and retrieval for thousands of dataset versions, and develop a formalism to understand the various trade-offs in a principled manner. We study the storage-recreation trade-off in detail and provide a suite of inexpensive heuristics to obtain high-quality solutions under different settings. In order to provide a rich interface to specify version management tasks, we design a new query language, called VQUEL, with the ability to query dataset versions and provenance in a unified manner. We study how assumptions on the delta format can help in the design of a logical algebra, which we then use to execute increasingly complex queries efficiently. A key characteristic of our query execution methods is that the computational cost is primarily dependent on the size and the number of deltas in the expression (typically small), and not on the input dataset versions (which can be very large). Finally, we demonstrate the effectiveness of our developed techniques by extensive evaluation of DEX on a mixture of real-world and synthetic datasets.
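
    A toy version of the storage-recreation trade-off: materialize some versions fully and store the rest as deltas, so checkout cost grows with the number of deltas replayed along the path to a materialized ancestor. The sketch is illustrative only; DEX's delta format, storage heuristics, and VQUEL are not modeled, and all names are hypothetical.

```python
# Delta-based version storage: one materialized root, other versions
# stored as (parent, additions, deletions) deltas. Recreating a
# version replays the delta chain back to a materialized ancestor.
class VersionStore:
    def __init__(self, root_data):
        self.materialized = {"v0": set(root_data)}
        self.deltas = {}                 # version -> (parent, adds, dels)

    def commit(self, parent, version, adds=(), dels=()):
        self.deltas[version] = (parent, set(adds), set(dels))

    def checkout(self, version):
        path = []
        while version not in self.materialized:
            path.append(self.deltas[version])
            version = self.deltas[version][0]
        data = set(self.materialized[version])
        for _, adds, dels in reversed(path):   # replay deltas root->leaf
            data = (data - dels) | adds
        return data

vs = VersionStore({1, 2, 3})
vs.commit("v0", "v1", adds={4}, dels={1})
vs.commit("v1", "v2", adds={5})
print(vs.checkout("v2"))   # {2, 3, 4, 5}
```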

    Aspects of semantic ETL

    Thesis under a co-supervision (cotutelle) agreement: Universitat Politècnica de Catalunya and Aalborg Universitet. Business Intelligence (BI) tools support making better business decisions by analyzing available organizational data. Data Warehouses (DWs), typically structured with the Multidimensional (MD) model, are used to store data from different internal and external sources, processed using Extract-Transformation-Load (ETL) processes. On-Line Analytical Processing (OLAP) queries are applied to DWs to derive important business-critical knowledge. DW and OLAP technologies perform efficiently when they are applied to data that are static in nature and well organized in structure. Nowadays, Semantic Web (SW) technologies and the Linked Data principles inspire organizations to publish their semantic data, which allow machines to understand the meaning of data, using the Resource Description Framework (RDF) model. One reason semantic data has been so successful is that it can be managed and made available to third parties with little effort and does not depend on sophisticated schemas. In addition to traditional (non-semantic) data sources, the incorporation of semantic data sources into a DW raises challenges beyond those addressed by traditional ETL tools: schema derivation, semantic heterogeneity, and the representation of schema and data. Furthermore, most SW data provided by business, academic, and governmental organizations includes facts and figures, which raise new requirements for BI tools to enable OLAP-like analyses over such semantic (RDF) data. In this thesis, we 1) propose a layer-based ETL framework for handling diverse semantic and non-semantic data sources that addresses the challenges mentioned above, 2) propose a set of high-level ETL constructs for processing semantic data, and 3) implement appropriate environments (both programmable and GUI-based) to facilitate ETL processes and evaluate the proposed solutions. Our ETL framework is a semantic ETL framework because it integrates data semantically.

    We propose SETL, a unified framework for semantic ETL. The framework is divided into three layers: the Definition Layer, the ETL Layer, and the Data Warehouse Layer. In the Definition Layer, the semantic DW (SDW) schema, the sources, and the mappings among the sources and the target are defined. In the ETL Layer, the ETL processes that populate the SDW from the sources are designed. The Data Warehouse Layer manages the storage of the transformed semantic data. The framework supports the inclusion of semantic (RDF) data in DWs in addition to relational data. It allows users to define an ontology of a DW and annotate it with MD constructs (such as dimensions, cubes, and levels) using the Data Cube for OLAP (QB4OLAP) vocabulary. It supports traditional transformation operations and provides a method to generate semantic data from the source data according to the semantics encoded in the ontology. It also provides a method to connect internal SDW data with external knowledge bases. A comprehensive experimental evaluation on the Danish Agricultural dataset shows that SETL outperforms a solution built with traditional tools (which required far more hand coding) in performance, programmer productivity, and knowledge base quality; in particular, SETL is 13.5% faster than Pentaho Data Integration (PDI) and, unlike PDI, treats semantic data as a first-class citizen.

    On top of SETL, we propose SETLCONSTRUCT, where we define a set of high-level ETL tasks/operations to process semantic data sources. We divide the integration process into two layers: the Definition Layer and the Execution Layer. The Definition Layer includes two tasks that allow DW designers to define target (SDW) schemas and the mappings between (intermediate) sources and the (intermediate) target. To create mappings among the source and target constructs, we provide a mapping vocabulary called Source-to-Target Mapping (S2TMAP). Different from other ETL tools, we propose a new paradigm: we characterize the ETL flow transformations at the Definition Layer instead of independently within each ETL operation (in the Execution Layer). This way, the designer has an overall view of the process, which generates metadata (the mapping file) that the ETL operators read to parametrize themselves automatically. In the Execution Layer, we propose a set of high-level ETL operations to process semantic data sources; besides cleansing, joining, and transforming semantic data, we propose operations to generate multidimensional semantics at the data level and to update the SDW to reflect changes in the sources. We further extend SETLCONSTRUCT to enable automatic generation of ETL execution flows (SETLAUTO). An extensive evaluation comparing the productivity, development time, and performance of SETLCONSTRUCT and SETLAUTO with SETL shows that 1) SETLCONSTRUCT uses 92% fewer typed characters (NOTC) than SETL, and SETLAUTO further reduces the number of used concepts (NOUC) by another 25%; 2) using SETLCONSTRUCT, development time is almost halved compared to SETL, and is cut by another 27% using SETLAUTO; and 3) SETLCONSTRUCT is scalable and has performance similar to SETL.

    Finally, we develop a GUI-based semantic BI system, SETLBI, to define, process, integrate, and query semantic and non-semantic data. In addition to the Definition Layer and the ETL Layer, SETLBI has an OLAP Layer, which provides an interactive interface to enable self-service OLAP analysis over the semantic DW. Each layer is composed of a set of operations/tasks, and an ontology formalizes the intra- and inter-layer connections of the layers' components. The ETL Layer extends the Execution Layer of SETLCONSTRUCT with operations for processing non-semantic data sources. We demonstrate the final system using the Bangladesh population census 2011 dataset. SETLBI enables (1) DW designers with little or no SW knowledge to semantically integrate semantic and non-semantic data and analyze it in OLAP style, and (2) SW users with a basic MD background to define MD views over semantic data, integrate them with non-semantic sources, and perform OLAP-like analysis; SW users can also enrich the generated SDW schema with RDFS/OWL constructs. Taking this framework as a starting point, researchers can aim to develop further interactive and automatic integration frameworks for SDWs. This project bridges traditional BI technologies and SW technologies, which in turn opens the door to further research opportunities, such as developing machine-understandable ETL and warehousing techniques.
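
    A minimal sketch of the kind of metadata the Definition Layer produces, using rdflib to annotate a target schema with RDF Data Cube (QB) and QB4OLAP terms. The namespace URIs are the commonly published ones, and the example schema is an assumption; this is not SETL's actual output format.

```python
from rdflib import Graph, Namespace, RDF

# Hedged sketch: annotating a target SDW schema with MD constructs
# using the QB and QB4OLAP vocabularies, in the spirit of SETL's
# Definition Layer. The EX schema below is purely illustrative.
QB = Namespace("http://purl.org/linked-data/cube#")
QB4O = Namespace("http://purl.org/qb4olap/cubes#")
EX = Namespace("http://example.org/sdw#")

g = Graph()
g.bind("qb", QB); g.bind("qb4o", QB4O)

g.add((EX.SalesCube, RDF.type, QB.DataStructureDefinition))
g.add((EX.timeDim, RDF.type, QB.DimensionProperty))
g.add((EX.yearLevel, RDF.type, QB4O.LevelProperty))   # hierarchy level
g.add((EX.amount, RDF.type, QB.MeasureProperty))

print(g.serialize(format="turtle"))
```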