27 research outputs found

    Materializing baseline views for deviation detection exploratory OLAP

    The final publication is available at link.springer.com. Alert-raising and deviation detection in OLAP and exploratory search concern calling the user's attention to variations and non-uniform data distributions, or directing the user to the most interesting explorations of the data. In this paper, we are interested in the ability of a data warehouse to continuously monitor new data and to update accordingly a particular type of materialized view recording statistics, called a baseline. It should be possible to detect deviations at various levels of aggregation, and baselines should be fully integrated into the database. We propose Multi-level Baseline Materialized Views (BMV), including the mechanisms to build and refresh them and to detect deviations. We also propose an incremental approach and formula for refreshing baselines efficiently. An experimental setup proves the concept and shows its efficiency.
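
    The incremental-refresh idea can be illustrated with a small sketch. The paper's own formula is not reproduced here; the merge step below is the standard streaming count/mean/variance update (Chan et al.'s parallel form), applied per aggregation group, with a simple k-sigma test standing in for deviation detection. All names and thresholds are illustrative.

```python
# A minimal sketch of incrementally refreshing a per-group baseline of
# statistics (count, mean, M2) as new batches of facts arrive, then
# flagging values that deviate from the stored baseline.
from collections import defaultdict

# baseline[group] = (count, mean, M2); sample variance = M2 / (count - 1)
baseline = defaultdict(lambda: (0, 0.0, 0.0))

def refresh(group, batch):
    """Merge a batch of new measures into the stored baseline for `group`."""
    n_a, mean_a, m2_a = baseline[group]
    n_b = len(batch)
    mean_b = sum(batch) / n_b
    m2_b = sum((x - mean_b) ** 2 for x in batch)
    delta = mean_b - mean_a
    n = n_a + n_b
    baseline[group] = (
        n,
        mean_a + delta * n_b / n,
        m2_a + m2_b + delta ** 2 * n_a * n_b / n,
    )

def deviates(group, value, k=3.0):
    """Flag `value` if it lies more than k standard deviations from the baseline."""
    n, mean, m2 = baseline[group]
    if n < 2:
        return False
    std = (m2 / (n - 1)) ** 0.5
    return abs(value - mean) > k * std

refresh(("store1", "2024-01"), [10.0, 12.0, 11.0, 9.0])
print(deviates(("store1", "2024-01"), 25.0))  # True: far from the baseline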

    View recommendation for visual data exploration


    An enhanced sequential exception technique for semantic-based text anomaly detection

    The detection of semantic-based text anomalies is an interesting research area which has gained considerable attention from the data mining community. Text anomaly detection identifies information that deviates from the general information contained in documents. Text data are characterized by problems of ambiguity, high dimensionality, sparsity and text representation. If these challenges are not properly resolved, identifying semantic-based text anomalies will be less accurate. This study proposes an Enhanced Sequential Exception Technique (ESET) to detect semantic-based text anomalies by achieving five objectives: (1) to modify the Sequential Exception Technique (SET) to process unstructured text; (2) to optimize Cosine Similarity for identifying similar and dissimilar text data; (3) to hybridize the modified SET with Latent Semantic Analysis (LSA); (4) to integrate the Lesk and Selectional Preference algorithms for disambiguating senses and identifying the canonical form of text; and (5) to represent semantic-based text anomalies using First Order Logic (FOL) and a Concept Network Graph (CNG). ESET performs text anomaly detection by employing optimized Cosine Similarity, hybridizing LSA with the modified SET, and integrating it with Word Sense Disambiguation algorithms, specifically Lesk and Selectional Preference. FOL and CNG are then used to represent the detected anomalies. To demonstrate the feasibility of the technique, experiments were run on four selected datasets: NIPS, ENRON, the Daily Kos blog, and 20Newsgroups. The experimental evaluation revealed that ESET significantly improves the accuracy of detecting semantic-based text anomalies in documents, outperforming the benchmarked methods with improved F1-scores on all datasets: NIPS 0.75, ENRON 0.82, Daily Kos blog 0.93 and 20Newsgroups 0.97. The results generated by ESET support a growing notion of semantic-based text anomaly that is increasingly evident in the existing literature. Practically, this study contributes to topic modelling and concept coherence for the purposes of visualizing information, knowledge sharing and optimized decision making.
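
    The LSA and Cosine Similarity building blocks the abstract names can be sketched with standard tooling. This is not the authors' ESET pipeline (no SET, Lesk, or Selectional Preference here), just an illustration of scoring a semantic outlier in LSA space; the toy corpus and the scoring rule are made up for the example.

```python
# A minimal sketch: embed documents with TF-IDF, reduce with truncated
# SVD (LSA), and score each document by how dissimilar it is, on
# average, to the rest of the corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "neural networks learn hierarchical representations",
    "deep learning models require large training sets",
    "gradient descent optimizes network weights",
    "the recipe calls for two cups of flour",  # semantic outlier
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

sims = cosine_similarity(lsa)
# Anomaly score: 1 minus the mean similarity to the other documents.
for i, doc in enumerate(docs):
    others = [sims[i][j] for j in range(len(docs)) if j != i]
    print(f"{1 - sum(others) / len(others):.2f}  {doc}")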

    Query-Time Data Integration

    Today, data is collected in ever increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources precludes up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if and in what context the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative concept to traditional up-front integration. It aims at enabling users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced through fully automated retrieval and mapping methods is compensated by answering those queries with ranked lists of alternative results. Each result is then based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. Firstly, we introduce a novel method for Top-k Entity Augmentation, which is able to construct a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state-of-the-art by producing a set of individually consistent but mutually diverse alternative solutions, while minimizing the number of data sources used. Secondly, based on this novel augmentation method, we introduce the DrillBeyond system, which is able to process Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database. The original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality when compared to using separate systems for the two tasks. Finally, we study the management of large-scale dataset corpora such as data lakes or Open Data platforms, which are used as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for data curation systems managing such corpora, which aims at improving the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions are the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
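
    A toy illustration of the Open World SQL idea may help: the query below references an attribute the local table lacks; a stub stands in for DrillBeyond's ranked search over Web data sources, and the table is augmented at query time before the query is answered. All table, column, and function names are hypothetical, and the stub replaces the system's actual search and ranking machinery.

```python
# A minimal sketch of query-time augmentation for an "open world" query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE countries (name TEXT, gdp_trillion REAL)")
conn.executemany("INSERT INTO countries VALUES (?, ?)",
                 [("France", 2.9), ("Japan", 4.2)])

def web_lookup(entity, attribute):
    # Stand-in for retrieving the attribute from ranked Web data sources.
    data = {("France", "population"): 68_000_000,
            ("Japan", "population"): 124_000_000}
    return data[(entity, attribute)]

# The user's query references `population`, which the schema lacks:
# detect the missing attribute and augment the table before answering.
missing = "population"
conn.execute(f"ALTER TABLE countries ADD COLUMN {missing} INTEGER")
names = [r[0] for r in conn.execute("SELECT name FROM countries").fetchall()]
for name in names:
    conn.execute(f"UPDATE countries SET {missing} = ? WHERE name = ?",
                 (web_lookup(name, missing), name))

for row in conn.execute(
        "SELECT name, population FROM countries ORDER BY population DESC"):
    print(row)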

    Spatial and exploratory analytical processing integrating structured and semi-structured data

    Business Intelligence (BI) technologies have been successfully applied for data analysis purposes. Traditionally, such analysis is performed in a well-controlled and restricted context, where data sources are structured, periodically loaded, static and fully materialized. Nowadays, there is plenty of data in different formats, such as the Resource Description Framework (RDF), a semi-structured and semantically rich format external to the BI infrastructure. Although such data formats are semantically rich and often contain a spatial component, analyzing them is challenging. As a result, the Exploratory OLAP field has emerged for the discovery, acquisition, integration and querying of such data, aiming at a complete and effective analysis of both internal and external data. To the best of our knowledge, only two exploratory tools have been proposed in the literature, and they have two major limitations: only structured data sources can be explored, and there is no exploration of the spatial component of the integrated data. They are exploratory OLAP tools, not exploratory SOLAP tools. Based on these tools, this work proposes an Exploratory SOLAP approach that integrates semantic spatial semi-structured data with traditional spatial structured data sources. A system named ExpSOLAP, which supports online SOLAP queries over both kinds of data source, was developed. Finally, a case study was carried out to evaluate the ExpSOLAP system on a dataset obtained from the Linked Movie Data Base, using RDF and relational datasets. The formulated queries validated both conventional and spatial analysis over the two data sources.
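
    The integration pattern the abstract describes can be sketched in a few lines: movie facts held as RDF and queried with SPARQL, joined with a relational table carrying a spatial attribute. This is not ExpSOLAP itself; the tiny inline dataset, IRIs, and column names are invented for illustration.

```python
# A minimal sketch of joining a SPARQL result set with relational data.
import pandas as pd
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix m: <http://example.org/movie/> .
m:Amelie  m:title "Amelie" ;  m:filmedIn "Paris" .
m:Vertigo m:title "Vertigo" ; m:filmedIn "San Francisco" .
""", format="turtle")

rdf_rows = [(str(t), str(c)) for t, c in g.query(
    "SELECT ?title ?city WHERE { ?m <http://example.org/movie/title> ?title ;"
    " <http://example.org/movie/filmedIn> ?city }")]
movies = pd.DataFrame(rdf_rows, columns=["title", "city"])

# Relational source with the spatial component (coordinates per city).
cities = pd.DataFrame({
    "city": ["Paris", "San Francisco"],
    "lat": [48.8566, 37.7749],
    "lon": [2.3522, -122.4194],
})

# A SOLAP-style analysis step: movies joined to their spatial locations.
print(movies.merge(cities, on="city"))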

    Doctor of Philosophy

    Linked data are the de-facto standard for publishing and sharing data on the web. To date, we have been inundated with large amounts of ever-increasing linked data in constantly evolving structures. The proliferation of the data and the need to access and harvest knowledge from distributed data sources motivate us to revisit several classic problems in query processing and query optimization. The problem of answering queries over views is commonly encountered in a number of settings, including while enforcing security policies for access to linked data, or when integrating data from disparate sources. We approach this problem by efficiently rewriting queries over the views into equivalent queries over the underlying linked data, thus avoiding the costs entailed by view materialization and maintenance. An outstanding problem in query rewriting is that the number of rewritten queries is exponential in the size of the query and the views, which motivates us to study the problem of multiquery optimization in the context of linked data. Our solutions are declarative and make no assumptions about the underlying storage, i.e., they are store-independent. Unlike relational and XML data, linked data are schema-less. While tracking the evolution of a schema for linked data is hard, keyword search is an ideal tool for performing data integration. Existing works make crippling assumptions about the data and hence fall short in handling massive linked data with tens to hundreds of millions of facts. Our study of keyword search on linked data brings together classical techniques from the literature and our novel ideas, leading to much better query efficiency and quality of results. Linked data also contain rich temporal semantics. To cope with the ever-increasing data, we investigate how to partition and store large temporal or multiversion linked data for distributed and parallel computation, in an effort to achieve load balancing and support scalable data analytics for massive linked data.
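
    View unfolding, the core of answering queries over views by rewriting, can be sketched compactly. The representation below (a view as a conjunction of base triple patterns keyed by a virtual predicate) is a simplification invented for illustration; it ignores fresh-variable renaming and the exponential choice among multiple applicable rewritings that motivates the multiquery optimization mentioned above.

```python
# A minimal sketch of rewriting a query over a view into an equivalent
# query over the base linked data, by expanding view predicates.

# Each view maps its virtual predicate to base triple patterns, where
# ?x and ?y mark the view's subject and object endpoints.
views = {
    "view:advisorOf": [
        ("?x", "base:supervises", "?t"),
        ("?t", "base:author", "?y"),
    ],
}

def rewrite(query_patterns):
    """Expand every view predicate in the query into its base patterns."""
    rewritten = []
    for s, p, o in query_patterns:
        if p in views:
            for bs, bp, bo in views[p]:
                # Substitute the view's endpoint variables with the
                # query's own terms; internal variables pass through.
                bs = s if bs == "?x" else bs
                bo = o if bo == "?y" else bo
                rewritten.append((bs, bp, bo))
        else:
            rewritten.append((s, p, o))
    return rewritten

print(rewrite([("?prof", "view:advisorOf", "?student")]))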

    Querying heterogeneous data in an in-situ unified agile system

    Data integration provides a unified view of data by combining different data sources. In today’s multi-disciplinary and collaborative research environments, data is often produced and consumed by various means; multiple researchers in different divisions operate on the data to satisfy various research requirements, using different query processors and analysis tools. This makes data integration a crucial component of any successful data-intensive research activity. The fundamental difficulty is that data is heterogeneous not only in syntax, structure, and semantics, but also in the way it is accessed and queried. We introduce QUIS (QUery In-Situ), an agile query system equipped with a unified query language and a federated execution engine. It is capable of running queries on heterogeneous data sources in an in-situ manner. Its language provides advanced features such as virtual schemas, heterogeneous joins, and polymorphic result set representation. QUIS utilizes a federation of agents to transform a given input query written in its language into a (set of) computation models that are executable on the designated data sources. Federated query virtualization has the disadvantage that some aspects of a query may not be supported by the designated data sources. QUIS ensures that input queries are always fully satisfied: if the target data sources do not fulfill all of the query requirements, QUIS detects the features that are lacking and complements them in a transparent manner. QUIS provides union and join capabilities over an unbounded list of heterogeneous data sources; in addition, it offers solutions for heterogeneous query planning and optimization. In brief, QUIS is intended to mitigate data-access heterogeneity through query virtualization, on-the-fly transformation, and federated execution. It offers in-situ querying, agile querying, heterogeneous data source querying, unified execution, late-bound virtual schemas, and remote execution.
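
    The in-situ heterogeneous join that QUIS describes can be approximated with off-the-shelf pieces. The sketch below is not QUIS's query language or agent federation, just the underlying pattern: query a CSV file and a relational database in place, with no prior loading or ETL, and join the results in a common representation. The file contents and schema are invented for the example.

```python
# A minimal sketch of an in-situ join across two heterogeneous sources.
import io
import sqlite3
import pandas as pd

# Source 1: a CSV file queried in place (io.StringIO stands in for a path).
csv_src = io.StringIO("sample_id,lab\nS1,Jena\nS2,Leipzig\n")
samples = pd.read_csv(csv_src)

# Source 2: a relational database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (sample_id TEXT, value REAL)")
conn.executemany("INSERT INTO results VALUES (?, ?)",
                 [("S1", 0.42), ("S2", 0.97)])
results = pd.read_sql_query("SELECT * FROM results", conn)

# The federating layer: both sources meet in one heterogeneous join.
print(samples.merge(results, on="sample_id"))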