
    XML Integrated Environment for Service-Oriented Data Management

    XML has emerged as a de facto standard for encoding and sharing data between applications, as a family of related standards that includes the markup language itself (XML), formatting semantics (XSL style sheets), a linking syntax (XLink), and accompanying data schema standards. XML is designed to be simple, easily parsed, and self-describing. It is built on the idea of separation of concerns: information content is separated from information rendering, and relationships between data elements are expressed through simple nesting and references. As XML content grows, the ability to handle schemaless XML documents becomes more critical, since most XML documents have neither a schema nor a Document Type Definition (DTD). In addition, XML content and XML tools often need to be combined in effective ways for better performance and higher flexibility. In this research, we propose the XML Integrated Environment (XIE), a general-purpose service-oriented architecture for processing XML documents in a scalable and efficient fashion. XIE supports a new software service model that provides a proper abstraction for describing a service and divides it into four components: structure, connection, interface and logic. We also propose and implement the XIE Service Language (XIESL), which captures the creation and maintenance of XML processes and the data flow specified by the user, and then orchestrates the interactions between different XIE services. Moreover, XIESL manages the complexity of XML processing by implementing an XML processing pipeline that enables better management, control, interpretation and presentation of XML data, even for non-professional users. The XML Integrated Environment is envisioned to revolutionize the way non-professional programmers view, work with and manage their XML assets, offering them powerful tools and constructs to fully exploit the XML processing power embedded in its unified framework and service-oriented architecture.
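
    The paper itself gives no code; the following is a minimal, hypothetical Python sketch of the four-component service abstraction (structure, connection, interface, logic) and the pipeline idea described above. All names are illustrative, not XIE's actual API.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical rendering of XIE's four-component service model;
# the class and field names are illustrative, not taken from the paper.
@dataclass
class Service:
    name: str                                        # "structure": service identity
    inputs: List[str] = field(default_factory=list)  # "connection": upstream services
    contract: str = "Element -> Element"             # "interface": declared contract
    logic: Callable[[ET.Element], ET.Element] = lambda doc: doc  # "logic": the transform

def run_pipeline(doc: ET.Element, pipeline: List[Service]) -> ET.Element:
    """Apply each service's logic in order, as an XML processing pipeline would."""
    for service in pipeline:
        doc = service.logic(doc)
    return doc

def drop_empty(doc: ET.Element) -> ET.Element:
    """Remove childless, textless elements anywhere in the tree."""
    for parent in list(doc.iter()):          # snapshot first, then mutate safely
        for child in list(parent):
            if len(child) == 0 and not (child.text or "").strip():
                parent.remove(child)
    return doc

doc = ET.fromstring("<order><item>book</item><note/></order>")
result = run_pipeline(doc, [Service(name="drop-empty", logic=drop_empty)])
print(ET.tostring(result).decode())          # <order><item>book</item></order>
```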

    EXODuS: Exploratory OLAP over Document Stores

    OLAP has been used extensively for a couple of decades as a data analysis approach to support decision making on enterprise structured data. Now, with the wide diffusion of NoSQL databases holding semi-structured data, there is a growing need to enable OLAP on document stores as well, so that non-expert users can gain new insights and make better decisions. Unfortunately, due to their schemaless nature, document stores are hardly accessible via direct OLAP querying. In this paper we propose EXODuS, an interactive, schema-on-read approach that enables OLAP querying of document stores in the context of self-service BI and exploratory OLAP. To discover multidimensional hierarchies in document stores we adopt a data-driven approach based on mining approximate functional dependencies; to ensure good performance, we incrementally build local portions of hierarchies for the levels involved in the current user query. Users carry out an analysis session by expressing well-formed multidimensional queries related by OLAP operations; these queries are then translated into the native query language of MongoDB, one of the most popular document-based DBMSs. An experimental evaluation on real-world datasets shows the efficiency of our approach and its suitability for a real-time setting.
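
    For illustration only: a roll-up query of the kind EXODuS translates might look like the following MongoDB aggregation. The collection, field names, and the city-to-country hierarchy are hypothetical, not taken from the paper.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["sales"]["orders"]

# Roll up total revenue to the country level of a store hierarchy.
pipeline = [
    {"$group": {"_id": "$store.country",          # coarser level of the hierarchy
                "revenue": {"$sum": "$amount"}}},
    {"$sort": {"revenue": -1}},
]
for row in orders.aggregate(pipeline):
    print(row["_id"], row["revenue"])
```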

    Polymorphic Data Modeling

    There are currently no data modeling standards for NoSQL document store databases. This work proposes a standard to fill that void. The proposed standard is based on our new data modeling pattern, the Polymorphic Table Pattern. The pattern embraces the “schemaless” nature of document store NoSQL databases while allowing data modelers to use their existing skill sets. The concepts of the proposed modeling standard have been demonstrated against MongoDB.
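
    The paper's pattern details are not reproduced here; the snippet below is only a generic illustration of polymorphic documents coexisting in one MongoDB collection. The "_type" discriminator and all field names are assumptions.

```python
from pymongo import MongoClient

vehicles = MongoClient()["demo"]["vehicles"]

# Documents of different shapes share one collection; a discriminator
# field distinguishes them (hypothetical convention, not the paper's).
vehicles.insert_many([
    {"_type": "car",   "make": "Volvo",  "doors": 5},
    {"_type": "truck", "make": "Scania", "axles": 3, "payload_kg": 18000},
])

# Queries can target one shape via the discriminator...
print(vehicles.count_documents({"_type": "truck"}))
# ...or span all shapes through the fields they share.
for v in vehicles.find({}, {"make": 1, "_id": 0}):
    print(v)
```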

    ARDI: automatic generation of RDFS models from heterogeneous data sources

    The current wealth of information, typically known as Big Data, generates a large amount of available data for organisations. Data integration provides the foundations to query disparate data sources as if they were integrated into a single source. However, current data integration tools are far from being useful for most organisations due to the heterogeneous nature of data sources, which represents a challenge for current frameworks. To enable data integration of highly heterogeneous and disparate data sources, this paper proposes a method to extract the schema from semi-structured (such as JSON and XML) and structured (such as relational) data sources and to generate an equivalent RDFS representation. The output of our method complements current frameworks and reduces the manual workload required to represent the input data sources in terms of the integration canonical data model. Our approach consists of production rules at the meta-model level that guarantee the correctness of the model translations. Finally, a tool implementing our approach has been developed.
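
    A minimal sketch of the general idea (not ARDI's actual production rules): deriving an RDFS class and properties from a sample JSON object's keys with rdflib. The namespace and naming scheme are assumptions.

```python
import json
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/schema/")
g = Graph()
g.bind("ex", EX)

sample = json.loads('{"name": "Alice", "age": 42, "city": "Oslo"}')
g.add((EX.Person, RDF.type, RDFS.Class))      # one class per source entity
for key in sample:
    prop = EX[key]                            # one property per JSON key
    g.add((prop, RDF.type, RDF.Property))
    g.add((prop, RDFS.domain, EX.Person))
    g.add((prop, RDFS.label, Literal(key)))

print(g.serialize(format="turtle"))
```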

    Benchmarking Big Data OLAP NoSQL Databases

    With the advent of Big Data, new challenges have emerged regarding the evaluation of decision support systems (DSS). Existing evaluation benchmarks are not configured to handle massive data volume and wide data diversity. In this paper, we introduce a new DSS benchmark that supports multiple data storage systems, including relational and Not Only SQL (NoSQL) systems. Our scheme recognizes numerous data models (snowflake, star and flat topologies) and several data formats (CSV, JSON, TBL, XML, etc.). It entails complex data generation characterized within the “volume, variety, and velocity” (3V) framework, and it enables distributed and parallel data generation. Furthermore, we present experimental results obtained with KoalaBench.
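
    As a rough sketch of 3V-style generation for a star topology (KoalaBench's real generator and schemas are not reproduced here), one might emit a fact table referencing dimension keys in both CSV and line-delimited JSON. All names below are hypothetical.

```python
import csv, json, random

def gen_facts(n):
    """Yield synthetic fact rows keyed into hypothetical dimensions."""
    for i in range(n):
        yield {"sale_id": i,
               "date_key": random.randint(1, 365),      # date dimension reference
               "product_key": random.randint(1, 1000),  # product dimension reference
               "amount": round(random.uniform(1, 500), 2)}

facts = list(gen_facts(1000))
with open("facts.csv", "w", newline="") as f:           # CSV for relational loads
    w = csv.DictWriter(f, fieldnames=["sale_id", "date_key", "product_key", "amount"])
    w.writeheader()
    w.writerows(facts)
with open("facts.json", "w") as f:                      # one document per line for NoSQL loads
    for fact in facts:
        f.write(json.dumps(fact) + "\n")
```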

    Accelerating data retrieval steps in XML documents

    Automatic mapping of XML documents into relational database

    Extensible Markup Language (XML) is nowadays one of the most important standard media used for exchanging and representing data over the Internet. Storing, updating and retrieving the huge amounts of web service data exchanged as XML is an attractive area of research for researchers and database vendors. In this thesis, we propose and develop a new mapping model, called MAXDOR, for storing, rebuilding, updating and querying XML documents in a relational database without using any XML schema in the mapping process. The model addresses the problem of bridging the structural gap between ordered, hierarchical XML and the unordered, tabular relational model, so that relational database systems can be used to store, update and query XML data. A multiple-linked list is used to maintain the XML document structure, manage the process of updating document contents, and retrieve document contents efficiently. Experiments were conducted to evaluate MAXDOR against other well-known models from the literature (Tatarinov et al., 2002; Torsten et al., 2004), using the total execution time of rebuilding an XML document and of token insertion as metrics.
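
    A rough sketch of the linked-list idea the thesis describes: storing XML tokens as rows whose prev/next columns preserve document order, so a token can be inserted by relinking rather than renumbering siblings. The table layout is an assumption, not MAXDOR's actual schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE token (
    id INTEGER PRIMARY KEY, tag TEXT, text TEXT,
    parent INTEGER, prev INTEGER, next INTEGER)""")

def insert_after(tag, text, parent, prev_id):
    """Link a new token into the sibling chain right after prev_id."""
    cur = db.execute("INSERT INTO token(tag, text, parent, prev) VALUES (?,?,?,?)",
                     (tag, text, parent, prev_id))
    new_id = cur.lastrowid
    if prev_id is not None:
        old_next = db.execute("SELECT next FROM token WHERE id=?",
                              (prev_id,)).fetchone()[0]
        db.execute("UPDATE token SET next=? WHERE id=?", (new_id, prev_id))
        db.execute("UPDATE token SET next=? WHERE id=?", (old_next, new_id))
    return new_id

root = insert_after("order", None, None, None)
first = insert_after("item", "book", root, None)
insert_after("item", "pen", root, first)   # O(1) relinking, no sibling renumbering
```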

    Refactoring rules for graph databases

    Advisor: Luiz Camolesi Júnior. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Tecnologia. Abstract: The information produced nowadays keeps growing in volume and complexity, representing a technological challenge that demands more than the current structure of relational databases can offer. This situation stimulates the use of different forms of storage, such as graph databases. Current graph databases are adapted to support automatic database evolution, but do not provide adequate resources for organizing the information. This function is mostly left to the applications that access the database, compromising data integrity and reliability. The goal of this work is the definition of refactoring rules to support the management of the evolution of graph databases. The rules presented here are adaptations and extensions of refactoring rules consolidated for relational databases, adjusted to the characteristics of graph databases. The result of this work is a catalog of rules that can be used by developers of graph database administration tools to guarantee the integrity of schema evolution operations and, consequently, of the related data.
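
    A hedged example of one refactoring of the kind such a catalog contains, adapted from the classic relational "rename" refactoring: retyping a relationship in Neo4j by recreating it and copying its properties. The label and type names are hypothetical, and this is not a rule taken verbatim from the dissertation.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Cypher has no "rename relationship type"; the refactoring recreates the
# relationship under the new type, copies its properties, then deletes the old one.
RENAME_REL = """
MATCH (a)-[old:WORKS_AT]->(b)
CREATE (a)-[new:EMPLOYED_BY]->(b)
SET new = properties(old)
DELETE old
"""
with driver.session() as session:
    session.run(RENAME_REL)   # applications should migrate to the new type in the same step
driver.close()
```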

    DKVF: A Framework for Rapid Prototyping and Evaluating Distributed Key-value Stores

    We present our framework DKVF, which enables one to quickly prototype and evaluate new protocols for key-value stores and to compare them with existing protocols on selected benchmarks. Due to the limitations imposed by the CAP theorem, new protocols must be developed that achieve the desired trade-off between consistency and availability for the application at hand. Hence, both the academic and industrial communities focus on developing new protocols that identify a different (and hopefully better in one or more respects) point on this trade-off curve. While these protocols are often based on a simple intuition, evaluating them to ensure that they indeed provide increased availability, consistency, or performance is a tedious task. Our framework, DKVF, enables one to quickly prototype a new protocol as well as to identify how it performs compared to existing protocols on pre-specified benchmarks. The framework relies on YCSB (Yahoo! Cloud Serving Benchmark) for benchmarking. We demonstrate DKVF by implementing four existing protocols --eventual consistency, COPS, GentleRain and CausalSpartan-- with it, and we compare the performance of these protocols under different loading conditions. We find that the performance is similar to that of our from-scratch implementations of these protocols, and that the comparison of the protocols is consistent with what has been reported in the literature. Moreover, implementing these protocols was much more natural, as we only needed to translate the pseudocode into Java (and add the necessary error handling); hence, it was possible to achieve this in just 1-2 days per protocol. Finally, our framework is extensible: it is possible to replace individual components in the framework (e.g., the storage component).
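
    DKVF itself is Java-based and its real API is not reproduced here; the sketch below only illustrates, in Python, the architectural idea the abstract describes: a server skeleton that delegates to a pluggable protocol and a replaceable storage component.

```python
from abc import ABC, abstractmethod

class Storage(ABC):                       # replaceable component (e.g., swap backends)
    @abstractmethod
    def read(self, key): ...
    @abstractmethod
    def write(self, key, value): ...

class DictStorage(Storage):
    """Trivial in-memory backend for experimentation."""
    def __init__(self): self.data = {}
    def read(self, key): return self.data.get(key)
    def write(self, key, value): self.data[key] = value

class Protocol(ABC):                      # one subclass per consistency protocol
    def __init__(self, storage: Storage): self.storage = storage
    @abstractmethod
    def handle_put(self, key, value): ...
    @abstractmethod
    def handle_get(self, key): ...

class EventualConsistency(Protocol):
    def handle_put(self, key, value):
        self.storage.write(key, value)    # apply locally; replication would be async
    def handle_get(self, key):
        return self.storage.read(key)     # may return stale data under partitions

server = EventualConsistency(DictStorage())
server.handle_put("x", 1)
print(server.handle_get("x"))             # 1
```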