5 research outputs found

    EXCLAIM framework: a monitoring and analysis framework to support self-governance in Cloud Application Platforms

    The Platform-as-a-Service segment of Cloud Computing has been steadily growing over the past several years, with more and more software developers opting for cloud platforms as convenient ecosystems for developing, deploying, testing and maintaining their software. Such cloud platforms also play an important role in delivering an easily-accessible Internet of Services. They provide rich support for software development and, following the principles of Service-Oriented Computing, offer their subscribers a wide selection of pre-existing, reliable and reusable basic services, available through a common platform marketplace and ready to be seamlessly integrated into users' applications. Such cloud ecosystems are becoming increasingly dynamic and complex, and one of the major challenges faced by cloud providers is to develop appropriate scalable and extensible mechanisms for governance and control based on run-time monitoring and analysis of extremely large amounts of raw heterogeneous data. In this thesis we address this important research question: how can we support self-governance in cloud platforms delivering the Internet of Services in the presence of large amounts of heterogeneous and rapidly changing data? To address this research question and demonstrate our approach, we have created the Extensible Cloud Monitoring and Analysis (EXCLAIM) framework for service-based cloud platforms. The main idea underpinning our approach is to encode monitored heterogeneous data using Semantic Web languages, which enables us to integrate these semantically enriched observation streams with static ontological knowledge and to apply intelligent reasoning. This has allowed us to create an extensible, modular, and declaratively defined architecture for performing run-time data monitoring and analysis with a view to detecting critical situations within cloud platforms. By addressing the main research question, our approach contributes to the domain of Cloud Computing, and in particular to the area of autonomic and self-managing capabilities of service-based cloud platforms. Our main contributions include the approach itself, which allows monitoring and analysing heterogeneous data in an extensible and scalable manner, the prototype of the EXCLAIM framework, and the Cloud Sensor Ontology. Our research also contributes to the state of the art in Software Engineering by demonstrating how existing techniques from several fields (i.e., Autonomic Computing, Service-Oriented Computing, Stream Processing, Semantic Sensor Web, and Big Data) can be combined in a novel way to create an extensible, scalable, modular, and declaratively defined monitoring and analysis solution.
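    A minimal sketch of the core idea described in this abstract: encode a heterogeneous observation as RDF, then detect a critical situation with a declarative SPARQL rule. This is not the EXCLAIM implementation; the cso namespace, property names, and the 0.9 threshold below are hypothetical stand-ins for the Cloud Sensor Ontology and its rules.

```python
# Sketch: semantically encode a monitored observation and run a
# declarative detection rule over it (hypothetical ontology terms).
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

CSO = Namespace("http://example.org/cso#")  # hypothetical namespace

g = Graph()
obs = URIRef("http://example.org/obs/1")
g.add((obs, RDF.type, CSO.Observation))
g.add((obs, CSO.observedProperty, CSO.CpuLoad))
g.add((obs, CSO.hasValue, Literal(0.97, datatype=XSD.double)))

# Declaratively defined rule: flag any CPU-load observation above 0.9.
critical = g.query("""
    PREFIX cso: <http://example.org/cso#>
    SELECT ?obs ?v WHERE {
        ?obs a cso:Observation ;
             cso:observedProperty cso:CpuLoad ;
             cso:hasValue ?v .
        FILTER (?v > 0.9)
    }""")
for row in critical:
    print(f"critical situation: {row.obs} value={row.v}")
```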

    Emergent relational schemas for RDF

    Universal Workload-based Graph Partitioning and Storage Adaption for Distributed RDF Stores

    The publication of machine-readable information has been increasing significantly, both in magnitude and in the complexity of the embedded relations. The Resource Description Framework (RDF) plays a major role in modeling and linking web data and their relations. In line with that important role, dedicated systems have been designed to store and query RDF data using a special query language called SPARQL, similar to classic SQL. However, due to the large size of the data, several federated working nodes are used to host a distributed RDF store. The data needs to be partitioned, assigned, and stored on each working node. After partitioning, some of the data needs to be replicated in order to avoid communication costs and to balance the load for better system throughput. Since replication requires more storage space, two important questions arise: what data should be replicated, and how much? The answer to the second question is tied to the other storage-space requirements at each working node, such as indexes and caches. In order to answer SPARQL queries efficiently, each working node needs to put its share of the data into multiple indexes. These indexes span the node's entire data and consume a considerable amount of storage space. In this context, the same two questions raised about replication also apply to indexes. The third storage-consuming structure is the join cache: a special index in which frequent join results are cached, saving a considerable amount of running time at the cost of high storage-space consumption. Again, the same two questions apply to the join cache. In this thesis, we present a universal storage-adaption approach for a distributed RDF store. The system aims to find optimal data assignments to the different indexes, replications, and the join cache within the limited storage space. To achieve this, we present a cost model based on the workload, which often contains frequent patterns. The workload is dynamically analyzed to evaluate predefined rules. These rules tell the system about the benefits and costs of assigning which data to which structure, with the objective of achieving better query execution times. Besides the storage adaption, the system adapts its processing resources to the queries' arrival rate. The aim of this adaption is to achieve better parallelization per query while still providing high system throughput.
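    A toy sketch of the storage-adaption idea sketched above: given a storage budget and candidate structures (replicas, indexes, join-cache entries) with workload-estimated benefits and storage costs, assign greedily by benefit density. The thesis uses a rule-based cost model; the greedy selection, item names, and numbers here are illustrative assumptions only.

```python
# Sketch: pick which structures to materialize under a storage budget.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str        # e.g. "replicate partition P3", "cache join J7"
    benefit: float   # estimated query time saved for the workload
    cost: int        # storage bytes consumed

def adapt_storage(candidates: list[Candidate], budget: int) -> list[Candidate]:
    chosen, used = [], 0
    # Highest benefit per byte first.
    for c in sorted(candidates, key=lambda x: x.benefit / x.cost, reverse=True):
        if used + c.cost <= budget:
            chosen.append(c)
            used += c.cost
    return chosen

cands = [Candidate("replicate partition P3", 40.0, 500),
         Candidate("index SPO share", 70.0, 800),
         Candidate("cache join J7", 90.0, 1200)]
print([c.name for c in adapt_storage(cands, budget=1500)])
```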

    Otimização de consultas SPARQL em bases RDF distribuídas (SPARQL query optimization over distributed RDF databases)

    Advisor: Profa. Dra. Carmem Satie Hara. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 07/04/2017. Includes references: f. 83-85.
    Abstract: RDF has been used by many applications due to its simplicity and flexibility in data modeling. Due to the huge volume of RDF data that exists nowadays, many distributed query processing approaches have been proposed to ensure scalability for these applications. In general, these approaches propose data distribution methods promoting distributed and parallel SPARQL query processing. However, while distribution may provide storage scalability, it may also incur high communication costs for processing queries. This work presents a parallel and distributed query processing approach that aims to minimize the communication cost. The approach explores the existence of data allocation patterns (PAs) for data distribution, provided by a controlled data distribution method, which determine how RDF triples should be grouped and stored on the same server. Fragments of the RDF datastore follow a given allocation pattern. The approach generates execution plans based on this distribution model, making possible the choice between two communication strategies for query processing: get-frag and send-result. With the get-frag strategy, a server requests remote servers to send fragments that contain data required by a query. The send-result strategy, on the other hand, forwards intermediate results to other servers to continue the query processing. These strategies are combined in a method, called 2ways, that chooses the adequate communication strategy whenever queries traverse fragment boundaries. The choice of the communication strategy is based on the number of requests and the volume of data to be transmitted. Experimental results show that our proposed technique effectively reduces the communication cost and improves the response time for processing SPARQL queries on a distributed RDF datastore. Finally, considering that RDF datasets are dynamic and may be updated by delete/insert operations, this work extends the query processing approach considering that not all newly inserted data may conform to the predefined allocation patterns. We define a special-purpose type of PA, called PaOverflow, for storing data that cannot be categorized by existing PAs. Consequently, the PaOverflow must be considered in query planning and processing. An initial experimental study shows that, as expected, adopting the PaOverflow can increase the response time for processing queries in the proposed approach. Keywords: RDF, SPARQL, Distributed Query Processing, Query Optimization.
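    A minimal sketch of the 2ways decision described above: whenever query execution crosses a fragment boundary, pick the communication strategy with the lower estimated transfer cost, based on message count and data volume. The message-overhead constant and sizes are illustrative assumptions, not values from the thesis.

```python
# Sketch: choose between get-frag and send-result at a fragment boundary.
def choose_strategy(fragment_size: int, intermediate_size: int,
                    msg_overhead: int = 64) -> str:
    # get-frag: ship the remote fragment here (one request, one reply).
    get_frag_cost = 2 * msg_overhead + fragment_size
    # send-result: ship our intermediate results there (one message).
    send_result_cost = msg_overhead + intermediate_size
    return "get-frag" if get_frag_cost < send_result_cost else "send-result"

print(choose_strategy(fragment_size=10_000, intermediate_size=200_000))
# -> "get-frag": cheaper to fetch the small fragment than to ship results.
```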

    Efficient Source Selection For SPARQL Endpoint Query Federation

    The Web of Data has grown enormously over the last years. Currently, it comprises a large compendium of linked and distributed datasets from multiple domains. Due to the decentralised architecture of the Web of Data, several of these datasets contain complementary data. Running complex queries on this compendium thus often requires accessing data from different data sources within one query. The abundance of datasets and the need for running complex queries have thus motivated a considerable body of work on SPARQL query federation systems, the dedicated means to access data distributed over the Web of Data. This thesis addresses two key areas of federated SPARQL query processing: (1) efficient source selection, and (2) comprehensive SPARQL benchmarks to test and rank federated SPARQL engines as well as triple stores. Efficient source selection: Efficient source selection is one of the most important optimization steps in federated SPARQL query processing. An overestimation of query-relevant data sources increases the network traffic, results in irrelevant intermediate results, and can significantly affect the overall query processing time. Previous works have focused on generating optimized query execution plans for fast result retrieval. However, devising source selection approaches beyond triple-pattern-wise source selection has not received much attention. Similarly, only little attention has been paid to the effect of duplicated data on federated querying. This thesis presents HiBISCuS and TBSS, novel hypergraph-based source selection approaches, and DAW, a duplicate-aware source selection approach to federated querying over the Web of Data. Each of these approaches can be combined directly with existing SPARQL query federation engines to achieve the same recall while querying fewer data sources. We combined the three source selection approaches (HiBISCuS, DAW, and TBSS) with query rewriting to form a complete SPARQL query federation engine named Quetsal. Furthermore, we present TopFed, a Cancer Genome Atlas (TCGA) tailored federated query processing engine that exploits the data distribution to perform intelligent source selection while querying over large TCGA SPARQL endpoints. Finally, we address the issue of rights management and privacy while accessing sensitive resources. To this end, we present SAFE: a global source selection approach that enables decentralised, policy-aware access to sensitive clinical information represented as distributed RDF Data Cubes. Comprehensive SPARQL benchmarks: Benchmarking is indispensable when aiming to assess technologies with respect to their suitability for given tasks. While several benchmarks and benchmark generation frameworks have been developed to evaluate federated SPARQL engines and triple stores, they mostly provide a one-size-fits-all solution to the benchmarking problem. This approach to benchmarking is, however, unsuitable for evaluating the performance of a triple store for a given application with particular requirements. The fitness of current SPARQL query federation approaches for real applications is difficult to evaluate with current benchmarks, as they are either synthetic or too small in size and complexity. Furthermore, state-of-the-art federated SPARQL benchmarks mostly focus on a single performance criterion, i.e., the overall query runtime; thus, they cannot provide a fine-grained evaluation of the systems.
    We address these drawbacks by presenting FEASIBLE, an automatic approach for the generation of benchmarks out of the query history of applications, i.e., query logs, and LargeRDFBench, a billion-triple benchmark for SPARQL query federation which encompasses real data as well as real queries pertaining to real bio-medical use cases. Our evaluation results show that HiBISCuS, TBSS, TopFed, DAW, and SAFE can all significantly reduce the total number of sources selected and thus improve the overall query performance. In particular, TBSS is the first source selection approach to remain under 5% overestimation of overall relevant sources. Quetsal reduces the number of sources selected (without losing recall), the source selection time, as well as the overall query runtime compared to state-of-the-art federation engines. The LargeRDFBench evaluation results suggest that the performance of current SPARQL query federation systems on simple queries does not reflect the systems' performance on more complex queries. Moreover, current federation systems seem unable to deal with many of the challenges that await them in the age of Big Data. Finally, FEASIBLE's evaluation results show that it generates better sample queries than the state of the art. In addition, the better query selection and the larger set of query types used lead to triple store rankings which partly differ from the rankings generated by previous works.
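    A sketch of the baseline that the source selection approaches above refine: triple-pattern-wise source selection, which probes every source with an ASK query per triple pattern and keeps only the sources that can contribute. For self-containment the sources are in-memory graphs here; in a real federation they would be remote SPARQL endpoints, and the endpoint names and data are illustrative.

```python
# Sketch: triple-pattern-wise source selection via ASK probes.
from rdflib import Graph

sources = {"endpointA": Graph(), "endpointB": Graph()}
sources["endpointA"].parse(data="""
    @prefix ex: <http://example.org/> .
    ex:alice ex:knows ex:bob .""", format="turtle")
sources["endpointB"].parse(data="""
    @prefix ex: <http://example.org/> .
    ex:bob ex:age 42 .""", format="turtle")

patterns = ["?s <http://example.org/knows> ?o",
            "?s <http://example.org/age> ?o"]

for p in patterns:
    # Keep a source only if it can answer this triple pattern.
    relevant = [name for name, g in sources.items()
                if g.query(f"ASK {{ {p} }}").askAnswer]
    print(p, "->", relevant)
```

    Approaches such as HiBISCuS and TBSS prune this per-pattern source set further (e.g., using joins between patterns), which is what keeps the overestimation of relevant sources low.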