34 research outputs found

    EasyBDI: automatic big data integration and high-level analytical queries

    The emergence of new areas such as the Internet of Things, which require access to the latest data for analytics and decision-making, has created constraints for the execution of analytical queries on traditional data warehouse architectures. In addition, the growth of semi-structured and unstructured data led to the creation of new databases to deal with these types of data, namely NoSQL databases. Information is therefore stored in several different systems, each suited to different use cases, which makes it difficult to access data that are now spread across systems with different models and characteristics. In this work, a system capable of performing analytical queries in real time over distributed and heterogeneous data sources is proposed: EasyBDI. The system integrates data logically, without materializing it, creating an overview of the data and thus offering an abstraction over the distribution and heterogeneity of the data sources. Queries are executed interactively against the data sources, so the most recent data is always used. The system provides a user interface that helps configure data sources and automatically proposes a global schema presenting a generic, simplified view of the data, which the user can modify. Multiple star schemas can be created from the global schema. Finally, analytical queries are also posed through a user interface with drag-and-drop elements. EasyBDI addresses these recent problems with current techniques, hiding the details of the underlying data sources while allowing users with little database expertise to perform real-time analytical queries over distributed and heterogeneous data sources.
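    To make the logical-integration idea concrete, below is a minimal, hypothetical sketch of a virtual mediator: a global table is mapped onto per-source local tables, and a query against the global schema is rewritten and executed at the sources at query time, so nothing is materialized. All names, mappings, and the connector stub are invented for illustration; this is not EasyBDI's API.

```python
# In-memory stand-ins for two heterogeneous sources (e.g., SQL and NoSQL).
SOURCES = {
    ("postgres_erp", "orders"):    [{"order_id": 1, "total": 9.5}],
    ("mongo_shop",   "purchases"): [{"_id": "a7", "price": 3.0}],
}

# Global-to-local mapping for a global "sales" table (illustrative only).
GLOBAL_SCHEMA = {
    "sales": [
        {"source": "postgres_erp", "table": "orders",
         "columns": {"id": "order_id", "amount": "total"}},
        {"source": "mongo_shop", "table": "purchases",
         "columns": {"id": "_id", "amount": "price"}},
    ],
}

def fetch(source, table, columns):
    """Stand-in for a per-source connector (JDBC driver, Mongo client, ...)."""
    return [{c: row[c] for c in columns} for row in SOURCES[(source, table)]]

def query_global(table, wanted):
    """Answer a global-schema query by rewriting it against each source
    and unioning the results, so the freshest data is always read."""
    rows = []
    for m in GLOBAL_SCHEMA[table]:
        local = {g: m["columns"][g] for g in wanted}   # global -> local names
        for row in fetch(m["source"], m["table"], local.values()):
            rows.append({g: row[l] for g, l in local.items()})
    return rows

print(query_global("sales", ["id", "amount"]))
# [{'id': 1, 'amount': 9.5}, {'id': 'a7', 'amount': 3.0}]
```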

    Incorporation of ontologies in data warehouse/business intelligence systems - A systematic literature review

    Semantic Web (SW) techniques, such as ontologies, are used in Information Systems (IS) to cope with the growing need for sharing and reusing data and knowledge in various research areas. Despite the increasing emphasis on unstructured data analysis in IS, structured data and its analysis remain critical for organizational performance management. This systematic literature review analyzes the incorporation and impact of ontologies in Data Warehouse/Business Intelligence (DW/BI) systems, contributing to the current literature by classifying works based on the field of each case study, the SW techniques used, and the authors' motivations for using them, with a focus on DW/BI design, development, and exploration tasks. A search strategy was developed, including the definition of keywords, inclusion and exclusion criteria, and the selection of search engines. Ontologies are mainly defined using the Web Ontology Language (OWL) standard to support multiple DW/BI tasks, such as Dimensional Modeling, Requirement Analysis, Extract-Transform-Load, and BI Application Design. The reviewed authors present a variety of motivations for ontology-driven solutions in DW/BI, such as eliminating or solving data heterogeneity/semantics problems, increasing interoperability, facilitating integration, or providing semantic content for requirements and data analysis. Further, implications for practice and a research agenda are indicated.
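    As a hedged illustration of what "ontology-driven" means in this context, the sketch below uses rdflib to declare a tiny OWL vocabulary for a warehouse dimension; the namespace and concept names are invented and do not come from any reviewed paper.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

DW = Namespace("http://example.org/dw#")   # hypothetical vocabulary
g = Graph()
g.bind("dw", DW)

# A Customer dimension with a geographic roll-up attribute, stated in OWL,
# so DW/BI design tools could reason about dimensional structure.
g.add((DW.Dimension, RDF.type, OWL.Class))
g.add((DW.Customer,  RDF.type, OWL.Class))
g.add((DW.Customer,  RDFS.subClassOf, DW.Dimension))
g.add((DW.city, RDF.type, OWL.DatatypeProperty))
g.add((DW.city, RDFS.domain, DW.Customer))
g.add((DW.city, RDFS.comment, Literal("Roll-up level for geography")))

print(g.serialize(format="turtle"))
```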

    Contributions to Business Intelligence and IT Compliance

    [no abstract]

    The POESIA approach to data and service integration on the Semantic Web

    Advisor: Claudia Bauzer Medeiros. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
    POESIA (Processes for Open-Ended Systems for Information Analysis), the approach proposed in this work, supports the construction of complex processes that involve the integration and analysis of data from several sources, particularly in scientific applications. The approach is centered on two types of Semantic Web mechanisms: scientific workflows, to specify and compose Web services; and domain ontologies, to enable semantic interoperability and management of data and processes. The main contributions of this thesis are: (i) a theoretical framework to describe, discover, and compose data and services on the Web, including rules to check the semantic consistency of resource compositions; (ii) ontology-based methods to help data integration and estimate data provenance in cooperative processes on the Web; (iii) partial implementation and validation of the proposal in a real application in the domain of agricultural planning, analyzing the benefits and the efficiency and scalability limitations of current Semantic Web technology when faced with large volumes of data.
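    The sketch below illustrates, under invented names, the kind of ontology-based consistency rule such a framework can formalize: two services may be chained in a workflow only if the output concept of the first is subsumed by the input concept of the second. The tiny concept hierarchy and service signatures are hypothetical, not POESIA's.

```python
SUBCLASS_OF = {                 # concept -> parent concept (toy ontology)
    "SatelliteImage": "Image",
    "Image": "Data",
    "CropMap": "Data",
}

def is_a(concept, ancestor):
    """True if `concept` equals `ancestor` or is one of its subconcepts."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

SERVICES = {                    # invented Web-service signatures
    "acquire_scene":     {"in": "Data",  "out": "SatelliteImage"},
    "classify_land_use": {"in": "Image", "out": "CropMap"},
}

def consistent(pipeline):
    """Check that each step's output concept fits the next step's input."""
    return all(is_a(SERVICES[a]["out"], SERVICES[b]["in"])
               for a, b in zip(pipeline, pipeline[1:]))

print(consistent(["acquire_scene", "classify_land_use"]))  # True
```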

    Acquisition and Declarative Analytical Processing of Spatio-Temporal Observation Data

    A generic framework for spatio-temporal observation data acquisition and declarative analytical processing has been designed and implemented in this thesis. Its main contributions may be summarized as follows: 1) generalization of a data acquisition and dissemination server, with broad applicability in scientific and industrial domains, providing flexibility in the incorporation of different technologies for data acquisition, persistence, and dissemination; 2) definition of a new hybrid logical-functional paradigm to formalize a novel data model for the integrated management of entity and sampled data; 3) definition of a novel spatio-temporal declarative data analysis language for that data model; 4) definition of a data warehouse data model supporting observation data semantics, including the application of the above language to the declarative definition of observation processes executed during observation data load; and 5) a column-oriented, parallel, and distributed implementation of the spatial analysis declarative language. The huge amount of data to be processed forces the exploitation of current multi-core hardware architectures and multi-node cluster infrastructures.
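    As a rough, assumed illustration of the declarative spatio-temporal analytics the framework targets (not its actual hybrid logical-functional language), the pandas sketch below aggregates sampled observations by time window and spatial cell; all column names and values are invented.

```python
import pandas as pd

# Toy sampled observations: timestamp, position, and a measured value.
obs = pd.DataFrame({
    "ts":   pd.to_datetime(["2021-01-01 00:10", "2021-01-01 00:40",
                            "2021-01-01 01:05", "2021-01-01 01:20"]),
    "x":    [0.2, 0.8, 1.4, 1.6],
    "y":    [0.1, 0.3, 0.9, 1.1],
    "temp": [11.0, 11.4, 10.2, 10.6],
})

# Declarative query: mean temperature per hour and per 1x1 spatial cell.
result = (obs
          .assign(cell_x=obs.x.floordiv(1).astype(int),
                  cell_y=obs.y.floordiv(1).astype(int))
          .groupby([pd.Grouper(key="ts", freq="1h"), "cell_x", "cell_y"])
          ["temp"].mean())
print(result)
```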

    Enhancing Data Warehouse management through semi-automatic data integration and complex graph generation

    Strategic information is one of the main assets of many organizations, and in the near future it will become increasingly important for enabling decision-makers to answer questions about their business, such as how to increase profitability. A proper decision-making process benefits from information that is frequently scattered among several heterogeneous databases, which may come from several organizational systems and even from external sources. As a result, managers have to integrate several databases from independent data sources containing semantic differences and no specific or canonical concept description. Data Warehouse systems were born to integrate such heterogeneous data so that it can subsequently be extracted and analyzed according to managers' needs and business plans. Besides being difficult and onerous to design, integrate, and build, Data Warehouse systems present another issue: the difficulty of representing, through easy-to-understand views, the multidimensional information typical of OLAP operations, such as aggregations on data cubes, extraction of sub-cubes, or rotations of the data axes... [edited by author]
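    For readers unfamiliar with the OLAP operations named above, here is a small, self-contained toy (not from the thesis) showing roll-up and slicing on a data cube stored as a Python dictionary.

```python
from collections import defaultdict

# Toy cube: {(product, region, year): sales}.
cube = {
    ("tv", "eu", 2013): 100, ("tv", "us", 2013): 150,
    ("pc", "eu", 2014): 80,  ("pc", "us", 2014): 120,
}

def roll_up(cube, drop_axis):
    """Aggregate the cube over one dimension (e.g., sum across regions)."""
    out = defaultdict(int)
    for key, value in cube.items():
        out[key[:drop_axis] + key[drop_axis + 1:]] += value
    return dict(out)

def slice_(cube, axis, value):
    """Extract the sub-cube where one dimension is fixed."""
    return {k: v for k, v in cube.items() if k[axis] == value}

print(roll_up(cube, 1))        # sales per (product, year)
print(slice_(cube, 2, 2013))   # the 2013 sub-cube
```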

    Just-in-time Analytics Over Heterogeneous Data and Hardware

    Industry and academia are continuously becoming more data-driven and data-intensive, relying on the analysis of a wide variety of datasets to gain insights. At the same time, data variety increases continuously across multiple axes. First, data comes in multiple formats, such as the binary tabular data of a DBMS, raw textual files, and domain-specific formats. Second, different datasets follow different data models, such as the relational and the hierarchical one. Data location also varies: Some datasets reside in a central "data lake", whereas others lie in remote data sources. In addition, users execute widely different analysis tasks over all these data types. Finally, the process of gathering and integrating diverse datasets introduces several inconsistencies and redundancies in the data, such as duplicate entries for the same real-world concept. In summary, heterogeneity significantly affects the way data analysis is performed. In this thesis, we aim for data virtualization: Abstracting data out of its original form and manipulating it regardless of the way it is stored or structured, without a performance penalty. To achieve data virtualization, we design and implement systems that i) mask heterogeneity through the use of heterogeneity-aware, high-level building blocks and ii) offer fast responses through on-demand adaptation techniques. Regarding the high-level building blocks, we use a query language and algebra to handle multiple collection types, such as relations and hierarchies, express transformations between these collection types, as well as express complex data cleaning tasks over them. In addition, we design a location-aware compiler and optimizer that masks away the complexity of accessing multiple remote data sources. Regarding on-demand adaptation, we present a design to produce a new system per query. The design uses customization mechanisms that trigger runtime code generation to mimic the system most appropriate to answer a query fast: Query operators are thus created based on the query workload and the underlying data models; the data access layer is created based on the underlying data formats. In addition, we exploit emerging hardware by customizing the system implementation based on the available heterogeneous processors (CPUs and GPGPUs). We thus pair each workload with its ideal processor type. The end result is a just-in-time database system that is specific to the query, data, workload, and hardware instance. This thesis redesigns the data management stack to natively cater for data heterogeneity and exploit hardware heterogeneity. Instead of centralizing all relevant datasets, converting them to a single representation, and loading them in a monolithic, static, suboptimal system, our design embraces heterogeneity. Overall, our design decouples the type of performed analysis from the original data layout; users can perform their analysis across data stores, data models, and data formats, but at the same time experience the performance offered by a custom system that has been built on demand to serve their specific use case.
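    The key mechanism of the paragraph above, generating a specialized system component per query at run time, can be illustrated with a toy code generator. The sketch below (invented, not the thesis' engine) emits and compiles a scan function specialized to the file format and the fields a query touches, instead of interpreting a generic access path.

```python
DELIMS = {"csv": ",", "tsv": "\t"}   # hypothetical supported raw formats

def compile_scan(fmt, wanted_fields):
    """Emit and compile a row-scan function specialized to `fmt` and the
    projected fields, in the spirit of per-query code generation."""
    fields = ", ".join(f"cells[{i}]" for i in wanted_fields)
    src = (f"def scan(line):\n"
           f"    cells = line.rstrip().split({DELIMS[fmt]!r})\n"
           f"    return ({fields},)\n")
    env = {}
    exec(compile(src, "<generated>", "exec"), env)   # JIT-style codegen
    return env["scan"]

scan = compile_scan("csv", [0, 2])      # specialized for this query's needs
print(scan("alice,42,lausanne"))        # ('alice', 'lausanne')
```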

    Designing Cross-Company Business Intelligence Networks

    Business Intelligence (BI) is a well-established term for methods, concepts, and tools to retrieve, store, deliver, and analyze data for management and business purposes. Although collaboration across company borders has substantially increased over the past decades, little research has been conducted specifically on Cross-Company BI (CCBI). In this thesis, a working definition of CCBI and its distinction from general collaborative decision making are proposed. Based on a reference model that takes existing research and related approaches of adjacent fields into account, a peer-to-peer network design is created. Extensive simulation and parameter testing show that the design proves valuable and competitive with centralized approaches, and that obtaining a critical mass of participants leads to improved usefulness of the network. To quantify the observations, appropriate quality measures, rigorously derived from respected concepts on data and information quality and multidimensional data models, are introduced and validated.
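    To give a feel for the network-design simulations described, here is a deliberately simplified, hypothetical model of the critical-mass effect: the usefulness of a CCBI network, measured as the fraction of answerable queries, grows as peers join. Concept counts and peer sizes are invented, not the thesis' parameters.

```python
import random

random.seed(1)
CONCEPTS = range(50)            # distinct business data concepts in the network

def usefulness(n_peers, concepts_per_peer=5, trials=2000):
    """Fraction of random queries answerable by at least one peer."""
    peers = [set(random.sample(CONCEPTS, concepts_per_peer))
             for _ in range(n_peers)]
    hits = 0
    for _ in range(trials):
        wanted = random.choice(CONCEPTS)
        hits += any(wanted in p for p in peers)
    return hits / trials

for n in (1, 5, 10, 20, 40):    # usefulness rises steeply, then saturates
    print(n, round(usefulness(n), 2))
```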

    From cluster databases to cloud storage: Providing transactional support on the cloud

    Over the past three decades, technology constraints (e.g., the capacity of storage devices, communication network bandwidth) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, modern concerns in data storage posed by Big Data and cloud computing, aimed at overcoming the scalability and elasticity limitations of classic databases, are pushing practitioners to relax some important properties featured by transactions, which excludes several applications that are unable to fit this strategy due to their intrinsic transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques used by classic databases to support transactions is critical to identify the reasons these systems degrade their throughput when the number of nodes and/or the amount of data rockets. Besides, this analysis is devoted to justifying the design rationale behind cloud repositories, in which transactions have been generally neglected. Furthermore, enabling applications that are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects of different transaction profiles under different conditions on database scalability are studied. This analysis is followed by a review of existing cloud storage repositories, which claim to be highly dynamic, scalable, and available, leading to an evaluation of the parameters and features these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to store data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols in the database simulator, together with the experiences derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage further research in this area.
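    The epidemic propagation principle mentioned above can be sketched with a toy gossip simulation (parameters invented, not Epidemia's): each replica that knows an update forwards it to a few random peers per round, reaching all nodes in a logarithmic number of rounds with high probability.

```python
import random

random.seed(7)

def gossip(n_nodes, fanout=2):
    """Rounds needed until one node's update reaches every replica."""
    infected = {0}                      # node 0 accepts a client update
    rounds = 0
    while len(infected) < n_nodes:
        rounds += 1
        for _ in list(infected):
            # Each informed node forwards the update to `fanout` random peers.
            infected.update(random.randrange(n_nodes) for _ in range(fanout))
    return rounds

print(gossip(1000))   # typically O(log n) rounds for 1000 replicas
```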

    Querying and managing complex networks

    Advisor: André Santanchè. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
    Understanding and quantifying the emergent properties of natural and man-made networks such as food webs, social interactions, and transportation infrastructures is a challenging task. The complex networks field was developed to encompass measurements, algorithms, and techniques to tackle such topics. Although complex networks research has been successfully applied to several areas of human activity, there is still a lack of common infrastructures for routine tasks, especially those related to data management. On the other hand, the databases field has focused on mastering data management issues since its beginnings, several decades ago. Database systems, however, offer limited network analysis capabilities. To enable better support for complex network analysis tasks, a database system must offer adequate querying and data management capabilities. This thesis advocates a tighter integration between the two areas and presents our efforts towards this goal. Here we describe the Complex Data Management System (CDMS), which enables explorative querying of complex networks through a declarative query language. Query results are ranked based on network measurements assessed at query time. To support query processing, we introduce the Beta-algebra, which offers an operator capable of representing diverse measurements typical of complex network analysis. The algebra offers opportunities for transparent query optimization through query rewritings, proposed and discussed here. We also introduce the mapper mechanism for relationship management, which is integrated into the query language. The flexible query language and data management mechanisms are useful in scenarios beyond complex network analysis; we demonstrate the use of the CDMS in applications such as institutional data integration, information retrieval, classification, and recommendation. All aspects of the proposal have been implemented and tested with real and synthetic data. (Funding: FAPESP grant 2012/15988-9; CAPES.)
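    Below is a minimal, invented illustration (not CDMS syntax) of the central idea of ranking query results by a network measurement assessed at query time, here plain degree centrality on a toy graph stored as an adjacency dict.

```python
GRAPH = {                       # toy undirected network
    "a": {"b", "c", "d"},
    "b": {"a"},
    "c": {"a", "d"},
    "d": {"a", "c"},
}

def degree(node):
    """A simple network measurement: how many neighbors a node has."""
    return len(GRAPH[node])

def rank_by_measurement(candidates, measure=degree):
    """Order query results by a measurement computed at query time."""
    return sorted(candidates, key=measure, reverse=True)

print(rank_by_measurement(["b", "c", "a"]))   # ['a', 'c', 'b']
```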