
    BigDimETL with NoSQL Database

    In the last decade, we have witnessed an explosion of the volume of data available on the Web, driven by rapid technological advances and the spread of smart devices and social networks such as Twitter, Facebook, and Instagram. The concept of Big Data emerged to address this constant growth. Many domains must take this growth into consideration, especially Business Intelligence (BI), where data holds knowledge that is crucial for effective decision making. However, new problems and challenges have appeared for Decision Support Systems that must be addressed. Accordingly, the purpose of this paper is to adapt Extract-Transform-Load (ETL) processes to Big Data technologies in order to support decision making and knowledge discovery. We propose a new approach called Big Dimensional ETL (BigDimETL) that deals with the ETL development process while taking the multidimensional structure into account. In addition, to accelerate data handling, we use the MapReduce paradigm and HBase as a distributed storage mechanism that provides data warehousing capabilities. Experimental results show that our adaptation of the ETL operations performs well, especially for the Join operation.
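
    The abstract does not reproduce the join algorithm itself; as a minimal sketch of the reduce-side join pattern that MapReduce-based ETL adaptations commonly rely on (the record layouts and table names below are invented for illustration, not taken from the paper):

        # Reduce-side join simulated in plain Python: a map phase tags records,
        # a shuffle groups them by join key, a reduce phase joins per key.
        from collections import defaultdict

        orders = [(1, "o-100"), (2, "o-101"), (1, "o-102")]  # (customer_id, order_id)
        customers = [(1, "Alice"), (2, "Bob")]               # (customer_id, name)

        def map_phase():
            # Tag each record with its source so the reducer can tell them apart.
            for cid, oid in orders:
                yield cid, ("order", oid)
            for cid, name in customers:
                yield cid, ("customer", name)

        # Shuffle: group every tagged record by the join key.
        groups = defaultdict(list)
        for key, value in map_phase():
            groups[key].append(value)

        # Reduce: per key, emit the cross product of customer and order records.
        joined = []
        for values in groups.values():
            names = [v for tag, v in values if tag == "customer"]
            order_ids = [v for tag, v in values if tag == "order"]
            joined.extend((n, o) for n in names for o in order_ids)

        print(joined)  # [('Alice', 'o-100'), ('Alice', 'o-102'), ('Bob', 'o-101')]

    Tagging each record with its source in the map phase is what lets a single reducer reconstruct the join for one key; an adaptation like the one the paper describes would run the same pattern at cluster scale.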

    Optimization of Columnar NoSQL Data Warehouse Model with Clarans Clustering Algorithm

    In order to meet the needs of business leaders, decision-makers have resorted to integrating external sources (such as Linked Open Data) into the decision-making system, enriching their existing data warehouses with new concepts that bring added value to their organizations, enhance productivity, and retain customers. However, the traditional data warehouse environment is not suited to supporting external Big Data. To deal with this new challenge, several research efforts have turned to the direct conversion of classical relational data warehouses into columnar NoSQL data warehouses, whereas the existing advanced works based on clustering algorithms are very limited and have several shortcomings. In this context, our paper proposes a new solution that builds an optimized columnar data warehouse based on the CLARANS clustering algorithm, which has proven its effectiveness in generating optimal column families. Experimental results confirm the validity of our system through a detailed comparative study between the existing advanced approaches and our proposed optimized method.
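
    The abstract does not spell out how CLARANS is applied; the following is a minimal sketch of a CLARANS-style randomized medoid search that groups attributes into column families, assuming an invented co-access distance matrix (the matrix, k, and the iteration budget are not from the paper):

        # CLARANS-style medoid search, simplified for illustration.
        import random

        # dist[i][j]: how rarely attributes i and j are queried together
        # (0 = always co-accessed, 1 = never). Values are invented.
        dist = [[0.0, 0.1, 0.9, 0.8],
                [0.1, 0.0, 0.8, 0.9],
                [0.9, 0.8, 0.0, 0.2],
                [0.8, 0.9, 0.2, 0.0]]
        n, k = len(dist), 2

        def cost(medoids):
            # Sum of each attribute's distance to its nearest medoid.
            return sum(min(dist[i][m] for m in medoids) for i in range(n))

        random.seed(0)
        best = random.sample(range(n), k)
        for _ in range(100):  # randomized neighbor exploration, CLARANS-style
            trial = list(best)
            trial[random.randrange(k)] = random.choice(
                [i for i in range(n) if i not in best])
            if cost(trial) < cost(best):
                best = trial

        # Assign every attribute to its nearest medoid -> one column family each.
        families = {m: [] for m in best}
        for i in range(n):
            families[min(best, key=lambda m: dist[i][m])].append(i)
        print(families)

    The intuition is that attributes frequently queried together end up in the same column family, so a typical query touches fewer families.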

    EXODuS: Exploratory OLAP over Document Stores

    OLAP has been extensively used for a couple of decades as a data analysis approach to support decision making on enterprise structured data. Now, with the wide diffusion of NoSQL databases holding semi-structured data, there is a growing need to enable OLAP on document stores as well, to allow non-expert users to get new insights and make better decisions. Unfortunately, due to their schemaless nature, document stores are hardly accessible via direct OLAP querying. In this paper we propose EXODuS, an interactive, schema-on-read approach to enable OLAP querying of document stores in the context of self-service BI and exploratory OLAP. To discover multidimensional hierarchies in document stores we adopt a data-driven approach based on the mining of approximate functional dependencies; to ensure good performance, we incrementally build local portions of hierarchies for the levels involved in the current user query. Users execute an analysis session by expressing well-formed multidimensional queries related by OLAP operations; these queries are then translated into the native query language of MongoDB, one of the most popular document-oriented DBMSs. An experimental evaluation on real-world datasets shows the efficiency of our approach and its compatibility with a real-time setting.
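
    The translation rules themselves are in the paper, not the abstract; as a rough idea of the target, here is a hedged sketch of rendering a roll-up query as a MongoDB aggregation pipeline (the collection and field names are invented for the example):

        # A roll-up over two dimension levels rendered as a MongoDB
        # aggregation pipeline. "sales", "store.city", "date.year" and
        # "amount" are invented names.
        def rollup_pipeline(group_levels, measure):
            group_id = {lvl.replace(".", "_"): f"${lvl}" for lvl in group_levels}
            return [
                {"$group": {"_id": group_id, "total": {"$sum": f"${measure}"}}},
                {"$sort": {"total": -1}},
            ]

        pipeline = rollup_pipeline(["store.city", "date.year"], "amount")
        print(pipeline)
        # With pymongo this would run as: db.sales.aggregate(pipeline)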

    Pervasive data science applied to the society of services

    Integrated master's dissertation in Information Systems Engineering and Management. With the technological progress of the last few years, and now with the actual implementation of the Internet of Things concept, an enormous amount of data is collected every minute. This raises a problem: how can we process such an amount of data and extract relevant knowledge from it in useful time? This is not an easy issue to solve, because most of the time one must deal not only with huge volumes but also with different kinds of data, which makes the problem even more complex. Today, and in an increasing way, huge quantities of the most varied types of data are produced. These data alone do not add value to the organizations that collect them, but when subjected to data analytics processes they can be converted into crucial sources of information at the core of the business. The focus of this project is therefore to explore this problem and give it a modular solution, adaptable to different realities, built on recent technologies and allowing users to access information wherever and whenever they wish. In the first phase of this dissertation, bibliographic research and a review of the collected sources were carried out in order to understand which kinds of solutions already exist and which questions remain open. A solution composed of four layers was then developed: the data are submitted to a treatment process (comprising eleven treatment functions that populate the multidimensional data model previously designed), and an OLAP layer was built that handles unstructured as well as structured data. In the end, it is possible to consult a set of four dashboards (available on a web application) based on more than twenty basic queries and supporting filtering through a dynamic query. As a proof of concept, the case study used IOTech, the company that provided the data needed for this dissertation and for whose business five Key Performance Indicators were defined. Two methodologies were applied during the project: Design Science Research for the research component and SCRUM for the practical component.
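
    The eleven treatment functions are not listed in the abstract; purely as an illustration of the pattern it describes (treatment functions feeding a previously designed multidimensional model), here is a toy sketch with two invented functions and a minimal star schema:

        # Invented raw records and two toy treatment functions feeding a
        # time dimension and a fact list. Field names are hypothetical.
        from datetime import datetime

        raw = [{"sensor": " S1 ", "value": "12.5", "ts": "2024-01-02T10:00:00"},
               {"sensor": "s2", "value": None, "ts": "2024-01-02T10:05:00"}]

        def clean_sensor(rec):   # treatment: normalize identifiers
            rec["sensor"] = rec["sensor"].strip().lower()
            return rec

        def impute_value(rec):   # treatment: handle missing measures
            rec["value"] = float(rec["value"]) if rec["value"] is not None else 0.0
            return rec

        dim_time, facts = {}, []
        for rec in (impute_value(clean_sensor(r)) for r in raw):
            ts = datetime.fromisoformat(rec["ts"])
            key = ts.strftime("%Y%m%d%H")  # surrogate key for the time dimension
            dim_time.setdefault(key, {"year": ts.year, "hour": ts.hour})
            facts.append({"sensor": rec["sensor"], "time_key": key,
                          "value": rec["value"]})

        print(dim_time)
        print(facts)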

    Infraestrutura para análise de tráfego e comportamento de condutores (Infrastructure for traffic and driver behavior analysis)

    Master's in Computer Engineering and Telematics. The work in this dissertation can be seen as a traffic decision support system. It was motivated by smart city projects, in which transportation is a major area. With the evolution of in-vehicle technology it is possible to gather ever more information about vehicles in a real scenario, which allows a more detailed analysis of traffic and driver behavior. The research done on related work in this area showed that many of the analyses performed did not take context into consideration; some of these studies even proposed integrating factors that influence driving as future work. In this dissertation the concepts of the related work are integrated, as are heterogeneous data sources with context information. A study of different database paradigms was also performed, covering the most relevant NoSQL paradigms, their use cases, and their main implementations. This dissertation proposes the design and implementation of an infrastructure for analyzing traffic and driver behavior based on trajectory data gathered from moving vehicles. As a proof of concept, two case studies were carried out with data extracted from two distinct sources of vehicle trajectories. A set of tools was developed to extract, transform, and load data into the data marts that were built. Visualization tools were used to support visual analysis: charts for the aggregate measures and GIS software for the geospatial details. The infrastructure was designed to be adaptable to different application scenarios involving moving vehicles, from public transportation management to behavior-based insurance. The results obtained make it possible to study traffic and driver behavior in order to gain knowledge in this area and possibly improve traffic management or the driving experience.
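
    As a hedged illustration of the aggregate-measure side of such an infrastructure (the GIS part is left out), here is a sketch that rolls invented trajectory records up to an average-speed measure per road segment and hour:

        # Roll-up of trajectory points into a data-mart style measure:
        # average speed per (road segment, hour). Record layout is invented.
        from collections import defaultdict

        points = [  # (segment_id, hour_of_day, speed_kmh)
            ("A1", 8, 92.0), ("A1", 8, 87.5), ("A1", 9, 110.0),
            ("N109", 8, 54.0), ("N109", 8, 61.0),
        ]

        sums = defaultdict(lambda: [0.0, 0])
        for seg, hour, speed in points:
            acc = sums[(seg, hour)]
            acc[0] += speed
            acc[1] += 1

        avg_speed = {key: total / count for key, (total, count) in sums.items()}
        print(avg_speed)  # {('A1', 8): 89.75, ('A1', 9): 110.0, ('N109', 8): 57.5}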

    A New Big Data Benchmark for OLAP Cube Design Using Data Pre-Aggregation Techniques

    In recent years, several new technologies have enabled OLAP processing over Big Data sources. Among these technologies, we highlight those that allow data pre-aggregation because of their demonstrated performance in data querying. This is the case of Apache Kylin, a Hadoop-based technology that supports sub-second queries over fact tables with billions of rows combined with ultra-high-cardinality dimensions. However, taking advantage of data pre-aggregation techniques to design analytic models for Big Data OLAP is not a trivial task. It requires very advanced knowledge of the underlying technologies and of user querying patterns. A wrong design of the OLAP cube significantly alters several key performance metrics, including (i) the analytic capabilities of the cube (time and ability to provide an answer to a query), (ii) the size of the OLAP cube, and (iii) the time required to build the OLAP cube. Therefore, in this paper we (i) propose a benchmark to aid Big Data OLAP designers in choosing the most suitable cube design for their goals, (ii) identify and describe the main requirements and trade-offs for effectively designing a Big Data OLAP cube that takes advantage of data pre-aggregation techniques, and (iii) validate our benchmark in a case study. This work has been funded by the ECLIPSE project (RTI2018-094283-B-C32) from the Spanish Ministry of Science, Innovation and Universities.
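
    One way to see why cube design is critical: a full cube over n dimensions materializes 2^n cuboids, so pruning decisions drive all three metrics above. A back-of-the-envelope sketch (not the paper's benchmark; the dimension names are invented):

        # Every subset of the dimension set is a pre-aggregated cuboid,
        # so a full cube over n dimensions has 2**n cuboids.
        from itertools import combinations

        dims = ["date", "store", "product", "customer"]  # invented dimensions
        cuboids = [combo for r in range(len(dims) + 1)
                   for combo in combinations(dims, r)]

        print(len(cuboids))  # 16 == 2 ** 4
        print(cuboids[:5])   # (), ('date',), ('store',), ('product',), ('customer',)
        # Pruning cuboids (e.g. via aggregation groups) trades cube size and
        # build time against the ability to answer some queries pre-aggregated.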

    Uma proposta de arquitetura NoLAP para um sistema de apoio à decisão acadêmico (A NoLAP architecture proposal for an academic decision support system)

    Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2020. This research proposes the migration of the academic Data Warehouse of the University of Brasília (UnB), developed on a relational database (a ROLAP architecture), to a column-family NoSQL database architecture. The approach taken in this work draws on the most relevant contributions in the state of the art on migrating Data Warehouse systems to column-family databases such as HBase, combined with solutions for processing large volumes of data on a server cluster, such as Apache Hadoop. The migration project emerged from the need to study new architectural paradigms for the Academic Decision Support System in the face of problems raised by the massive growth of the data volume generated by the information systems of the University of Brasília.
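
    The abstract does not describe the column-family layout chosen in the dissertation; as a generic sketch of mapping a star-schema fact and its dimensions into an HBase-style row, here is an illustration with an invented academic schema:

        # Invented schema: one fact row plus dimension attributes mapped to a
        # column-family layout with a composite row key.
        fact = {"student_id": 42, "course_id": "CS101", "term": "2020-1", "grade": 8.7}
        dims = {"student": {"name": "Ana", "campus": "Darcy Ribeiro"},
                "course": {"dept": "CIC", "credits": 4}}

        # Composite row key keeps one student's records contiguous in the table.
        row_key = f"{fact['student_id']}#{fact['term']}#{fact['course_id']}"

        # One column family per dimension, plus one for the measures.
        cells = {f"student:{k}": str(v) for k, v in dims["student"].items()}
        cells.update({f"course:{k}": str(v) for k, v in dims["course"].items()})
        cells["measures:grade"] = str(fact["grade"])

        print(row_key)
        print(cells)
        # With the happybase client this could be written roughly as:
        #   table.put(row_key.encode(),
        #             {k.encode(): v.encode() for k, v in cells.items()})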

    Business intelligence to support NOVA IMS academic services BI system

    Project work presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Kimball argues that Business Intelligence is one of the most important assets of any organization, allowing it to store, explore, and add value to the organization's data, which ultimately helps the decision-making process. Nowadays some organizations, and in this specific case some schools, are not yet realizing the full potential of their data, and business intelligence is one of the best-known tools to help schools with this issue, as some of them are still using outdated information systems and do not yet apply business intelligence techniques to their increasing amounts of data so as to turn it into useful information and knowledge. In the present report, I intend to analyse the current NOVA IMS academic services data and the rationale behind the need to work with this data, so as to propose a solution that will ultimately help the school board and the academic services make better-supported decisions. To that end, a Data Warehouse was developed that cleans and transforms data from the source database. Another important step to help the academic services is to present a series of reports that surface information for the decision-making process.

    E‐ARK Dissemination Information Package (DIP) Final Specification

    The primary aim of this report is to present the final version of the E-ARK Dissemination Information Package (DIP) formats. The secondary aim is to describe the access scenarios in which these DIP formats will be rendered for use.

    A Business Intelligence Solution, based on a Big Data Architecture, for processing and analyzing the World Bank data

    The rapid growth in data volume and complexity has made it necessary to adopt advanced technologies to extract valuable insights for decision-making. This project aims to address this need by developing a comprehensive framework that combines Big Data processing, analytics, and visualization techniques to enable effective analysis of World Bank data. The problem addressed in this study is the need for a scalable and efficient Business Intelligence solution that can handle the vast amounts of data generated by the World Bank. Therefore, a Big Data architecture is implemented on a real use case for the International Bank for Reconstruction and Development. The findings of this project demonstrate the effectiveness of the proposed solution. Through the integration of Apache Spark and Apache Hive, data is processed using Extract, Transform and Load techniques, allowing for efficient data preparation. The use of Apache Kylin enables the construction of a multidimensional model, facilitating fast and interactive queries on the data. Moreover, data visualization techniques are employed to create intuitive and informative visual representations of the analysed data. The key conclusions drawn from this project highlight the advantages of a Big Data-driven Business Intelligence solution for processing and analysing World Bank data: the implemented framework shows improved scalability, performance, and flexibility compared to traditional approaches. In conclusion, this bachelor thesis presents a Business Intelligence solution based on a Big Data architecture for processing and analysing World Bank data. The findings emphasize the importance of scalable and efficient data processing techniques, multidimensional modelling, and data visualization for deriving valuable insights, and demonstrate the potential of Big Data Business Intelligence solutions in addressing the challenges associated with large-scale data analysis.
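
    The abstract names the stack without code; as a hedged sketch of what the Spark-plus-Hive ETL step could roughly look like in PySpark (the file path, database, and column names are invented, not taken from the thesis):

        # Sketch of an ETL step with PySpark writing to a Hive table that an
        # OLAP layer such as Kylin could then read. Names are invented.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = (SparkSession.builder
                 .appName("worldbank-etl")
                 .enableHiveSupport()  # lets saveAsTable target the Hive metastore
                 .getOrCreate())

        # Extract: raw indicator data, one row per (country, indicator, year).
        raw = spark.read.csv("/data/worldbank/indicators.csv", header=True)

        # Transform: cast the measure and drop rows without a value.
        clean = (raw.withColumn("value", F.col("value").cast("double"))
                    .where(F.col("value").isNotNull()))

        # Load: persist as a Hive table for downstream multidimensional modelling.
        clean.write.mode("overwrite").saveAsTable("worldbank.indicator_facts")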