6 research outputs found

    Social Network Analysis of Cryptocurrency using Business Intelligence Dashboard

    There are currently more than 10,000 cryptocurrencies available to buy on the online market, each trading at widely varying prices. The price fluctuation of each coin is affected by social events and by the influential companies or people behind it. The aim of this research is to compare three cryptocurrencies, Bitcoin, Ethereum, and Binance Coin, using Social Network Analysis (SNA) and visualizing the results in a Business Intelligence (BI) dashboard. This study uses the SNA parameters of degree, diameter, modularity, centrality, and path length for each network and its actors, together with actual market prices, crawling (collecting) the data from Twitter as the social media platform. The research conducted shows that the popularity of a cryptocurrency is affected by its market price and by the activeness of its actors on social media. These results are important because they could support decisions to buy cryptocurrencies with high popularity on social media, as these tend to retain their value over time and could benefit from price spikes driven by influential people. DOI: 10.28991/HIJ-2022-03-02-09
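
    As an illustration of the SNA parameters named in this abstract, below is a minimal sketch (not the authors' pipeline) that computes degree, centrality, diameter, average path length, and modularity on a small Twitter mention graph using networkx; the usernames and edges are invented only for the example.

        import networkx as nx
        from networkx.algorithms import community

        # Hypothetical mention graph: an edge (u, v) means user u mentioned
        # user v in a tweet about a given coin.
        G = nx.Graph([("alice", "bob"), ("bob", "carol"),
                      ("carol", "alice"), ("dave", "alice")])

        degree = dict(G.degree())                           # degree per actor
        centrality = nx.degree_centrality(G)                # normalized degree centrality
        diameter = nx.diameter(G)                           # longest shortest path
        path_length = nx.average_shortest_path_length(G)    # average path length
        parts = community.greedy_modularity_communities(G)  # community detection
        modularity = community.modularity(G, parts)         # modularity of that split

        print(degree, centrality, diameter, path_length, modularity)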

    Challenging SQL-on-Hadoop performance with Apache Druid

    In Big Data, SQL-on-Hadoop tools usually provide satisfactory performance for processing vast amounts of data, although new emerging tools may be an alternative. This paper evaluates whether Apache Druid, an innovative column-oriented data store suited for online analytical processing workloads, is an alternative to some of the well-known SQL-on-Hadoop technologies, and assesses its potential in this role. In this evaluation, Druid, Hive and Presto are benchmarked with increasing data volumes. The results point to Druid as a strong alternative, achieving better performance than Hive and Presto, and show the potential of integrating Hive and Druid, enhancing the capabilities of both tools. This work is supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT - Fundação para a Ciência e Tecnologia within Project UID/CEC/00319/2013 and by European Structural and Investment Funds in the FEDER component, COMPETE 2020 (Funding Reference: POCI-01-0247-FEDER-002814)
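
    For context on how analytical queries reach Druid, here is a minimal sketch of posting a SQL query to Druid's HTTP SQL endpoint (/druid/v2/sql); the host, port, datasource, and column names are assumptions for illustration, not the benchmark's setup.

        import requests

        # Druid's SQL endpoint, here on a locally running router (port assumed).
        DRUID_SQL = "http://localhost:8888/druid/v2/sql"

        # Hypothetical OLAP-style aggregation over an assumed datasource.
        query = """
        SELECT channel, COUNT(*) AS events
        FROM "wikipedia"
        GROUP BY channel
        ORDER BY events DESC
        LIMIT 10
        """

        resp = requests.post(DRUID_SQL, json={"query": query}, timeout=30)
        resp.raise_for_status()
        for row in resp.json():  # Druid returns one JSON object per result row
            print(row)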

    Intelligent event broker: a complex event processing system in big data contexts

    In Big Data contexts, many batch- and streaming-oriented technologies have emerged to deal with highly valuable sources of events, such as Internet of Things (IoT) platforms, the Web, and several types of databases, among others. The huge amount of heterogeneous data constantly generated by a world of interconnected things, and the need for (semi-)automated decision-making processes through Complex Event Processing (CEP) and Machine Learning (ML), have raised the need for innovative architectures capable of processing events in a streamlined, scalable, analytical, and integrated way. This paper presents the Intelligent Event Broker, a CEP system built upon flexible and scalable Big Data techniques and technologies, highlighting its system architecture, software packages, and classes. A demonstration case in Bosch's Industry 4.0 context is presented, detailing how the system can be used to manage and improve the quality of the manufacturing process, showing its usefulness for solving real-world event-oriented problems. This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2019 and the Doctoral scholarship PD/BDE/135101/2017. This paper uses icons made by Freepik, from www.flaticon.com
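
    As a rough illustration of rule-based complex event processing in the spirit of this abstract (a minimal sketch, not the Intelligent Event Broker's actual code), each rule below pairs a condition over an incoming event with an action to trigger; the event fields and threshold are invented.

        from dataclasses import dataclass
        from typing import Any, Callable, Dict

        Event = Dict[str, Any]  # e.g. {"source": "iot", "temperature": 104.2}

        @dataclass
        class Rule:
            name: str
            condition: Callable[[Event], bool]  # predicate over an event
            action: Callable[[Event], None]     # fired when the predicate holds

        rules = [
            Rule("overheat",
                 lambda e: e.get("temperature", 0) > 100,
                 lambda e: print("ALERT:", e)),
        ]

        def process(stream):
            # Check every incoming event against every registered rule.
            for event in stream:
                for rule in rules:
                    if rule.condition(event):
                        rule.action(event)

        process([{"source": "iot", "temperature": 104.2},
                 {"source": "iot", "temperature": 60.0}])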

    On the use of simulation as a Big Data semantic validator for supply chain management

    Simulation stands out as an appropriate method for the Supply Chain Management (SCM) field. Nevertheless, to produce accurate simulations of Supply Chains (SCs), several business processes must be considered. Thus, when using real data in these simulation models, Big Data concepts and technologies become necessary, as the involved data sources generate data at increasing volume, velocity and variety, in what is known as a Big Data context. While developing such a solution, several data issues were found, with simulation proving to be more efficient than traditional data profiling techniques in identifying them. This paper therefore proposes the use of simulation as a semantic validator of the data, proposes a classification for these issues, and quantifies their impact on the volume of data used in the final solution. The paper concludes that, while SC simulations using Big Data concepts and technologies are within the grasp of organizations, their data models still require considerable improvements in order to produce faithful mimics of their SCs. In fact, it was also found that simulation can help in identifying and bypassing some of these issues. This work has been supported by FCT (Fundação para a Ciência e Tecnologia) within the Project Scope: UID/CEC/00319/2019 and by the Doctoral scholarship PDE/BDE/114566/2016 funded by FCT, the Portuguese Ministry of Science, Technology and Higher Education, through national funds, and co-financed by the European Social Fund (ESF) through the Operational Programme for Human Capital (POCH)
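
    The idea of simulation as a semantic validator can be sketched minimally as follows (records and field names are invented; this is not the paper's model): replaying records through even a trivial supply-chain simulation surfaces issues, such as shipping stock that was never received, that record-by-record profiling would miss.

        records = [
            {"sku": "A", "event": "receive", "qty": 10},
            {"sku": "A", "event": "ship",    "qty": 4},
            {"sku": "B", "event": "ship",    "qty": 2},  # ships stock never received
        ]

        stock, issues = {}, []
        for r in records:
            if r["event"] == "receive":
                stock[r["sku"]] = stock.get(r["sku"], 0) + r["qty"]
            elif r["event"] == "ship":
                if stock.get(r["sku"], 0) < r["qty"]:
                    issues.append(("negative stock", r))  # semantic issue found
                stock[r["sku"]] = stock.get(r["sku"], 0) - r["qty"]

        print(issues)  # profiling each record in isolation would not catch this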

    A Big Data perspective on Cyber-Physical Systems for Industry 4.0: modernizing and scaling complex event processing

    Doctoral program in Advanced Engineering Systems for Industry. Nowadays, the whole industry strives to find the most productive ways of working, and it is already understood that using the data produced inside and outside the factories is a way to improve business performance. A set of modern technologies combined with sensor-based communication makes it possible to act precisely at the moment the data is being produced and processed. Considering the diversity of processes existing in a factory, all of them producing data, Complex Event Processing (CEP) with the capability to process that amount of data is needed in the daily work of a factory, to process different types of events and find patterns between them. Although the integration of the Big Data and Complex Event Processing topics is already present in the literature, open challenges in this area were identified, hence the contribution presented in this thesis. This doctoral thesis therefore proposes a system architecture that integrates the CEP concept with a rule-based approach in the Big Data context: the Intelligent Event Broker (IEB). The architecture proposes the use of adequate Big Data technologies in its several components. At the same time, some of the gaps identified in this area were addressed, complementing event processing with the possibility of using Machine Learning models that can be integrated into the rules' verification, and proposing an innovative monitoring system with an immersive visualization component to monitor the IEB and prevent its uncontrolled growth, since there are always several processes inside a factory that can be integrated into the system. The proposed architecture was validated with a demonstration case using, as an example, Bosch's Active Lot Release system. This demonstration case revealed that it is feasible to implement the proposed architecture and proved the adequate functioning of the IEB system in processing Bosch's business process data and in monitoring its components and the events flowing through those components. This work has been supported by FCT – Fundação para a Ciência e Tecnologia within the R&D Units Project Scope: UIDB/00319/2020, the Doctoral scholarship PD/BDE/135101/2017 and by European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project nº 039479; Funding Reference: POCI-01-0247-FEDER-039479]
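
    One idea from the thesis, integrating a Machine Learning model into a rule's verification, can be sketched minimally as follows (the model, features, and training data are invented for illustration; this is not the IEB's code):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical training data: sensor features -> defect (1) / ok (0).
        X = np.array([[0.1, 20], [0.9, 80], [0.2, 25], [0.8, 75]])
        y = np.array([0, 1, 0, 1])
        model = LogisticRegression().fit(X, y)

        def defect_rule(event):
            # Rule condition backed by the ML model instead of a fixed threshold.
            features = np.array([[event["vibration"], event["temperature"]]])
            return model.predict(features)[0] == 1

        event = {"vibration": 0.85, "temperature": 78}
        if defect_rule(event):
            print("rule fired: route event to the quality-improvement workflow")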

    Fast online analytical processing for Big Data warehousing

    In an organizational context where data volume is continuously growing, Online Analytical Processing capabilities are necessary to ensure timely data processing for users that need interactive query processing to support the decision-making process. This paper benchmarks an innovative column-oriented distributed data store, Druid, evaluating its performance in interactive analytical workloads and verifying the impact that different data organization strategies have on its performance. To achieve this goal, the well-known Star Schema Benchmark is used to verify the impact that the concepts of segments, query granularity, and partitions or shards have on the space required to store the data and on the time needed to process it. The obtained results show that scenarios that use partitions usually achieve better processing times, even when that implies an increase in the needed storage space. This work is supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT - Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2013 and the doctoral scholarship PD/BDE/135101/2017; by European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project no 002814; Funding Reference: POCI-01-0247-FEDER-002814]
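
    The three concepts varied in this paper, segment granularity, query granularity, and partitioning, map to settings in Druid's ingestion spec. Below is a minimal sketch of such a spec written as a Python dict (the datasource, dimension, and values are assumptions, not the benchmark's actual configuration):

        granularity_spec = {
            "segmentGranularity": "MONTH",  # one segment per month of data
            "queryGranularity": "DAY",      # pre-aggregate rows to day level
            "rollup": True,
        }
        partitions_spec = {
            "type": "hashed",               # shard segments by dimension hash
            "partitionDimensions": ["custkey"],
            "targetRowsPerSegment": 5_000_000,
        }
        ingestion_spec = {
            "dataSchema": {"dataSource": "ssb_lineorder",
                           "granularitySpec": granularity_spec},
            "tuningConfig": {"partitionsSpec": partitions_spec},
        }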