
    Evaluating partitioning and bucketing strategies for Hive-based Big Data Warehousing systems

    Hive has long been one of the industry-leading systems for Data Warehousing in Big Data contexts, organizing data into databases, tables, partitions and buckets stored on top of a distributed file system such as HDFS. Several studies have investigated how to optimize the performance of storage systems for Big Data Warehousing, but few of them explore the impact of data organization strategies on query performance when Hive is used as the storage technology for implementing Big Data Warehousing systems. Therefore, this paper evaluates the impact of data partitioning and bucketing in Hive-based systems, testing different data organization strategies and verifying their efficiency in terms of query performance. The obtained results demonstrate the advantages of implementing Big Data Warehouses based on denormalized models and the potential benefit of adequate partitioning strategies: defining partitions aligned with the attributes that are frequently used in the conditions/filters of the queries can significantly reduce response time. In the most intensive workload benchmarked in this paper, overall decreases of about 40% in processing time were verified. The same does not hold for bucketing strategies, which show potential benefits only in very specific scenarios, suggesting a more restricted use of this functionality, namely bucketing two tables by their join attribute. This work is supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT - Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2013, and by European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project no. 002814; Funding Reference: POCI-01-0247-FEDER-002814]
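    As a rough illustration of the two strategies evaluated in this paper (a minimal sketch only, not the paper's benchmark setup; table names, column names and the bucket count are hypothetical), a Hive-managed table could be organized with PySpark as follows:

        # Sketch of partitioning vs. bucketing for Hive-managed tables,
        # using PySpark with Hive support. All names are hypothetical.
        from pyspark.sql import SparkSession

        spark = (SparkSession.builder
                 .appName("hive-partitioning-bucketing")
                 .enableHiveSupport()
                 .getOrCreate())

        sales = spark.table("staging.sales")  # hypothetical denormalized source

        # Partitioning: align partitions with attributes that appear in query
        # filters (e.g., WHERE year = 2018), so irrelevant data is pruned.
        (sales.write
              .partitionBy("year", "month")
              .format("parquet")
              .mode("overwrite")
              .saveAsTable("dw.sales_partitioned"))

        # Bucketing: mainly useful when two tables are bucketed on their join
        # attribute, enabling bucketed joins; benefits are scenario-specific.
        (sales.write
              .bucketBy(32, "customer_id")
              .sortBy("customer_id")
              .format("parquet")
              .mode("overwrite")
              .saveAsTable("dw.sales_bucketed"))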

    Supply chain hybrid simulation: From Big Data to distributions and approaches comparison

    The uncertainty and variability of Supply Chains pave the way for simulation to be employed to mitigate such risks. Due to the amounts of data generated by the systems used to manage relevant Supply Chain processes, it is widely recognized that Big Data technologies may bring benefits to Supply Chain simulation models. Nevertheless, a simulation model should also consider statistical distributions, which allow it to be used for purposes such as testing risk scenarios or prediction. However, when Supply Chains are complex and of huge scale, performing distribution fitting may not be feasible, which often results in users focusing on subsets of problems or selecting samples of elements, such as suppliers or materials. This paper proposes a hybrid simulation model that runs using data stored in a Big Data Warehouse, statistical distributions, or a combination of both approaches. The results show that the data-driven approach brings benefits to the simulations and is essential when setting the model to run based on statistical distributions. Furthermore, this paper also compares these approaches, emphasizing the pros and cons of each as well as their differences in computational requirements, hence establishing a milestone for future research in this domain. This work has been supported by national funds through FCT - Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2019 and by the Doctoral scholarship PDE/BDE/114566/2016 funded by FCT, the Portuguese Ministry of Science, Technology and Higher Education, through national funds, and co-financed by the European Social Fund (ESF) through the Operational Programme for Human Capital (POCH)
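    The hybrid idea can be sketched as follows (an illustrative toy, not the authors' model; the exponential inter-arrival data and the 50/50 mixing rule are assumptions made here for demonstration):

        # Toy sketch of a hybrid sampling strategy: an arrival process driven
        # by stored records, by a distribution fitted to them, or by a mix.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)

        # Stand-in for inter-arrival times read from the Big Data Warehouse.
        historical = rng.exponential(scale=5.0, size=10_000)

        # Fit a distribution once, so the model can also run without the raw
        # data (e.g., for risk scenarios or prediction).
        loc, scale = stats.expon.fit(historical)

        def next_interarrival(mode="hybrid"):
            """Sample the next inter-arrival time under one of three approaches."""
            if mode == "data":          # replay/resample stored records
                return rng.choice(historical)
            if mode == "distribution":  # use the fitted statistical distribution
                return stats.expon.rvs(loc=loc, scale=scale, random_state=rng)
            # hybrid: pick one of the two sources with equal probability
            return next_interarrival("data" if rng.random() < 0.5 else "distribution")

        print(next_interarrival("hybrid"))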

    Review of modern business intelligence and analytics in 2015: How to tame the big data in practice?: Case study - What kind of modern business intelligence and analytics strategy to choose?

    The objective of this study was to find out the state-of-the-art architecture of modern business intelligence and analytics. Furthermore, the status quo of the business intelligence and analytics architecture in an anonymous case company was examined. Based on these findings, a future strategy was designed to guide the case company towards a better business intelligence and analytics environment. This objective was selected due to increasing interest in the big data topic: understanding how to move on from traditional business intelligence practices to modern ones, and what options are available, was seen as the key question to solve in order for any company to gain competitive advantage in the near future. The study was conducted as a qualitative single-case study. The case study included two parts: an analytics maturity assessment, and an analysis of the business intelligence and analytics architecture. The survey included over 30 questions and was sent to 25 analysts and other individuals who spent significant time dealing with or reading financial reports, for example managers. The architecture analysis was conducted by gathering relevant information at a high level, and a big picture was drawn to illustrate the architecture. The two parts combined were used to establish the current maturity level of business intelligence and analytics in the case company. Three theoretical frameworks were used: the first regarding the architecture, the second regarding the maturity level, and the third regarding reporting tools. The first, higher-level framework consisted of the modern data warehouse architecture and Hadoop solution from D'Antoni and Lopez (2014). The second framework included the analytics maturity assessment from The Data Warehouse Institute (2015). Finally, the third framework analyzed the advanced analytics tools from Sallam et al. (2015). The findings of this study suggest that a modern business intelligence and analytics solution can include both data warehouse and Hadoop components; the two are not mutually exclusive. Instead, Hadoop actually augments the data warehouse to another level. This thesis shows how companies can evaluate their current maturity level and design a future strategy by benchmarking their own actions against the state-of-the-art solution. To keep up with the fast pace of development, research must be continuous; a future study detailing a path to implementing Hadoop, for example, would be a great addition to this field

    Advancing logistics 4.0 with the implementation of a big data warehouse: a demonstration case for the automotive industry

    The constant advancements in Information Technology have been the main driver of the Big Data concept's success. With it, new concepts such as Industry 4.0 and Logistics 4.0 are arising. Due to the increase in data volume, velocity, and variety, organizations are now looking at their data analytics infrastructures and searching for approaches to improve their decision-making capabilities, in order to enhance their results using new approaches such as Big Data and Machine Learning. The implementation of a Big Data Warehouse can be the first step to improve an organization's data analysis infrastructure and start retrieving value from the usage of Big Data technologies. Moving to Big Data technologies can provide several opportunities for organizations, such as the capability of analyzing an enormous quantity of data from different data sources in an efficient way. However, at the same time, different challenges can arise, including data quality, data management, and lack of knowledge within the organization, among others. In this work, we propose an approach that can be adopted in the logistics department of any organization in order to promote the Logistics 4.0 movement, while highlighting the main challenges and opportunities associated with the development and implementation of a Big Data Warehouse in a real demonstration case at a multinational automotive organization. This work was supported by FCT - Fundação para a Ciência e Tecnologia - within the R&D Units Project Scope: UIDB/00319/2020 and doctoral scholarship grants: PD/BDE/142895/2018 and PD/BDE/142900/2018

    Proposal of an approach for the design and implementation of a data mesh

    Master's dissertation in Engineering and Management of Information Systems. Currently, there is an increasingly accentuated trend towards the use of software by most of the population (social applications, management software, e-commerce platforms, among others), leading to the creation and storage of data that, due to its characteristics (volume, variety, and velocity), makes the concept of Big Data emerge. In this area, and to support data storage, Big Data Warehouses and Data Lakes are solid concepts implemented by various organizations to serve their decision-making needs. However, despite being concepts established and accepted by most of the scientific community and by several organizations worldwide, this does not eliminate the need for improvement and innovation in the field. It is this context that gives rise to the Data Mesh concept, which proposes decentralized data architectures. After analyzing the limitations demonstrated by monolithic architectures (e.g., the difficulty of changing the storage technologies used to implement the data system), it is possible to conclude that a paradigm shift is needed to make organizations truly data-driven. A Data Mesh is an architecture in which data is intentionally distributed over several nodes of the mesh without resulting in chaos, since centralized data governance strategies ensure that the fundamental principles of the domains are shared throughout the architecture. This master's thesis proposes an approach for the implementation of a Data Mesh, seeking to define the domain model of the concept. After this definition, a conceptual and a technological architecture are proposed, which aim to help materialize the concepts presented in the domain model and thus assist in the design and implementation of a Data Mesh. Afterwards, a proof of concept is carried out to validate the aforementioned models, contributing technical and scientific knowledge related to this emerging concept
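    A minimal sketch of the core idea, decentralized domain nodes under centralized governance, could look like this (purely illustrative; the registry, rule names and domains are invented here, not taken from the dissertation):

        # Toy model: data products live in decentralized domain nodes, while
        # governance rules are defined centrally and enforced uniformly.
        from dataclasses import dataclass, field

        @dataclass
        class DataProduct:
            domain: str                 # owning domain node, e.g. "logistics"
            name: str
            owner: str
            schema: dict                # column name -> type
            pii_columns: set = field(default_factory=set)

        # Centralized governance: shared principles every domain must satisfy.
        GOVERNANCE_RULES = [
            ("has_owner", lambda p: bool(p.owner)),
            ("schema_published", lambda p: len(p.schema) > 0),
            ("pii_declared", lambda p: p.pii_columns <= set(p.schema)),
        ]

        def register(mesh: dict, product: DataProduct) -> None:
            """Admit a product to its domain node only if governance passes."""
            failed = [name for name, check in GOVERNANCE_RULES if not check(product)]
            if failed:
                raise ValueError(f"{product.name}: failed governance rules {failed}")
            mesh.setdefault(product.domain, []).append(product)

        mesh = {}
        register(mesh, DataProduct("logistics", "shipments", "team-logistics",
                                   {"id": "string", "ts": "timestamp"}))
        print(list(mesh))  # -> ['logistics']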

    A hyperconnected manufacturing collaboration system using the semantic web and the Hadoop ecosystem

    With the explosive growth of digital data communications in synergistic operating networks and cloud computing services, hyperconnected manufacturing collaboration systems face the challenges of extracting, processing, and analyzing data from multiple distributed web sources. Although semantic web technologies provide a solution to web data interoperability by storing semantic web standards in relational databases for processing and analyzing web-accessible heterogeneous digital data, web data storage and retrieval via the predefined schema of relational/SQL databases has become increasingly inefficient with the advent of big data. In response to this problem, the Hadoop ecosystem is being adopted to reduce the complexity of moving data to and from the big data cloud platform. This paper proposes a novel approach using a set of Hadoop tools for information integration and interoperability across hyperconnected manufacturing collaboration systems. In the Hadoop approach, data is "Extracted" from the web sources, "Loaded" into a set of NoSQL Hadoop Database (HBase) tables, and then "Transformed" and integrated into the desired format model with Hive's schema-on-read. A case study illustrates that the Hadoop Extract-Load-Transform (ELT) approach to syntactic and semantic web data integration can be adopted across the global smartphone value chain
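    A minimal sketch of the Load step (assuming the Python happybase client and a reachable HBase Thrift server; host, table and column names are hypothetical) might look like this, with the schema-on-read Transform expressed as a Hive external table:

        # "Load": write raw web-extracted records into an HBase table; column
        # families keep the data schema-less at write time.
        import happybase

        connection = happybase.Connection("hbase-host")  # hypothetical host
        table = connection.table("web_products")

        table.put(b"row-001", {
            b"raw:title": b"Smartphone X",
            b"raw:price": b"499.99",
            b"raw:source": b"http://example.com/catalog",
        })

        # "Transform" happens at read time: a Hive external table mapped onto
        # the HBase table (schema-on-read) exposes the desired format, e.g.:
        #   CREATE EXTERNAL TABLE products (key string, title string, price decimal)
        #   STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
        #   WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,raw:title,raw:price")
        #   TBLPROPERTIES ("hbase.table.name" = "web_products");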

    On the use of simulation as a Big Data semantic validator for supply chain management

    Simulation stands out as an appropriate method for the Supply Chain Management (SCM) field. Nevertheless, to produce accurate simulations of Supply Chains (SCs), several business processes must be considered. Thus, when using real data in these simulation models, Big Data concepts and technologies become necessary, as the involved data sources generate data at increasing volume, velocity and variety, in what is known as a Big Data context. While developing such a solution, several data issues were found, with simulation proving to be more efficient than traditional data profiling techniques in identifying them. Thus, this paper proposes the use of simulation as a semantic validator of the data, proposes a classification for such issues, and quantifies their impact on the volume of data used in the final solution. This paper concludes that, while SC simulations using Big Data concepts and technologies are within the grasp of organizations, their data models still require considerable improvements in order to produce perfect mimics of their SCs. In fact, it was also found that simulation can help in identifying and bypassing some of these issues. This work has been supported by FCT (Fundação para a Ciência e Tecnologia) within the Project Scope: UID/CEC/00319/2019 and by the Doctoral scholarship PDE/BDE/114566/2016 funded by FCT, the Portuguese Ministry of Science, Technology and Higher Education, through national funds, and co-financed by the European Social Fund (ESF) through the Operational Programme for Human Capital (POCH)
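    As a toy illustration of what such semantic checks can look like, where each record is validated against the process logic it must drive rather than column statistics alone (these three checks and the record layout are assumptions, not the paper's issue classification):

        # Sketch: semantic validation of a material-movement record before it
        # enters the simulation; issues here distort or break the event flow.
        from datetime import datetime

        known_suppliers = {"S1", "S2"}  # hypothetical master data

        def validate_movement(record: dict) -> list:
            """Return semantic issues that would break or distort the simulation."""
            issues = []
            if record["supplier"] not in known_suppliers:
                issues.append("unknown supplier reference")
            if record["quantity"] <= 0:
                issues.append("non-positive quantity")
            if record["ship_ts"] > record["arrive_ts"]:
                issues.append("arrival before shipment")  # breaks event ordering
            return issues

        rec = {"supplier": "S9", "quantity": 10,
               "ship_ts": datetime(2019, 1, 2), "arrive_ts": datetime(2019, 1, 1)}
        print(validate_movement(rec))
        # -> ['unknown supplier reference', 'arrival before shipment']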

    Agile modelling for Big Data Warehousing systems

    Master's dissertation in Engineering and Management of Information Systems. With the popularization of Big Data, Information Systems have started to consider infrastructures capable of dealing with the collection, storage, processing, and analysis of vast amounts of heterogeneous data, with little or no structure and produced at increasing speed. These are the challenges inherent to the transition of Data Modelling from traditional Data Warehouses to Big Data environments. The state-of-the-art reflects that the scientific field of Big Data Warehousing is recent and ambiguous, and that it shows gaps regarding approaches to the design and implementation of these systems; thus, in the past few years, several authors, motivated by the lack of scientific and technical work, have developed studies in this area in order to explore appropriate models (representation of logical and technological components, data flows and data structures), methods, and instantiations (demonstration cases using prototypes and benchmarks). This dissertation builds on the general proposal of design patterns for Big Data Warehousing systems (M. Y. Santos & Costa, 2019) and proposes a method to semi-automate that design proposal, consisting of seven computational rules that are presented, demonstrated, and validated with examples based on real contexts. To present the agile modelling process, a step-by-step flowchart was created for each rule. Comparing the results obtained by applying the method with those of a fully manual modelling exercise, the proposed work yields a correct but general model, which works best as a first modelling suggestion that the user should then validate and adjust, taking into account the context of the case under analysis, the intended queries, and the characteristics of the data
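    To make the flavour of such rules concrete, the following is a hypothetical sketch of a single denormalization rule, collapsing a fact table and its dimensions into one wide analytical structure (an illustration only, not one of the seven rules from the method):

        # Sketch of a computational modelling rule: merge every dimension's
        # attributes into the fact schema, producing a denormalized model.
        def denormalize(fact: dict, dimensions: dict) -> dict:
            """Merge dimension attributes into the fact table's schema."""
            analytical_object = dict(fact["measures"])  # keep numeric measures
            for dim_name, dim in dimensions.items():
                for attr, attr_type in dim["attributes"].items():
                    # prefix attributes to keep provenance visible after merging
                    analytical_object[f"{dim_name}_{attr}"] = attr_type
            return analytical_object

        fact = {"measures": {"quantity": "int", "amount": "double"}}
        dimensions = {
            "customer": {"attributes": {"name": "string", "segment": "string"}},
            "date": {"attributes": {"day": "date", "month": "string"}},
        }
        print(denormalize(fact, dimensions))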

    Are simulation tools ready for big data? Computational experiments with supply chain models developed in Simio

    The need for and potential benefits of the combined use of Simulation and Big Data in Supply Chains (SCs) have been widely recognized. Having worked on such a project, some simulation experiments with the modelled SC system were conducted in SIMIO. Different circumstances were tested, including running the model based on the stored data, on statistical distributions, and considering risk situations. This paper evaluates these experiments in order to assess the performance of simulations in such contexts. After analyzing the obtained results, it was found that whilst running the model based on the real data required considerable amounts of computer memory, running the model based on statistical distributions reduced such values, albeit requiring considerably more time to run a single replication. In all the tested experiments, the simulation took considerable time to run and was not smooth, which can reduce the stakeholders' interest in the developed tool, despite its benefits for the decision-making process. For future research, it would be beneficial to test other simulation tools and other strategies and compare those results to the ones provided in this paper. This work has been supported by national funds through FCT - Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2019 and by the Doctoral scholarship PDE/BDE/114566/2016 funded by FCT, the Portuguese Ministry of Science, Technology and Higher Education, through national funds, and co-financed by the European Social Fund (ESF) through the Operational Programme for Human Capital (POCH)

    Simulation of an automotive supply chain using big data

    Supply Chains (SCs) are dynamic and complex networks that are exposed to disruptions whose consequences are hard to quantify. Thus, simulation may be used, as it allows the uncertainty and dynamic nature of systems to be considered. Furthermore, the several systems used in SCs generate data with increasingly high volumes and velocities, paving the way for the development of simulation models in Big Data contexts. Hence, contrarily to traditional simulation approaches, which use statistical distributions to model specific SC problems, this paper proposes a Decision-Support System supported by a Big Data Warehouse (BDW) and a simulation model: the first stores and integrates data from multiple sources, and the second reproduces movements of materials and information from such data, while also allowing risk scenarios to be analyzed. The obtained results show the model being used to reproduce the historical data stored in the BDW and to assess the impact of events triggered during runtime to disrupt suppliers within a geographical range. This paper also analyzes the volume of data that was managed, hoping to serve as a milestone for future SC simulation studies in Big Data contexts. Further conclusions and future work are also discussed. This work has been supported by FCT (Fundação para a Ciência e Tecnologia) within the Project Scope: UID/CEC/00319/2019 and by the Doctoral scholarship PDE/BDE/114566/2016 funded by FCT, the Portuguese Ministry of Science, Technology and Higher Education, through national funds, and co-financed by the European Social Fund (ESF) through the Operational Programme for Human Capital (POCH)
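    The geographic disruption mechanic can be sketched as follows (an illustrative toy with invented supplier coordinates, not the paper's Decision-Support System): all suppliers within a radius of an event location have their deliveries suspended by the simulation.

        # Sketch: find suppliers inside a disruption radius using the
        # haversine great-circle distance. Coordinates are hypothetical.
        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two points, in kilometres."""
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = (sin((lat2 - lat1) / 2) ** 2
                 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
            return 2 * 6371 * asin(sqrt(a))

        suppliers = {"S1": (41.55, -8.42), "S2": (48.14, 11.58)}  # hypothetical

        def disrupted(event_lat, event_lon, radius_km):
            """Suppliers whose deliveries the simulation should suspend."""
            return [s for s, (lat, lon) in suppliers.items()
                    if haversine_km(event_lat, event_lon, lat, lon) <= radius_km]

        print(disrupted(41.5, -8.4, 100))  # -> ['S1']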