38 research outputs found

    Industry 4.0: the impact of Big Data and the Internet of Things

    Master's dissertation in Business Sciences. Technology plays an increasingly important role in the economy and, consequently, in organizations. Its impact is evident in the fourth industrial revolution and in the consequences this revolution has for organizations, which makes its analysis particularly relevant. Industry 4.0 is understood as a set of technologies that are emerging today and are still little studied, framed by a set of characteristics that define them as part of the fourth industrial revolution. This study therefore seeks to analyze, through a review of scientific and research articles, the impacts that Big Data and Internet of Things technologies have on Industry 4.0. The literature review established six essential criteria of Industry 4.0: interoperability, virtualization, decentralization, real-time capability, service orientation and modularization. To analyze the impact of the two technologies, a sample of twenty highly cited articles was assembled using the h-index metric, the six criteria were applied to them, and a hierarchical cluster analysis was performed. The conclusions indicate a clear tendency for three of the criteria, virtualization, real-time capability and service orientation, to be present in these technologies. Thus, the impact of Big Data and the Internet of Things on the definition of Industry 4.0 is mostly accounted for by half of the criteria that define the fourth industrial revolution.
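
    A minimal sketch of the coding-and-clustering step described above, assuming each sampled article is coded as 1 (criterion addressed) or 0 (not addressed) against the six Industry 4.0 criteria; the article codings and the number of clusters below are illustrative placeholders, not the study's data.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    criteria = ["interoperability", "virtualization", "decentralization",
                "real-time capability", "service orientation", "modularization"]

    # Hypothetical coding of five articles (rows) against the six criteria (columns).
    coding = np.array([
        [1, 1, 0, 1, 1, 0],
        [0, 1, 0, 1, 1, 0],
        [1, 0, 1, 0, 0, 1],
        [0, 1, 0, 1, 1, 1],
        [1, 0, 1, 0, 0, 0],
    ])

    # Jaccard distance suits binary presence/absence data; average linkage builds
    # the hierarchical tree from which a small number of clusters is cut.
    tree = linkage(pdist(coding, metric="jaccard"), method="average")
    clusters = fcluster(tree, t=2, criterion="maxclust")
    for article, cluster in enumerate(clusters, start=1):
        print(f"article {article} -> cluster {cluster}")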

    Managing changes initiated by industrial big data technologies : a technochange management model

    With the adoption of the Internet of Things and advanced data analytical technologies in manufacturing firms, the industrial sector has launched an evolutionary journey toward the fourth industrial revolution, or so-called Industry 4.0. Industrial big data is a core component of realizing the vision of Industry 4.0. However, the implementation and usage of industrial big data tools in manufacturing firms will not merely be a technical endeavor, but can also lead to a thorough management reform. By means of a comprehensive review of literature related to Industry 4.0, smart manufacturing, industrial big data, information systems (IS) and technochange management, this paper aims to analyze potential changes triggered by the application of industrial big data in manufacturing firms from technological, individual and organizational perspectives. Furthermore, in order to drive these changes more effectively and eliminate potential resistance, a conceptual technochange management model was developed and proposed. Drawing upon theories reported in the IS technochange management literature, this model proposes four types of interventions that can be used to cope with changes initiated by industrial big data technologies: human process intervention, techno-structural intervention, human resources management intervention and strategic intervention. This model will be of interest and value to practitioners and researchers concerned with business reforms triggered by Industry 4.0 in general and by industrial big data technologies in particular.

    Assessment of Cyber Risks in an IoT-based Supply Chain using a Fuzzy Decision-Making Method

    Purpose: The Internet of Things (IoT) is a relatively new paradigm that is growing rapidly in modern wireless communication scenarios. Its main idea is the pervasive presence of all kinds of connected objects around us. This technology underpins today's intelligent life and is known as one of the most important sources of big data. Businesses are no exception and try to use the Internet of Things to make their operations smarter. Supply chain management is a goal-oriented approach to linking business operations in order to provide a common view of market opportunity. Methodology: IoT technology can affect all major parts of the supply chain, including supply, production, distribution and sales. Because this evolving technology is intertwined with Internet technology, the use of network-based tools always creates risks for the business owners who adopt it. Understanding and investigating the variety of cyber risks in this area is therefore very important, and identifying them makes it possible to prevent many future risks. A fuzzy decision-making analysis based on hierarchical analysis is used. Findings: The results show that privacy is very important in interactions with suppliers as well as customers, and effective measures to deal with these risks can therefore reduce many of the problems caused by this technology. Originality/Value: This paper addresses the assessment of cyber risks in an IoT-based supply chain using a fuzzy decision-making method.
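
    The paper's exact fuzzy procedure is not detailed here, so the following is a hypothetical sketch of one common fuzzy decision-making scheme in this spirit: triangular fuzzy expert ratings of cyber risks are aggregated and defuzzified to produce a ranking. The risk names, ratings and centroid defuzzification rule are all assumptions, not the paper's data or method.

    from statistics import mean

    # Each risk is rated by experts as a triangular fuzzy number (low, mid, high).
    ratings = {
        "privacy breach (suppliers)":  [(7, 8, 9), (8, 9, 10)],
        "privacy breach (customers)":  [(6, 8, 9), (7, 8, 9)],
        "device tampering":            [(4, 5, 7), (3, 5, 6)],
        "network eavesdropping":       [(5, 6, 8), (4, 6, 7)],
    }

    def defuzzify(tfn):
        """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
        l, m, u = tfn
        return (l + m + u) / 3.0

    # Aggregate expert opinions, defuzzify, and rank risks by crisp score.
    scores = {risk: mean(defuzzify(t) for t in tfns) for risk, tfns in ratings.items()}
    for risk, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{score:5.2f}  {risk}")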

    A data-driven approach for quality analytics of screwing processes in a global learning factory

    Quality problems of screwing processes in assembly systems, which are an important issue for operational excellence, need to be quickly analyzed and solved. A global production network can be very beneficial for root cause analysis because it provides different data from various factories. Nevertheless, it is difficult to obtain reliable and consistent data. In this context, this paper aims to develop a method for data-driven quality analytics of screwing processes across a global production network. First, an overview of the data structure is introduced. Next, the data transformation is modelled for edge- and cloud-based analytics across the global production network. Lastly, the rules for analysis are identified. A joint case study based on the Learning Factory Global Production (LF) in Germany and the I4.0 Innovation Centre and Artificial Intelligence Innovation Factory (IC&AIIF) in China is used to validate the proposed approach, which also serves as a new teaching method for quality analysis in the framework of a learning factory.
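
    A minimal sketch of what an edge-side analysis rule for screwing quality might look like, assuming each operation yields a final torque and angle; the thresholds, station names and root-cause hints are hypothetical placeholders rather than the rules identified in the paper.

    from dataclasses import dataclass

    @dataclass
    class ScrewingResult:
        station: str
        final_torque_nm: float
        final_angle_deg: float

    def classify(result: ScrewingResult,
                 torque_window=(8.0, 12.0),
                 angle_window=(30.0, 90.0)) -> str:
        """Return a coarse quality label for one screwing operation."""
        t_lo, t_hi = torque_window
        a_lo, a_hi = angle_window
        if result.final_torque_nm < t_lo:
            return "NOK: torque too low (possible thread stripping)"
        if result.final_torque_nm > t_hi:
            return "NOK: torque too high (possible over-tightening)"
        if not (a_lo <= result.final_angle_deg <= a_hi):
            return "NOK: angle out of window (possible cross-threading)"
        return "OK"

    # Hypothetical results from the two learning-factory sites.
    print(classify(ScrewingResult("LF (Germany)", 7.2, 45.0)))
    print(classify(ScrewingResult("IC&AIIF (China)", 10.5, 60.0)))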

    Data Pipeline Management in Practice: Challenges and Opportunities

    Data pipelines involve a complex chain of interconnected activities that starts with a data source and ends in a data sink. Data pipelines are important for data-driven organizations because a data pipeline can process data in multiple formats from distributed data sources with minimal human intervention, accelerate data life-cycle activities, and enhance productivity in data-driven enterprises. However, although implementing data pipelines brings both challenges and opportunities, practical industry experiences are seldom reported. The findings of this study are derived from a qualitative multiple-case study and interviews with representatives of three companies. The challenges include data quality issues, infrastructure maintenance problems, and organizational barriers. On the other hand, data pipelines are implemented to enable traceability and fault tolerance and to reduce human errors by maximizing automation, thereby producing high-quality data. Based on multiple-case study research covering five use cases from three case companies, this paper identifies the key challenges and benefits associated with the implementation and use of data pipelines.
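
    A minimal sketch of the source -> activities -> sink chain described above, with a simple validation step illustrating how automation can reduce human intervention; the file names and record schema are assumptions, not taken from the study's cases.

    import csv
    import json

    def source(path):
        """Read raw records from a CSV data source."""
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def validate(records):
        """Drop records with missing fields instead of failing the whole run."""
        for r in records:
            if r.get("sensor_id") and r.get("value"):
                yield r

    def transform(records):
        """Normalize types before loading."""
        for r in records:
            yield {"sensor_id": r["sensor_id"], "value": float(r["value"])}

    def sink(records, path):
        """Write the processed records to a JSON-lines data sink."""
        with open(path, "w") as f:
            for r in records:
                f.write(json.dumps(r) + "\n")

    if __name__ == "__main__":
        # "readings.csv" is a hypothetical input file for this sketch.
        sink(transform(validate(source("readings.csv"))), "readings.jsonl")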

    Improving Pipelining Tools for Pre-processing Data

    The last several years have seen the emergence of data mining and its transformation into a powerful tool that adds value to business and research. Data mining makes it possible to explore and find unseen connections between variables and facts observed in different domains, helping us to better understand reality. The programming methods and frameworks used to analyse data have evolved over time. Currently, the use of pipelining schemes is the most reliable way of analysing data and, because of this, several major companies currently offer this kind of service. Moreover, several frameworks compatible with different programming languages are available for the development of computational pipelines, and many research studies have addressed the optimization of data processing speed. However, as this study shows, the presence of early error detection techniques and developer support mechanisms is very limited in these frameworks. In this context, this study introduces several improvements, such as the design of different types of constraints for the early detection of errors, the creation of functions to facilitate the debugging of concrete tasks included in a pipeline, the invalidation of erroneous instances and/or the introduction of a burst-processing scheme. Adding these functionalities, we developed Big Data Pipelining for Java (BDP4J, https://github.com/sing-group/bdp4j), a fully functional new pipelining framework that shows the potential of these features.
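
    The following is a conceptual sketch, written in Python rather than the BDP4J Java API, of two of the ideas mentioned above: constraint checks for the early detection of errors and the invalidation of erroneous instances so they are skipped by downstream tasks. The class and method names are hypothetical and do not mirror BDP4J's actual interfaces.

    class Instance:
        def __init__(self, data):
            self.data = data
            self.valid = True
            self.error = None

        def invalidate(self, reason):
            self.valid = False
            self.error = reason

    class Task:
        def check_input(self, inst):
            """Constraint evaluated before the task runs (early error detection)."""
            return True

        def run(self, inst):
            raise NotImplementedError

    class ParseNumber(Task):
        def check_input(self, inst):
            return isinstance(inst.data, str)

        def run(self, inst):
            inst.data = float(inst.data)

    def run_pipeline(tasks, instances):
        for inst in instances:
            for task in tasks:
                if not inst.valid:
                    break  # invalidated instances are skipped downstream
                if not task.check_input(inst):
                    inst.invalidate(f"constraint failed before {type(task).__name__}")
                    break
                try:
                    task.run(inst)
                except Exception as exc:
                    inst.invalidate(str(exc))
        return instances

    for r in run_pipeline([ParseNumber()], [Instance("3.14"), Instance("oops"), Instance(42)]):
        print(r.valid, r.data if r.valid else r.error)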

    Technological troubleshooting based on sentence embedding with deep transformers

    In today's manufacturing, each technical assistance operation is digitally tracked. This results in a huge amount of textual data that can be exploited as a knowledge base to improve these operations. For instance, an ongoing problem can be addressed by retrieving potential solutions among the ones used to cope with similar problems during past operations. To be effective, most approaches for semantic textual similarity need to be supported by a structured semantic context (e.g. an industry-specific ontology), resulting in high development and management costs. We overcome this limitation with a textual similarity approach featuring three functional modules. The data preparation module provides punctuation and stop-word removal, and word lemmatization. The pre-processed sentences undergo the sentence embedding module, based on Sentence-BERT (Bidirectional Encoder Representations from Transformers) and aimed at transforming the sentences into fixed-length vectors. Their cosine similarity is processed by the scoring module to match the expected similarity between the two original sentences. Finally, this similarity measure is employed to retrieve the most suitable recorded solutions for the ongoing problem. The effectiveness of the proposed approach is tested (i) against a state-of-the-art competitor and two well-known textual similarity approaches, and (ii) on two case studies, i.e. private-company technical assistance reports and a benchmark dataset for semantic textual similarity. With respect to the state of the art, the proposed approach yields comparable retrieval performance and significantly lower management cost: 30-minute questionnaires are sufficient to obtain the semantic context knowledge to be injected into our textual search engine.
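
    A minimal retrieval sketch using the sentence-transformers library, in the spirit of the sentence embedding and cosine-similarity matching described above; the model checkpoint and the example problem/solution texts are assumptions, not the paper's configuration.

    from sentence_transformers import SentenceTransformer, util

    past_problems = [
        "spindle motor overheats after thirty minutes of operation",
        "conveyor belt stops intermittently under heavy load",
        "robot arm loses calibration after tool change",
    ]
    past_solutions = [
        "clean the cooling fan and replace the thermal paste",
        "tighten the belt tensioner and check the load sensor wiring",
        "re-run the calibration routine and update the tool offset table",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed SBERT checkpoint
    corpus_emb = model.encode(past_problems, convert_to_tensor=True)

    query = "motor temperature rises quickly during milling"
    query_emb = model.encode(query, convert_to_tensor=True)

    # Cosine similarity between the ongoing problem and every recorded problem.
    scores = util.cos_sim(query_emb, corpus_emb)[0]
    best = int(scores.argmax())
    print(f"most similar past problem: {past_problems[best]} ({float(scores[best]):.2f})")
    print(f"suggested solution: {past_solutions[best]}")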
