
    Big data need big theory too.

    The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry. Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales. Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their 'depth' and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data. Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote 'blind' big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'.
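
    The extrapolation failure described here is easy to reproduce. The sketch below is our own illustration, not from the article: a flexible polynomial fitted to noisy samples of a periodic process matches the training range well, then diverges wildly outside it, because nothing in the fit encodes the underlying structure.

    ```python
    # Hypothetical illustration: a purely data-driven curve fit fails outside
    # its training range because it does not model the process's periodicity.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0.0, 6.0, 60)                  # observed range
    y_train = np.sin(x_train) + rng.normal(0, 0.05, 60)  # noisy periodic process

    coeffs = np.polyfit(x_train, y_train, deg=9)         # flexible "black-box" fit

    x_test = np.array([3.0, 7.0, 9.0])                   # in-range vs out-of-range
    for x, f, t in zip(x_test, np.polyval(coeffs, x_test), np.sin(x_test)):
        print(f"x={x:4.1f}  fit={f:10.2f}  true={t:6.2f}")
    # The in-range prediction is accurate; the extrapolations diverge by orders
    # of magnitude, while a theory-informed model (y = sin x) would not.
    ```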

    Bosch's Industry 4.0 advanced data analytics: historical and predictive data integration for decision support

    Industry 4.0, characterized by the development of automation and data-exchange technologies, has contributed to an increase in the volume of data generated from various sources, with great speed and variety. Organizations need to collect, store, process, and analyse this data in order to extract meaningful insights from it. By overcoming the challenges imposed by what is currently known as Big Data, organizations take a step towards optimizing their business processes. This paper proposes a Big Data Analytics architecture as an artefact for the integration of historical data, drawn from the organizational business processes, with predictive data obtained from Machine Learning models, providing an advanced data analytics environment for decision support. To support data integration in a Big Data Warehouse, a data modelling method is also proposed. These proposals were implemented and validated with a demonstration case in a multinational organization, Bosch Car Multimedia in Braga. The obtained results highlight the ability to take advantage of large amounts of historical data, enhanced with predictions, to support complex decision-support scenarios. This work has been supported by FCT - Fundação para a Ciência e Tecnologia within the Project Scope UIDB/00319/2020, the Doctoral scholarships PD/BDE/135100/2017 and PD/BDE/135105/2017, and European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project nº 039479; Funding Reference: POCI-01-0247-FEDER-039479]. The authors also wish to thank the automotive electronics company staff involved in this project for providing the data and valuable domain feedback. This paper uses icons made by Freepik, from www.flaticon.com.
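
    As a concrete, minimal sketch of this integration idea (the table and column names below are our own hypothetical choices, not Bosch's schema), historical facts and ML forecasts can be modelled as one analytical table distinguished by a record-type attribute, so decision-support queries span past and predicted values alike:

    ```python
    # Minimal sketch: unify historical records and model predictions in one
    # analytical table. Schema and values are hypothetical.
    import pandas as pd

    historical = pd.DataFrame({
        "part_id": ["A1", "A2"],
        "week": ["2020-W01", "2020-W01"],
        "demand": [120, 80],
    })
    predicted = pd.DataFrame({
        "part_id": ["A1", "A2"],
        "week": ["2020-W02", "2020-W02"],
        "demand": [130, 75],  # output of a hypothetical ML forecasting model
    })

    historical["record_type"] = "historical"
    predicted["record_type"] = "predicted"

    # One table lets a single dashboard query both past and forecast demand.
    warehouse = pd.concat([historical, predicted], ignore_index=True)
    print(warehouse)
    ```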

    Achieving Strategic Flexibility in the Era of Big Data: The Importance of Knowledge Management and Ambidexterity

    Purpose – This research unpacks the micro-mechanisms that exist between an organisation’s ability to conduct Big Data Analytics (BDA) and its achievement of strategic flexibility. Knowledge management capabilities and organisational ambidexterity have long been considered factors influencing the aforementioned relationship. To assess this, the authors build on dynamic capabilities as the main theoretical lens through which to examine the relationship. Design/methodology/approach – Structural Equation Modelling (SEM) is the main methodological approach used in this research. A structural model was developed and tested based on 215 survey responses collected from managers of organisations in continental Europe. Findings – The results indicate that BDA capabilities are a significant antecedent of an organisation’s strategic flexibility. This relationship, however, is influenced by knowledge management capabilities and ambidexterity. Practical implications – Managers wishing to properly exploit the potential of big data should invest in the elaboration of knowledge management processes across their organisation. This strategy can foster strategic flexibility. Originality/value – Previous research has explored the theoretical links between big data, knowledge management, and strategic flexibility. However, little attention has been paid to the quantitative investigation of the phenomenon.
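
    Because the methodology is standard SEM, a minimal sketch of fitting such a path model may be useful. The construct names, effect sizes and data below are our own illustrative assumptions, not the study's instrument or dataset; the semopy library is one Python option for this kind of analysis:

    ```python
    # Hedged sketch of an SEM path model in the spirit of the study;
    # everything below (names, coefficients, data) is synthetic.
    import numpy as np
    import pandas as pd
    import semopy

    rng = np.random.default_rng(1)
    n = 215  # mirrors the study's sample size; the data itself is simulated
    bda = rng.normal(size=n)
    km = 0.5 * bda + rng.normal(scale=0.8, size=n)
    amb = 0.4 * bda + rng.normal(scale=0.8, size=n)
    flex = 0.3 * bda + 0.3 * km + 0.2 * amb + rng.normal(scale=0.7, size=n)
    df = pd.DataFrame({"bda_capability": bda, "km_capability": km,
                       "ambidexterity": amb, "strategic_flexibility": flex})

    model_desc = """
    km_capability ~ bda_capability
    ambidexterity ~ bda_capability
    strategic_flexibility ~ bda_capability + km_capability + ambidexterity
    """

    model = semopy.Model(model_desc)
    model.fit(df)
    print(model.inspect())  # path estimates, standard errors, p-values
    ```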

    Big data analytics: computational intelligence techniques and application areas

    Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and economy, and discuss the challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications in real-world smart city problems can be developed by utilizing these powerful tools and techniques. We present a case study for intelligent transportation in the context of a smart city, and a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation and commercialization related to Big Data, its applications and deployment.
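
    The abstract does not detail how HSTSM works, so we do not attempt to reproduce it. As a generic stand-in, the sketch below applies one common CI technique, k-means clustering, to synthetic road-sensor data to recover recurring traffic regimes; it is purely illustrative and is not the HSTSM method:

    ```python
    # Illustrative only: clustering synthetic hourly traffic-flow profiles
    # into regimes. This is NOT the paper's HSTSM approach.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    hours = np.arange(24)
    # Two synthetic daily patterns: commuter peaks vs a single midday bump
    weekday = (100 + 80 * np.exp(-((hours - 8) ** 2) / 8)
                   + 70 * np.exp(-((hours - 17) ** 2) / 8))
    weekend = 60 + 40 * np.exp(-((hours - 13) ** 2) / 18)
    days = np.vstack([weekday + rng.normal(0, 10, 24) for _ in range(140)] +
                     [weekend + rng.normal(0, 10, 24) for _ in range(60)])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(days)
    print(np.bincount(labels))  # roughly recovers the 140/60 regime split
    ```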

    Linking business analytics to decision making effectiveness: a path model analysis

    While business analytics is being increasingly used to gain data-driven insights to support decision making, little research exists regarding the mechanism through which business analytics can be used to improve decision-making effectiveness (DME) at the organizational level. Drawing on the information processing view and contingency theory, this paper develops a research model linking business analytics to organizational DME. The research model is tested using structural equation modeling based on 740 responses collected from U.K. businesses. The key findings demonstrate that business analytics, through the mediation of a data-driven environment, positively influences information processing capability, which in turn has a positive effect on DME. The findings also demonstrate that the paths from business analytics to DME show no statistical differences between large and medium companies, but some differences between manufacturing and professional service industries. Our findings contribute to the business analytics literature by providing useful insights into business analytics applications and the facilitation of data-driven decision making. They also contribute to managers' knowledge and understanding by demonstrating how business analytics should be implemented to improve DME.
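
    Since the key finding is a mediated path, a minimal sketch of testing an indirect effect may help. The version below simplifies the model to a single mediator and uses synthetic data; variable names and effect sizes are our own assumptions, not the study's:

    ```python
    # Hedged sketch: bootstrap test of an indirect (mediated) effect,
    # business analytics -> data-driven environment -> effectiveness.
    # All data and coefficients are synthetic.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 740  # mirrors the study's sample size; the data is simulated
    ba = rng.normal(size=n)                         # business analytics use
    env = 0.6 * ba + rng.normal(scale=0.8, size=n)  # mediator
    dme = 0.5 * env + 0.1 * ba + rng.normal(scale=0.8, size=n)

    def indirect(ba, env, dme):
        # a: effect of ba on the mediator; b: effect of the mediator on dme
        a = sm.OLS(env, sm.add_constant(ba)).fit().params[1]
        b = sm.OLS(dme, sm.add_constant(np.column_stack([env, ba]))).fit().params[1]
        return a * b

    print(f"indirect effect = {indirect(ba, env, dme):.3f}")

    # Percentile bootstrap confidence interval for the indirect effect
    boot = []
    for _ in range(2000):
        i = rng.integers(0, n, n)
        boot.append(indirect(ba[i], env[i], dme[i]))
    print("95% CI:", np.percentile(boot, [2.5, 97.5]))
    ```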

    How can SMEs benefit from big data? Challenges and a path forward

    Big data is big news, and large companies in all sectors are making significant advances in their customer relations, product selection and development, and consequent profitability through using this valuable commodity. Small and medium enterprises (SMEs) have proved themselves to be slow adopters of the new technology of big data analytics and are in danger of being left behind. In Europe, SMEs are a vital part of the economy, and the challenges they encounter need to be addressed as a matter of urgency. This paper identifies barriers to SME uptake of big data analytics and recognises the complex challenge they pose to all stakeholders, including national and international policy makers, IT, business management and data science communities. The paper proposes a big data maturity model for SMEs as a first step towards an SME roadmap to data analytics. It considers the ‘state-of-the-art’ of IT with respect to usability and usefulness for SMEs and discusses how SMEs can overcome the barriers preventing them from adopting existing solutions. The paper then considers management perspectives and the role of maturity models in enhancing and structuring the adoption of data analytics in an organisation. The history of total quality management is reviewed to inform the core aspects of implanting a new paradigm. The paper concludes with recommendations to help SMEs develop their big data capability and enable them to continue as the engines of European industrial and business success. Copyright © 2016 John Wiley & Sons, Ltd.