518,908 research outputs found

    Big data, modeling, simulation, computational platform and holistic approaches for the fourth industrial revolution

    Naturally, the mathematical process begins with proving the existence and uniqueness of the solution through theorems, corollaries, lemmas, and propositions, dealing with simple, non-complex models. Existence and uniqueness proofs are guaranteed by governing an infinite set of candidate solutions, and implementation is limited to small-scale simulation on a single desktop CPU; accuracy, consistency, and stability are easily controlled at a small data scale. However, the fourth industrial revolution recasts the mathematical process with the advent of cyber-physical systems, involving entirely new capabilities for researchers and machines (Xing, 2017). From a numerical perspective, the fourth industrial revolution (4iR) requires a transition from uncomplex models and small-scale simulation to complex models and big data for visualizing real-world applications, a digital dialectic and an exciting opportunity. Big data analytics and its classification thus offer a way to overcome these limitations. Applications of 4iR extend the models, the derivatives and discretizations, the dimensions of space and time, the behavior of initial and boundary conditions, the grid generation, the data extraction, the numerical methods, and the image processing with high-resolution features, all from a numerical perspective. In statistics, big data is characterized by data growth; from a numerical perspective, however, a few classification strategies are investigated that deal with specific classifier tools. This paper investigates a conceptual framework for big data classification: governing the mathematical modeling, selecting the superior numerical method, handling the large sparse simulation, and investigating parallel computing on a high performance computing (HPC) platform.
The conceptual framework will benefit big data providers, algorithm providers, and system analysts in classifying and recommending specific strategies for generating, handling, and analyzing big data. All of these perspectives take a holistic view of technology, and in the current research the conceptual framework is described in holistic terms: 4iR makes it possible to take a holistic approach in explaining the importance of big data, complex modeling, large sparse simulation, and the high performance computing platform. Numerical analysis and parallel performance evaluation are the indicators for investigating the performance of the classification strategies. This research supports accurate decisions, predictions, and trending practice in obtaining approximate solutions for science and engineering applications. In conclusion, the classification strategies support generating a fine granular mesh and identifying the root causes of failures and issues in real-time solutions. Furthermore, the big data-driven and data transfer evolution moves toward high-speed technology transfer to boost economic and social development for the 4iR (Xing, 2017; Marwala et al., 2017).
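As an illustration of the "handling the large sparse simulation" step, the sketch below (ours, not from the paper) runs a Jacobi iteration over a sparse row-dictionary storage; this is the kind of simple stationary solver whose accuracy and stability are easy to control at small scale, and whose storage pattern is what lets a sparse simulation grow beyond a dense desktop solver.

```python
# Hypothetical sketch: Jacobi iteration on a sparse linear system, with the
# matrix stored as a list of {column: value} dicts so only nonzero entries
# are touched. The tridiagonal test system below is illustrative only.

def jacobi_sparse(rows, b, iters=200):
    """Solve A x = b where rows[i] maps column index -> A[i][j] (nonzeros only)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x_new = []
        for i in range(n):
            diag = rows[i][i]
            off = sum(v * x[j] for j, v in rows[i].items() if j != i)
            x_new.append((b[i] - off) / diag)
        x = x_new
    return x

# Diagonally dominant tridiagonal system (guarantees Jacobi convergence).
n = 50
rows = [{i: 4.0,
         **({i - 1: -1.0} if i > 0 else {}),
         **({i + 1: -1.0} if i < n - 1 else {})}
        for i in range(n)]
b = [1.0] * n
x = jacobi_sparse(rows, b)

# Maximum residual |A x - b| as a convergence check.
residual = max(abs(sum(v * x[j] for j, v in rows[i].items()) - b[i])
               for i in range(n))
```

A production-scale run would swap this loop for a preconditioned Krylov solver on an HPC platform, but the row-wise sparse storage and the residual check carry over unchanged.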

    Design Trend Forecasting by Combining Conceptual Analysis and Semantic Projections: New Tools for Open Innovation

    [EN] In this paper, we describe a new trend analysis and forecasting method (Deflexor), which is intended to help inform decisions in almost any field of human social activity, including, for example, business, art, and design. As a result of the combination of conceptual analysis, fuzzy mathematics, and some new reinforcement learning methods, we propose an automatic procedure based on Big Data that provides an assessment of the evolution of design trends. The resulting tool can be used to study general trends in any field, depending on the data sets used, while allowing the evaluation of the future acceptance of a particular design product, becoming in this way a new instrument for Open Innovation. The mathematical characterization of what a semantic projection is, together with the use of the theory of Lipschitz functions in metric spaces, provides a broad-spectrum predictive tool. Although the results depend on the data sets used, the periods of updating, and the sources of general information, our model allows for the creation of specific tools for trend analysis in particular fields that are adaptable to different environments. This research was funded by Istituto Europeo di Design and Generalitat Valenciana, Cátedra de Transparencia y Gestión de Datos, Universitat Politècnica de València (PID2019-105708RBC21 (MICIU/FEDER, UE)). Manetti, A.; Ferrer Sapena, A.; Sánchez Pérez, EA.; Lara-Navarra, P. (2021). Design Trend Forecasting by Combining Conceptual Analysis and Semantic Projections: New Tools for Open Innovation. Journal of Open Innovation: Technology, Market, and Complexity. 7(1):1-26. https://doi.org/10.3390/joitmc7010092S1267
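The abstract's use of Lipschitz functions in metric spaces for prediction can be sketched with the classical McShane extension: given scored sample points, any new point receives the smallest value consistent with the estimated Lipschitz constant. The toy data and distance function below are illustrative assumptions, not Deflexor's actual pipeline.

```python
# Sketch of prediction via the McShane extension of a Lipschitz function:
# given scored samples (x_i, f_i) in a metric space with distance `dist`,
# a new point x gets the value min_i (f_i + L * d(x, x_i)).

def lipschitz_constant(samples, dist):
    """Smallest L consistent with the scored samples."""
    pts = list(samples.items())
    L = 0.0
    for i, (xi, fi) in enumerate(pts):
        for xj, fj in pts[i + 1:]:
            d = dist(xi, xj)
            if d > 0:
                L = max(L, abs(fi - fj) / d)
    return L

def mcshane_extend(samples, dist, L):
    """Return the extension f~(x) = min_i (f(x_i) + L * d(x, x_i))."""
    return lambda x: min(f + L * dist(x, xi) for xi, f in samples.items())

# 1-D toy example: scores at known "trend" positions (invented data).
samples = {0.0: 1.0, 1.0: 3.0, 4.0: 2.0}
dist = lambda a, b: abs(a - b)
L = lipschitz_constant(samples, dist)
f = mcshane_extend(samples, dist, L)
```

The extension agrees with the data at every sample point and extrapolates with the least possible slope, which is what makes Lipschitz machinery attractive as a broad-spectrum predictor.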

    An Internet-of-Things (IoT) system development and implementation for bathroom safety enhancement

    Statistics show that the bathroom is one of the most hazardous places, especially for older people. Older people typically have greater difficulty with mobility and balance, making them more vulnerable to fall and slip injuries in a bathroom and to the serious short- and long-term health issues that follow. Various components of the bathroom, including the shower, tub, floor, and toilet, have been redesigned and independently upgraded in their ergonomics and safety aspects; however, the number of bathroom injuries remains consistently high. The Internet-of-Things (IoT) is a new concept applicable almost anywhere and to almost any man-made object: wireless sensors detect abnormalities and send data through the network. A large amount of data can be collected from multiple IoT systems and utilized for big data analysis, and this big data may reveal hidden positive outcomes beyond the initially intended purposes. A few commercial IoT applications, such as wearable health monitoring and intelligent transportation systems, are available; nevertheless, an IoT application for the bathroom is not currently known. Unlike other settings, bathrooms have some unique aspects, such as privacy and a wet environment. This paper presents a holistic conceptual approach to the development and implementation of an IoT system to enhance bathroom safety. The concept focuses on application in a large nursing care facility as a pilot test bed. The authors propose 1) sensor selection and application, 2) integration of a wireless sensor local network system, 3) a design concept for IoT implementation, and 4) a big data analysis system model.
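To make the "wireless sensors detect abnormalities" idea concrete, here is an invented local rule of the kind such a system could run (the sensor names, thresholds, and event format are our assumptions, not the paper's design): raise an alert when the motion sensor goes quiet for too long while the occupancy sensor still reports a person present, a privacy-preserving proxy for a possible fall.

```python
# Illustrative sketch (invented names and thresholds, not from the paper):
# flag an alert when motion stops for longer than `quiet_limit` seconds
# while the bathroom is still occupied. Events must be sorted by time.

def check_alert(events, quiet_limit=60):
    """events: list of (timestamp_s, sensor, value). Returns alert timestamps."""
    alerts = []
    last_motion = None
    occupied = False
    for t, sensor, value in events:
        if sensor == "occupancy":
            occupied = bool(value)
            if occupied and last_motion is None:
                last_motion = t  # start the quiet clock on entry
        elif sensor == "motion" and value:
            last_motion = t
        if occupied and last_motion is not None and t - last_motion > quiet_limit:
            alerts.append(t)
            last_motion = t  # rearm so one incident raises one alert
    return alerts

# Person enters at t=0, moves until t=30, then no motion until t=95.
events = [(0, "occupancy", 1), (10, "motion", 1), (30, "motion", 1),
          (95, "motion", 0), (100, "occupancy", 1)]
alerts = check_alert(events)
```

In the proposed architecture this check would run at the local network layer, with only alerts and aggregate statistics forwarded for the big data analysis.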

    IMPLEMENTATION OF EDUCATION MODEL 4.0: DEVELOPING INDUSTRY 4.0 SKILLS IN GRADUATES ENGINEERS FOR IMPROVING EMPLOYABILITY SKILLS

    Purpose of the study: With growing technologies, unemployment is a major issue for young people, a challenge in itself owing to their lack of skills. The main purpose of this study is to show how Education 4.0, in line with Industry 4.0, by developing and optimizing personalized education, would ultimately determine how young people of the future will work and live. Methodology: Both quantitative and qualitative data are used in this research. A survey questionnaire is used for collecting primary data; data collection and analysis methods are adopted wherein the questionnaire is circulated among various stakeholders such as engineering students, faculty, and industry experts in Pune City. Main Findings: A conceptual framework is proposed to identify factors affecting uncertainty in employability, and the implementation of an Education 4.0 model aligned with Industry 4.0 to overcome these factors and develop the existing education system is studied here. Applications of this study: Higher and technical educational institutes, where every year more students graduate with a technical education but a large percentage of them are unable to find employment. IT and manufacturing companies find these graduates lacking in skills and are not ready to hire them; educational institutes are failing to fill this gap, and greater uncertainty in employment has resulted. This research will help educational institutes implement an Education 4.0 model that introduces new parameters for restructuring existing education, benefiting students and institutes in meeting the requirements of industry. Novelty/Originality of this study: This study proposes a new conceptual framework for identifying the factors that cause the unemployment gap between academia and industry.
This study focuses on the implementation of an Education 4.0 model in line with Industry 4.0; readers will learn how a new model of education can help educational institutes develop their existing education methods and benefit graduating students by developing the skills needed to cope with new emerging technologies.

    Towards a Reference Architecture with Modular Design for Large-scale Genotyping and Phenotyping Data Analysis: A Case Study with Image Data

    With the rapid advancement of computing technologies, various scientific research communities have been extensively using cloud-based software tools and applications. Cloud-based applications allow users to access software from a web browser, relieving them of installing any software in their desktop environment. For example, Galaxy, GenAP, and iPlant Collaborative are popular cloud-based systems for scientific workflow analysis in the domain of plant Genotyping and Phenotyping. These systems are used for conducting research, devising new techniques, and sharing computer-assisted analysis results among collaborators. Researchers need to integrate their new workflows/pipelines, tools, or techniques with the base system over time. Moreover, large-scale data need to be processed within a deadline for the analysis to be effective. Recently, Big Data technologies have emerged to facilitate large-scale data processing on commodity hardware; among the above-mentioned systems, GenAP utilizes Big Data technologies only for specific cases. The structure of such a cloud-based system is highly variable and complex in nature, and software architects and developers must consider entirely different properties and challenges during the development and maintenance phases compared to traditional business/service-oriented systems. Recent studies report that software engineers and data engineers confront challenges in developing analytic tools that support large-scale and heterogeneous data analysis. Unfortunately, software researchers have given little attention to devising well-defined methodologies and frameworks for the flexible design of cloud systems for the Genotyping and Phenotyping domain. To that end, more effective design methodologies and frameworks are urgently needed for developing cloud-based Genotyping and Phenotyping analysis systems that also support large-scale data processing.
In this thesis, we conduct a series of studies to devise a stable reference architecture and modularity model for software developers and data engineers in the Genotyping and Phenotyping domain. In the first study, we analyze the architectural changes of existing candidate systems to identify stability issues. We then extract architectural patterns from the candidate systems and propose a conceptual reference architectural model. Finally, we present a case study on the modularity of computation-intensive tasks as an extension of data-centric development. We show that the data-centric modularity model is at the core of the flexible development of a Genotyping and Phenotyping analysis system. Our proposed model and case study with thousands of images provide a useful knowledge base for software researchers, developers, and data engineers developing cloud-based Genotyping and Phenotyping analysis systems.

    Assessing factors of behavioral intention to use Big Data Analytics (BDA) in banking and insurance sector: proposition of an integrated model

    The banking and insurance sectors have long been largely data-driven by nature. However, with the flood of data from several sources that accompanies the introduction of new customers and markets, Big Data Analytics allows value to be extracted more effectively: analysis of this unstructured data, combined with a wide range of datasets, can be used to extract commercial value efficiently and precisely. The aim of this paper is to develop a conceptual framework to explain the intention of information technology practitioners in banks and insurance companies to use Big Data Analytics, by exploiting the Technology Acceptance Model (TAM) joined with the Task-Technology Fit (TTF) paradigm, information quality, security, trust, and the moderating effect of managerial commitment by top management on the relationship between users' perception and their intention to use, in order to conceptualize and test an integrated framework for analyzing and measuring attitudes toward the usage of Big Data Analytics. This paper contributes by proposing a model to assess the factors that influence users' intention to use Big Data Analytics, asserting users' perception of the technology, the trust factor, security, and the effect of managerial commitment. Although the model developed in this paper is conceptual and still needs to be tested empirically, it will serve as a basic framework for further research designed to evaluate the factors affecting IT practitioners' attitudes toward the adoption of Big Data Analytics within the finance sector. Keywords: Big Data Analytics, TAM, TTF, Security, Trust, Managerial commitment, Bank, Insurance. JEL Classification: O32. Paper type: Theoretical Research.

    Intersection of Data Science and Smart Destinations: A Systematic Review.

    This systematic review adopts a formal and structured approach to examining the intersection of data science and smart tourism destinations in terms of the components found in previous research. The study period is 1995-2021, with the analysis focusing mainly on recent years (2015-2021), identifying and characterizing current trends in this research topic. The review comprises documentary research based on bibliometric and conceptual analysis, using the VOSviewer and SciMAT software to analyze articles from the Web of Science database. There is growing interest in this research topic, with more than 300 articles published annually. The data science technologies on which current smart destinations research is based include big data, smart data, data analytics, social media, cloud computing, the Internet of Things (IoT), smart card data, geographic information system (GIS) technologies, open data, artificial intelligence, and machine learning. Critical research areas for data science techniques and technologies in smart destinations are public tourism marketing, mobility-accessibility, and sustainability. Data analysis techniques and technologies face unprecedented challenges and opportunities after coronavirus disease 2019 (COVID-19) in building on the huge amount of data available and on a new tourism model that is more sustainable, smarter, and safer than those previously implemented.

    ANALYSING AND EXPLORING DRIFTS IN INNOVATION STREAMS WITHIN OPEN SOURCE (5)

    This work empirically explores Apache Hadoop in the context of outbound open innovation (OI) in small and medium-sized enterprises (SMEs) through the lens of innovation streams. Apache Hadoop is a free and open source (F/OSS) library for distributed computer processing, and it is the industry standard for big data analysis. We are living in the big data age, and this research focuses on big data analysis digital service platforms. Organisations have radically changed the way they store, manipulate, and create value from information; not very long ago, these data were seen as worthless. Businesses are obtaining data from different sources and in diverse formats, and advancing new products and services. Organisations need to explore and exploit niche F/OSS products and services based on outbound OI. Some private sector SMEs lack the tools and require more awareness of the potential benefits of outbound OI for product and service development, and the lens of innovation streams offers a multitude of opportunities for analysis. New concepts of value production were brought to light by the notion of OI, including F/OSS. Some private sector businesses lack desorptive capacity, and the proposed conceptual model advances an alternative to the status quo. There is a substantial body of work on F/OSS, OI, and digital service platforms; however, references to these subjects through the lens of innovation streams in the particular context of outbound OI in SMEs within Apache Hadoop appear to be very limited, and there are very few examples of similar studies in this area. Outbound OI is still a major challenge for most firms; some authorities have highlighted the lack of research in the field and expressed the need for complementary studies. Innovation streams are a set of innovations that build upon the current products and services of an organisation, extend that organisation's technical direction, and/or help it diversify into different markets.
Outbound OI in F/OSS SMEs' technology spin-offs relates to the innovation streams paradigm in terms of discontinuous innovation. While Michael Tushman and his colleagues have formulated innovation streams in detail, the relation of this framework to the F/OSS outbound OI debate within Apache Hadoop in SMEs has been taken for granted. Many questions regarding this relationship remain, and this work addresses some of these unanswered issues. This doctoral research endorses the view that there is an evident limitation in the outbound OI literature, replies to the aforementioned calls for more research, and adds to prior analyses by advancing new tools for understanding the role of outbound OI in SMEs. It adds to the emergent body of empirical work on Apache Hadoop and to the current literature on digital service platforms. Its findings have implications both for academia and for organisations offering big data products and services. Drawing on the qualitative interpretive case study tradition, this research explores theoretical ideas and relates them to the real-world context of Apache Hadoop. This interpretive case study offers answers to the following overall research questions: (1) How do innovation streams within Apache Hadoop evolve from explorative to exploitative and, finally, branch out into new markets? (2) How can we promote and sustain innovation streams within Apache Hadoop in SMEs, in the context of outbound OI? (3) Can a conceptual model be built? (4) Are these methods adaptable?
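Since the thesis centres on Apache Hadoop as a distributed-processing library, a minimal single-process sketch of its map, shuffle, and reduce programming model may help readers unfamiliar with it; this runs locally and only shows the shape of the computation, not Hadoop's actual Java API or cluster machinery.

```python
# Single-process word-count sketch of the MapReduce model that Hadoop
# distributes across a cluster: map emits key-value pairs, shuffle groups
# them by key, reduce aggregates each group.
from collections import defaultdict

def map_phase(records):
    """Emit (word, 1) for every word in every input line."""
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Group emitted values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate each key's values; here, a simple sum."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["open source big data", "big data analysis"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

In Hadoop proper, the map and reduce functions run on many nodes and the shuffle moves data between them over the network; the user supplies only the two functions, which is what made the platform an accessible standard for big data services.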

    Integrated double loop data driven model for public policymaking

    Public policy is the critical key to welfare programs. An Integrated Double Loop Data Driven Model is proposed to assist public policymaking in solving public problems in a holistic and complete way. The Integrated Model consists of two components: a Data Driven Model and a Double Loop Model. The first model utilizes Big Data in E-Government Maturity (EM) for Public Policymaking (PP); a new E-Government Maturity Model is proposed to support data-driven public policymaking based on Big Data. The second model adopts double-loop learning in System Dynamics (SD), and a case study is discussed to show how to utilize Big Data and System Dynamics for Public Policymaking. The interactions between Big Data, System Dynamics, and Public Policymaking are captured in one conceptual model, and a new method, the System Breakdown Structure (SBS), is introduced in the case study to bridge the two models. A PLS-SEM test of the relationships between EM, SD, and PP supports the positive correlations from EM to SD, from SD to PP, and from EM to PP. The R square of PP is 0.48, indicating a high confidence level for the contribution of EM and SD to PP; the R square of SD is 0.45. Both results are reinforced by the path coefficients: the coefficients from EM to SD and from SD to PP are higher than 50%. Comparing the path coefficient from EM to PP with and without SD shows that the strong influence of SD constitutes a full mediation effect. The result is similar when a multi-group analysis is conducted: only certain paths show statistically significant differences between groups, and these still produce positive correlations. The paths concerned are the EM-to-SD path in the multigroup analysis of PNS (civil servant) versus non-PNS (non-civil servant) respondents, and the EM-to-PP path in the multigroup analysis of the Java and Sumatera islands (more developed regions) versus the other islands (less developed regions).
Based on the case study and the PLS-SEM test, the Integrated Double Loop Data Driven Model is recommended for implementation to assist in solving public problems.
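To ground the reported quantities, note that on standardized variables a single-predictor path coefficient equals the Pearson correlation, and the path's R square is that correlation squared. The sketch below illustrates this with synthetic data shaped like the EM -> SD -> PP chain; the effect sizes are invented and are not the study's data.

```python
# Synthetic illustration (invented effect sizes, not the study's data) of
# how R^2 values like the reported 0.48 and 0.45 arise from standardized
# path coefficients in a mediation chain EM -> SD -> PP.
import math
import random

def standardize(xs):
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / s for x in xs]

def corr(xs, ys):
    """Pearson correlation = path coefficient on standardized variables."""
    xs, ys = standardize(xs), standardize(ys)
    return sum(x * y for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
em = [random.gauss(0, 1) for _ in range(500)]
sd = [0.7 * x + random.gauss(0, 0.7) for x in em]   # EM -> SD path
pp = [0.7 * y + random.gauss(0, 0.7) for y in sd]   # SD -> PP path

r2_sd = corr(em, sd) ** 2   # analogue of the reported R square of SD
r2_pp = corr(sd, pp) ** 2   # analogue of the reported R square of PP
```

Full mediation then corresponds to the EM-to-PP coefficient shrinking toward zero once SD is included as a predictor, which is the comparison the abstract describes.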

    MODELING LARGE-SCALE CROSS EFFECT IN CO-PURCHASE INCIDENCE: COMPARING ARTIFICIAL NEURAL NETWORK TECHNIQUES AND MULTIVARIATE PROBIT MODELING

    This dissertation examines cross-category effects in consumer purchases from the big data and analytics perspectives, using data from the Nielsen Consumer Panel and Scanner databases. With big data analytics it becomes possible to examine the cross effects of many product categories on each other; the number of categories whose cross effects are studied is called the category scale, or simply the scale, in this dissertation. This dissertation extends research on models of cross effects by (1) examining the performance of the multivariate probit (MVP) model across category scale; (2) customizing artificial neural network (ANN) techniques for large-scale cross-effect analysis; (3) examining the performance of ANNs across scale; and (4) developing a conceptual model of spending habits as a source of cross-effect heterogeneity. The results provide researchers and managers with new knowledge about using the two techniques at large category scales. The computational capabilities required by MVP models grow exponentially with scale, so MVP models are far more limited by computational capability than ANN models are. In our experiments with Nielsen data at scales 4, 8, 16, and 32, MVP models could not be estimated for baskets with 16 or more categories, whereas we were able to calibrate ANN models at both scales 16 and 32. Surprisingly, the predictive results of the ANN models exhibit an inverted-U relationship with scale. As an ancillary result, we provide a method for determining the existence and extent of non-linear own- and cross-category effects on the likelihood of purchase of a category using ANN models. Beyond our empirical studies, we draw on the mental budgeting model and the impulsive spending literature to provide a conceptualization of consumer spending habits as a source of heterogeneity in the cross-effect context.
Finally, after a discussion of conclusions and limitations, the dissertation closes with open questions for future research.
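The simplest ANN-style alternative to a multivariate probit for cross-category effects is a single logistic unit predicting purchase incidence of one target category from indicators of the other categories in the basket. The sketch below trains such a unit on synthetic data; the complement/substitute roles and all parameters are invented for illustration and are not the dissertation's models or data.

```python
# Toy sketch (synthetic data): a single logistic unit learning cross-category
# effects -- a positive weight marks a complement category, a negative weight
# a substitute -- trained by per-sample stochastic gradient descent.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(baskets, target, epochs=300, lr=0.5):
    """Fit weights and bias of a logistic unit by SGD on log loss."""
    w = [0.0] * len(baskets[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(baskets, target):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of log loss w.r.t. the pre-activation
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

random.seed(1)
# Category 0 raises purchase odds of the target (complement),
# category 1 lowers them (substitute); true effects are +2 and -2.
baskets = [[random.randint(0, 1), random.randint(0, 1)] for _ in range(400)]
target = [1 if random.random() < sigmoid(2.0 * x0 - 2.0 * x1) else 0
          for x0, x1 in baskets]
w, b = train(baskets, target)
```

Scaling this idea up means one output unit per category and hidden layers for the non-linear own- and cross-effects the dissertation probes, with training cost growing roughly linearly in the category scale rather than exponentially as in the MVP likelihood.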