28 research outputs found

    Development of Expert System by using Logical Comparative Conclusion in the Function of Organizational Performance Improvement

    The paper reflects the intention of a group of authors to develop, through an integrative application of intelligent systems and by analogy with certain functions of the human body, a model for improving organizational performance. The paper rests on two distinctive foundations: a representative body of data reflecting real business conditions, and the inclusion of a significant number of organizations. To achieve the stated goals, several approaches, tools and methods were adopted, including the analytic hierarchy process (AHP), case-based reasoning, artificial intelligence tools (expert systems), and verification under real conditions. The key outcomes are the fields critical to achieving the best organizational performance, together with indicators for developing expert systems whose functions, efficiency and effectiveness were verified under real conditions.
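The AHP step this abstract relies on can be sketched briefly: expert pairwise comparisons between criteria are turned into priority weights. The criteria names and the comparison judgments below are hypothetical, chosen only to illustrate the mechanics; the row geometric-mean method is a common approximation of AHP's principal eigenvector.

```python
# Minimal AHP sketch: derive priority weights for (hypothetical)
# organizational-performance criteria from a pairwise comparison matrix.
criteria = ["leadership", "processes", "people"]

# Pairwise comparison matrix on the Saaty 1-9 scale; A[i][j] says how much
# more important criterion i is than criterion j. Values are illustrative.
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

# Row geometric means approximate the principal eigenvector of A.
geo_means = [(row[0] * row[1] * row[2]) ** (1 / 3) for row in A]
total = sum(geo_means)
weights = [g / total for g in geo_means]  # normalized priority weights

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

With these judgments, "leadership" receives the largest weight, consistent with it dominating both pairwise comparisons.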

    Business Intelligence in Industry 4.0: State of the art and research opportunities

    Data collection and analysis have been at the core of business intelligence (BI) for many years, but traditional BI must be adapted to the large volumes of data generated by Industry 4.0 (I4.0) technologies, which must be processed and used in decision-making to generate value for companies. Value generation from I4.0 through data analysis and its integration into strategic and operational activities is still a new research topic. This study uses a systematic literature review with two objectives: understanding value creation through BI in the context of I4.0, and identifying the main research contributions and gaps. Results show that most studies focus on real-time applications and on the integration of voluminous, unstructured data. For business research, more work is needed on business model transformation, on methodologies to manage the technological implementation, and on frameworks to guide human resources training.

    A Data-Driven Condition Monitoring of Product Quality Analysis System Based on RS and AHP

    Mechanical and electrical products have complex structures and intelligent control systems, so their reliability plays an important role in the normal operation of security facilities. However, most manufacturers pay more attention to product design and manufacturing quality, with little interest in intelligent fault diagnosis. The objective of this study is to develop a product-quality intelligent analysis and management system based on Rough Set (RS) theory and the Analytic Hierarchy Process (AHP). Firstly, the paper reviews the hardware and software design of the monitoring platform and quality analysis system, using computer technology to reduce the number of information transfers. Secondly, the fault types and feature extractions for different elevator faults are presented and simplified using RS theory. Then, the objective weights of the level-index model are obtained by the AHP method, and the comprehensive analysis weight of each index is obtained by combining the subjective and objective weight coefficients with the golden ratio. Finally, comprehensive decision weights of the major indices for the quality control analysis system of many vertical elevators are presented. The results show that the data-driven condition monitoring and quality analysis system is an important means of preventing disasters involving complex mechanical and electrical products.
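One plausible reading of the golden-ratio weight combination the abstract mentions is a convex blend of the subjective (AHP) and objective (rough-set) weight vectors using the golden-section coefficient 0.618. The elevator fault indices, the weight values, and the blending rule itself are assumptions for illustration, not the paper's actual numbers.

```python
# Sketch: blend subjective (AHP) and objective (rough-set) index weights
# with the golden-section coefficient. All values below are hypothetical.
GOLDEN = 0.618  # assumed blending coefficient derived from the golden ratio

indices = ["door fault", "traction fault", "control fault"]
subjective = [0.50, 0.30, 0.20]  # expert-judgment (AHP) weights
objective = [0.35, 0.45, 0.20]   # data-driven (rough-set) weights

# Convex combination keeps the result a valid weight vector (sums to 1).
combined = [GOLDEN * s + (1 - GOLDEN) * o
            for s, o in zip(subjective, objective)]

for name, w in zip(indices, combined):
    print(f"{name}: {w:.4f}")
```

Because both input vectors sum to 1, any convex combination does too, so the blended weights remain directly usable in the comprehensive decision model.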

    Least squares support vector machine with self-organizing multiple kernel learning and sparsity

    In recent years, least squares support vector machines (LSSVMs) with various kernel functions have been widely used in machine learning. However, the selection of kernel functions is often ignored in practice. In this paper, an improved LSSVM method based on self-organizing multiple kernel learning is proposed for black-box problems. To strengthen the generalization ability of the LSSVM, appropriate kernel functions are selected and the corresponding model parameters are optimized using a differential evolution algorithm with an improved mutation strategy. Because of the large computational cost, a sparse selection strategy is developed to extract useful data and remove redundant data without loss of accuracy. To demonstrate the effectiveness of the proposed method, benchmark problems from the UCI machine learning repository are tested; the results show that the proposed method performs better than other state-of-the-art methods. In addition, to verify its practicability, the method is applied to a real-world converter steelmaking process. The results illustrate that the proposed model can precisely predict molten steel quality and satisfy actual production demand.
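The training step an LSSVM builds on can be sketched compactly: with a Gram matrix K, the dual reduces to one linear system rather than a QP. Here K is a weighted sum of an RBF and a polynomial kernel; the fixed mixing weights stand in for the self-organizing multiple kernel learning (which the paper tunes with differential evolution), and the toy data is entirely synthetic.

```python
# LSSVM with a combined (multiple-kernel) Gram matrix, on toy data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])  # synthetic +/-1 labels

def rbf(X, Z, gamma=0.5):
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def poly(X, Z, degree=2):
    return (X @ Z.T + 1.0) ** degree

# Hypothetical kernel mixing weights; the paper would optimize these.
w_rbf, w_poly = 0.7, 0.3
K = w_rbf * rbf(X, X) + w_poly * poly(X, X)

# LSSVM dual: solve [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y].
C = 10.0
n = len(y)
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / C]])
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

pred = np.sign(K @ alpha + b)
print("train accuracy:", (pred == y).mean())
```

The appeal of the LSSVM formulation is visible here: training is a single dense linear solve, which is also why the paper's sparse selection strategy matters once n grows.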

    Frameworks for data-driven quality management in cyber-physical systems for manufacturing: A systematic review

    Recent advances in the manufacturing industry have enabled the deployment of Cyber-Physical Systems (CPS) at scale. By utilizing advanced analytics, data from production can be analyzed and used to monitor and improve process and product quality. Many frameworks for implementing CPS have been developed to structure the relationship between the digital and physical worlds. However, there is no systematic review of the existing frameworks related to quality management in manufacturing CPS. Our study therefore aims at determining and comparing the existing frameworks. The systematic review yielded 38 frameworks, analyzed with regard to their characteristics, their use of data science and Machine Learning (ML), and their shortcomings and open research issues. The identified issues mainly relate to limitations in cross-industry/cross-process applicability, the use of ML, big data handling, and data security.

    Machine learning methods for the prediction of non-metallic inclusions in steel wires for tire reinforcement

    Non-metallic inclusions are unavoidably produced during steel casting, resulting in lower mechanical strength and other detrimental effects. This study aimed to develop a reliable machine learning algorithm to classify castings of steel for tire reinforcement according to the number and properties of inclusions, determined experimentally. 855 observations, obtained from the quality control of the steel, were available for training, validating and testing the algorithms. 140 parameters are monitored during fabrication and serve as the features of the analysis; the output is 1 or 0 depending on whether the casting is rejected or not. The following algorithms were employed: Logistic Regression, K-Nearest Neighbors, Support Vector Classifier (linear and RBF kernels), Random Forests, AdaBoost, Gradient Boosting and Artificial Neural Networks. The low rejection rate implies that classification must be carried out on an imbalanced dataset, so resampling methods and scores suited to imbalanced datasets (Recall, Precision and AUC rather than Accuracy) were used. Random Forest was the most successful method, providing an AUC of 0.85 on the test set; no significant improvement was detected after resampling. The improvement derived from implementing this algorithm in the sampling procedure for quality control during steelmaking has been quantified: the tool allows the samples with a higher probability of being rejected to be selected, thus improving the effectiveness of the quality control. In addition, the optimized Random Forest has enabled the identification of the most important features, which have been satisfactorily interpreted on a metallurgical basis.
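The abstract's preference for Recall, Precision and AUC over Accuracy on imbalanced data can be made concrete with a small sketch. The labels and scores below are hypothetical (1 = rejected casting); AUC is computed in its rank-based form, as the probability that a rejected casting is scored above an accepted one.

```python
# Metrics for an imbalanced accept/reject problem, on hypothetical data.
y_true = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]                       # 1 = rejected
scores = [0.1, 0.2, 0.3, 0.15, 0.05, 0.4, 0.35, 0.8, 0.6, 0.65]
y_pred = [1 if s >= 0.5 else 0 for s in scores]               # threshold 0.5

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

recall = tp / (tp + fn)        # fraction of rejected castings caught
precision = tp / (tp + fp)     # fraction of flagged castings truly rejected

# Rank-based AUC: P(score of a rejected casting > score of an accepted one),
# counting ties as one half.
pos = [s for s, t in zip(scores, y_true) if t == 1]
neg = [s for s, t in zip(scores, y_true) if t == 0]
auc = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg) / (len(pos) * len(neg))

print(f"recall={recall:.3f} precision={precision:.3f} auc={auc:.4f}")
```

Note that with only 2 positives in 10 samples, a classifier that flags nothing would already score 80% accuracy, which is exactly why the study reports threshold-free and class-sensitive metrics instead.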

    Industry 4.0 implementation strategy for Small Medium Enterprises

    An I4.0 implementation strategy is a tool that helps small and medium enterprises (SMEs) meet the prerequisites and standards of the fourth industrial revolution. The main objective achieved by the current research is an Industry 4.0 implementation strategy for SMEs capable of providing enterprises with an effective road map for overcoming the obstacles SMEs face during transformation and meeting the fourth industrial revolution's standards. The roadmap and implementation strategy are tailored to the participating enterprise based on its assessment scores. The implementation strategy comprises four consecutive steps: Maturity Assessment, Influence Assessment, Roadmap Construction, and Implementation. The strategy increases the accuracy of assessing an SME's technological maturity level by weighting the relevant implementation dimensions with an Analytic Hierarchy Process (AHP); the weight factors identify the dimensions that are most influential in small and medium manufacturing enterprises and prioritize their transformation. A total maturity score for the enterprise as a whole, valued between 0 and 100, is determined at the end of the maturity assessment using radar charts. The research includes a case study conducted at SPM Automation Inc., a local small-sized enterprise, where the proposed four-step strategy succeeded in measuring the current I4.0 maturity score (33%) and in creating an implementation strategy that targets the most influential dimensions and prioritizes their transformation.
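The final maturity-scoring step described above reduces to a weighted sum: AHP-derived weight factors multiplied by per-dimension assessment scores, yielding a 0-100 total. The dimension names, weights and scores below are hypothetical placeholders, not the paper's actual assessment data.

```python
# Sketch of an AHP-weighted I4.0 maturity score (all values hypothetical).
dimensions = {
    # dimension: (AHP weight, assessed maturity on a 0-100 scale)
    "technology": (0.40, 30),
    "people":     (0.25, 40),
    "processes":  (0.20, 35),
    "strategy":   (0.15, 25),
}

# Weights sum to 1, so the weighted sum stays on the 0-100 scale.
total = sum(w * s for w, s in dimensions.values())
print(f"overall I4.0 maturity: {total:.0f}/100")
```

Because the weights form a convex combination, the total can never exceed the best-scoring dimension, and raising a heavily weighted dimension moves the overall score most, which is how the strategy prioritizes transformation.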

    CPS Data Streams Analytics based on Machine Learning for Cloud and Fog Computing: A Survey

    Cloud and fog computing have emerged as a promising paradigm for the Internet of Things (IoT) and cyber-physical systems (CPS). One characteristic of CPS is the reciprocal feedback loop between physical processes and cyber elements (computation, software and networking), which implies that data stream analytics is one of the core components of CPS: (i) it extracts insights and knowledge from the data streams generated by the various sensors and other monitoring components embedded in the physical systems; (ii) it supports informed decision making; (iii) it enables feedback from the physical processes to their cyber counterparts; and (iv) it ultimately facilitates the integration of cyber and physical systems. There have been many successful applications of data stream analytics, powered by machine learning techniques, to CPS. A survey of the particularities of applying machine learning techniques to the CPS domain is therefore needed. In particular, we explore how machine learning methods should be deployed and integrated in cloud and fog architectures to better fulfil the requirements arising in CPS domains, e.g. mission criticality and time criticality. To the best of our knowledge, this paper is the first to systematically study machine learning techniques for CPS data stream analytics from various perspectives, especially one that leads to discussion of, and guidance on, how CPS machine learning methods should be deployed in a cloud and fog architecture.

    A Comparison of Machine Learning and Traditional Demand Forecasting Methods

    Obtaining accurate forecasts has been a challenging task for many organizations, both public and private. Today, many firms choose to share their internal information with supply chain partners to increase planning efficiency and accuracy, in the hope of making appropriate critical decisions. However, forecast errors can still increase costs and reduce profits. As company datasets likely contain both trend and seasonal behavior, computational resources are needed to find the best parameters to use when forecasting the data. In this thesis, two industrial datasets are examined using both traditional and machine learning (ML) forecasting methods. The traditional methods considered are moving average, exponential smoothing, and autoregressive integrated moving average (ARIMA) models, while K-nearest neighbors, random forests, and neural networks are the ML techniques explored. Experimental results confirm the importance of performing a parametric grid search when using any forecasting method, as the output of this process directly determines the effectiveness of each model. In general, ML models are shown to be powerful tools for analyzing industrial datasets.
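The thesis's point about parametric grid search can be shown with the simplest of the traditional methods it lists: single exponential smoothing, with the smoothing constant chosen by one-step-ahead error. The demand series below is hypothetical, and this univariate sketch omits the trend/seasonal terms the thesis's datasets would require.

```python
# Grid search for the smoothing constant of single exponential smoothing,
# scored by mean absolute one-step-ahead error on a hypothetical series.
demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118]

def ses_errors(series, alpha):
    """Mean absolute one-step-ahead forecast error for smoothing constant alpha."""
    level = series[0]          # initialize level with the first observation
    abs_err = []
    for actual in series[1:]:
        abs_err.append(abs(actual - level))          # forecast = current level
        level = alpha * actual + (1 - alpha) * level  # update the level
    return sum(abs_err) / len(abs_err)

# Grid search over candidate smoothing constants.
grid = [a / 10 for a in range(1, 10)]
best_alpha = min(grid, key=lambda a: ses_errors(demand, a))
print("best alpha:", best_alpha,
      "MAE:", round(ses_errors(demand, best_alpha), 2))
```

The same pattern (define an error function, search a parameter grid) scales to ARIMA orders or KNN neighborhood sizes, which is why the grid-search step dominates the effectiveness of every model compared.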