205 research outputs found

    Artificial intelligence for throughput bottleneck analysis – State-of-the-art and future directions

    Identifying, and eventually eliminating, throughput bottlenecks is a key means of increasing throughput and productivity in production systems. In the real world, however, eliminating throughput bottlenecks is challenging, owing to complex factory dynamics with several hundred machines operating at any given time. Academic researchers have tried to develop tools to help identify and eliminate throughput bottlenecks. Historically, research efforts focused on developing analytical and discrete-event simulation modelling approaches to identify throughput bottlenecks in production systems. However, with the rise of industrial digitalisation and artificial intelligence (AI), academic researchers have explored different ways in which AI might be used to eliminate throughput bottlenecks, based on the vast amounts of digital shop-floor data. By conducting a systematic literature review, this paper presents state-of-the-art research efforts into the use of AI for throughput bottleneck analysis. To make academic AI solutions more accessible to practitioners, the research efforts are classified into four categories: (1) identify, (2) diagnose, (3) predict and (4) prescribe. This classification was inspired by real-world throughput bottleneck management practice. The identify and diagnose categories focus on analysing historical throughput bottlenecks, whereas predict and prescribe focus on analysing future throughput bottlenecks. The paper also provides future research topics and practical recommendations which may help to further push the boundaries of the theoretical and practical use of AI in throughput bottleneck analysis.
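    One simple identification heuristic in this family (a minimal sketch, not necessarily a method surveyed in the paper) is the active-period idea: the machine that is busy for the largest share of observed time is flagged as the throughput bottleneck. Machine names and numbers below are illustrative.

```python
# Utilization-based bottleneck identification sketch: flag the machine
# with the highest active-period share. Data are illustrative.

def identify_bottleneck(active_time, observed_time):
    """Return (bottleneck machine, active share per machine)."""
    shares = {m: t / observed_time for m, t in active_time.items()}
    bottleneck = max(shares, key=shares.get)
    return bottleneck, shares

active = {"M1": 540.0, "M2": 710.0, "M3": 655.0}  # minutes active
machine, shares = identify_bottleneck(active, observed_time=720.0)
print(machine)  # M2 has the largest active share in this example
```

    Real bottleneck detection on shop-floor data would add shifting-bottleneck logic and noise handling; this only illustrates the core comparison.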

    Identifying cause-and-effect relationships of manufacturing errors using sequence-to-sequence learning

    In car-body production, the pre-formed sheet-metal parts of the body are assembled on fully automated production lines. The body passes through multiple stations in succession and is processed according to the order requirements. The timely completion of orders depends on the individual station-based operations concluding within their scheduled cycle times. If an error occurs in one station, it can have a knock-on effect, resulting in delays at downstream stations. To the best of our knowledge, no methods exist for automatically distinguishing between source and knock-on errors in this setting, or for establishing a causal relation between them. Utilizing real-time condition information collected by a production data acquisition system, we propose a novel vehicle manufacturing analysis system which uses deep learning to establish a link between source and knock-on errors. We benchmark three sequence-to-sequence models and introduce a novel composite time-weighted action metric for evaluating models in this context. We evaluate our framework on a real-world car production dataset recorded by Volkswagen Commercial Vehicles. Surprisingly, we find that 71.68% of sequences contain either a source or a knock-on error. With respect to seq2seq model training, we find that the Transformer demonstrates better performance than the LSTM and GRU in this domain, in particular when the prediction range with respect to the durations of future actions is increased.
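    The source/knock-on distinction itself can be illustrated with a deliberately simple rule on a serial line (a hypothetical sketch, not the paper's seq2seq method): the first station to overrun its scheduled cycle time in a pass is the source error, and later overruns in the same pass are knock-on errors. Field names and values are assumptions.

```python
# Toy labelling of source vs. knock-on errors on a serial line.
# events: list of (station, scheduled_cycle_s, actual_duration_s).

def label_errors(events):
    labels, source_seen = [], False
    for station, scheduled, actual in events:
        if actual > scheduled:
            # first overrun is the source; later ones are knock-on
            labels.append((station, "knock-on" if source_seen else "source"))
            source_seen = True
        else:
            labels.append((station, "ok"))
    return labels

seq = [("S1", 60, 58), ("S2", 60, 75), ("S3", 60, 71), ("S4", 60, 59)]
print(label_errors(seq))
```

    The paper's contribution is precisely that such a fixed rule is insufficient on real data, which is why learned sequence models are benchmarked instead.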

    Systematic Comparison of Software Agents and Digital Twins: Differences, Similarities, and Synergies in Industrial Production

    To achieve a highly agile and flexible production, it is envisioned that industrial production systems gradually become more decentralized, interconnected, and intelligent. Within this vision, production assets collaborate with each other, exhibiting a high degree of autonomy. Furthermore, knowledge about individual production assets is readily available throughout their entire life-cycles. To realize this vision, adequate use of information technology is required. Two commonly applied software paradigms in this context are Software Agents (referred to as Agents) and Digital Twins (DTs). This work presents a systematic comparison of Agents and DTs in industrial applications. The goal of the study is to determine the differences, similarities, and potential synergies between the two paradigms. The comparison is based on the purposes for which Agents and DTs are applied, the properties and capabilities exhibited by these software paradigms, and how they can be allocated within the Reference Architecture Model Industry 4.0. The comparison reveals that Agents are commonly employed in the collaborative planning and execution of production processes, while DTs typically play a more passive role in monitoring production resources and processing information. Although these observations imply characteristic sets of capabilities and properties for both Agents and DTs, a clear and definitive distinction between the two paradigms cannot be made. Instead, the analysis indicates that production assets utilizing a combination of Agents and DTs would demonstrate high degrees of intelligence, autonomy, sociability, and fidelity. To achieve this, further standardization is required, particularly in the field of DTs. (Manuscript submitted to the Journal of Intelligent Manufacturing; corresponding dataset: https://doi.org/10.5281/zenodo.8120623)

    Analysis and Evaluation of the Impacts of Predictive Analytics on Production System Performances in the Semiconductor Industry

    Problem Statement: Predictive Analytics (PA) may effectively support semiconductor industry (SI) companies in managing the special challenges of SI value chains. To discover the implications of PA, its realistic benefits, and the limitations of its application to semiconductor manufacturing, it is necessary to assess the ways in which the application of PA affects production system (PS) performances. However, the literature survey shows that the influences of PA on the various performance characteristics of an SI PS are not clear enough for efficient operational application. Moreover, the existing performance models are not effective in predicting the impacts of PA on SI PS performances. Therefore, the overall aim of this thesis is to analyse and evaluate the impacts of PA on SI PS performances and to identify the conditions under which a PA application would generate the most significant performance improvements. The focus of this thesis is predictive maintenance (PdM). Research Methodology: Based on a post-positivist philosophy, the thesis applies a deductive research approach using mixed methods for data collection. The research design has the following stages: (1) theory, (2) hypothesis, (3) state of research, (4) case study and (5) verification. Main Achievements: (1) A systematic literature review is carried out to identify the gaps in existing research, and based on these findings a conceptual framework is proposed and developed. (2) The existing performance models are analysed and evaluated against their applicability to this study. (3) A causal loop model for SI PS is generated based on the assessment of experts with industrial engineering and equipment maintenance expertise. (4) An expert system is developed and evaluated in order to investigate transitive and contradictory effects of PdM on SI PS performances.
(5) A simulation model is developed and validated for investigating the strengths and limitations of PdM regarding SI PS performances under different circumstances. Results: The results of the logical inference study show that PdM has 34 positive effects as well as 4 contradictory effects on SI PS performance characteristics. Based on the various simulation experiments, it has been found that (1) 'Mean Time to Repair' decreases only if PdM supports a proportionate reduction of failures and repair times; (2) logistics performance improves only if the underlying workcenter is limited in capacity or the four partners are nonsynchronous; (3) PdM supports optimal cost decreases for workcenters where the degree of exhausting wear limits can be most effectively improved; and (4) the degree of yield improvement gained by PdM depends on the operation scrap rate. However, (5) if a workcenter has overcapacity, PdM will potentially worsen PS performances, even if the performance of the particular workcenter can be improved. These new insights advance existing knowledge in production management on adopting predictive technologies in SI PS in order to improve PS performances. The findings enable SI practitioners to justify a PdM investment and to select suitable workcenters in order to improve SI PS performances by applying the proposed PdM. Contributions: The main contributions of this PhD project can be divided into theoretical work and practical application. The contributions from the theoretical perspective are: (1) a critical review and evaluation of the state of research on PA in the context of semiconductor manufacturing and of the models for predicting and evaluating SI PS performances; (2) a new framework for investigating the implications of PA for challenges such as achieving high utilization and controlling variability in the production processes of SI value chains;
(3) new knowledge about transitive and contradictory effects of PdM on SI PS performances, which indicates that PdM can be used to improve PS performances beyond a single machine; and (4) new knowledge about the strengths and limitations of PdM for improving SI PS performances under particular circumstances. The contributions from the practical application perspective are: (1) a practical method for identifying workcenters where PdM delivers the most significant benefits for SI PS performances; (2) an expert system that provides a comprehensive knowledge base about causes and effects within SI PS in order to justify a PdM investment; and (3) a concise review of important PA applications, their capabilities for wafer fabrication, and the best-suited PA methods. These findings can be adopted by SI practitioners.
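    The first result, that 'Mean Time to Repair' falls only under a proportionate reduction of failures and repair times, follows from simple arithmetic: MTTR is total repair time divided by the number of failures, so if PdM prevents mainly the short, easy stoppages, the surviving failures push the average up. The numbers below are illustrative, not from the thesis.

```python
# Arithmetic sketch: MTTR = total repair time / number of failures.
# If PdM removes only the short repairs, MTTR can increase even
# though total downtime falls.

def mttr(repair_times):
    return sum(repair_times) / len(repair_times)

baseline = [1.0, 1.0, 1.0, 6.0]   # hours per repair event
after_pdm = [6.0]                 # PdM prevented the three short stops
print(mttr(baseline), mttr(after_pdm))  # 2.25 vs. 6.0: MTTR rises
```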

    Uses and applications of artificial intelligence in manufacturing

    The purpose of this thesis is to provide engineers and personnel with an overview of the concepts that underlie Artificial Intelligence and Expert Systems. Artificial Intelligence is concerned with the development of theories and techniques required to provide a computational engine with the ability to perceive, think and act in an intelligent manner in a complex environment. An expert system is a branch of Artificial Intelligence in which the methods of reasoning emulate those of human experts. Artificial Intelligence derives its power from its ability to represent complex forms of knowledge, some of it common-sense, heuristic and symbolic, and from its ability to apply that knowledge in searching for solutions. The thesis reviews: the components of an intelligent system; the basics of knowledge representation; search-based problem-solving methods; expert system technologies; and uses and applications of AI in manufacturing areas such as design, process planning, production management, energy management, quality assurance, manufacturing simulation, robotics and machine vision. The prime objectives of the thesis are to explain the basic concepts underlying Artificial Intelligence and to identify where the technology may be applied in the field of Manufacturing Engineering.
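    The expert-system style of reasoning described here can be sketched as a tiny forward-chaining rule engine: rules fire whenever their premises are among the known facts, adding conclusions until nothing new can be derived. The rules and facts below are invented examples, not from the thesis.

```python
# Toy forward-chaining inference engine: fires rules to a fixpoint.

def forward_chain(facts, rules):
    """rules: list of (frozenset of premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires, fact base grows
                changed = True
    return facts

rules = [
    (frozenset({"vibration_high", "temp_high"}), "bearing_wear"),
    (frozenset({"bearing_wear"}), "schedule_maintenance"),
]
print(forward_chain({"vibration_high", "temp_high"}, rules))
```

    Production expert-system shells add conflict resolution, backward chaining and explanation facilities on top of this basic loop.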

    Analysis of production control methods for semiconductor research and development fabs using simulation

    The importance of semiconductor device fabrication has been rising steadily for many years. Integrated circuit technology and innovation depend on successful research and development (R&D). R&D establishes the direction for prevailing technology in electronics and computers. To be a leader in the semiconductor industry, a company must bring technology to the market as soon as its application is deemed feasible. Using suitable production control methods for wafer fabrication in R&D fabs ensures reductions in cycle times and planned inventories, which in turn help to transfer new technology more quickly to the production fabs, where products are made on a commercial scale. This helps to minimize time to market. The complex behavior of research fabs produces varying results when conventional production control methodologies are applied. Simulation modeling allows the behavior of a research fab to be studied by providing statistical reports on performance measures. The goal of this research is to investigate production control methods in semiconductor R&D fabs. A representative R&D fab is modeled, and an appropriate production load is applied to the fab by using a representative product load. Simulation models are run with different levels of production volume, lot priorities, primary and secondary dispatching strategies, and due-date tightness as treatment combinations in a formally designed experiment. Fab performance is evaluated on four performance measures: percent on-time delivery, average cycle time, standard deviation of cycle time and average work-in-process. Statistical analyses are used to determine the best-performing dispatching rules for given fab operating scenarios. Results indicate that the optimal combination of dispatching rules depends on specific fab characteristics. However, several dispatching rules are found to be robust across performance measures.
    A simulation study of the Semiconductor & Microsystems Fabrication Laboratory (SMFL) at the Rochester Institute of Technology (RIT) is used to verify the results.
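    The dispatching-rule comparison at the heart of such experiments can be sketched on a single machine (a minimal stand-in for a full fab simulation): sequence the same lot set under FIFO and earliest-due-date (EDD) and compare on-time delivery. Lot data are illustrative.

```python
# Single-machine sketch comparing FIFO and EDD dispatching on the
# fraction of lots finished by their due dates.

def on_time_fraction(lots, key):
    """lots: list of (arrival_order, processing_time, due_date)."""
    t, on_time = 0.0, 0
    for lot in sorted(lots, key=key):
        t += lot[1]                  # serial processing
        if t <= lot[2]:
            on_time += 1
    return on_time / len(lots)

lots = [(0, 5.0, 6.0), (1, 2.0, 14.0), (2, 4.0, 9.0)]
fifo = on_time_fraction(lots, key=lambda l: l[0])  # by arrival order
edd = on_time_fraction(lots, key=lambda l: l[2])   # by due date
print(fifo, edd)  # EDD delivers all three lots on time here
```

    A fab-scale study replaces the single machine with hundreds of toolgroups and re-entrant flows, which is why the best rule combination becomes fab-specific.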

    Elastic computation placement in edge-based environments

    Today, technologies such as machine learning, virtual reality, and the Internet of Things are integrated into end-user applications ever more frequently. These technologies demand high computational capabilities. Mobile devices in particular have limited resources in terms of execution performance and battery life. The offloading paradigm provides a solution to this problem and transfers computationally intensive parts of applications to more powerful resources, such as servers or cloud infrastructure. Recently, a new computation paradigm has emerged which exploits the huge number of end-user devices in the modern computing landscape, called edge computing. These devices encompass smartphones, tablets, microcontrollers, and PCs. In edge computing, devices cooperate with each other while avoiding cloud infrastructure. Due to the proximity among the participating devices, the communication latencies for offloading are reduced. However, edge computing brings new challenges in the form of device fluctuation, unreliability, and heterogeneity, which negatively affect resource elasticity. As a solution, this thesis proposes a computation placement framework that provides an abstraction for computation and resource elasticity in edge-based environments. The design is middleware-based, encompasses heterogeneous platforms, and supports easy integration of existing applications. It is composed of two parts: the Tasklet system and the edge support layer. The Tasklet system is a flexible framework for computation placement on heterogeneous resources. It introduces closed units of computation that can be tailored to generic applications. The edge support layer handles the characteristics of edge resources. It copes with fluctuation and unreliability by applying reactive and proactive task migration. Furthermore, performance heterogeneity and the consequent bottlenecks are handled by two edge-specific task partitioning approaches.
    As a proof of concept, the thesis presents a fully fledged prototype of the design, which is evaluated comprehensively in a real-world testbed. The evaluation shows that the design substantially improves resource elasticity in edge-based environments.
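    The basic placement decision behind offloading can be sketched as a simple time comparison (an illustrative model, not the Tasklet system's actual scheduler): ship a unit of computation to an edge resource only if the estimated transfer plus remote execution time beats local execution. All parameters and numbers are assumptions.

```python
# Sketch of an offloading decision: local execution vs. transfer to
# an edge resource plus remote execution. Values are illustrative.

def choose_placement(local_s, payload_mb, bandwidth_mbps, remote_s):
    transfer_s = payload_mb * 8 / bandwidth_mbps  # MB -> Mbit
    return "edge" if transfer_s + remote_s < local_s else "local"

# 4 MB task over an 80 Mbit/s link: 0.4 s transfer + 0.5 s remote
# beats 2.0 s local execution.
print(choose_placement(local_s=2.0, payload_mb=4, bandwidth_mbps=80, remote_s=0.5))
```

    Device fluctuation and unreliability, the challenges the edge support layer targets, enter in practice as uncertainty on the `remote_s` and `bandwidth_mbps` estimates.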

    Decentralized Scheduling Using The Multi-Agent System Approach For Smart Manufacturing Systems: Investigation And Design

    The advent of Industry 4.0 has resulted in increased availability, velocity, and volume of data, as well as increased data-processing capabilities. There is a need to determine how best to incorporate these advancements to improve the performance of manufacturing systems. The purpose of this research is to present a solution for incorporating Industry 4.0 into manufacturing systems. It focuses on how such a system would operate, how to select resources for the system, and how to configure the system. Our proposed solution is a smart manufacturing system that operates as a self-coordinating system. It utilizes a multi-agent system (MAS) approach, in which individual entities within the system have the autonomy to make dynamic scheduling decisions in real time. In our simulation experiments, this solution outperformed alternative scheduling strategies (right shifting and a dispatching priority rule) in manufacturing environments subject to uncertainty. The second phase of our research focused on system design. This phase involved developing models for two problems: (1) resource selection and (2) layout configuration. Both models use simulation-based optimization. We first present a model for determining machine resources using a genetic algorithm (GA). This model yielded results comparable to an exhaustive search whilst significantly reducing the number of experiments required to find the solution. To address layout configuration, we developed a model that combines hierarchical clustering and a GA. Our numerical experiments demonstrated that the hybrid layouts derived from the model result in shorter and less variable order completion times compared to alternative layout configurations. Overall, our research showed that MAS-based scheduling can outperform alternative dynamic scheduling approaches in manufacturing environments subject to uncertainty.
    We also show that this performance can be further improved through optimal resource selection and layout design.
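    A common pattern for such decentralized MAS scheduling is a contract-net-style auction (a hypothetical sketch of the general idea, not the dissertation's exact protocol): each machine agent bids its earliest completion time for an arriving job, and the job is awarded to the lowest bidder. Names and processing times are invented.

```python
# Contract-net-style sketch: machine agents bid earliest completion
# times for incoming jobs; lowest bid wins and queues the job.

class MachineAgent:
    def __init__(self, name):
        self.name, self.free_at = name, 0.0

    def bid(self, processing_s):
        return self.free_at + processing_s   # earliest completion time

    def award(self, processing_s):
        self.free_at += processing_s         # job joins this queue

def dispatch(job_s, agents):
    winner = min(agents, key=lambda a: a.bid(job_s))
    winner.award(job_s)
    return winner.name

agents = [MachineAgent("M1"), MachineAgent("M2")]
print([dispatch(p, agents) for p in (4.0, 3.0, 2.0)])
```

    Because each award updates only the winning agent's local state, scheduling decisions remain decentralized and react immediately to disturbances such as a machine's `free_at` jumping after a breakdown.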

    Planning and control of autonomous mobile robots for intralogistics: Literature review and research agenda

    Autonomous mobile robots (AMRs) are currently being introduced in many intralogistics operations, such as manufacturing, warehousing, cross-docks, terminals, and hospitals. Their advanced hardware and control software allow autonomous operation in dynamic environments. Compared to an automated guided vehicle (AGV) system, in which a central unit takes control of scheduling, routing, and dispatching decisions for all AGVs, AMRs can communicate and negotiate independently with other resources, such as machines and systems, and thus decentralize the decision-making process. Decentralized decision-making allows the system to react dynamically to changes in the system state and environment. These developments have influenced the traditional methods and decision-making processes for planning and control. This study identifies and classifies research related to the planning and control of AMRs in intralogistics. We provide an extended literature review that highlights how AMR technological advances affect planning and control decisions. We contribute to the literature by introducing an AMR planning and control framework