
    Design and discrete event simulation of power and free handling systems

    Effective manufacturing systems design and implementation have become increasingly critical with the reduction in manufacturing product lead times and the consequent influence on engineering projects. Tools and methodologies that assist the design team must be both manageable and efficient to be successful. Modelling, whether with analytical and mathematical models or with computer-assisted simulations, is used to accomplish design objectives. This thesis reviews the use of analytical and discrete event computer simulation models applied to the design of automated power and free handling systems, using actual case studies to create and support a practical approach to the design and implementation of these types of systems. The IDEF process mapping approach is used to encompass these design tools and system requirements, and to recommend a generic process methodology for power and free systems design. The case studies consisted of three actual installations within the Philips Components Ltd (PCL) facility in Durham, a manufacturer of television tubes. Power and free conveyor systems at PCL have assumed functions beyond those of standard conveyor systems, ranging from stock handling and buffering to type sorting and flexible product routing. To meet the demands of this flexible manufacturing strategy, designing a system that can meet the production objectives is critical. Design process activities and engineering considerations for the three projects were reviewed and evaluated to capture the generic methodologies necessary for future design success. Further, the studies were intended to identify both general and specific criteria for simulating power and free conveyor handling systems, and the ingredients necessary for successful discrete event simulation.
The automated handling systems were used to prove certain aspects of building, using, and analysing simulation models in relation to their anticipated benefits, including an evaluation of the factors necessary to ensure their realisation. While there exists a multitude of designs for power and free conveyor systems, based on user requirements and proprietary equipment technology, the principles of designing and implementing a system can remain generic. Although specific technology can influence detailed design, a common, consistent approach to design activities was a proven requirement in all cases. Additionally, it was observed that no single design tool was sufficient to ensure maximum system success. A combination of analytical and simulation methods was necessary to adequately optimise the systems studied, given unique and varying project constraints. It followed that the level of application of the two approaches was directly dependent on the initial engineering project objectives and on the ability to accurately identify system requirements.

    Data-driven algorithm for throughput bottleneck analysis of production systems

    The digital transformation of manufacturing industries is expected to yield increased productivity. Companies collect large volumes of real-time machine data and are seeking new ways to use it in furthering data-driven decision making. A challenge for these companies is identifying throughput bottlenecks using the real-time machine data they collect. This paper proposes a data-driven algorithm to better identify bottleneck groups and provide diagnostic insights. The algorithm is based on the active period theory of throughput bottleneck analysis. It integrates available manufacturing execution systems (MES) data from the machines and tests the statistical significance of any bottlenecks detected. The algorithm can be automated to allow data-driven decision making on the shop floor, thus improving throughput. Real-world MES datasets were used to develop and test the algorithm, producing research outcomes useful to manufacturing industries. This research pushes standards in throughput bottleneck analysis, using an interdisciplinary approach based on production and data sciences.
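The core of the active-period idea can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: it assumes a hypothetical MES trace per machine as `(state, duration)` entries, collapses consecutive non-waiting entries into uninterrupted active periods, and flags the machine with the longest average active period as the likely bottleneck. The machine names, states, and numbers are invented, and the statistical significance test the paper describes is omitted.

```python
# Hedged sketch of active-period bottleneck detection (illustrative only).
# A machine is "active" whenever it is not waiting (i.e. not starved/blocked).

def active_periods(log):
    """Collapse consecutive non-waiting entries into uninterrupted active periods."""
    periods, current = [], 0.0
    for state, duration in log:
        if state != "waiting":
            current += duration
        elif current > 0:
            periods.append(current)
            current = 0.0
    if current > 0:
        periods.append(current)
    return periods

def bottleneck(machine_logs):
    """The machine with the longest average active period is the likely bottleneck."""
    averages = {}
    for name, log in machine_logs.items():
        periods = active_periods(log)
        if periods:
            averages[name] = sum(periods) / len(periods)
    return max(averages, key=averages.get)

# Invented example traces for three machines:
logs = {
    "M1": [("busy", 5), ("waiting", 2), ("busy", 4), ("waiting", 1)],
    "M2": [("busy", 9), ("busy", 3), ("waiting", 1), ("busy", 2)],
    "M3": [("busy", 2), ("waiting", 5), ("busy", 3), ("waiting", 2)],
}
print(bottleneck(logs))  # M2: average active period of 7 vs 4.5 and 2.5
```

A production version would read these states from the MES and, as the paper does, test whether the differences between the machines' active-period distributions are statistically significant before declaring a bottleneck.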

    An algorithm for data-driven shifting bottleneck detection

    Manufacturing companies continuously capture shop floor information using sensor technologies, Manufacturing Execution Systems (MES), and Enterprise Resource Planning (ERP) systems. The volumes of data collected by these technologies are growing, and the pace of that growth is accelerating. Manufacturing data is constantly changing but immediately relevant, and collecting and analysing it on a real-time basis can lead to increased productivity. In particular, prioritising improvement activities on bottleneck machines, such as cycle time improvement, setup time reduction, and maintenance, is an important part of the operations management process on the shop floor. The first step in that process is the identification of bottlenecks. This paper introduces a purely data-driven shifting bottleneck detection algorithm that identifies bottlenecks from the real-time machine data captured by MES. The developed algorithm detects the momentary bottleneck at any given time, as well as the average bottlenecks and the non-bottlenecks over a time interval. The algorithm has been tested on real-world MES data sets from two manufacturing companies, identifying the potential and the prerequisites of the data-driven method. The main prerequisite is that all states of the machines be monitored by the MES during the production run.
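The "shifting" aspect can be illustrated with a toy momentary-bottleneck check. This sketch is an assumption about the general technique, not the paper's implementation: each machine is represented by invented `(start, end)` active intervals, and the momentary bottleneck at time `t` is taken to be the machine whose active interval covering `t` is the longest.

```python
# Illustrative momentary-bottleneck detection over active intervals (toy data).

def momentary_bottleneck(active_intervals, t):
    """Return the machine whose active interval covering t is longest, or None."""
    best_name, best_len = None, 0.0
    for name, intervals in active_intervals.items():
        for start, end in intervals:
            if start <= t < end and (end - start) > best_len:
                best_name, best_len = name, end - start
    return best_name

# Invented active intervals for two machines:
intervals = {
    "M1": [(0, 4), (6, 15)],
    "M2": [(0, 10), (12, 14)],
}
print(momentary_bottleneck(intervals, 2))   # M2: interval of length 10 vs 4
print(momentary_bottleneck(intervals, 13))  # M1: the bottleneck has shifted
```

Repeating this check over a time interval yields the sequence of momentary bottlenecks; averaging time spent as the momentary bottleneck then separates the average bottlenecks from the non-bottlenecks, in the spirit of the algorithm described above.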

    IMPROVING OF MANUFACTURING PRODUCTIVITY THROUGH SIMULATION

    Improvement of a manufacturing system is a necessary process, driven by developments in manufacturing technology and growing customer needs. To survive the competition, companies must improve their current systems. This study analyses overall productivity and identifies the critical process considered to be the bottleneck. It also quantifies the impact of batch capacity on manufacturing productivity. Computer-aided simulation software is used as the main method: data from the manufacturing system are collected and used as input to the simulation. Altering parameters such as machine quantity and batch size allows the final output to be studied, and reduces the time needed to trial a new design, since the simulation reflects real system behaviour and performance. Simulation can be applied at both the justification phase and the design phase. Using this method, critical areas of the manufacturing system can be identified and several solutions explored under different scenarios.
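The batch-capacity effect the study quantifies by simulation can be previewed with a toy analytical model (an assumption for illustration, not the study's model): if each batch pays a fixed setup time before processing, throughput rises with batch size as the setup is amortised over more parts. All numbers below are invented.

```python
# Toy model of batch size vs throughput with a fixed per-batch setup time.

def throughput(batch_size, setup_time, cycle_time):
    """Parts per time unit when each batch incurs setup_time once."""
    batch_duration = setup_time + batch_size * cycle_time
    return batch_size / batch_duration

# Invented parameters: 10 time units of setup, 2 time units per part.
for size in (1, 5, 20):
    print(size, round(throughput(size, setup_time=10.0, cycle_time=2.0), 3))
```

A discrete event simulation adds what this closed form cannot: variability, blocking between stations, and finite buffers, which is why the study uses simulation software rather than a formula like this one.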

    Characterising Enterprise Application Integration Solutions as Discrete-Event Systems

    It is not difficult to find an enterprise whose software ecosystem is composed of applications built using different technologies, data models, and operating systems, which most often were not designed to exchange data and share functionality. Enterprise Application Integration provides methodologies and tools to design and implement integration solutions. State-of-the-art integration technologies provide a domain-specific language that enables the design of conceptual models for integration solutions. Analysing integration solutions to predict their behaviour and find possible performance bottlenecks is an important activity that contributes to the quality of the delivered solutions; however, software engineers currently follow a costly, risky, and time-consuming approach. Integration solutions can be understood as discrete-event systems. This chapter introduces a new approach based on simulation that takes advantage of well-established techniques and tools for discrete-event simulation, cutting down the cost, risk, and time needed to deliver better integration solutions.
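The discrete-event view the chapter takes can be made concrete with a minimal event-loop sketch (a generic illustration, not the chapter's tooling): the steps of an integration flow become timestamped events processed in time order from a priority queue. The event names are invented.

```python
# Minimal discrete-event loop: pop (time, name) events in time order.
import heapq

def run(events, horizon):
    """Process scheduled (time, name) events in time order up to the horizon."""
    heapq.heapify(events)
    trace = []
    while events:
        time, name = heapq.heappop(events)
        if time > horizon:
            break
        trace.append((time, name))
    return trace

# Invented events for a toy integration flow: receive, transform, route.
trace = run([(3, "route"), (1, "receive"), (2, "transform")], horizon=10)
print(trace)  # events replayed in time order regardless of scheduling order
```

In a real discrete-event simulator each processed event would also schedule follow-up events (e.g. a message leaving a transform step enters a router's queue), which is how queueing delays and bottlenecks in an integration solution emerge from the model.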

    Sequence-Based Simulation-Optimization Framework With Application to Port Operations at Multimodal Container Terminals

    It is evident in previous works that operations research and mathematical algorithms can provide optimal or near-optimal solutions, whereas simulation models can aid in predicting and studying the behavior of systems over time and monitor performance under stochastic and uncertain circumstances. Given the intensive computational effort that simulation optimization methods impose, especially for large and complex systems like container terminals, a favorable approach is to reduce the search space to decrease the amount of computation. A maritime port can consist of multiple terminals with specific functionalities and specialized equipment. A container terminal is one of several facilities in a port that involves numerous resources and entities. It is also where containers are stored and transported, making the container terminal a complex system. Problems such as berth allocation, quay and yard crane scheduling and assignment, storage yard layout configuration, container re-handling, customs and security, and risk analysis become particularly challenging. Discrete-event simulation (DES) models are typically developed for complex and stochastic systems such as container terminals to study their behavior under different scenarios and circumstances. Simulation-optimization methods have emerged as an approach to find optimal values for input variables that maximize certain output metric(s) of the simulation. Various traditional and nontraditional approaches of simulation-optimization continue to be used to aid in decision making. In this dissertation, a novel framework for simulation-optimization is developed, implemented, and validated to study the influence of using a sequence (ordering) of decision variables (resource levels) for simulation-based optimization in resource allocation problems. This approach aims to reduce the computational effort of optimizing large simulations by breaking the simulation-optimization problem into stages. 
Since container terminals are complex stochastic systems consisting of different areas with detailed and critical functions that may affect the output, a platform that accurately simulates such a system can be of significant analytical benefit. To implement and validate the developed framework, a large-scale complex container terminal discrete-event simulation model was developed and validated based on a real system, and then used as a testing platform for the various hypothesized algorithms studied in this work.
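The staged idea of optimising a sequence of resource levels can be sketched as follows. This is an assumed, simplified rendering of the general approach, not the dissertation's framework: resources are fixed one at a time in a given sequence, freezing each choice before moving on, and `simulate` is a toy deterministic stand-in for a stochastic terminal simulation. Resource names and the objective are invented.

```python
# Hedged sketch: staged (sequence-based) optimisation of resource levels.

def simulate(levels):
    """Toy objective: diminishing returns per resource minus a linear cost.
    A stand-in for running the terminal simulation, not a real model."""
    return sum(10.0 * (1 - 0.5 ** n) - 1.5 * n for n in levels.values())

def staged_optimise(sequence, candidates, baseline):
    """Optimise one resource at a time, in order, keeping earlier choices frozen."""
    levels = dict(baseline)
    for resource in sequence:  # one optimisation stage per decision variable
        levels[resource] = max(
            candidates, key=lambda n: simulate({**levels, resource: n}))
    return levels

base = {"berths": 1, "quay_cranes": 1, "yard_trucks": 1}
plan = staged_optimise(["berths", "quay_cranes", "yard_trucks"], range(1, 6), base)
print(plan)
```

The computational saving is the point: with 3 resources and 5 candidate levels each, the staged search runs 15 simulations instead of the 125 an exhaustive search would need; the risk, which the dissertation's framework addresses, is that fixing variables in sequence can miss interactions between resources.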

    Efficient Data Streaming Analytic Designs for Parallel and Distributed Processing

    Today, ubiquitous sensing technologies enable the inter-connection of physical objects, as part of the Internet of Things (IoT), and provide massive amounts of data streams. In such scenarios, the demand for timely analysis has resulted in a shift of data processing paradigms towards continuous, parallel, and multi-tier computing. However, these paradigms are accompanied by several challenges, especially regarding analysis speed, precision, costs, and deterministic execution. This thesis studies a number of such challenges to enable efficient continuous processing of streams of data in a decentralized and timely manner. In the first part of the thesis, we investigate techniques aiming at speeding up the processing without a loss in precision. The focus is on continuous machine learning/data mining types of problems, appearing commonly in IoT applications, and in particular continuous clustering and monitoring, for which we present novel algorithms: (i) Lisco, a sequential algorithm to cluster data points collected by LiDAR (a distance sensor that creates a 3D mapping of the environment); (ii) p-Lisco, the parallel version of Lisco, enhancing its pipeline- and data-parallelism; (iii) pi-Lisco, the parallel and incremental version, which reuses information and prevents redundant computations; (iv) g-Lisco, a generalized version of Lisco to cluster any data with spatio-temporal locality by leveraging the implicit ordering of the data; and (v) Amble, a continuous monitoring solution for an industrial process. In the second part, we investigate techniques to reduce analysis costs in addition to speeding up the processing, while also supporting deterministic execution. The focus is on problems associated with the availability and utilization of computing resources, namely reducing the volumes of data, involving concurrent computing elements, and adjusting the level of concurrency.
For that, we propose three frameworks: (i) DRIVEN, a framework to continuously compress the data and enable efficient transmission of the compact data in the processing pipeline; (ii) STRATUM, a framework to continuously pre-process the data before transferring the latter to upper tiers for further processing; and (iii) STRETCH, a framework to enable instantaneous elastic reconfigurations that adjust intra-node resources at runtime while ensuring determinism. The algorithms and frameworks presented in this thesis contribute to the efficient processing of data streams in an online manner while utilizing available resources. Using extensive evaluations, we show the efficiency and achievements of the proposed techniques for representative IoT applications that involve a wide spectrum of platforms, and illustrate that the performance of our work exceeds that of state-of-the-art techniques.
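The incremental-reuse principle behind contributions like pi-Lisco can be illustrated at toy scale (this example is an invented analogy, not any of the thesis's algorithms): a sliding-window statistic is updated in O(1) per new element by reusing the previous result, instead of being recomputed from scratch over the whole window.

```python
# Toy illustration of incremental stream processing with reuse of prior work.

def sliding_means(stream, window):
    """Means over each full sliding window, maintained incrementally."""
    total, buf, means = 0.0, [], []
    for x in stream:
        buf.append(x)
        total += x
        if len(buf) > window:
            total -= buf.pop(0)   # reuse the previous sum: O(1) per update
        if len(buf) == window:
            means.append(total / window)
    return means

print(sliding_means([1, 2, 3, 4, 5], window=3))
```

The same reuse-over-recompute trade-off, applied to clustering state rather than a running sum, is what makes incremental stream algorithms fast enough for continuous, timely analysis.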

    Manufacturing Management and Decision Support using Simulation-based Multi-Objective Optimisation

    A majority of the established automotive manufacturers are under severe competitive pressure, and their long-term economic sustainability is threatened. In particular, the transformation towards more CO2-efficient energy sources is a huge financial burden for an already capital-intensive industry. In addition, existing operations urgently need rapid improvement, and even more critical is the development of highly productive, efficient, and sustainable manufacturing solutions for new and updated products. At the same time, a number of severe drawbacks of current improvement methods for industrial production systems have been identified. In summary: variation is not considered sufficiently by current analysis methods; the tools used do not reveal enough knowledge to support decisions; procedures for finding optimal solutions are not considered; and although information about bottlenecks is often required, no accurate methods for identifying bottlenecks are used in practice, because they do not normally generate any improvement actions. Current methods follow a trial-and-error pattern instead of a proactive approach. Decisions are often made directly on the basis of raw, static, historical data, without an awareness of optimal alternatives and their effects. These issues are likely to lead to inadequate production solutions, low effectiveness, and high costs, resulting in poor competitiveness. To address the shortcomings of existing methods, a methodology and framework for manufacturing management decision support using simulation-based multi-objective optimisation is proposed. The framework incorporates modelling and the optimisation of production systems, costs, and sustainability. Decision support is created through the extraction of knowledge from optimised data. A novel method and algorithm for the detection of constraints and bottlenecks is proposed as part of the framework.
This enables optimal improvement activities, ranked in order of importance, to be sought. The new method can achieve a higher improvement rate than the well-established shifting bottleneck technique when applied to industrial improvement situations. A number of “laboratory” experiments and real-world industrial applications have been conducted in order to explore, develop, and verify the proposed framework. The identified gaps can be addressed with the proposed methodology. By using simulation-based methods, stochastic behaviour and variability are taken into account, and knowledge for the creation of decision support is gathered through post-optimality analysis. Several conflicting objectives can be considered simultaneously through the application of multi-objective optimisation, while objectives related to running cost, investments, and other sustainability parameters can be included through the new cost and sustainability models introduced. Experiments and tests have been undertaken and have shown that the proposed framework can assist in the creation of manufacturing management decision support, and that such a methodology can contribute significantly to regaining profitability when applied within the automotive industry. It can be concluded that a proof-of-concept has been rigorously established for the application of the proposed framework to real-world industrial decision-making in a manufacturing management context. Volvo Car Corporation, Sweden; University of Skövde, Sweden.
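The post-optimality step that turns optimised data into decision support can be pictured with a small Pareto filter (an illustrative sketch of the general multi-objective idea, not the thesis's framework): of the simulated outcomes for candidate configurations, only the non-dominated ones are kept for the decision maker. The configuration names and numbers are invented.

```python
# Toy Pareto-front extraction: maximise throughput, minimise cost.

def pareto_front(outcomes):
    """Keep configurations not dominated on (throughput up, cost down)."""
    front = {}
    for name, (tp, cost) in outcomes.items():
        dominated = any(
            tp2 >= tp and cost2 <= cost and (tp2 > tp or cost2 < cost)
            for tp2, cost2 in outcomes.values())
        if not dominated:
            front[name] = (tp, cost)
    return front

# Invented simulated outcomes: configuration -> (throughput, running cost).
outcomes = {"A": (100, 50), "B": (120, 80), "C": (90, 90), "D": (120, 60)}
print(sorted(pareto_front(outcomes)))  # B and C are dominated and dropped
```

Presenting only the trade-off front, rather than a single "optimal" number, is what lets conflicting objectives such as throughput, investment, and sustainability be weighed explicitly in a management decision.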

    Constraint analysis and throughput improvement at an automotive assembly plant

    Thesis (M.B.A.), Massachusetts Institute of Technology, Sloan School of Management; and (S.M.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2003. Includes bibliographical references (leaves 93-94). By José Leoncio Valdés R., S.M., M.B.A.

    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems); however, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.