26 research outputs found

    Framework for the usage of data from real-time indoor localization systems to derive inputs for manufacturing simulation

    Discrete event simulation is becoming increasingly important in the planning and operation of complex manufacturing systems. A major problem with today’s approach to manufacturing simulation studies is the collection and processing of data from heterogeneous sources, because the data are often of poor quality and do not contain all the information a simulation needs. This work introduces a framework that uses a real-time indoor localization system (RTILS) as the central data harmonizer, designed to feed production data into a manufacturing simulation from a single source of truth. It is shown, based on different data quality dimensions, how this contributes to better overall data quality in manufacturing simulation. Furthermore, a detailed overview of which simulation inputs can be derived from the RTILS data is given.
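
    The kind of derivation the framework enables can be illustrated with a short, hedged sketch: given time-stamped position fixes from an RTILS and a zone map of the stations, per-station dwell times fall out directly and can serve as processing-time samples for the simulation. The record format, zone layout, and numbers below are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Fix:
    t: float      # timestamp in seconds
    tag: str      # tag mounted on a part carrier
    x: float
    y: float

# Hypothetical rectangular station zones: name -> (xmin, ymin, xmax, ymax)
ZONES = {"saw": (0, 0, 2, 2), "mill": (4, 0, 6, 2)}

def zone_of(fix):
    for name, (x0, y0, x1, y1) in ZONES.items():
        if x0 <= fix.x <= x1 and y0 <= fix.y <= y1:
            return name
    return None  # between stations / in transport

def dwell_times(trace):
    """Collapse one tag's time-ordered trace into per-zone dwell durations."""
    out = {z: [] for z in ZONES}
    current, entered = None, 0.0
    for fix in trace:
        z = zone_of(fix)
        if z != current:
            if current is not None:
                out[current].append(fix.t - entered)
            current, entered = z, fix.t
    return out

trace = [Fix(0, "p1", 1, 1), Fix(30, "p1", 1.5, 1), Fix(35, "p1", 3, 1),
         Fix(40, "p1", 5, 1), Fix(90, "p1", 5, 1.5), Fix(95, "p1", 7, 1)]
print(dwell_times(trace))  # {'saw': [35], 'mill': [55]}
```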

    A Methodology for Continuous Quality Assurance of Production Data

    High-quality input data are a necessity for successful Discrete Event Simulation (DES) applications, and there are established methodologies for data collection in DES projects. However, in contrast to standalone projects, using DES as a day-to-day engineering tool requires high-quality production data to be constantly available. Unfortunately, there are no detailed guidelines that describe how to achieve this. Therefore, this paper presents such a methodology, based on three concurrent engineering projects within the automotive industry. The methodology specifies the roles, responsibilities, meetings, and documents necessary to achieve continuous quality assurance of production data. It also specifies an approach to input data management for DES using the Generic Data Management Tool (GDM-Tool). The expected effects are increased availability of high-quality production data and reduced lead time of input data management, which is especially valuable in manufacturing companies that have advanced automated data collection methods and use DES on a daily basis.
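
    The paper's contribution is organizational (roles, responsibilities, meetings, documents) rather than code, but the "continuous" part implies recurring automated checks on the production data feeds. The following is a minimal sketch of such a check, with invented field names and thresholds; it is not taken from the GDM-Tool.

```python
from datetime import datetime, timedelta

def check_feed(events, now):
    """Return a list of quality issues found in one machine's event feed."""
    if not events:
        return ["no data received"]
    issues = []
    latest = max(e["timestamp"] for e in events)
    if now - latest > timedelta(hours=1):               # timeliness check
        issues.append(f"stale feed: last event at {latest.isoformat()}")
    incomplete = sum(1 for e in events if e.get("cycle_time") is None)
    if incomplete:                                      # completeness check
        issues.append(f"{incomplete} event(s) lack cycle_time")
    return issues

now = datetime(2024, 1, 1, 12, 0)
feed = [{"timestamp": datetime(2024, 1, 1, 10, 0), "cycle_time": None}]
print(check_feed(feed, now))  # flags both staleness and incompleteness
```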

    Waste reduction in production processes through simulation and VSM

    Corporate managers often face the need to choose the optimal configuration of production processes to reduce waste. Research has shown that simulation is an effective tool for supporting managers' decisions. Nevertheless, the use of simulation at the company level remains limited due to the complexity of the design phase. In this context, the Value Stream Map (VSM), a tool of the Lean philosophy, is exploited as a link between the strategic needs of management and the operational aspects of the simulation process in order to approach sustainability issues. The presented approach is divided into two main parts: first, a set of criteria for expanding the VSM is identified in order to increase the level of detail of the represented processes; then, the data categories required for the inputs and outputs of each sub-process model are defined, including environmental indicators. Specifically, an extended version of the classical VSM (X-VSM), conceived to support process simulation, is proposed: the X-VSM is used to guide the design of the simulation so that management decisions, in terms of waste reduction, can be easily evaluated. The proposal was validated on a production process of a large multinational manufacturing company.
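
    To make the idea concrete, a sub-process in an X-VSM could be encoded as a record that carries both the classical VSM data box entries and the added environmental indicators, so that a simulation model can be parameterized from it. The field names and figures below are illustrative assumptions, not the paper's actual data categories.

```python
from dataclasses import dataclass

@dataclass
class XVSMProcess:
    name: str
    cycle_time_s: float          # classical VSM data box entries
    changeover_s: float
    uptime_pct: float
    energy_kwh_per_part: float   # environmental indicator added by X-VSM
    scrap_rate: float            # waste-related output for management decisions

line = [
    XVSMProcess("stamping", 4.2, 600, 92.0, 0.8, 0.03),
    XVSMProcess("welding",  7.5, 300, 88.0, 1.4, 0.01),
]
# One waste indicator a simulation built from this map could report:
energy_waste = sum(p.energy_kwh_per_part * p.scrap_rate for p in line)
print(f"energy lost to scrap per part: {energy_waste:.3f} kWh")
```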

    Data quality problems in discrete event simulation of manufacturing operations

    High-quality input data are a necessity for successful discrete event simulation (DES) applications, and there are available methodologies for data collection in DES projects. However, in contrast to standalone projects, using DES as a daily manufacturing engineering tool requires high-quality production data to be constantly available. In fact, there has been a major shift in the application of DES in manufacturing from production system design to daily operations, accompanied by a stream of research on the automation of input data management and on interoperability between data sources and simulation models. Unfortunately, this research stream rests on the assumption that the collected data are already of high quality, and there is a lack of in-depth understanding of simulation data quality problems from a practitioner's perspective. Therefore, a multiple-case study within the automotive industry was used to provide empirical descriptions of simulation data quality problems, data production processes, and the relations between these processes and simulation data quality problems. These empirical descriptions are necessary to extend present knowledge on data quality in DES in a practical real-world manufacturing context, which is a prerequisite for developing practical solutions to data quality problems such as limited accessibility, lack of data on minor stoppages, and data sources not being designed for simulation. Further, the empirical and theoretical knowledge gained throughout the study was used to propose a set of practical guidelines that can support manufacturing companies in improving data quality in DES.
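
    One of the reported problems, the lack of data on minor stoppages, is easy to make tangible: gaps between consecutive cycle completions that clearly exceed the nominal cycle time yet appear in no stoppage log are invisible to a simulation built from that data. A toy illustration (timestamps and threshold invented):

```python
NOMINAL_CYCLE_S = 60
completions = [0, 60, 125, 300, 360]   # seconds; the 125->300 gap hides a stoppage
logged_stoppages = []                  # the data source recorded nothing

# Flag completion gaps far longer than a nominal cycle but absent from the log.
hidden = [
    (a, b) for a, b in zip(completions, completions[1:])
    if (b - a) > 1.5 * NOMINAL_CYCLE_S
]
print(hidden)  # [(125, 300)] -> 175 s unexplained, invisible to the simulation
```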

    An Integrated Framework for Automated Data Collection and Processing for Discrete Event Simulation Models

    Discrete Event Simulation (DES) is a powerful modeling and analysis tool used in different disciplines. DES models require data in order to determine the parameters that drive the simulations. The literature on DES input data management indicates that the preparation of the necessary input data is often a highly manual process, which causes inefficiencies, significant time consumption, and a negative user experience. This research addresses the manual data collection and processing (MDCAP) problem prevalent in DES projects. It presents an integrated framework that solves the MDCAP problem by classifying the data needed for DES projects into three generic classes. This classification permits automating and streamlining the preparation of the data, allowing DES modelers to collect, update, visualize, fit, validate, tally, and test data in real time by performing intuitive actions. In addition to the proposed theoretical framework, this project introduces an innovative user interface, programmed based on the ideas of the framework and called DESI, which stands for Discrete Event Simulation Inputs. The proposed integrated framework for automating DES input data preparation was evaluated against benchmark measures from the literature to show its positive impact on DES input data management. This research demonstrates that the proposed framework, instantiated by the DESI interface, addresses current gaps in the field, reduces the time devoted to input data management within DES projects, and advances the state of the art in DES input data management automation.
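
    The "fit, validate, test" portion of such a pipeline typically amounts to fitting candidate distributions to collected samples and applying a goodness-of-fit test before the result parameterizes a DES input. A minimal sketch of that step using SciPy (DESI's actual pipeline is not published here; the data are synthetic):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
observed = rng.lognormal(mean=3.0, sigma=0.25, size=200)  # stand-in cycle times

# Fit a lognormal with location fixed at zero, then test the fit.
shape, loc, scale = stats.lognorm.fit(observed, floc=0)
ks = stats.kstest(observed, "lognorm", args=(shape, loc, scale))
print(f"sigma={shape:.3f} scale={scale:.1f} KS p-value={ks.pvalue:.3f}")
# A p-value well above 0.05 means the fit is not rejected and the
# distribution can parameterize the corresponding DES input.
```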

    Engine Load Prediction during Take-Off for the V2500 Engine

    The aviation industry faces ever-increasing pressure to reduce its costs in order to gain competitive advantages. Since aircraft maintenance accounts for about 17% of overall direct operating cost (DOC), maintenance providers are required to continuously reduce their cost share as well. As a result, much effort is put into exploiting emerging digitalization technologies to predict upcoming system faults and thereby reduce the projected maintenance impact. The detection of early-stage faults and the prediction of remaining useful lifetime (RUL) for various systems, including aircraft engines as high-value assets, has already been a focal point of many research activities. A key aspect, necessary for an accurate prediction of future behavior, is the correct mapping of the ambient conditions that led to the respective system condition. It is therefore necessary to combine data from different stakeholders throughout an aircraft's life to gain valuable insights. However, as the aviation industry is strongly segregated, with many parties each pursuing their own competitive advantage, the required information about operating conditions is often not available to independent maintenance providers. Thus, modeling engine degradation often has to rely on estimated nominal conditions, limiting the ability to precisely predict engine faults. In this paper, we develop a model that allows users to estimate the engine load experienced during take-off using only publicly available information, i.e., airport weather reports and public flight data. The calculated engine load factors are expressed as an engine pressure ratio (EPR) derate. The results are benchmarked against the actual engine derate, obtained for different operators and various ambient conditions, to identify challenges for the load prediction and areas of improvement. The developed model helps adjust engine failure projections according to the experienced ambient conditions and therefore supports the development of better engine degradation models.
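
    The load model itself is the paper's contribution and is not reproduced here, but the kind of publicly available inputs it relies on can be sketched: reducing an airport weather report (METAR) to ISA temperature deviation and pressure altitude, the ambient quantities that drive how much take-off thrust, and hence EPR, is required. The reduction below uses standard rules of thumb and invented example values.

```python
ISA_SEA_LEVEL_C = 15.0
ISA_LAPSE_C_PER_FT = 1.98 / 1000  # standard temperature lapse rate

def isa_deviation_c(oat_c, field_elev_ft):
    """Outside air temperature relative to ISA at field elevation."""
    return oat_c - (ISA_SEA_LEVEL_C - ISA_LAPSE_C_PER_FT * field_elev_ft)

def pressure_altitude_ft(field_elev_ft, qnh_hpa):
    """Rule-of-thumb correction: roughly 27 ft per hPa away from 1013.25 hPa."""
    return field_elev_ft + (1013.25 - qnh_hpa) * 27.0

# Example METAR values: OAT 32 degC, QNH 1008 hPa at a 1200 ft field.
print(isa_deviation_c(32.0, 1200))         # hot day, well above ISA
print(pressure_altitude_ft(1200, 1008.0))  # higher effective altitude
```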

    Online Simulation in Semiconductor Manufacturing

    In semiconductor manufacturing, discrete event simulation is well established for supporting planning decisions, and in recent years simulation methods have contributed to increasing productivity. The motivation for this thesis is to use online simulation not only for planning decisions, but also for a wide range of operational decisions. To this end, an integrated online simulation system for short-term forecasting has been developed. The production environment is a mature high-mix logic wafer fab, selected because of its vast potential for performance improvement. This thesis addresses several aspects of online simulation. The first aspect is the implementation of an online simulation system in semiconductor manufacturing. The general problem is to achieve high speed, a high level of detail, and high forecast accuracy. To resolve these problems, an online simulation system has been created whose simulation model has a high level of detail and is created automatically from the underlying fab data. Creating such a simulation model from fab data raises additional data-related problems, chiefly data access, data integration, and data quality. These problems have been solved by using an integrated data model with several data extraction, data transformation, and data cleaning steps. The second aspect concerns the accuracy of online simulation. The overall problem is to extend the forecast horizon, increase the level of detail of the forecast, and reduce the forecast error. To provide useful forecast results, the simulation model combines a high level of modeling detail with a proper initialization. The influences on the forecast quality are analyzed, and the results show that the simulation forecast is accurate enough to predict future fab performance. The last aspect is to find ways to use simulation forecast results to improve fab performance. Numerous applications have been identified, each described with the requirements of such a forecast, the decision variables, and background information. An application example shows where a performance problem exists and how online simulation is able to resolve it. To further enhance the real-time capability of online simulation, a major part of the work investigates new ways to connect the simulation model with the wafer fab. In fab-driven simulation, the simulation model and the real wafer fab run concurrently: the wafer fab provides events that update the simulation during runtime, so the model is always synchronized with the real fab. It becomes possible to start a simulation run in real time, with no further delay for data extraction, data transformation, and model creation. A prototype for a single work center has been implemented to show the feasibility.
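
    The fab-driven mode described at the end can be sketched conceptually: the model consumes the fab's track-in/track-out events to keep its state synchronized, so a forecast run can start immediately rather than waiting for data extraction and model creation. The event schema and single-work-center scope below are simplifications, not the thesis's implementation.

```python
from collections import deque

class WorkCenterModel:
    def __init__(self):
        self.queue = deque()    # lots waiting at the work center
        self.in_process = None  # lot currently being processed

    def on_event(self, event):
        """Apply a real-fab event to keep the model state synchronized."""
        kind, lot = event["kind"], event["lot"]
        if kind == "track_in":
            if self.in_process is None:
                self.in_process = lot
            else:
                self.queue.append(lot)
        elif kind == "track_out" and self.in_process == lot:
            self.in_process = self.queue.popleft() if self.queue else None

model = WorkCenterModel()
for ev in [{"kind": "track_in", "lot": "A"}, {"kind": "track_in", "lot": "B"},
           {"kind": "track_out", "lot": "A"}]:
    model.on_event(ev)
print(model.in_process, list(model.queue))  # B []  (ready for a forecast run)
```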

    Digital Twins of production systems - Automated validation and update of material flow simulation models with real data

    To achieve good profitability and sustainability, production systems must be operated at high productivity over long periods of time. This poses major challenges for manufacturing companies, especially in times of increased volatility triggered, for example, by technological disruption in mobility as well as political and societal change, because the requirements placed on the production system are constantly shifting. The frequency of necessary adaptation decisions and subsequent optimization measures is rising, and with it the need for ways to evaluate scenarios and possible system configurations. Material flow simulation is a powerful tool for this purpose, but its use is currently limited by the effort of manual model creation and by its time-limited, project-based application. Longer-term use accompanying the system's life cycle is at present blocked by the labor-intensive upkeep of the simulation model, i.e., manually adapting the model whenever the real system changes. The goal of this work is to develop and implement a concept, including the required methods, for automating the maintenance and adaptation of the simulation model to reality. To this end, the available real data are used, which are increasingly abundant due to trends such as Industrie 4.0 and digitalization in general. The vision pursued in this work is a Digital Twin of the production system that, fed by this data, represents a realistic image of the system at every point in time and can be used for the realistic evaluation of scenarios. The required overall concept was designed, and the mechanisms for automatic validation and updating of the model were developed. The focus lay, among other things, on developing algorithms for detecting changes in the structure and the processes of the production system, and on investigating the influence of the available data. The developed components were successfully applied to a real use case at Robert Bosch GmbH and increased the fidelity of the Digital Twin, which was then successfully used for production planning and optimization. The potential of localization data for building Digital Twins of production systems was demonstrated in the learning factory of the wbk Institut für Produktionstechnik.
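
    One ingredient named above, detecting changes in the structure of the material flow, can be illustrated with a hedged sketch: compare the station-to-station transition frequencies assumed by the simulation model with those observed in recent real data and flag the edges that have shifted. The routing data and threshold are invented for the example.

```python
from collections import Counter

def transitions(routes):
    """Count station-to-station moves over a set of part routes."""
    c = Counter()
    for route in routes:
        c.update(zip(route, route[1:]))
    return c

model_routes = [["saw", "mill", "wash"]] * 10          # routing the model assumes
real_routes = [["saw", "mill", "wash"]] * 7 \
            + [["saw", "wash"]] * 3                    # real parts bypass the mill

m, r = transitions(model_routes), transitions(real_routes)
for edge in sorted(set(m) | set(r)):
    fm, fr = m[edge] / len(model_routes), r[edge] / len(real_routes)
    if abs(fm - fr) > 0.2:  # flag edges whose relative frequency shifted
        print("model update candidate:", edge, fm, "->", fr)
```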

    An Investigation into the Data Collection Process for the Development of Cost Models

    This thesis is the result of many years of research in the field of manufacturing cost modelling. It focuses on the data collection process for the development of manufacturing cost models in the UK aerospace industry, with important contributions from other areas such as construction, process, and software development. The importance of adopting an effective model development process is discussed and a new Cost Model Development (CMD) Methodology is proposed. Little previous research has considered the development of cost models from the point of view of a standard, systematic methodology, which is essential if an optimum process is to be achieved. A Model Scoping Framework, a functional Data Source and Data Collection Library, and a referential Data Type Library are the core elements of the proposed Cost Model Development Methodology. The research identified a number of individual data collection methods, along with a comprehensive list of data sources and data types, from which the data essential for developing cost models can be collected. A taxonomy based upon sets of generic characteristics for describing the individual data collection methods, data sources, and data types was developed; methods, tools, and techniques were identified and categorised according to these generic characteristics, providing information for selecting between alternatives. The need to perform frequent iterations of data collection, data identification, data analysis, and decision-making tasks until an acceptable cost model has been developed is an inherent feature of the cost model development process (CMDP). The proposed Model Scoping Framework is expected to assist cost engineering and estimating practitioners in defining the features and activities of the process and the attributes of the product for which a cost model is required, and in identifying the cost model characteristics before the tasks of data identification and collection start. It offers a structured way of looking at the relationship between data sources, cost model characteristics, and data collection tools and procedures. The aim was to make the planning process for developing cost models more effective and efficient and consequently to reduce the time needed to generate cost models.
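
    The selection support the taxonomy provides can be sketched as follows: once each data collection method is described by the same set of generic characteristics, choosing between alternatives becomes a simple query. The characteristics and ratings below are illustrative, not the thesis's actual taxonomy.

```python
# Each method rated against shared generic characteristics (invented values).
METHODS = {
    "interview":        {"cost": "high", "accuracy": "medium", "speed": "slow"},
    "historical_data":  {"cost": "low",  "accuracy": "high",   "speed": "fast"},
    "work_measurement": {"cost": "high", "accuracy": "high",   "speed": "slow"},
}

def select(**required):
    """Return methods matching every required characteristic."""
    return [name for name, traits in METHODS.items()
            if all(traits.get(k) == v for k, v in required.items())]

print(select(cost="low"))                    # ['historical_data']
print(select(accuracy="high", speed="slow")) # ['work_measurement']
```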

    Optimising cardiac services using routinely collected data and discrete event simulation

    Background: The current practice of managing hospital resources, including beds, is largely driven by measuring past or expected utilisation of resources. This practice, however, does not reflect variability among patients; consequently, managers and clinicians cannot make fully informed decisions based upon these measures, which are inadequate for planning and managing complex systems. Aim: To analyse how variation related to patient conditions and adverse events affects resource utilisation and operational performance. Methods: Data pertaining to cardiac patients (cardiothoracic and cardiology, n=2241) were collected from two major hospitals in Oman. Factors influencing resource utilisation were assessed using logistic regression. Patients were also classified according to their resource utilisation using a decision tree to assist in predicting hospital stay. Finally, discrete event simulation modelling was used to evaluate how patient factors and postoperative complications affect operational performance. Results: 26.5% of the patients experienced prolonged length of stay (LOS) in intensive care units and 30% in the ward. Patients with prolonged postoperative LOS accounted for 60% of the total patient days. The factors that explained the largest amount of variance in resource use following cardiac procedures included body mass index, type of surgery, cardiopulmonary bypass (CPB) use, non-elective surgery, number of complications, blood transfusion, chronic heart failure, and previous angioplasty. Allocating resources based on patients' expected LOS resulted in a reduction of surgery cancellations and waiting times while overall throughput increased. Complications had a significant effect on perioperative operational performance, such as surgery cancellations; the effect was profound when complications occurred in the intensive care unit, where capacity was limited. Based on the simulation model, eliminating some complications could enlarge the patient population served. Conclusion: Integrating influential factors into resource planning through simulation modelling is an effective way to estimate and manage hospital capacity.
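
    The classification step described in the Methods can be sketched with a decision tree over a few of the named patient factors; its prolonged-LOS prediction is what the simulation then uses for resource allocation. The features and toy data below are invented; the study itself used real records from two hospitals (n=2241).

```python
from sklearn.tree import DecisionTreeClassifier

# columns: BMI, CPB used (0/1), non-elective (0/1), number of complications
X = [[24, 0, 0, 0], [31, 1, 1, 2], [28, 1, 0, 1], [35, 1, 1, 3],
     [22, 0, 0, 0], [29, 0, 1, 1], [33, 1, 1, 2], [26, 0, 0, 0]]
y = [0, 1, 0, 1, 0, 0, 1, 0]   # 1 = prolonged length of stay

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(tree.predict([[30, 1, 1, 2]]))  # [1] -> likely prolonged; plan ICU bed
```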