
    Predicting cycle time distributions for integrated processing workstations: an aggregate modeling approach

    Predicting the cycle time distribution as a function of throughput is helpful in making a trade-off between workstation productivity and meeting due dates. To predict cycle time distributions, detailed models are almost exclusively used, which require considerable development and maintenance effort. Instead, we propose a so-called aggregate model to predict cycle time distributions, which is a lumped-parameter representation of the queueing system. The lumped parameters of the model are determined directly from arrival and departure events measured at the workstation. The paper demonstrates that the aggregate model can accurately predict the cycle time distribution of workstations in semiconductor manufacturing, in particular the tail of the distribution.
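    As a rough, hedged illustration of the measurement-driven idea in this abstract, the Python sketch below reconstructs lumped effective process times from logged arrival and departure events of a single FIFO server and then resamples them in a simple aggregate queueing simulation to obtain a cycle time distribution at a chosen throughput. The function names, the single-server FIFO reconstruction and the exponential arrivals are illustrative assumptions, not the paper's actual aggregate model (which targets integrated processing workstations and the distribution tail specifically).

        # Minimal sketch, not the paper's method: lumped parameters from event logs.
        import random

        def effective_process_times(arrivals, departures):
            # EPT of lot i = departure_i - max(arrival_i, previous departure)
            # (a classic single-server FIFO reconstruction; assumed here).
            epts, prev_dep = [], float("-inf")
            for a, d in zip(arrivals, departures):
                epts.append(d - max(a, prev_dep))
                prev_dep = d
            return epts

        def simulate_cycle_times(epts, interarrival_mean, n_lots=10_000, seed=1):
            # Resample measured EPTs at one aggregate server with Poisson arrivals.
            rng = random.Random(seed)
            t_arr, t_free, cycle_times = 0.0, 0.0, []
            for _ in range(n_lots):
                t_arr += rng.expovariate(1.0 / interarrival_mean)
                start = max(t_arr, t_free)
                t_free = start + rng.choice(epts)
                cycle_times.append(t_free - t_arr)
            return cycle_times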

    EPT and flowtime distributions


    Aggregate modeling in semiconductor manufacturing using effective process times

    In modern manufacturing, model-based performance analysis is becoming increasingly important due to growing competition and high capital investments. In this PhD project, the performance of a manufacturing system is considered in the sense of throughput (number of products produced per time unit), cycle time (time that a product spends in a manufacturing system), and the amount of work in process (amount of products in the system). The focus of this project is on semiconductor manufacturing. Models facilitate performance improvement by providing a systematic connection between operational decisions and performance measures. Two common model types are analytical models and discrete-event simulation models. Analytical models are fast to evaluate, though incorporation of all relevant factory-floor aspects is difficult. Discrete-event simulation models allow for the inclusion of almost any factory-floor aspect, such that a high prediction accuracy can be achieved. However, this comes at the cost of long computation times. Furthermore, data on all the modeled aspects may not be available. The number of factory-floor aspects that have to be modeled explicitly can be reduced significantly through aggregation. In this dissertation, simple aggregate analytical or discrete-event simulation models are considered, with only a few parameters such as the mean and the coefficient of variation of an aggregated process time distribution. The aggregate process time lumps together all the relevant aspects of the considered system, and is referred to as the Effective Process Time (EPT) in this dissertation. The EPT may be calculated from the raw process time and the outage delays, such as machine breakdown and setup. However, data on all the outages is often not available. This motivated previous research at the TU/e to develop algorithms which can determine the EPT distribution directly from arrival and departure times, without quantifying the contributing factors. Typical for semiconductor machines is that they often perform a sequence of processes in the various machine chambers, such that wafers of multiple lots are in process at the same time. This is referred to as "lot cascading". To model this cascading behavior, in previous work at the TU/e an aggregate model was developed in which the EPT depends on the amount of Work In Process (WIP). This model serves as the starting point of this dissertation. This dissertation presents the efforts to further develop EPT-based aggregate modeling for application in semiconductor manufacturing. In particular, the dissertation contributes to: dealing with the typically limited amount of available data, modeling workstations with a variable product mix, predicting cycle time distributions, and aggregate modeling of networks of workstations. First, the existing aggregate model with WIP-dependent EPTs has been extended with a curve-fitting approach to deal with the limited amount of arrivals and departures that can be collected in a realistic time period. The new method is illustrated for four operational semiconductor workstations in the Crolles2 semiconductor factory (in Crolles, France), for which the mean cycle time as a function of the throughput has been predicted. Second, a new EPT-based aggregate model has been developed that predicts the mean cycle time of a workstation as a function of the throughput and the product mix.
In semiconductor manufacturing, many workstations produce a mix of different products, and each machine in the workstation may be qualified to process a subset of these products only. The EPT model is validated on a simulation case and on an industry case of an operational Crolles2 workstation. Third, the dissertation presents a new EPT-based aggregate model that can predict the cycle time distribution of a workstation instead of only the mean cycle time. To accurately predict a cycle time distribution, the order in which lots are processed is incorporated in the aggregate model by means of an overtaking distribution. An extensive simulation study and an industry case demonstrate that the aggregate model can accurately predict the cycle time distribution of integrated processing workstations in semiconductor manufacturing. Finally, aggregate modeling of networks of semiconductor workstations has been explored. Two modeling approaches are investigated: the entire network is modeled as a single aggregate server, and the network is modeled as an aggregate network that consists of an aggregate model for each workstation. The accuracy of the model predictions using the two approaches is investigated by means of a simulation case of a re-entrant flow line. The results of these aggregate models are promising.
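    To make the WIP-dependent EPT idea above concrete, the sketch below groups effective process times by the WIP level observed when each lot starts service and summarises each group by its mean and coefficient of variation, the kind of lumped parameters the abstract mentions. The tuple layout, the use of departure minus start as the EPT and the bucketing by integer WIP level are assumptions for illustration, not the dissertation's actual EPT algorithm.

        # Illustrative only: WIP-dependent EPT summary (mean and CV per WIP level).
        from collections import defaultdict
        from statistics import mean, stdev

        def ept_by_wip(lots):
            # lots: iterable of (start_time, departure_time, wip_at_start) tuples.
            buckets = defaultdict(list)
            for start, departure, wip in lots:
                buckets[wip].append(departure - start)   # stand-in for the EPT
            summary = {}
            for wip, epts in sorted(buckets.items()):
                cv = stdev(epts) / mean(epts) if len(epts) > 1 else float("nan")
                summary[wip] = {"mean": mean(epts), "cv": cv, "n": len(epts)}
            return summary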

    Increasing the reliability and applicability of measurement-based probabilistic timing analysis

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Automação e Sistemas, Florianópolis, 2019. Abstract: As the complexity of computer architectures grows in order to improve performance and/or to reduce costs, the use of modern processors in the design of Real-Time Systems (RTSs) is increasingly hampered by the emergence of timing effects that make it hard to determine reliable and tight bounds for tasks' Worst-Case Execution Times (WCETs). The Measurement-Based Probabilistic Timing Analysis (MBPTA) technique aims at determining probabilistic WCET bounds (i.e. pWCETs) by applying Extreme Value Theory (EVT) to tasks' execution time measurements, and is hence promising in handling hardware complexity issues within RTSs' design. Hardware-level time-randomized processors were recently proposed as a means to make computing systems' timing behaviour more easily analysable through probabilistic tools, and are designed by replacing deterministic or speculative internal information with (pseudo-)random numbers. The scientific research whose outcomes are presented in this thesis produced contributions on two distinct fronts.
First, we proposed and applied methods for evaluating the reliability of pWCET estimates produced using MBPTA, based on collecting large execution time samples and then comparing (1) the pWCETs against the largest observed execution times, and (2) the pWCETs' exceedance densities against their expected values. These evaluations led us to conclude that EVT probabilistic models intended to yield more precise bounds may often lead to pWCET underestimations, and we hence recommended that upper-bounding models should instead be used for deriving pWCETs with increased reliability. Second, we evaluated the hypothesis that randomized scheduling techniques can benefit the timing analysis of tasks executed on multithreaded pipelines through MBPTA, by causing the yielded execution times to meet the technique's basic application requirements. For that, we considered both (A) a scheduler that employs a purely random policy, and (B) a randomized scheduler capable of limiting the timing effects of inter-thread interference, without compromising analysability, by using a credit-based eligibility regulation mechanism.
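    The generic MBPTA recipe referred to above can be sketched in a few lines: collect execution time measurements, take block maxima, fit an extreme value model and read off the pWCET at a target exceedance probability. The block size, the Gumbel choice and the synthetic data below are assumptions for illustration; this is not the thesis' evaluation protocol, and its own conclusion is that precise fits may underestimate, so an upper-bounding model would be preferred in practice.

        # Hedged sketch of a plain EVT/block-maxima pWCET estimate.
        import numpy as np
        from scipy import stats

        def pwcet_estimate(exec_times, block_size=50, exceedance_prob=1e-9):
            times = np.asarray(exec_times, dtype=float)
            n_blocks = len(times) // block_size
            maxima = times[:n_blocks * block_size].reshape(n_blocks, block_size).max(axis=1)
            loc, scale = stats.gumbel_r.fit(maxima)              # EVT (Gumbel) fit
            return stats.gumbel_r.ppf(1.0 - exceedance_prob, loc, scale)

        # Synthetic measurements standing in for real ones (illustrative only).
        rng = np.random.default_rng(0)
        measurements = 1000 + rng.gamma(shape=2.0, scale=15.0, size=20_000)
        print(f"pWCET at 1e-9 exceedance: {pwcet_estimate(measurements):.1f} cycles")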

    Modelling of interactions between rail service and travel demand: a passenger-oriented analysis

    The proposed research is situated in the field of design, management and optimisation of railway network operations. Rail transport has in its favour several specific features which make it a key factor in public transport management, above all in high-density contexts. Indeed, such a system is environmentally friendly (reduced pollutant emissions), high-performing (high travel speeds and low headways), competitive (low unit costs per seat-km or carried passenger-km) and presents a high degree of adaptability to intermodality. However, it manifests high vulnerability in the case of breakdowns. This occurs because a faulty convoy cannot be easily overtaken and, sometimes, cannot be easily removed from the line, especially in the case of isolated systems (i.e. systems which are not integrated into an effective network) or when a breakdown occurs on open tracks. Thus, re-establishing ordinary operational conditions may require excessive amounts of time and, as a consequence, an inevitable increase in inconvenience (user generalised cost) for passengers, who might decide to abandon the system or, if already on board, to exclude the railway system from their choice set in the future. It follows that developing appropriate techniques and decision support tools for optimising rail system management, both in ordinary and disruption conditions, would allow a clear shift of the modal split in favour of public transport and, therefore, encourage an important reduction in the externalities caused by the use of private transport, such as air and noise pollution, traffic congestion and accidents, bringing clear benefits to the quality of life for both transport users and non-users (i.e. individuals who are not system users). Modelling such a complex context, based on numerous interactions among the various components (i.e. infrastructure, signalling system, rolling stock and timetables), is no mean feat. Moreover, in many cases a fundamental element is neglected: the inclusion of travel demand features in the simulation of railway operations. Railway transport, just as any other transport system, is not an end in itself; its task is to move people and goods around, and, therefore, a realistic and accurate cost-benefit analysis cannot ignore the features of the flows involved. In particular, incorporating travel demand into the analysis framework has a two-fold effect. Primarily, it leads to the introduction of elements such as convoy capacity constraints and the assessment of dwell times as flow-dependent factors, which make the simulation as close as possible to reality. Specifically, the former makes it possible to take into account the eventuality that not all passengers can board the first arriving train, but only a part of them, due to overcrowded conditions, with a consequent increase in waiting times. Due consideration of this factor is fundamental because, if it were to be repeated, it would make a further contribution to passengers' discontent. The estimation of dwell times on the basis of flows, on the other hand, becomes fundamental in the planning phase. In fact, estimating dwell times as fixed values, ideally equal for all runs and all stations, can induce differences between actual and planned operations, with a subsequent deterioration in system performance. Thus, neglecting these aspects, above all in crowded contexts, would render the simulation distorted, both in terms of costs and benefits.
The second aspect, on the other hand, concerns the correct assessment of the effects of the strategies put in place, both in planning phases (strategic decisions such as the realisation of a new infrastructure, the improvement of the current signalling system or the purchase of new rolling stock) and in operational phases (operational decisions such as the definition of intervention strategies for addressing disruption conditions). In fact, in the management of failures, the operational procedures in use to date are based on hypothetical times for re-establishing ordinary conditions, estimated by the train driver or by the staff of the operation centre, who generally tend to minimise the impact exclusively from the company's point of view (minimisation of operational costs), rather than from the standpoint of passengers. Additionally, in the definition of intervention strategies, passenger flow and its variation in time (different temporal intervals) and space (different points in the railway network) are rarely considered. It appears obvious, therefore, that the proposed re-examination of the dispatching and rescheduling tasks in a passenger-orientated perspective should be accompanied by the development of estimation and forecasting techniques for travel demand, aimed at correctly taking into account the peculiarities of the railway system, as well as by the generation of ad-hoc tools designed to simulate the behaviour of passengers in the various phases of the trip (turnstile access, transfer from the turnstiles to the platform, waiting on the platform, boarding and alighting, etc.). The final workstream in the present study concerns the analysis of the energy problems associated with rail transport. This is closely linked to what has so far been described. Indeed, in order to implement proper energy-saving policies, it is, above all, necessary to obtain a reliable estimate of the operational times involved (recovery times, inversion times, buffer times, etc.). Moreover, as the adoption of eco-driving strategies generates an increase in passenger travel times, with everything that this involves, it is important to investigate the trade-off between energy efficiency and the increase in user generalised costs. Within this framework, the present study aims at providing a DSS (Decision Support System) for all phases of planning and management of rail transport systems, from timetabling to dispatching and rescheduling, also considering space-time travel demand variability as well as the definition of suitable energy-saving policies, by adopting a passenger-orientated perspective.
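    The two flow-dependent elements singled out above, capacity-constrained boarding and flow-dependent dwell times, can be illustrated with a toy station-stop function; the linear dwell time form and all coefficients below are assumptions, not the thesis' calibrated models.

        # Toy illustration of capacity-constrained boarding and flow-dependent dwell.
        def station_stop(waiting, on_board, alighting, train_capacity,
                         t_base=20.0, t_per_alight=0.4, t_per_board=0.6):
            # Returns (boarded, left_behind, new_on_board, dwell_seconds).
            on_board -= alighting                    # passengers alight first
            spare = max(train_capacity - on_board, 0)
            boarded = min(waiting, spare)            # only residual capacity boards
            left_behind = waiting - boarded          # these wait for the next run
            dwell = t_base + t_per_alight * alighting + t_per_board * boarded
            return boarded, left_behind, on_board + boarded, dwell

        print(station_stop(waiting=180, on_board=1150, alighting=60, train_capacity=1200))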

    Simulation Modeling of Prehospital Trauma Care

    Prehospital emergency care systems are complex and do not necessarily respond predictably to changes in management. A combined discrete-continuous simulation model focusing on trauma care was designed and implemented in SIMSCRIPT II.5 to allow prediction of the system's response to policy changes, in terms of its effect on the system and on patient survival. The utility of the completed model was demonstrated by the results of experiments on triage and helicopter dispatching policies. Experiments on the current and two alternative triage policies showed that helicopter utilization is significantly increased by more liberal triage to Level 1 trauma centers, which was expected, but that the waiting time for pending accidents tended to decrease, an unexpected consequence. Experiments on helicopter dispatch policy showed that liberalization of the dispatch policy would have much greater consequences than would changing the triage criteria. Again, this result was unexpected and has received little attention from system planners and administrators, especially with respect to the degree of discussion and controversy surrounding triage criteria.
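    As a schematic of how such policy experiments are set up (the thresholds and names below are hypothetical, not the study's triage or dispatch criteria), alternative triage and dispatch rules can be expressed as interchangeable functions that a simulation run calls for every incident, with helicopter utilization, waiting time for pending accidents and survival then compared across policies.

        # Hypothetical policy functions for a triage/dispatch simulation experiment.
        def triage(severity_score, liberal=False):
            # The 'liberal' policy sends more patients to Level 1 trauma centers.
            threshold = 3 if liberal else 5          # assumed cut-offs
            return "level_1_trauma_center" if severity_score >= threshold else "local_hospital"

        def dispatch(distance_km, severity_score, liberal=False):
            # Helicopter for distant or severe cases; the liberal policy lowers both bars.
            dist_cut, sev_cut = (20, 3) if liberal else (40, 5)   # assumed
            if distance_km >= dist_cut or severity_score >= sev_cut:
                return "helicopter"
            return "ground_ambulance"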

    Integrating artificial neural networks, simulation and optimisation techniques in improving public emergency ambulance preparedness for heterogeneous regions under stochastic environments.

    Doctoral Degree. University of KwaZulu-Natal, Pietermaritzburg. The Bulawayo Emergency Medical Services (BEMS) department continues to rely on judgemental methods, with limited use of historical data for future predictions and for strategic, tactical and operational level decision making. The rural-to-urban migration trend has seen the sprouting of new residential areas, and this has put pressure on the limited health, housing and education resources. It is expected that as the population increases, there is a subsequent increase in demand for public emergency services. However, public emergency ambulance demand in Bulawayo has been decreasing over the years. This trend is a sign of limited capacity of the service rather than of demand itself. The situation demanded consolidated efforts across all sectors, including research, to restore confidence among residents, reduce health risk and prevent loss of lives. The key objective was to develop a framework that would assist in integrating forecasting, simulation and optimisation techniques for ambulance deployment to predefined locations with heterogeneous demand patterns under stochastic environments, using multiple performance indicators. Secondary data from the Bulawayo Municipality archives from 2010 to 2018 was used for model building and validation. A combination of methods based on mathematics, statistics, operations research and computer science was used for data analysis, model building, sensitivity analysis and numerical experiments. Results indicate that feed-forward neural network (FFNN) models are superior to traditional SARIMA models in predicting ambulance demand over a short-term forecasting horizon. The FFNN model is more inclined to value estimation as compared to the SARIMA model, which is directional, as depicted by the linear pattern over time. An ANN model with a 7-(4)-1 architecture was selected to forecast 2019 public emergency ambulance demand (PEAD). Peak PEAD is expected in January, March, September and December, whilst lower demand is expected for April, June and July 2019. The simulation models developed mimicked the prevailing levels of service for BEMS with six (6) operational ambulances. However, the average response times were well above 15 minutes, with significantly high average queuing times and numbers of ambulances queuing for service. These performance outcomes were highly undesirable, as they pose a great threat to human-based outcomes of safety and satisfaction with regard to service delivery. Simulation-based optimisation was conducted by simultaneously minimising the average response time and average queuing time, while maximising throughput ratios. Increasing the number of ambulances influenced the average response time only below a certain threshold; beyond this threshold, the average response time remained constant rather than decreasing gradually. Ambulance utilisation varied inversely with the fleet size. Numerical experiments revealed that reducing the response time results in a reduction in the number of ambulances required for optimal ambulance deployment. It is imperative to simultaneously consider multiple performance indicators in ambulance deployment, as this balances resource allocation and capacity utilisation while avoiding idleness of essential equipment and human resources. Management should lobby for the de-congestion and resurfacing of old and dilapidated roads to increase access and speed when responding to emergency calls.
Future research should investigate the influence of varying service times on optimum deployment plans and consider operational costs, wages and other budgetary constraints that influence the allocation of critical but scarce resources such as personnel, equipment and emergency ambulance response vehicles.
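    A minimal sketch of the 7-(4)-1 feed-forward architecture mentioned above, using lagged monthly demand as inputs; the lag structure, the synthetic data and the scikit-learn setup are illustrative assumptions rather than the thesis' actual feature set or training procedure.

        # Illustrative 7-(4)-1 FFNN for monthly ambulance demand forecasting.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def make_lagged(series, n_lags=7):
            # Use the previous n_lags observations to predict the next one.
            X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
            y = np.array(series[n_lags:])
            return X, y

        demand = list(np.random.default_rng(42).poisson(300, size=108))   # stand-in monthly counts
        X, y = make_lagged(demand, n_lags=7)
        model = MLPRegressor(hidden_layer_sizes=(4,), activation="relu",
                             max_iter=5000, random_state=0).fit(X, y)
        next_input = np.array(demand[-7:], dtype=float).reshape(1, -1)
        print(f"Forecast for next month: {model.predict(next_input)[0]:.0f} calls")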

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010, Karlsruhe, Germany. (KIT Scientific Reports; 7551)

    ReCoSoC is intended to be a periodic annual meeting to expose and discuss gathered expertise as well as state-of-the-art research around SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account the emerging techniques and architectures exploring the synergy between flexible on-chip communication and system reconfigurability.