905 research outputs found

    Aggregate modeling in semiconductor manufacturing using effective process times

    In modern manufacturing, model-based performance analysis is becoming increasingly important due to growing competition and high capital investments. In this PhD project, the performance of a manufacturing system is considered in the sense of throughput (number of products produced per time unit), cycle time (time that a product spends in a manufacturing system), and the amount of work in process (amount of products in the system). The focus of this project is on semiconductor manufacturing. Models facilitate performance improvement by providing a systematic connection between operational decisions and performance measures. Two common model types are analytical models and discrete-event simulation models. Analytical models are fast to evaluate, though incorporating all relevant factory-floor aspects is difficult. Discrete-event simulation models allow for the inclusion of almost any factory-floor aspect, such that a high prediction accuracy can be achieved. However, this comes at the cost of long computation times. Furthermore, data on all the modeled aspects may not be available. The number of factory-floor aspects that have to be modeled explicitly can be reduced significantly through aggregation. In this dissertation, simple aggregate analytical or discrete-event simulation models are considered, with only a few parameters such as the mean and the coefficient of variation of an aggregated process time distribution. The aggregate process time lumps together all the relevant aspects of the considered system, and is referred to as the Effective Process Time (EPT) in this dissertation. The EPT may be calculated from the raw process time and the outage delays, such as machine breakdown and setup. However, data on all the outages is often not available. This motivated previous research at the TU/e to develop algorithms which can determine the EPT distribution directly from arrival and departure times, without quantifying the contributing factors.
Typical for semiconductor machines is that they often perform a sequence of processes in the various machine chambers, such that wafers of multiple lots are in process at the same time. This is referred to as "lot cascading". To model this cascading behavior, in previous work at the TU/e an aggregate model was developed in which the EPT depends on the amount of Work In Process (WIP). This model serves as the starting point of this dissertation. This dissertation presents the efforts to further develop EPT-based aggregate modeling for application in semiconductor manufacturing. In particular, the dissertation contributes to: dealing with the typically limited amount of available data, modeling workstations with a variable product mix, predicting cycle time distributions, and aggregate modeling of networks of workstations. First, the existing aggregate model with WIP-dependent EPTs has been extended with a curve-fitting approach to deal with the limited number of arrivals and departures that can be collected in a realistic time period. The new method is illustrated for four operational semiconductor workstations in the Crolles2 semiconductor factory (in Crolles, France), for which the mean cycle time as a function of the throughput has been predicted. Second, a new EPT-based aggregate model has been developed that predicts the mean cycle time of a workstation as a function of the throughput and the product mix. In semiconductor manufacturing, many workstations produce a mix of different products, and each machine in the workstation may be qualified to process only a subset of these products. The EPT model is validated on a simulation case and on an industry case of an operational Crolles2 workstation. Third, the dissertation presents a new EPT-based aggregate model that can predict the cycle time distribution of a workstation instead of only the mean cycle time.
To accurately predict a cycle time distribution, the order in which lots are processed is incorporated in the aggregate model by means of an overtaking distribution. An extensive simulation study and an industry case demonstrate that the aggregate model can accurately predict the cycle time distribution of integrated processing workstations in semiconductor manufacturing. Finally, aggregate modeling of networks of semiconductor workstations has been explored. Two modeling approaches are investigated: the entire network is modeled as a single aggregate server, or the network is modeled as an aggregate network that consists of an aggregate model for each workstation. The accuracy of the model predictions using the two approaches is investigated by means of a simulation case of a re-entrant flow line. The results of these aggregate models are promising.
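The idea of determining EPTs directly from arrival and departure times can be illustrated with a toy reconstruction. The sketch below is a deliberately simplified single-machine FIFO variant (not the dissertation's WIP-dependent, cascading algorithm), using hypothetical timestamps: a lot's effective processing is taken to start once the lot has arrived and its predecessor has departed.

```python
import statistics

def ept_from_events(arrivals, departures):
    """Estimate effective process times from arrival/departure stamps.

    Simplifying assumption: one FIFO machine, so lot i's effective
    processing starts when the lot has arrived AND lot i-1 has left:
        EPT_i = d_i - max(a_i, d_{i-1})
    """
    epts = []
    prev_departure = float("-inf")  # no predecessor for the first lot
    for a, d in zip(arrivals, departures):
        epts.append(d - max(a, prev_departure))
        prev_departure = d
    return epts

def ept_parameters(epts):
    """Mean and coefficient of variation of the EPT distribution,
    the two parameters the aggregate model needs."""
    mean = statistics.mean(epts)
    cv = statistics.stdev(epts) / mean
    return mean, cv

# Hypothetical event data (time units arbitrary)
arrivals   = [0.0, 1.0, 2.5, 6.0]
departures = [2.0, 4.5, 7.0, 9.0]
epts = ept_from_events(arrivals, departures)
# e.g. lot 3 arrives at 2.5 but waits for lot 2's departure at 4.5,
# so EPT_3 = 7.0 - 4.5 = 2.5
```

Note how no breakdown or setup data is needed: every outage is absorbed into the reconstructed EPTs, which is exactly what makes the approach usable when detailed outage logs are unavailable.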

    Aggregate modeling of manufacturing systems

    In this report we present three approaches to model manufacturing systems in an aggregate way, leading to fast and effective (i.e., scalable) simulations that allow the development of simulation tools for rapid exploration of different production scenarios in a factory as well as in a whole supply chain. We present the main ideas and show some validation studies. Fundamental references are given for more detailed studies.

    Transient analysis of manufacturing system performance

    Includes bibliographical references (p. 28-34). Supported by the INDO-US Science and Technology Fellowship Program. Y. Narahari, N. Viswanadham

    Valuation of spectrum for mobile broadband services: Engineering value versus willingness to pay

    Radio spectrum is a vital asset and resource for mobile network operators. With spectrum in the 800 and 900 MHz bands, coverage can be provided with fewer base station sites compared to higher frequency bands like 2.1 and 2.6 GHz. With more spectrum, i.e. wider bandwidth, operators can offer higher capacity and data rates. Larger bandwidths mean that capacity can be provided with fewer base station sites, i.e. at lower cost. Operators that acquire more spectrum in existing or new bands can re-use existing sites for capacity build-out. Engineering value is one way to estimate the marginal value of spectrum. The calculation of engineering value is based on a comparison of different network deployment options using different amounts of spectrum. This paper compares estimates of the engineering value of spectrum with prices paid at a number of spectrum auctions, with a focus on Sweden. A main finding is that the estimated engineering value of spectrum is much higher than the prices operators have paid at spectrum auctions during the last couple of years. The analysis also includes a discussion of drivers that determine the willingness to pay for spectrum.
    Keywords: radio spectrum, mobile communications, spectrum valuation, spectrum allocation, mobile broadband, marginal value of spectrum, engineering value
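The engineering-value comparison described above can be sketched as the site-cost saving enabled by a wider carrier. All numbers below are hypothetical (invented site costs, capacities, and demand figures; real valuations rest on detailed radio-planning models), but they show the mechanism: extra bandwidth only has engineering value where capacity, not coverage, drives the site count.

```python
import math

def sites_needed(area_km2, site_capacity_mbps, demand_mbps_per_km2,
                 coverage_radius_km):
    """Sites required: the larger of the coverage-driven and
    capacity-driven counts."""
    site_area = math.pi * coverage_radius_km ** 2
    coverage_sites = math.ceil(area_km2 / site_area)
    capacity_sites = math.ceil(area_km2 * demand_mbps_per_km2
                               / site_capacity_mbps)
    return max(coverage_sites, capacity_sites)

def engineering_value_per_mhz(extra_mhz, sites_base, sites_wide,
                              cost_per_site):
    """Marginal value per MHz = avoided site cost / added bandwidth."""
    return (sites_base - sites_wide) * cost_per_site / extra_mhz

# Hypothetical scenario: doubling the carrier from 10 to 20 MHz
# roughly doubles per-site capacity (50 -> 100 Mbps).
base = sites_needed(100.0, 50.0, 20.0, 2.0)    # 10 MHz carrier
wide = sites_needed(100.0, 100.0, 20.0, 2.0)   # 20 MHz carrier
value_per_mhz = engineering_value_per_mhz(10.0, base, wide, 250_000.0)
```

In this toy scenario the wider carrier halves the capacity-driven site count, and the saving divided by the 10 extra MHz gives the marginal engineering value the paper compares against auction prices.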

    GeantV: Results from the prototype of concurrent vector particle transport simulation in HEP

    Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider (LHC). In the early 2010s, the projections were that simulation demands would scale linearly with luminosity increase, compensated only partially by an increase of computing resources. The extension of fast simulation approaches to more use cases, covering a larger fraction of the simulation budget, is only part of the solution due to intrinsic precision limitations. The remainder corresponds to speeding up the simulation software by several factors, which is out of reach using simple optimizations on the current code base. In this context, the GeantV R&D project was launched, aiming to redesign the legacy particle transport codes in order to make them benefit from fine-grained parallelism features such as vectorization, but also from increased code and data locality. This paper presents in detail the results and achievements of this R&D, as well as the conclusions and lessons learnt from the beta prototype.
    Comment: 34 pages, 26 figures, 24 tables

    Equipment management trial : final report

    Executive Summary The Equipment Management (EM) trial was one of the practical initiatives conceived and implemented by members of The Application Home Initiative (TAHI) to demonstrate the feasibility of interoperability between white and brown goods, and other domestic equipment. The trial ran from October 2002 to June 2005, over which period it achieved its core objectives through the deployment in early 2005 of an integrated system in trials in 15 occupied homes. Prior to roll out into the field, the work was underpinned by soak testing, validation, laboratory experiments, case studies, user questionnaires, simulations and other research, conducted in a single demonstration home in Loughborough, as well as in Universities in the East Midlands and Scotland. Throughout its life, the trial faced significant membership changes, which had a far greater impact than the technical issues that were tackled. Two blue chip companies withdrew at the point of signing the collaborative agreement; another made a major change in strategic direction half way through and withdrew the major portion of its backing; another corporate left at this point, a second one later; one corporate was a late entrant; the technical leader made a boardroom decision not to do the engineering work that it had promised; one company went into liquidation; another went up for sale whilst others reorganised. The trial was conducted against this backdrop of continual commercial change. Despite this difficult operating environment, the trial met its objectives, although not entirely as envisaged initially – a tribute to the determination of the trial’s membership, the strength of its formal governance and management processes, and especially, the financial support of the dti. 
The equipment on trial featured a central heating/hot water boiler, washing machine, security system, gas alarm and utility meters, all connected to a home gateway, integrated functionally and presented to the users via a single interface. The trial met its principal objective to show that by connecting appliances to each other and to a support system, benefits in remote condition monitoring, maintenance, appliance & home controls optimisation and convenience to the customer & service supplier could be provided. This is one of two main reports that form the trial output (the other, the Multi Home Trial Report, is available to EM Trial members only as it contains commercially sensitive information). A supporting library of documents is also available and is held in the virtual office hosted by the Loughborough University Centre for the Integrated Home Environment.

    Golgi anti-apoptotic proteins are highly conserved ion channels that affect apoptosis and cell migration.

    Golgi anti-apoptotic proteins (GAAPs) are multitransmembrane proteins that are expressed in the Golgi apparatus and are able to homo-oligomerize. They are highly conserved throughout eukaryotes and are present in some prokaryotes and orthopoxviruses. Within eukaryotes, GAAPs regulate the Ca(2+) content of intracellular stores, inhibit apoptosis, and promote cell adhesion and migration. Data presented here demonstrate that purified viral GAAPs (vGAAPs) and human Bax inhibitor 1 form ion channels and that vGAAP from camelpox virus is selective for cations. Mutagenesis of vGAAP, including some residues conserved in the recently solved structure of a related bacterial protein, BsYetJ, altered the conductance (E207Q and D219N) and ion selectivity (E207Q) of the channel. Mutation of residue Glu-207 or -178 reduced the effects of GAAP on cell migration and adhesion without affecting protection from apoptosis. In contrast, mutation of Asp-219 abrogated the anti-apoptotic activity of GAAP but not its effects on cell migration and adhesion. These results demonstrate that GAAPs are ion channels and define residues that contribute to the ion-conducting pore and affect apoptosis, cell adhesion, and migration independently. This work was supported by the United Kingdom Medical Research Council, the Biotechnology and Biological Sciences Research Council, and the Wellcome Trust. This is the final published version. It first appeared at http://www.jbc.org/content/290/18/11785.long

    Analytical Approximations to Predict Performance Measures of Manufacturing Systems with Job Failures and Parallel Processing

    Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are: multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and different configurations of parallel processing with multiple product classes, and job circulation due to random part failures. In addition, appropriate correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-join systems to model parallel processing. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances. However, it was observed that the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to deal with more than two products. The numerical comparisons showed that the approximations perform remarkably well when the correction factors are used. In general, the average flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case.
All the equations stated in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate different system performance measures and make judicious decisions, especially for setting delivery due dates, capacity planning, and bottleneck mitigation, among others.
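As a rough illustration of the kind of analytical approximation described above, the sketch below combines a textbook M/M/1 flow-time formula with a rework loop for job circulation and a placeholder correction factor. It is not the dissertation's model (which covers multiple product classes, fork-join stations, and regression-fitted correction terms), and all parameter values are hypothetical; Python is used in place of the Matlab scripts mentioned.

```python
def effective_arrival_rate(arrival_rate, failure_prob):
    """Job circulation: a failed job re-enters the station, so the
    effective arrival rate is inflated by 1 / (1 - p)."""
    return arrival_rate / (1.0 - failure_prob)

def mm1_flow_time(arrival_rate, service_rate, failure_prob=0.0):
    """M/M/1 flow time with a rework loop: W = 1 / (mu - lambda_eff)."""
    lam = effective_arrival_rate(arrival_rate, failure_prob)
    if lam >= service_rate:
        raise ValueError("unstable: effective utilization >= 1")
    return 1.0 / (service_rate - lam)

def corrected_flow_time(arrival_rate, service_rate, failure_prob,
                        correction=lambda rho: 1.0):
    """Analytical estimate scaled by a correction factor in the traffic
    intensity rho. The identity function here is a placeholder for the
    regression-fitted terms the research develops."""
    lam = effective_arrival_rate(arrival_rate, failure_prob)
    rho = lam / service_rate
    base = mm1_flow_time(arrival_rate, service_rate, failure_prob)
    return base * correction(rho)

# Hypothetical station: 10% rework, so lambda_eff = 0.6 / 0.9 ~ 0.667
# and the flow time W = 1 / (1.0 - 0.667) ~ 3.0 time units.
w = mm1_flow_time(arrival_rate=0.6, service_rate=1.0, failure_prob=0.1)
```

Even this toy version shows why correction terms help: the closed-form estimate is exact only under Markovian assumptions, and the gap to simulation grows with traffic intensity, which is where a fitted multiplier earns its keep.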

    Performance evaluation of shuttle-based storage and retrieval systems using discrete-time queueing network models

    Shuttle-based storage and retrieval systems (SBS/RSs) are an important part of today's warehouses. In this work, a new approach is developed that can be applied to model different configurations of SBS/RSs. The approach is based on modeling SBS/RSs as discrete-time open queueing networks and yields the complete probability distributions of the performance measures.