5 research outputs found

    Stochastic optimization of product-machine qualification in a semiconductor back-end facility

    Abstract: In order to process a product in a semiconductor back-end facility, a machine must first be qualified: product-specific software is installed, and test wafers are run through the machine to verify that it can perform the process correctly. In general, not all machines are qualified to process all products, owing to the high cost of machine qualification and limited tool set availability. The machine qualification decision affects future capacity allocation in the facility and, subsequently, daily production schedules. To balance the trade-off between current machine qualification costs and potential future backorder costs incurred when too few machines are qualified under uncertain demand, a stochastic product–machine qualification optimization model is proposed in this article. The L-shaped method and acceleration techniques are proposed to solve the stochastic model, and computational results are provided to show the necessity of the stochastic model and the performance of the different solution methods. This is an Author's Accepted Manuscript of an article published as: Fu, Mengying, Askin, Ronald, Fowler, John, & Zhang, Muhong (2015). Stochastic optimization of product-machine qualification in a semiconductor back-end facility. IIE Transactions, 47(7), 739-750. DOI: 10.1080/0740817X.2014.964887. Copyright Taylor & Francis; available online at: http://www.tandfonline.com/doi/abs/10.1080/0740817X.2014.964887
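    As a toy illustration of the trade-off this abstract describes, the sketch below chooses which (product, machine) pairs to qualify up front and then evaluates the expected backorder cost over a handful of demand scenarios. Exhaustive search over a small instance stands in for the paper's L-shaped method, and every number, name, and the greedy second-stage allocation is a hypothetical assumption, not the authors' model.

        # Hypothetical toy instance of the qualification trade-off; brute force,
        # not the paper's L-shaped decomposition.
        import itertools

        PRODUCTS = ["P1", "P2"]
        MACHINES = ["M1", "M2", "M3"]
        QUAL_COST = 10.0        # assumed cost to qualify one machine for one product
        CAPACITY = 100          # assumed units one machine processes per period
        BACKORDER_COST = 1.0    # assumed penalty per unit of unmet demand

        # Equally likely demand scenarios (demand per product), all made up.
        SCENARIOS = [
            {"P1": 150, "P2": 80},
            {"P1": 250, "P2": 120},
            {"P1": 90, "P2": 260},
        ]

        def backorder_cost(qualified, demand):
            # Greedy second-stage allocation: each machine serves the qualified
            # product with the most unmet demand (a stand-in for the recourse LP).
            unmet = dict(demand)
            for m in MACHINES:
                options = [p for p in PRODUCTS if (p, m) in qualified]
                if not options:
                    continue
                p = max(options, key=lambda q: unmet[q])
                unmet[p] = max(0, unmet[p] - CAPACITY)
            return BACKORDER_COST * sum(unmet.values())

        pairs = list(itertools.product(PRODUCTS, MACHINES))
        best = None
        for mask in itertools.product([0, 1], repeat=len(pairs)):
            qualified = {pair for pair, keep in zip(pairs, mask) if keep}
            # First-stage qualification cost plus expected second-stage cost.
            cost = QUAL_COST * len(qualified) + sum(
                backorder_cost(qualified, s) for s in SCENARIOS) / len(SCENARIOS)
            if best is None or cost < best[0]:
                best = (cost, qualified)

        print("expected total cost: %.1f" % best[0])
        print("qualify:", sorted(best[1]))

    On such an instance, each additional qualified (product, machine) pair trades a fixed qualification cost against the reduction in expected backorders it buys, which is the balance the stochastic model formalizes and the L-shaped method exploits at scale.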

    A survey of scheduling problems with setup times or costs

    Author names used in this publication: C. T. Ng; T. C. E. Cheng

    Theory of Resource Allocation for Robust Distributed Computing

    Lately, distributed computing (DC) has emerged in several application scenarios such as grid computing, high-performance and reconfigurable computing, wireless sensor networks, battle management systems, peer-to-peer networks, and donation grids. When DC is performed in these scenarios, the distributed computing system (DCS) supporting the applications not only exhibits heterogeneous computing resources and significant communication latency, but also becomes highly dynamic, because both the communication network and the computing servers are affected by a wide class of anomalies that change the topology of the system in a random fashion. These anomalies exhibit spatial and/or temporal correlation when they result, for instance, from wide-area power or network outages. Such correlated failures may not only inflict a large amount of damage on the system, but may also induce further failures in other servers as a result of the lack of reliable communication between the components of the DCS. In order to provide a robust DC environment in the presence of component failures, it is key to develop a general framework for accurately modeling the complex dynamics of a DCS.

    In this dissertation a novel approach has been undertaken for modeling a general class of DCSs and for analytically characterizing the performance and reliability of parallel applications executed on such systems. A general probabilistic model has been constructed by assuming that the random times governing the dynamics of the DCS follow arbitrary probability distributions with heterogeneous parameters. Auxiliary age variables have been introduced in the modeling of a DCS, and a hybrid continuous and discrete state-space model of the system has been constructed. This hybrid model has enabled the development of an age-dependent stochastic regeneration theory, which, in turn, has been employed to analytically characterize the average execution time, the quality of service, and the reliability in serving an application; these are three metrics of performance and reliability of practical interest in DC. Analytical approximations as well as mathematical lower and upper bounds for these metrics have also been derived in an attempt to reduce the amount of computational resources demanded by the exact characterizations.

    In order to systematically assess the reliability of DCSs in the presence of correlated component failures, a novel probabilistic model for spatially correlated failures has been developed. The model, based on graph theory and Markov random fields, captures both the geographical and the logical correlations induced by the arbitrary topology of the communication network of a DCS. The modeling framework, in conjunction with a general class of dynamic task reallocation (DTR) control policies, has been used to optimize the performance and reliability of applications in the presence of independent as well as spatially correlated anomalies. Theoretical predictions, Monte Carlo simulations, and experimental results have shown that optimizing these metrics can significantly impact the performance of a DCS. Moreover, the general setting developed here has shed insight on: (i) the effect of different stochastic models on the accuracy of the performance and reliability metrics; (ii) the dependence of the DTR policies on system parameters such as failure rates and task-processing rates; (iii) the severe impact of correlated failures on the reliability of DCSs; (iv) the dependence of the DTR policies on the degree of correlation in the failures; and (v) the fundamental trade-off between minimizing the execution time of an application and maximizing its reliability.
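    For intuition, here is a rough Monte Carlo sketch of the correlated-failure effect discussed above: servers fail at a base per-step rate, and each failed network neighbor raises a server's failure probability, a crude stand-in for the Markov-random-field model developed in the dissertation. The topology, rates, horizon, and survival criterion are all hypothetical.

        # Hypothetical Monte Carlo sketch: reliability under independent vs.
        # spatially correlated server failures on a 4-server line network.
        import random

        NEIGHBORS = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # assumed topology
        BASE_P = 0.05      # assumed per-step failure probability when healthy
        COUPLING = 0.30    # assumed added probability per failed neighbor
        STEPS = 20         # assumed steps needed to finish the application
        TRIALS = 20_000

        def app_survives(coupling):
            # True if at least one server stays up for all STEPS (a crude
            # survival criterion, not the dissertation's QoS metrics).
            up = {s: True for s in NEIGHBORS}
            for _ in range(STEPS):
                fell = []
                for s, alive in up.items():
                    if not alive:
                        continue
                    p = BASE_P + coupling * sum(not up[n] for n in NEIGHBORS[s])
                    if random.random() < p:
                        fell.append(s)
                for s in fell:
                    up[s] = False
                if not any(up.values()):
                    return False
            return True

        random.seed(0)
        for label, c in [("independent", 0.0), ("correlated", COUPLING)]:
            rel = sum(app_survives(c) for _ in range(TRIALS)) / TRIALS
            print("%-11s failures: reliability ~ %.3f" % (label, rel))

    Setting the coupling to zero recovers independent failures; raising it illustrates, qualitatively, the severe reliability degradation that spatially correlated failures inflict relative to independent ones.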

    Energy Management

    Forecasts point to a huge increase in energy demand over the next 25 years, with a direct and immediate impact on the exhaustion of fossil fuels, rising pollution levels, and global warming, which will have significant consequences for all sectors of society. Irrespective of the likelihood of these predictions, or of what researchers in different scientific disciplines may believe or publicly say about how critical the energy situation may be at a world level, it is without doubt one of the great debates that has stirred up public interest in modern times. We should probably already be thinking about the design of a worldwide strategic plan for energy management. It would include measures to raise awareness, educate the different actors involved, develop policies, provide resources, prioritise actions and establish contingency plans. This process is complex and depends on political, social, economic and technological factors that are hard to take into account simultaneously. Even before such a plan is formulated, studies such as those described in this book can serve to illustrate what Information and Communication Technologies have to offer in this sphere and, with luck, to create a reference that encourages investigators in the pursuit of new and better solutions.