90 research outputs found

    Scheduling of Batch Processors in Semiconductor Manufacturing – A Review

    Get PDF
    This paper presents a review of the scheduling of batch processors (SBP) in semiconductor manufacturing (SM). It classifies the SBP-in-SM literature into 12 groups; the suggested classification scheme organizes the literature and summarizes current research results for the different problem types. The distribution of work across the classes is reported, and the various methodologies applied to SBP in SM are briefly highlighted. A comprehensive list of references is provided. It is hoped that this review will serve as a source for researchers and readers interested in SBP in SM and help stimulate further interest. Singapore-MIT Alliance (SMA)

    Concurrent design for optimal quality and cycle time

    Get PDF
    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2001. Includes bibliographical references (p. 113-116). Product and manufacturing system design are the core issues in product development and dominate the profitability of a company. To assess and optimize product and manufacturing system design, an objective evaluation framework is needed. Despite the many existing tools for product and manufacturing system design, there is a missing link between product design and production performance under system variability. The goal of the thesis is to explore and understand the interactions among part design and tolerancing, process and system variability, and system control decisions, and then to provide an integrated model for assessing the total cost of a system. The model is used to aid part design, tolerancing, and batching, as well as strategy analysis for process improvement. A two-stage modeling approach is used: quality prediction and production prediction. The quality prediction model projects process variations into output quality variations at each manufacturing stage, then predicts the yield rate from the stochastic behavior of the variations and the tolerances. The production prediction model projects the demand rate and its variability, processing times and their variability, yield rates, and batch sizes into manufacturing cycle time and inventories. Once performance has been predicted by these two models, concurrent optimization of part design, tolerances, and batch sizes is achieved by varying them to find the minimum cost. A case study at the Boeing Tube shop illustrates this approach. The results show that costless decisions in part design, tolerancing, and batch sizes can significantly improve system performance, and that making these decisions separately, or without using system performance as the evaluation criterion, may lead only to local optima. by Yu-Feng Wei. Ph.D.
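
    To make the two-stage approach concrete, below is a minimal sketch of the evaluation loop it describes: a quality model mapping tolerance and process variation to yield, a production model mapping batch size to cycle time, and a joint search for the minimum-cost combination. The normal variation model, the M/M/1-style queue, and all cost coefficients are illustrative assumptions, not the thesis's actual models.

```python
import math

def yield_rate(tolerance, process_sigma):
    """Quality prediction: P(|deviation| <= tolerance) for a
    zero-mean normal process variation."""
    z = tolerance / process_sigma
    return math.erf(z / math.sqrt(2))

def cycle_time(batch_size, demand_rate, unit_time, setup_time):
    """Production prediction: M/M/1-style approximation treating each
    batch as one job; returns expected time in system per batch."""
    service_time = setup_time + batch_size * unit_time
    arrival_rate = demand_rate / batch_size      # batches per hour
    rho = arrival_rate * service_time
    if rho >= 1.0:
        return float("inf")                      # unstable system
    return service_time / (1.0 - rho)

def total_cost(tolerance, batch_size):
    y = yield_rate(tolerance, process_sigma=0.05)
    ct = cycle_time(batch_size, demand_rate=10.0,
                    unit_time=0.05, setup_time=0.5)
    scrap_cost = (1.0 - y) * 100.0       # defective-part penalty
    holding_cost = ct * 2.0              # time-in-system penalty
    precision_cost = 1.0 / tolerance     # tighter tolerance costs more
    return scrap_cost + holding_cost + precision_cost

# Concurrent optimization: vary tolerance and batch size together.
best = min(((total_cost(t / 100.0, b), t / 100.0, b)
            for t in range(5, 30) for b in range(1, 50)),
           key=lambda x: x[0])
print("cost=%.2f  tolerance=%.2f  batch size=%d" % best)
```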

    Best matching processes in distributed systems

    Get PDF
    The growing complexity and dynamic behavior of modern manufacturing and service industries, along with competitive and globalized markets, have gradually transformed traditional centralized systems into distributed networks of e- (electronic) Systems. Emerging examples include e-Factories, virtual enterprises, smart farms, automated warehouses, and intelligent transportation systems. These (and similar) distributed systems, regardless of context and application, have a property in common: they all involve certain types of interactions (collaborative, competitive, or both) among their distributed individuals, from clusters of passive sensors and machines to complex networks of computers, intelligent robots, humans, and enterprises. Having this common property, such systems may encounter common challenges in terms of suboptimal interactions and thus poor performance, caused by potential mismatches between individuals. For example, mismatched subassembly parts, vehicle-route assignments, supplier-retailer pairings, employee-department assignments, and product-automated guided vehicle-storage location allocations may lead to low-quality products, congested roads, unstable supply networks, conflicts, and low service levels, respectively. This research refers to this problem as best matching and investigates it as a major design principle of CCT, the Collaborative Control Theory. The original contribution of this research is to elaborate on the fundamentals of best matching in distributed and collaborative systems by providing general frameworks for (1) systematic analysis, inclusive taxonomy, and analogical and structural comparison between different matching processes; (2) specification and formulation of problems, and development of algorithms and protocols for best matching; (3) validation of the models, algorithms, and protocols through extensive numerical experiments and case studies. The first goal is addressed by investigating matching problems in distributed production, manufacturing, supply, and service systems based on a recently developed reference model, the PRISM Taxonomy of Best Matching. Following the second goal, the identified problems are then formulated as mixed-integer programs. Due to the computational complexity of matching problems, various optimization algorithms are developed for solving different problem instances, including modified genetic algorithms, tabu search, and neighbourhood search heuristics. The dynamic and collaborative/competitive behaviors of matching processes in distributed settings are also formulated and examined through various collaboration, best matching, and task administration protocols. In line with the third goal, four case studies are conducted on various manufacturing, supply, and service systems to highlight the impact of best matching on their operational performance, including service level, utilization, stability, and cost-effectiveness, and to validate the computational merits of the developed solution methodologies.
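
    As an illustration of the simplest member of this problem family, the sketch below solves a one-to-one matching with linear mismatch costs as a classic assignment problem. The supplier-retailer framing and the cost values are illustrative assumptions; the many-to-many, dynamic instances discussed above require the mixed-integer formulations and metaheuristics the thesis develops.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# mismatch_cost[i][j]: cost of pairing supplier i with retailer j
# (e.g., distance, lead time, or an incompatibility penalty)
mismatch_cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])

rows, cols = linear_sum_assignment(mismatch_cost)  # Hungarian method
for i, j in zip(rows, cols):
    print(f"supplier {i} -> retailer {j} (cost {mismatch_cost[i, j]})")
print("total mismatch cost:", mismatch_cost[rows, cols].sum())
```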

    Application of lean scheduling and production control in non-repetitive manufacturing systems using intelligent agent decision support

    Get PDF
    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Lean Manufacturing (LM) is widely accepted as a world-class manufacturing paradigm; its currency and superiority are manifested in numerous recent success stories. Most lean tools, including Just-in-Time (JIT), were designed for repetitive serial production systems. This resulted in a substantial stream of research which dismissed a priori the suitability of LM for non-repetitive non-serial job-shops. The extension of LM into non-repetitive production systems is opposed on the basis of the sheer complexity of applying JIT pull production control in non-repetitive systems fabricating a high variety of products. However, the application of LM in job-shops is not unexplored. Studies proposing the extension of leanness into non-repetitive production systems have promoted the modification of pull control mechanisms or the reconfiguration of job-shops into cellular manufacturing systems. This thesis sought to address the shortcomings of the aforementioned approaches. The contribution of this thesis to knowledge in the field of production and operations management is threefold. Firstly, a Multi-Agent System (MAS) is designed to directly apply pull production control to a good approximation of a real-life job-shop. The scale and complexity of the developed MAS prove that the application of pull production control in non-repetitive manufacturing systems is challenging, perplexing, and laborious. Secondly, the thesis examines three pull production control mechanisms, namely Kanban, Base Stock, and Constant Work-in-Process (CONWIP), which it enhances to prevent system deadlocks, an issue largely unaddressed in the relevant literature. Having successfully tested the transferability of pull production control to non-repetitive manufacturing, the third contribution of this thesis is that it uses experimental and empirical data to examine the impact of pull production control on job-shop performance. The thesis identifies issues resulting from the application of pull control in job-shops which have implications for industry practice, and concludes by outlining further research that can be undertaken in this direction.
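
    For readers unfamiliar with the third mechanism, the sketch below shows the core CONWIP rule: a job is released to the floor only while total work-in-process is below a fixed cap, and each completion pulls the next job in. The job names, cap, and single-pool shop are illustrative assumptions; a real job-shop model (and the deadlock prevention examined in the thesis) would also track routings and machine states.

```python
import collections

WIP_CAP = 5                         # constant work-in-process limit
backlog = collections.deque(f"job-{i}" for i in range(12))
shop_floor = set()                  # jobs currently in process

def try_release():
    """Release jobs only while shop WIP is below the CONWIP cap."""
    while backlog and len(shop_floor) < WIP_CAP:
        job = backlog.popleft()
        shop_floor.add(job)
        print("released", job, "| WIP =", len(shop_floor))

def complete(job):
    """A completed job frees a CONWIP card, pulling the next job in."""
    shop_floor.discard(job)
    try_release()

try_release()        # fill the line up to the cap (job-0 .. job-4)
complete("job-0")    # finishing one job immediately pulls job-5 in
```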

    Design and Management of Manufacturing Systems

    Get PDF
    Although the design and management of manufacturing systems have been explored in the literature for many years now, they still remain topical problems in current scientific research. Changing market trends, globalization, constant pressure to reduce production costs, and technical and technological progress make it necessary to search for new manufacturing methods and ways of organizing them, and to modify manufacturing system design paradigms. This book presents current research in different areas connected with the design and management of manufacturing systems and covers the following subject areas: methods supporting the design of manufacturing systems; methods of improving maintenance processes in companies; the design and improvement of manufacturing processes; the control of production processes in modern manufacturing systems; production methods and techniques used in modern manufacturing systems; and environmental aspects of production and their impact on the design and management of manufacturing systems. The wide range of research findings reported in this book confirms that the design of manufacturing systems is a complex problem, and that the achievement of goals set for modern manufacturing systems requires interdisciplinary knowledge and the simultaneous design of the product, process, and system, as well as knowledge of modern manufacturing and organizational methods and techniques.

    Architecting Efficient Data Centers.

    Full text link
    Data center power consumption has become a key constraint in continuing to scale Internet services. As our society's reliance on "the Cloud" continues to grow, companies require an ever-increasing amount of computational capacity to support their customers. Massive warehouse-scale data centers have emerged, requiring 30MW or more of total power capacity. Over the lifetime of a typical high-scale data center, power-related costs make up 50% of the total cost of ownership (TCO). Furthermore, the aggregate effect of data center power consumption across the country cannot be ignored. In total, data center energy usage has reached approximately 2% of aggregate consumption in the United States and continues to grow. This thesis addresses the need for greater computational efficiency in the face of this growing problem. It proposes a new class of power management techniques: coordinated full-system idle low-power modes to increase the energy proportionality of modern servers. First, we introduce the PowerNap server architecture, a coordinated full-system idle low-power mode which transitions in and out of an ultra-low-power nap state to save power during brief idle periods. While effective for uniprocessor systems, PowerNap relies on full-system idleness, and we show that such idleness disappears as the number of cores per processor continues to increase. We expose this problem in a case study of Google Web search, in which we demonstrate that coordinated full-system active low-power modes are necessary to reach energy proportionality and that PowerNap is ineffective because of a lack of idleness. To recover full-system idleness, we introduce DreamWeaver, architectural support for deep sleep. DreamWeaver allows a server to exchange latency for full-system idleness, enabling PowerNap-equipped servers to remain effective, and provides a better latency-power savings tradeoff than existing approaches. Finally, this thesis investigates workloads which achieve efficiency through methodical cluster provisioning techniques. Using the popular memcached workload, this thesis provides examples of provisioning clusters for cost-efficiency given latency, throughput, and data set size targets. Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/91499/1/meisner_1.pd
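
    The sketch below illustrates the energy accounting behind a PowerNap-style idle low-power mode: idle periods longer than the round-trip transition time are spent napping, shorter ones are not. The power levels and transition time are illustrative assumptions, not measurements from the thesis.

```python
P_ACTIVE = 300.0   # watts while serving work (assumed)
P_NAP    = 10.0    # watts in the nap state (assumed)
T_TRANS  = 0.001   # seconds to enter or exit nap, paid at full power

def avg_power(busy_periods, idle_periods):
    """Average power when every idle period longer than the round-trip
    transition time is spent in the nap state."""
    energy = sum(busy_periods) * P_ACTIVE
    for idle in idle_periods:
        if idle > 2 * T_TRANS:
            energy += 2 * T_TRANS * P_ACTIVE + (idle - 2 * T_TRANS) * P_NAP
        else:
            energy += idle * P_ACTIVE      # too short to nap
    return energy / (sum(busy_periods) + sum(idle_periods))

# Many brief (millisecond-scale) idle periods still allow large savings:
print(avg_power(busy_periods=[0.005] * 100, idle_periods=[0.020] * 100))
```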

    On the distribution of throughput of transfer lines

    Get PDF
    Ankara: Department of Industrial Engineering and the Institute of Engineering and Sciences of Bilkent University, 1998. Thesis (Master's) -- Bilkent University, 1998. Includes bibliographical references (leaves 86-107). A transfer line is a manufacturing system consisting of a number of workstations in series, integrated into one system by a common transfer mechanism and a control system. There is a vast literature on transfer lines. However, little has been done on the transient analysis of these systems using the higher-order moments of their performance measures, owing to the difficulty of determining the evolution of the underlying stochastic processes. This thesis examines the transient behavior of relatively short transfer lines and derives the distribution of the performance measures of interest. The proposed method, based on the analytical derivation of the distribution of throughput, is also applied to systems with two part types. An experiment is designed to compare the results of this study with state-space representations and with simulation. The results are also interpreted from the point of view of line behavior and design issues. Furthermore, extensions are briefly discussed and directions for future research are suggested. by Bahar Deler. M.S.
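
    While the thesis derives the throughput distribution analytically, the same finite-horizon distribution can also be estimated by Monte Carlo simulation, as sketched below. The two-station geometric failure/repair model and all parameters are illustrative assumptions, not the thesis's model.

```python
import random

def run_line(cycles, p_fail=0.05, p_repair=0.3, buffer_cap=3):
    """Two unreliable stations with an intermediate buffer; returns the
    number of parts produced within `cycles` time steps."""
    up = [True, True]
    buf, produced = 0, 0
    for _ in range(cycles):
        for m in (0, 1):                 # geometric failures/repairs
            up[m] = ((random.random() > p_fail) if up[m]
                     else (random.random() < p_repair))
        if up[0] and buf < buffer_cap:
            buf += 1                     # station 1 feeds the buffer
        if up[1] and buf > 0:
            buf -= 1                     # station 2 consumes and ships
            produced += 1
    return produced

# Replicate to estimate the finite-horizon throughput distribution.
samples = [run_line(cycles=200) for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(f"throughput over 200 cycles: mean={mean:.1f}, variance={var:.1f}")
```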

    Performance of sequential batching-based methods of output data analysis in distributed steady-state stochastic simulation

    Get PDF
    We investigated the feasibility of sequential methods of analysis of stochastic simulation in an environment of Multiple Replications in Parallel (MRIP). The main idea is twofold: the automation of statistical control and the speedup of simulation experiments. The methods of analysis suggested in the literature were conceived for a single-processor environment, and very little is known about the application of procedures based on such methods under MRIP. At first glance, the two goals are in opposition, since one needs a large number of observations to achieve good quality of the results, i.e., the simulation frequently takes a long time. However, by means of a careful design, together with a robust simulation tool based on independent replications, one can produce an efficient instrument for the analysis of simulation results.
This research began with a sequential version of the classical method of Nonoverlapping Batch Means (NOBM). Although intuitive and popular, under high traffic intensity NOBM offers no good solution to the problem of strong correlation among the observations, and it is not worthwhile to apply more computing power in an attempt to diminish this negative effect. We confirmed this claim by means of a detailed and exhaustive analysis of four queuing systems. We therefore proposed the design of sequential versions of several Batch Means variants and investigated their statistical properties under MRIP. Among the implemented procedures, one is very attractive: Overlapping Batch Means (OBM). OBM makes better use of the collected data, since each observation initiates a new (overlapped) batch; that is, the number of batches is much larger, which yields a smaller variance. In this case MRIP is highly recommended, since the combination requires fewer observations and therefore provides speedup. During the research we also investigated a class of methods based on Standardized Time Series (STS), which theoretically produces better asymptotic results than NOBM. The undesired side effect of STS is the large number of observations it requires compared to NOBM, but that is no obstacle when STS is applied together with MRIP; the experimental investigation confirmed this hypothesis. The next phase was to tune OBM and STS so that they could work with the largest possible number of processors. A case study showed that both procedures are suited to the MRIP environment.
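
    A minimal sketch of the NOBM building block and its sequential stopping rule is shown below; OBM differs only in starting an overlapped batch at every observation. The AR(1) test stream, batch count, and precision target are illustrative assumptions.

```python
import math
import random

def batch_means_ci(data, n_batches=20, t_quantile=2.093):
    """Split data into nonoverlapping batches and return the grand mean
    plus a CI half-width from the variance of the batch means.
    t_quantile is approximately t(0.975, n_batches - 1)."""
    m = len(data) // n_batches                 # observations per batch
    means = [sum(data[i * m:(i + 1) * m]) / m for i in range(n_batches)]
    grand = sum(means) / n_batches
    s2 = sum((x - grand) ** 2 for x in means) / (n_batches - 1)
    return grand, t_quantile * math.sqrt(s2 / n_batches)

# Sequential rule: collect observations until the precision target is met.
data, x = [], 0.0
while True:
    x = 0.9 * x + random.gauss(0.0, 1.0)       # correlated AR(1) stream
    data.append(x)
    if len(data) >= 2000 and len(data) % 1000 == 0:
        mean, hw = batch_means_ci(data)
        if hw <= 0.05 * max(abs(mean), 1.0):   # stop when precise enough
            break
print(f"stopped at n={len(data)}: mean={mean:.3f} +/- {hw:.3f}")
```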