Simulation and optimization techniques applied in semiconductor assembly and test operations
The importance of back-end operations in semiconductor manufacturing has been growing steadily in the face of higher customer expectations and stronger competition in the industry. In order to achieve low cycle times, high throughput, and high utilization while improving due-date performance, more effective tools are needed to support machine setup and lot dispatching decisions. In previous work, the problem of maximizing the weighted throughput of lots undergoing assembly and test (AT), while ensuring that critical lots are given priority, was investigated, and a greedy randomized adaptive search procedure (GRASP) was developed to find solutions. Optimization techniques have long been used for scheduling manufacturing operations on a daily basis. Solutions provide a prescription for machine setups and job processing over a finite planning horizon. In contrast, simulation provides more detail, but in a descriptive sense: it tells you how the system will evolve in real time for a given demand, a given set of resources, and rules for using them. A simulation model can also easily accommodate changeovers, initial setups, and multi-pass requirements. The first part of the research shows how the results of an optimization model can be integrated with the decisions made within a simulation model. The problem addressed is defined in terms of four hierarchical objectives: minimize the weighted sum of key device shortages, maximize weighted throughput, minimize the number of machines used, and minimize makespan, for a given set of lots in queue and a set of resources that includes machines and tooling. The facility can be viewed as a reentrant flow shop. The basic simulation was written in AutoSched AP (ASAP) and then enhanced with the help of customization features available in the software. Several new dispatch rules were developed. Rule_First_setup initializes the simulation with the setups obtained with the GRASP.
Rule_All_setups enables a machine to select the setup provided by the optimization solution whenever a decision is about to be made on which setup to choose subsequent to the initial setup. Rule_Hotlot was also proposed to prioritize the processing of hot lots that contain key devices. The objective of the second part of the research is to design and implement heuristics within the simulation model to schedule back-end operations in a semiconductor AT facility. Rule_Setupnum lets the machines determine which key device to process according to a machine setup frequency table constructed from the GRASP solution. GRASP_asap embeds the more robust selection features of GRASP in the ASAP model through customization. This allows ASAP to explore a larger portion of the feasible region at each decision point by randomizing machine setups using adaptive probability distributions that are a function of solution quality. Rule_Greedy, a simplification of GRASP_asap, always picks the setup for a particular machine that gives the greatest marginal improvement in the objective function among all candidates. The purpose of the third part of the research is to statistically validate the relative effectiveness of our top six dispatch rules by comparing their performance on 30 real and randomly generated data sets. Using both GRASP and our ASAP discrete event simulation model, we have (1) identified the general order of dispatch rule performance, (2) investigated the impact of having setups installed on machines at time zero on rule performance, (3) determined the conditions under which restricting the maximum number of changeovers affects rule performance, and (4) studied the factors that might simultaneously affect rule performance with the help of a common random numbers experimental design. In the analysis, the first two objectives, weighted key device shortages and weighted throughput, are used to measure outcomes.
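The quality-biased randomized selection behind GRASP_asap, and the purely greedy choice of Rule_Greedy, can be sketched as follows. The candidate scores, the alpha parameter of the restricted candidate list, and the function names are illustrative assumptions, not the implementation described in the abstract:

```python
import random

def grasp_select_setup(candidates, alpha=0.3, rng=None):
    """GRASP-style setup choice: keep a restricted candidate list (RCL)
    of setups whose marginal objective improvement is within an alpha
    band of the best, then sample one with probability proportional to
    its quality. `candidates` maps setup name -> marginal improvement."""
    rng = rng or random.Random()
    best = max(candidates.values())
    worst = min(candidates.values())
    threshold = best - alpha * (best - worst)
    rcl = {s: v for s, v in candidates.items() if v >= threshold}
    # Adaptive bias: better setups receive proportionally higher probability.
    setups, weights = zip(*rcl.items())
    return rng.choices(setups, weights=weights, k=1)[0]

def greedy_select_setup(candidates):
    """Rule_Greedy analogue: always take the greatest marginal improvement."""
    return max(candidates, key=candidates.get)
```

With alpha = 0 the randomized rule collapses to the greedy one; larger alpha widens the candidate list and lets the simulation explore more of the feasible region at each decision point.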
Intelligent shop scheduling for semiconductor manufacturing
Semiconductor market sales have expanded massively to more than 200 billion dollars annually, accompanied by increased pressure on manufacturers to provide higher quality products at lower cost in order to remain competitive. Scheduling of semiconductor manufacturing is one of the keys to increasing productivity; however, the complexity of manufacturing high capacity semiconductor devices and the cost considerations mean that it is impossible to experiment within the facility. There is an immense need for effective decision support models that characterize and analyze the manufacturing process and allow the effect of changes in the production environment to be predicted, in order to increase utilization and enhance system performance. Although many simulation models have been developed within semiconductor manufacturing, very little research on the simulation of the photolithography process has been reported, even though semiconductor manufacturers have recognized that the scheduling of photolithography is one of the most important and challenging tasks due to the complex nature of the process.
Traditional scheduling techniques and existing approaches show some benefits for solving small and medium-sized, straightforward scheduling problems. However, they have had limited success in solving complex scheduling problems with stochastic elements in an economic timeframe. This thesis presents a new methodology that combines advanced solution approaches such as simulation, artificial intelligence, system modeling, and Taguchi methods to schedule a photolithography toolset. A new structured approach was developed to effectively support building the simulation models. A single tool model and a complete toolset model were developed using this approach and shown to have less than 4% deviation from actual production values. The use of an intelligent scheduling agent for the toolset model shows an average 15% improvement in simulated throughput time, and the agent is currently in use for scheduling the photolithography toolset in a manufacturing plant.
A generic method for energy-efficient and energy-cost-effective production at the unit process level
WIP control at end of line of semiconductor industry using CONWIP
Advancement of technology and trends in globalization have resulted in higher customer demands and expectations. Manufacturers now offer mass customization to stay competitive. In the semiconductor industry, where product mix and volume are high, production is further complicated by the different process routes and processing times for different product families. Coupled with rapid changeovers of products, it is essential to keep the work in process (WIP) low in order to reduce the inventory level on the shop floor. Constant WIP (CONWIP) is a production control strategy, applicable in many manufacturing environments, that uses cards to control the WIP level. This research was conducted in a semiconductor manufacturing company facing difficulty in reducing the variation in WIP on the shop floor. The objectives of this research are to design and develop simulation models for single loop CONWIP, multi loop CONWIP, hybrid CONWIP, and single loop and multi loop CONWIP with buffer size optimization, based on the environment in the case company. With the developed models, the maximum throughput (TH) and minimum WIP were determined. Discrete event simulation models were developed using the Witness software for processes at the End of Line (EOL) production in the company. Experiments were conducted using these models to compare the current system with the single loop, multi loop, and hybrid CONWIP control mechanisms. In addition, buffer optimization incorporating single loop and multi loop control was also examined. Performance parameters of TH and WIP level were compared in all experiments. The results show that CONWIP production control is more effective in reducing the WIP level compared to the current system. Secondly, the single loop CONWIP showed the least number of cards in the system. However, hybrid CONWIP is more robust and provides a better control mechanism compared to the single and multi loop systems. Buffer optimization can further reduce the number of cards in single and multi loop control. The developed simulation models are useful for determining the number of cards in the system and the buffer size for each process. With these models, production personnel can monitor and control the WIP dynamically to meet current demands and utilize the shop floor space for more productive purposes.
A Fuzzy Nonlinear Programming Approach for Optimizing the Performance of a Four-Objective Fluctuation Smoothing Rule in a Wafer Fabrication Factory
In theory, a scheduling problem can be formulated as a mathematical programming problem. In practice, dispatching rules are considered to be a more practical method of scheduling. However, the combination of mathematical programming and fuzzy dispatching rules has rarely been discussed in the literature. In this study, a fuzzy nonlinear programming (FNLP) approach is proposed for optimizing the scheduling performance of a four-factor fluctuation smoothing rule in a wafer fabrication factory. The proposed methodology considers the uncertainty in the remaining cycle time of a job and optimizes a fuzzy four-factor fluctuation-smoothing rule to sequence the jobs in front of each machine. The fuzzy four-factor fluctuation-smoothing rule has five adjustable parameters, the optimization of which results in an FNLP problem. The FNLP problem can be converted into an equivalent nonlinear programming (NLP) problem to be solved. The performance of the proposed methodology has been evaluated in a series of production simulation experiments; these experiments provide sufficient evidence to support the advantages of the proposed method over some existing scheduling methods.
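The core idea of sequencing by a fuzzy remaining-cycle-time estimate can be sketched as follows. This is a generic one-factor illustration with a triangular fuzzy number defuzzified by its centroid; the job data, slack definition, and function names are assumptions, not the paper's four-factor rule or its five optimized parameters:

```python
def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def fluctuation_smoothing_order(jobs):
    """Sequence the jobs in front of a machine by a fluctuation-smoothing
    style slack: release time plus the defuzzified estimate of the
    remaining cycle time, smallest first. `jobs` is a list of
    (name, release_time, (a, b, c)) tuples, where (a, b, c) is a
    triangular fuzzy estimate of the remaining cycle time."""
    return sorted(jobs, key=lambda j: j[1] + defuzzify(j[2]))
```

In the FNLP setting, the weights blending such factors would themselves be decision variables tuned against the simulated cycle-time performance.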
Modelling and condition-based control of a flexible and hybrid disassembly system with manual and autonomous workstations using reinforcement learning
Remanufacturing includes disassembly and reassembly of used products to save natural resources and reduce emissions. While assembly is widely understood in the field of operations management, disassembly is a rather new problem in production planning and control. The latter faces the challenge of high uncertainty in the type, quantity, and quality condition of returned products, leading to high volatility in remanufacturing production systems. Traditionally, disassembly is a manual, labor-intensive production step that, thanks to advances in robotics and artificial intelligence, is starting to be automated with autonomous workstations. Due to the diverging material flow, production systems with loosely linked stations are particularly suitable, and, owing to the risk of condition-induced operational failures, the rise of hybrid disassembly systems that combine manual and autonomous workstations can be expected. In contrast to traditional workstations, autonomous workstations can expand their capabilities but suffer from unknown failure rates. For such adverse conditions, a condition-based control for hybrid disassembly systems based on reinforcement learning (RL), alongside a comprehensive modeling approach, is presented in this work. The method is applied to a real-world production system. By comparison with a heuristic control approach, the potential of the RL approach is demonstrated in simulation using two different test cases.
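The flavor of condition-based control with RL can be sketched as a minimal tabular Q-learning loop that routes each returned core to either the autonomous or the manual workstation based on its condition. The states, actions, reward values, and hyperparameters are invented for illustration and do not reproduce the paper's model:

```python
import random

def q_learning_routing(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning sketch for condition-based routing in a hybrid
    disassembly system. Each episode is one core: observe its condition,
    pick a workstation eps-greedily, receive a noisy reward, and update
    Q. Rewards encode that the autonomous station is fast on good cores
    but fails often on poor ones (assumed values)."""
    rng = random.Random(seed)
    states = ["good", "poor"]
    actions = ["auto", "manual"]
    reward = {("good", "auto"): 1.0, ("good", "manual"): 0.4,
              ("poor", "auto"): -0.5, ("poor", "manual"): 0.6}
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)               # condition of the next core
        if rng.random() < eps:               # explore
            a = rng.choice(actions)
        else:                                # exploit current estimate
            a = max(actions, key=lambda x: q[(s, x)])
        r = reward[(s, a)] + rng.gauss(0, 0.1)   # noisy outcome
        # One-step episodes: no successor state, so the target is just r.
        q[(s, a)] += alpha * (r - q[(s, a)])
    return {s: max(actions, key=lambda x: q[(s, x)]) for s in states}
```

After enough episodes the learned policy sends good cores to the autonomous station and poor ones to the manual station, mirroring the idea of exploiting automation where its failure risk is low.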