
    Aggregate modeling in semiconductor manufacturing using effective process times

    In modern manufacturing, model-based performance analysis is becoming increasingly important due to growing competition and high capital investments. In this PhD project, the performance of a manufacturing system is considered in terms of throughput (number of products produced per time unit), cycle time (time that a product spends in a manufacturing system), and the amount of work in process (number of products in the system). The focus of this project is on semiconductor manufacturing. Models facilitate performance improvement by providing a systematic connection between operational decisions and performance measures. Two common model types are analytical models and discrete-event simulation models. Analytical models are fast to evaluate, though incorporating all relevant factory-floor aspects is difficult. Discrete-event simulation models allow for the inclusion of almost any factory-floor aspect, such that a high prediction accuracy can be achieved. However, this comes at the cost of long computation times. Furthermore, data on all the modeled aspects may not be available. The number of factory-floor aspects that have to be modeled explicitly can be reduced significantly through aggregation. In this dissertation, simple aggregate analytical or discrete-event simulation models are considered, with only a few parameters such as the mean and the coefficient of variation of an aggregated process time distribution. The aggregate process time lumps together all the relevant aspects of the considered system and is referred to as the Effective Process Time (EPT) in this dissertation. The EPT may be calculated from the raw process time and the outage delays, such as machine breakdown and setup. However, data on all the outages is often not available. This motivated previous research at the TU/e to develop algorithms that can determine the EPT distribution directly from arrival and departure times, without quantifying the contributing factors. Semiconductor machines typically perform a sequence of processes in the various machine chambers, such that wafers of multiple lots are in process at the same time. This is referred to as "lot cascading". To model this cascading behavior, an aggregate model was developed in previous work at the TU/e in which the EPT depends on the amount of Work In Process (WIP). This model serves as the starting point of this dissertation. This dissertation presents the efforts to further develop EPT-based aggregate modeling for application in semiconductor manufacturing. In particular, the dissertation contributes to: dealing with the typically limited amount of available data, modeling workstations with a variable product mix, predicting cycle time distributions, and aggregate modeling of networks of workstations. First, the existing aggregate model with WIP-dependent EPTs has been extended with a curve-fitting approach to deal with the limited number of arrivals and departures that can be collected in a realistic time period. The new method is illustrated for four operational semiconductor workstations in the Crolles2 semiconductor factory (in Crolles, France), for which the mean cycle time as a function of the throughput has been predicted. Second, a new EPT-based aggregate model has been developed that predicts the mean cycle time of a workstation as a function of the throughput and the product mix. In semiconductor manufacturing, many workstations produce a mix of different products, and each machine in the workstation may be qualified to process only a subset of these products. The EPT model is validated on a simulation case and on an industry case of an operational Crolles2 workstation. Third, the dissertation presents a new EPT-based aggregate model that can predict the cycle time distribution of a workstation instead of only the mean cycle time. To accurately predict a cycle time distribution, the order in which lots are processed is incorporated in the aggregate model by means of an overtaking distribution. An extensive simulation study and an industry case demonstrate that the aggregate model can accurately predict the cycle time distribution of integrated processing workstations in semiconductor manufacturing. Finally, aggregate modeling of networks of semiconductor workstations has been explored. Two modeling approaches are investigated: the entire network is modeled as a single aggregate server, and the network is modeled as an aggregate network that consists of an aggregate model for each workstation. The accuracy of the model predictions using the two approaches is investigated by means of a simulation case of a re-entrant flow line. The results of these aggregate models are promising.
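
    A minimal illustration of the kind of two-parameter aggregate model described above, assuming a Kingman-style G/G/1 waiting-time approximation rather than the dissertation's actual EPT algorithms; the parameter values and function name are hypothetical.

        # Sketch: predict mean cycle time from only the mean effective process time (te),
        # its squared coefficient of variation (ce2), and the arrival variability (ca2),
        # using a Kingman-type approximation for a single aggregate server.

        def mean_cycle_time(throughput, te, ce2, ca2=1.0):
            """Approximate mean cycle time (waiting + processing) of one aggregate server."""
            u = throughput * te                                  # utilization
            if u >= 1.0:
                raise ValueError("throughput exceeds aggregate capacity")
            wait = ((ca2 + ce2) / 2.0) * (u / (1.0 - u)) * te    # Kingman approximation
            return wait + te

        if __name__ == "__main__":
            te, ce2 = 2.0, 1.5                                   # hypothetical aggregate parameters
            for th in (0.1, 0.2, 0.3, 0.4, 0.45):
                print(f"throughput={th:.2f}  mean cycle time={mean_cycle_time(th, te, ce2):.2f}")

    The steep rise of cycle time as utilization approaches one is the characteristic throughput/cycle-time behavior that such fitted aggregate models aim to reproduce.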

    Development and Simulation Assessment of Semiconductor Production System Enhancements for Fast Cycle Times

    Long cycle times in semiconductor manufacturing represent an increasing challenge for the industry and lead to a growing need for breakthrough approaches to reduce them. Small lot sizes and the conversion of batch processes to mini-batch or single-wafer processes are widely regarded as a promising means for a step-wise cycle time reduction. Our analysis with discrete-event simulation and queueing theory shows that small lot sizes and the replacement of batch tools with mini-batch or single-wafer tools are beneficial, but lot size reduction loses effectiveness when lots are reduced by more than half. Because the results are not completely convincing, we develop a new semiconductor tool type that further reduces cycle time by lot streaming, leveraging the lot size reduction efforts. We show that this combined approach can lead to a cycle time reduction of more than 80%.
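
    A hedged, back-of-the-envelope illustration of why lot size reduction loses effectiveness, assuming a single-wafer tool with a fixed per-lot overhead (load/unload, setup) and an M/M/1-style waiting time; the numbers and names are hypothetical, not taken from the paper's simulation or queueing models.

        # Sketch: per-lot cycle time (hours) as the lot size shrinks while the wafer
        # start rate stays constant. The fixed per-lot overhead grows in relative terms,
        # so halving the lot size repeatedly eventually increases cycle time again.

        def lot_cycle_time(lot_size, wafer_rate=10.0, t_wafer=0.05, t_overhead=0.25):
            lam = wafer_rate / lot_size               # lot arrival rate (lots/hour)
            t_lot = t_overhead + lot_size * t_wafer   # process time per lot (hours)
            u = lam * t_lot                           # tool utilization
            if u >= 1.0:
                return float("inf")                   # overhead dominates: tool overloaded
            return t_lot + (u / (1.0 - u)) * t_lot    # process + M/M/1-like waiting time

        for size in (24, 12, 6, 3):
            print(f"lot size {size:>2}: cycle time {lot_cycle_time(size):.2f} h")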

    Workload characterization, modeling, and prediction in grid Computing

    Workloads play an important role in experimental performance studies of computer systems. This thesis presents a comprehensive characterization of real workloads on production clusters and Grids. A variety of correlation structures and rich scaling behavior are identified in workload attributes such as job arrivals and run times, including pseudo-periodicity, long-range dependence, and strong temporal locality. Based on the analytic results, workload models are developed to fit the real data. For job arrivals, three different kinds of autocorrelation are investigated. For short- to middle-range dependent data, Markov modulated Poisson processes (MMPP) are good models because they can capture correlations between interarrival times while remaining analytically tractable. For long-range dependent and multifractal processes, the multifractal wavelet model (MWM) is able to reconstruct the scaling behavior, and it provides a coherent wavelet framework for analysis and synthesis. Pseudo-periodicity is a special kind of autocorrelation, and it can be modeled by a matching pursuit approach. For workload attributes such as run time, a new model is proposed that can fit not only the marginal distribution but also second-order statistics such as the autocorrelation function (ACF). The development of workload models enables simulation studies of Grid scheduling strategies. Using the synthetic traces, the performance impacts of workload correlations on Grid scheduling are quantitatively evaluated. The results indicate that autocorrelations in workload attributes can cause performance degradation; in some situations the difference can be up to several orders of magnitude. The larger the autocorrelation, the worse the performance; this holds at both the cluster and the Grid level. This study shows the importance of realistic workload models in performance evaluation studies. Regarding performance prediction, this thesis treats the targeted resources as a "black box" and takes a statistical approach. It is shown that statistical learning based methods, after a well-thought-out and fine-tuned design, are able to deliver good accuracy and performance.
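
    A small sketch of the MMPP idea mentioned above for short- to middle-range dependent job arrivals, assuming a two-state process with hypothetical rates; this illustrates the model class, not the thesis's fitted models.

        # Sketch: generate interarrival times from a 2-state Markov-modulated Poisson
        # process. Each state has its own Poisson arrival rate; exponential sojourn
        # times govern switching between states, producing correlated (bursty) arrivals.
        import random

        def mmpp2_interarrivals(n, rates=(5.0, 0.5), switch=(0.1, 0.2), seed=0):
            rng = random.Random(seed)
            state, elapsed, gaps = 0, 0.0, []
            while len(gaps) < n:
                to_arrival = rng.expovariate(rates[state])   # time to next arrival in this state
                to_switch = rng.expovariate(switch[state])   # time until the state changes
                if to_arrival <= to_switch:
                    gaps.append(elapsed + to_arrival)
                    elapsed = 0.0
                else:
                    elapsed += to_switch                     # no arrival yet; switch state
                    state = 1 - state
            return gaps

        print(mmpp2_interarrivals(5))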

    Control Strategies for Improving Cloud Service Robustness

    This thesis addresses challenges in increasing the robustness of cloud-deployed applications and services to unexpected events and dynamic workloads. Without precautions, hardware failures and unpredictable large traffic variations can quickly degrade the performance of an application due to a mismatch between provisioned resources and capacity needs. Similarly, disasters such as power outages and fire are unexpected events on a larger scale that threaten the integrity of the underlying infrastructure on which an application is deployed. First, the self-adaptive software concept of brownout is extended to replicated cloud applications. By monitoring the performance of each application replica, brownout is able to counteract temporary overload situations by reducing the computational complexity of jobs entering the system. To avoid existing load balancers interfering with the brownout functionality, brownout-aware load balancers are introduced. Simulation experiments show that the proposed load balancers outperform existing load balancers in providing a high quality of service to as many end users as possible. Experiments in a testbed environment further show how a replicated brownout-enabled application is able to maintain high performance during overloads compared to its non-brownout equivalent. Next, a feedback controller for cloud autoscaling is introduced. Using a novel way of modeling the dynamics of a typical cloud application, a mechanism similar to the classical Smith predictor is presented to compensate for delays in reconfiguring resource provisioning. Simulation experiments show that the feedback controller is able to achieve faster control of the response times of a cloud application compared to a threshold-based controller. Finally, a solution for handling the trade-off between performance and disaster tolerance for geo-replicated cloud applications is introduced. An automated mechanism for differentiating application traffic and replication traffic, and for dynamically managing their bandwidth allocations using an MPC controller, is presented and evaluated in simulation. Comparisons with commonly used static approaches reveal that, in overload situations, the proposed solution provides increased flexibility in managing the trade-off between performance and data consistency.
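
    A minimal sketch of the brownout idea described above, assuming an integral controller that adjusts a "dimmer" (the probability of serving the optional, computationally expensive part of a response) to keep measured response times near a setpoint; the gains, setpoint and class name are hypothetical, and this is not the papers' exact controller.

        # Sketch: lower the dimmer when response times exceed the setpoint, raise it
        # again when there is headroom, always keeping it within [0, 1].
        class BrownoutDimmer:
            def __init__(self, setpoint=0.5, gain=0.25):
                self.setpoint = setpoint   # target response time (s)
                self.gain = gain           # integral gain
                self.dimmer = 1.0          # start by serving full content

            def update(self, measured_rt):
                error = self.setpoint - measured_rt
                self.dimmer += self.gain * error / self.setpoint   # normalized integral action
                self.dimmer = min(1.0, max(0.0, self.dimmer))
                return self.dimmer

        ctrl = BrownoutDimmer()
        for rt in (0.4, 0.8, 1.2, 0.9, 0.6, 0.4):                  # overload, then recovery
            print(f"response time {rt:.1f}s -> dimmer {ctrl.update(rt):.2f}")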

    Advances in Condition Monitoring, Optimization and Control for Complex Industrial Processes

    The book documents 25 papers collected from the Special Issue “Advances in Condition Monitoring, Optimization and Control for Complex Industrial Processes”, highlighting recent research trends in complex industrial processes. The book aims to stimulate the research field and be of benefit to readers from both academic institutions and industrial sectors.

    From Social Simulation to Integrative System Design

    As the recent financial crisis showed, today there is a strong need to gain an "ecological perspective" of all relevant interactions in socio-economic-techno-environmental systems. For this, we suggest setting up a network of Centers for Integrative Systems Design, which would be able to run all potentially relevant scenarios, identify causality chains, explore feedback and cascading effects for a number of model variants, and determine the reliability of their implications (given the validity of the underlying models). They would be able to detect possible negative side effects of policy decisions before they occur. Each Center in this network would focus on a particular field, but the Centers would be part of an attempt to eventually cover all relevant areas of society and economy and integrate them within a "Living Earth Simulator". The results of all research activities of such Centers would be turned into informative input for political Decision Arenas. For example, Crisis Observatories (for financial instabilities, shortages of resources, environmental change, conflict, spreading of diseases, etc.) would be connected with such Decision Arenas for the purpose of visualization, in order to make complex interdependencies understandable to scientists, decision-makers, and the general public.
    Comment: 34 pages, Visioneer White Paper, see http://www.visioneer.ethz.c

    The dynamics of dense water cascades: from laboratory scales to the Arctic Ocean

    The sinking of dense shelf waters down the continental slope (or “cascading”) contributes to oceanic water mass formation and carbon cycling. Cascading is therefore of significant importance for the global overturning circulation and thus climate. The occurrence of cascades is highly intermittent in space and time, and observations of the process itself (rather than its outcomes) are scarce. Global climate models do not typically resolve cascading owing to numerical challenges concerning turbulence, mixing and faithful representation of bottom boundary layer dynamics. This work was motivated by the need to improve the representation of cascading in numerical ocean circulation models. Typical 3-D hydrostatic ocean circulation models are employed in a series of numerical experiments to investigate the process of dense water cascading in both idealised and realistic model setups. Cascading on steep bottom topography is modelled using POLCOMS, a 3-D ocean circulation model using a terrain-following s-coordinate system. The model setup is based on a laboratory experiment of a continuous dense water flow from a central source on a conical slope in a rotating tank. The descent of the dense flow, as characterised by the length of the plume as a function of time, is studied for a range of parameters, such as density difference, speed of rotation, flow rate and (in the model) diffusivity and viscosity. Very good agreement between the model and the laboratory results is shown in dimensional and non-dimensional variables. It is confirmed that a hydrostatic model is capable of reproducing the essential physics of cascading on a very steep slope if the model correctly resolves velocity veering in the bottom boundary layer. Experiments changing the height of the bottom Ekman layer (by changing viscosity) and modifying the plume from a 2-layer system to a stratified regime (by enhancing diapycnal diffusion) confirm previous theories, demonstrate their limitations and offer new insights into the dynamics of cascading outside of controlled laboratory conditions. In further numerical experiments, the idealised geometry of the conical slope is retained but up-scaled to oceanic dimensions. The NEMO-SHELF model is used to study the fate of a dense water plume with properties similar to the overflow of brine-enriched shelf waters from the Storfjorden in Svalbard. The overflow plume, resulting from sea ice formation in the Storfjorden polynya, cascades into the ambient stratification resembling the predominant water masses of Fram Strait. At intermediate depths between 200 and 500 m, the plume encounters a layer of warm, saline Atlantic Water. In some years the plume ‘pierces’ the Atlantic Layer and sinks into the deep Fram Strait, while in other years it remains ‘arrested’ at Atlantic Layer depths. It has been unclear what parameters control whether the plume pierces the Atlantic Layer or not. In a series of experiments we vary the salinity ‘S’ and the flow rate ‘Q’ of the simulated Storfjorden overflow to investigate both strong and weak cascading conditions. Results show that the cascading regime (piercing, arrested or ‘shaving’, an intermediate case) can be predicted from the initial values of S and Q. In those model experiments where the initial density of the overflow water is considerably greater than that of the deepest ambient water mass, we find that a cascade with high initial S does not necessarily reach the bottom if Q is low. Conversely, cascades with an initial density just slightly higher than that of the deepest ambient layer may flow to the bottom if the flow rate Q is high. A functional relationship between S/Q and the final depth level of plume waters is explained by the flux of potential energy (arising from the introduction of dense water at shallow depth) which, in our idealised setting, represents the only energy source for downslope descent and mixing. Lastly, the influence of tides on the propagation of a dense water plume is investigated using a regional NEMO-SHELF model with realistic bathymetry, atmospheric forcing, open boundary conditions and tides. The model has 3 km horizontal resolution and 50 vertical levels in the sh-coordinate system, which is specially designed to resolve bottom boundary layer processes. Tidal effects are isolated by comparing results from model runs with and without tides. A hotspot of tidally-induced horizontal diffusion leading to the lateral dispersion of the plume is identified at the southernmost headland of Spitsbergen, which is in close proximity to the plume path. As a result, the lighter fractions in the diluted upper layer of the plume are drawn into the shallow coastal current that carries Storfjorden water onto the Western Svalbard Shelf, while the dense bottom layer continues to sink down the slope. This bifurcation of the plume into a diluted shelf branch and a dense downslope branch is enhanced by tidally-induced shear dispersion at the headland. Tidal effects at the headland are shown to cause a net reduction in the downslope flux of Storfjorden water into the deep Fram Strait. This finding contrasts with previous results from observations of a dense plume on a different shelf without abrupt topography. The dispersive mechanism induced by the tides is identified as a mechanism by which tides may cause a relative reduction in downslope transport, thus adding to the existing understanding of tidal effects on dense water overflows.
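
    A schematic scaling for the energy argument mentioned above, given here as a hedged reading of the abstract rather than the thesis's exact expression: introducing water with density anomaly Δρ(S) at volume flux Q, a height Δz above the depth it eventually reaches, supplies potential energy at a rate of roughly

        \dot{E}_p \;\sim\; g \,\Delta\rho(S)\, Q \,\Delta z ,

    so a larger S (larger Δρ) and a larger Q both increase the energy available for downslope descent and mixing, consistent with the reported dependence of the final plume depth on S and Q.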

    Modeling and Control of Server-based Systems

    When deploying networked computing-based applications, proper management of the server-side resources is essential for maintaining quality of service and cost efficiency. The work presented in this thesis is based on six papers, all investigating problems that relate to resource management of server-based systems. Using a queueing system approach, we model the performance of a database system subjected to write-heavy traffic. We then evaluate the model using simulations and validate that it accurately mimics the behavior of a real test bed. In collaboration with Ericsson, we model and design a per-request admission control scheme for a Mobile Service Support System (MSS). The model is then validated and the control scheme is evaluated in a test bed. We also investigate the feasibility of estimating the state of a server in an MSS using an event-based Extended Kalman Filter. In the brownout paradigm of server resource management, the amount of work required to serve a client is adjusted to compensate for temporary resource shortages. In this thesis we investigate how to perform load balancing over self-adaptive server instances. The load balancing schemes are evaluated in both simulations and test bed experiments. Further, we investigate how to employ delay-compensated feedback control to automatically adjust the amount of resources deployed to a cloud application in the presence of a large, stochastic delay. The delay-compensated control scheme is evaluated in simulations, and the conclusion is that it can be made fast and responsive compared to an industry-standard solution.
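
    A toy example of per-request admission control in the spirit of the scheme mentioned above, assuming probabilistic load shedding based on the current queue length; the thresholds and function names are hypothetical, and this is not the Ericsson MSS design.

        # Sketch: admit a request with a probability that falls linearly as the queue
        # fills, so the server sheds load gradually instead of all-or-nothing rejection.
        import random

        def admit(queue_length, max_queue=100):
            p_admit = max(0.0, 1.0 - queue_length / max_queue)
            return random.random() < p_admit

        random.seed(1)
        queue = 0
        for _ in range(200):
            if admit(queue):
                queue += 1             # accepted request joins the queue
            queue = max(0, queue - 1)  # server completes roughly one request per step
        print("queue length after 200 steps:", queue)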

    Computational Methods in Science and Engineering : Proceedings of the Workshop SimLabs@KIT, November 29 - 30, 2010, Karlsruhe, Germany

    In this proceedings volume, we provide a compilation of article contributions covering applications from different research fields and ranging from capacity up to capability computing. Besides classical computing aspects such as parallelization, the focus of these proceedings is on multi-scale approaches and methods for tackling algorithm and data complexity. Practical aspects regarding the usage of the HPC infrastructure and the tools and software available at the SCC are also presented.

    Better Science Through an Enhanced User Interface with the ALMA Archive

    Master’s project report, Informatics Engineering (Software Engineering), 2022, Universidade de Lisboa, Faculdade de Ciências. The Atacama Large Millimetre Array, located in the Chilean desert of the same name, constitutes one of the world’s most advanced interferometric observatories. Covering the millimetre and sub-millimetre wavelengths, the facility supports the needs of an ever-growing community of astronomy and astrophysics research groups. As with all ground-based astronomical instruments, however, data storage and exploitation remain an outstanding challenge; while the former can be managed through technical upgrades to the Array instrumentation, the latter largely depends on how the archival data is accessed, analysed and presented to the user. Currently, ALMA observations are made publicly available through the ASA (ALMA Science Archive) online platform. Despite offering native, metadata-based filtering that can be applied to an observation’s various fields, such as location, spectral windows or integration time, it is understood that the ASA would benefit from supplementary tools improving its visualization components, allowing for a more direct assessment of both the archive’s global state and its more localized clusters (e.g., field density distribution). This thesis describes the status of an ongoing project within the Institute of Astrophysics and Space Sciences (IA), which foresees the development and production of a separate website that provides the user with a set of data analysis and visualization tools; while other functionalities are planned, the platform will mainly focus on the visual plotting of specific regions containing ALMA observations, enabling users to directly identify particularly deep regions of the sky. Apart from allowing for better scientific exploitation of pre-existing astronomical data through the combination of interrelated observations, the visualization toolkit may also increase the archive’s appeal and utility among non-expert users, promoting larger public engagement with ESO’s activities; in the long term, it is intended that this platform will come to represent a valuable asset to the ALMA community.
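
    A small sketch of the kind of sky-density view the thesis describes, assuming a hypothetical CSV export of observation pointings (right ascension and declination in degrees); the file name and column layout are illustrative, not the actual ALMA Science Archive export format.

        # Sketch: a 2-D histogram of archival pointings highlights densely observed
        # ("deep") regions of the sky.
        import numpy as np
        import matplotlib.pyplot as plt

        ra, dec = np.loadtxt("alma_pointings.csv", delimiter=",", usecols=(0, 1), unpack=True)

        fig, ax = plt.subplots(figsize=(8, 4))
        h = ax.hist2d(ra, dec, bins=(360, 180), cmap="viridis")   # 1-degree bins
        fig.colorbar(h[3], ax=ax, label="observations per bin")
        ax.set_xlabel("Right ascension [deg]")
        ax.set_ylabel("Declination [deg]")
        ax.set_title("ALMA archival pointing density (illustrative)")
        plt.show()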
