    Design-for-delay-testability techniques for high-speed digital circuits

    The importance of delay faults is growing with the ever-increasing clock rates and shrinking geometries of today's circuits. This thesis focuses on the development of Design-for-Delay-Testability (DfDT) techniques for high-speed circuits and embedded cores. The rising costs of IC testing, and in particular the costs of Automatic Test Equipment, are major concerns for the semiconductor industry. To reverse the trend of rising test costs, DfDT is becoming more and more important.

    Demystifying reinforcement learning approaches for production scheduling

    Recent years have seen a sharp rise in interest in Reinforcement Learning (RL) approaches for production scheduling, because RL is seen as an advantageous compromise between the two most typical scheduling solution approaches, namely priority rules and exact approaches. However, there are many variations of both production scheduling problems and RL solutions. Additionally, the RL production scheduling literature is characterized by a lack of standardization, which leaves the field shrouded in mysticism. The burden of showcasing the exact situations where RL outshines other approaches still lies with the research community. To pave the way towards this goal, we make the following four contributions to the scientific community, aiding in the process of RL demystification. First, we develop a standardization framework for RL scheduling approaches using a comprehensive literature review as a conduit. Second, we design and implement FabricatioRL, an open-source benchmarking simulation framework for production scheduling that covers a vast array of scheduling problems and ensures experiment reproducibility. Third, we create a set of baseline scheduling algorithms sharing some of the RL advantages. This set of RL-competitive algorithms consists of a Constraint Programming (CP) meta-heuristic developed by us, CP3, and two simulation-based approaches, namely a novel approach we call Simulation Search and Monte Carlo Tree Search. Fourth and finally, we use FabricatioRL to build two benchmarking instances for two popular stochastic production scheduling problems and run fully reproducible experiments on them, pitting Double Deep Q-Networks (DDQN) and AlphaGo Zero (AZ) against the chosen baselines and priority rules. Our results show that AZ manages to marginally outperform priority rules and DDQN, but fails to outperform our competitive baselines.
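
    To make the kind of baseline concrete, the following minimal Python sketch shows a classic priority rule (shortest processing time) of the sort RL agents are benchmarked against in this line of work. It is purely illustrative: the function and data layout are hypothetical and do not reflect the FabricatioRL API or the CP3, Simulation Search, or Monte Carlo Tree Search baselines described above.

        def spt_dispatch(queue):
            """Shortest-Processing-Time rule (illustrative, not FabricatioRL code):
            from the jobs waiting at a machine, pick the one whose next operation
            is shortest. Each queue entry is (job_id, processing_time)."""
            if not queue:
                return None
            job_id, _ = min(queue, key=lambda entry: entry[1])
            queue[:] = [entry for entry in queue if entry[0] != job_id]
            return job_id

        # Toy usage: a machine chooses between jobs needing 5, 3 and 8 time units.
        queue = [("J1", 5), ("J2", 3), ("J3", 8)]
        print(spt_dispatch(queue))  # -> "J2"
        print(queue)                # -> [("J1", 5), ("J3", 8)]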

    Monte-Carlo simulation of colliding particles or coalescing droplets transported by a turbulent flow in the framework of a joint fluid–particle pdf approach

    The aim of the paper is to introduce and validate a Monte-Carlo algorithm for the prediction of an ensemble of colliding solid particles, or coalescing liquid droplets, suspended in a turbulent gas flow predicted by a Reynolds-Averaged Navier-Stokes (RANS) approach. The new algorithm is based on the direct discretization of the collision/coalescence kernel derived in the framework of the joint fluid–particle pdf approach proposed by Simonin et al. (2002). This approach makes it possible to account for the correlations between colliding inertial particle velocities induced by their interaction with the fluid turbulence. Validation is performed by comparing the Monte-Carlo predictions with deterministic simulations of discrete solid particles coupled with Direct Numerical Simulation (DPS/DNS) or Large Eddy Simulation (DPS/LES), where the collision/coalescence effects are treated in a deterministic way. Five cases are investigated: elastic monodisperse particles, non-elastic monodisperse particles, a binary mixture of elastic particles, a binary mixture of elastic settling particles in turbulent flow, and finally coalescing droplets. The predictions using the new Monte-Carlo algorithm are in much better agreement with the DPS/DNS results than those using the standard algorithm.
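
    As a rough illustration of how a stochastic collision algorithm of this kind operates, the sketch below applies a generic acceptance-rejection step with a hard-sphere collision kernel to a set of particle velocities. It is a simplified, hypothetical example: the actual kernel derived from the joint fluid–particle pdf approach of Simonin et al. (2002), and the treatment of turbulence-induced velocity correlations, are not reproduced here.

        import numpy as np

        def collision_step(velocities, radius, n_pairs, beta_max, rng):
            """Generic acceptance-rejection collision step for hard spheres in a
            cell (illustrative only, not the Simonin et al. kernel): candidate
            pairs are accepted with probability proportional to the kernel
            beta = pi * (2*radius)**2 * |relative velocity|."""
            n = len(velocities)
            for _ in range(n_pairs):
                i, j = rng.choice(n, size=2, replace=False)
                w_rel = velocities[i] - velocities[j]
                beta = np.pi * (2.0 * radius) ** 2 * np.linalg.norm(w_rel)
                if rng.random() < beta / beta_max:
                    # Elastic hard-sphere collision: keep the relative speed,
                    # resample its direction isotropically.
                    g = np.linalg.norm(w_rel)
                    direction = rng.normal(size=3)
                    direction /= np.linalg.norm(direction)
                    v_cm = 0.5 * (velocities[i] + velocities[j])
                    velocities[i] = v_cm + 0.5 * g * direction
                    velocities[j] = v_cm - 0.5 * g * direction
            return velocities

        # Toy usage: 100 particles with Gaussian velocities.
        rng = np.random.default_rng(0)
        v = collision_step(rng.normal(size=(100, 3)), radius=1e-4,
                           n_pairs=50, beta_max=1e-6, rng=rng)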

    Intelligent production control for time-constrained complex job shops

    As production becomes ever more complex, the desire for intelligent control of manufacturing operations keeps growing. So-called complex job shops denote the most complex production environments and therefore require a high degree of agility in their control. Among these environments, semiconductor manufacturing stands out in particular, since it combines all the complexities of a complex job shop. Operational excellence is therefore the key to success in the semiconductor industry, and this excellence depends crucially on intelligent production control. A major problem in controlling such complex job shops, in this case semiconductor manufacturing, is the presence of time constraints that limit the transition time of products between two, usually consecutive, processes. Compliance with these product-specific time limits is of utmost importance, since violations lead to the loss of the affected product. The state of the art for the dispatching decisions aimed at meeting these time limits is manual control, which is error-prone and burdensome for the operators. This thesis therefore presents a novel, real-time-data-based approach to intelligent production control for time-constrained complex job shops. Using an always up-to-date replication of the real system, a univariate and a multivariate time-series model as well as a digital twin are used to obtain predictions of time-constraint violations. In a second step, production control decisions are derived from the expected violations and implemented with real-time data in a real semiconductor fab. The resulting approach is validated against the state of the art and shows significant improvements, as many time-constraint violations can be prevented. In the future, the intelligent production control is therefore to be evaluated and rolled out in further complex job shop environments.
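
    The following minimal Python sketch illustrates the basic idea of the prediction step: lots whose predicted remaining transition time would exceed their time constraint are flagged so that dispatching can prioritise them. All names and the data layout are hypothetical; the actual approach relies on univariate and multivariate time-series models and a digital twin fed with real-time data, which are not reproduced here.

        from dataclasses import dataclass

        @dataclass
        class Lot:
            lot_id: str
            constraint_start: float   # time the lot finished the first process
            time_constraint: float    # max allowed transition time to the next process

        def lots_at_risk(lots, predicted_wait, now):
            """Flag lots whose predicted completion of the transition would violate
            their time constraint (illustrative sketch only). `predicted_wait` maps
            lot_id to the predicted remaining queue plus process time, e.g. obtained
            from a time-series model or a digital twin."""
            at_risk = []
            for lot in lots:
                deadline = lot.constraint_start + lot.time_constraint
                if now + predicted_wait[lot.lot_id] > deadline:
                    at_risk.append(lot.lot_id)
            return at_risk

        # Toy usage: lot "B" is predicted to miss its 4-hour constraint.
        lots = [Lot("A", constraint_start=0.0, time_constraint=6.0),
                Lot("B", constraint_start=0.0, time_constraint=4.0)]
        print(lots_at_risk(lots, predicted_wait={"A": 2.0, "B": 3.5}, now=1.0))  # -> ['B']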

    Probabilistic and parallel algorithms for centroidal Voronoi tessellations with application to meshless computing and numerical analysis on surfaces

    Centroidal Voronoi tessellations (CVT) are Voronoi tessellations of a region such that the generating points of the tessellations are also the centroids of the corresponding Voronoi regions. Such tessellations are of use in very diverse applications, including data compression, clustering analysis, cell biology, territorial behavior of animals, optimal allocation of resources, and grid generation. A detailed review is given in chapter 1. In chapter 2, some probabilistic methods for determining centroidal Voronoi tessellations and their parallel implementation on distributed memory systems are presented. The results of computational experiments performed on a CRAY T3E-600 system are given for each algorithm; these demonstrate the superior sequential and parallel performance of a new algorithm we introduce. New algorithms are then presented in chapter 3 for the determination of point sets and associated support regions that can be used in meshless computing methods. The algorithms are probabilistic in nature, so they are totally meshfree, i.e., they do not require, at any stage, the use of any coarse or fine boundary-conforming or superimposed meshes. Computational examples are provided that show, for both uniform and non-uniform point distributions, that the algorithms result in high-quality point sets and high-quality support regions. Centroidal Voronoi tessellations also extend to more general spaces and sets; for example, tessellations of surfaces in a Euclidean space may be considered. In chapter 4, a precise definition of such constrained centroidal Voronoi tessellations (CCVTs) is given and a number of their properties are derived, including their characterization as minimizers of a kind of energy. Deterministic and probabilistic algorithms for the construction of CCVTs are presented and some analytical results for one of the algorithms are given. Some computational examples are provided which serve to illustrate the high quality of CCVT point sets. CCVT point sets are also applied to polynomial interpolation and numerical integration on the sphere. Finally, some conclusions are given in chapter 5.
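
    For orientation, the sketch below implements the classical MacQueen-type probabilistic method for computing a CVT, which requires only random sampling of the region and no mesh. It is meant as a minimal illustration of the probabilistic flavour of such algorithms, not as the new algorithm introduced in chapter 2 or the CCVT constructions of chapter 4.

        import numpy as np

        def macqueen_cvt(n_generators, n_samples, sample, rng):
            """Classical MacQueen-type probabilistic CVT sketch: each random sample
            is assigned to its nearest generator, which is then moved to the running
            average of all samples assigned to it so far."""
            generators = np.array([sample(rng) for _ in range(n_generators)])
            counts = np.ones(n_generators)
            for _ in range(n_samples):
                y = sample(rng)
                k = np.argmin(np.linalg.norm(generators - y, axis=1))
                generators[k] = (counts[k] * generators[k] + y) / (counts[k] + 1.0)
                counts[k] += 1.0
            return generators

        # Toy usage: 10 generators of an (approximate) CVT of the unit square.
        rng = np.random.default_rng(0)
        points = macqueen_cvt(10, 20000, lambda r: r.uniform(0.0, 1.0, size=2), rng)
        print(points.round(3))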

    The Evaluation and Analysis of The Impact of Variability on The Twelve Workstation Factory

    Theory and worked examples about factory performance are presented in the book "Optimizing Factory Performance" by James Ignizio, Ph.D. The theory explains how to determine performance using three fundamental equations, by considering the impact and propagation of variability through a 12-Workstation Factory Model and a number of its variations. The objective of developing a simulation model was to serve as a tool for instructors to illustrate the impact and propagation of variability in the manufacturing industry and to investigate phenomena (i.e., time to reach steady state, impact of variability) in the three dimensions of manufacturing. The validation of the concepts and equations related to the 12-Workstation Factory Model [Ignizio 2009] was conducted using an object-oriented discrete event simulation software platform (ExtendSim), combined with the creation of several algorithms that allowed the artificial introduction of variability and randomness into the model.
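
    As a small illustration of how variability degrades performance in such a model, the following Python sketch simulates a single workstation fed by random arrivals and compares the average queueing delay for low and high processing time variability. It is a hypothetical toy example and does not reproduce the ExtendSim model or Ignizio's three fundamental equations.

        import random

        def simulate_station(n_jobs, interarrival_mean, process_mean, process_cv, seed=1):
            """Single-workstation toy model: exponential interarrival times and
            gamma-distributed processing times whose coefficient of variation
            `process_cv` sets the variability. Returns the mean queueing delay."""
            rng = random.Random(seed)
            shape = 1.0 / (process_cv ** 2)       # gamma shape from the CV
            scale = process_mean / shape          # gamma scale so the mean is process_mean
            arrival = machine_free_at = total_wait = 0.0
            for _ in range(n_jobs):
                arrival += rng.expovariate(1.0 / interarrival_mean)
                start = max(arrival, machine_free_at)
                total_wait += start - arrival
                machine_free_at = start + rng.gammavariate(shape, scale)
            return total_wait / n_jobs

        # Toy usage: identical 80% utilisation, but higher variability means
        # noticeably longer average waits.
        print(simulate_station(50000, 1.0, 0.8, process_cv=0.3))
        print(simulate_station(50000, 1.0, 0.8, process_cv=1.5))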

    Simulation of production scheduling in manufacturing systems

    Research into production scheduling environments has been primarily concerned with developing local priority rules for selecting jobs from a queue to be processed on a set of individual machines. Most of the research deals with scheduling problems in terms of the evaluation of priority rules with respect to given criteria. These criteria have a direct effect on production cost, such as mean makespan, flow time, job lateness, in-process inventory and machine idle time. The project under study consists of two phases. The first deals with the development of computer models for the flow-shop problem that obtain the optimum makespan and near-optimum solutions for the commonly used criteria in production scheduling priority rules. The second develops an experimental analysis, using a simulation technique, for the two main manufacturing systems: (1) the job shop and (2) the Flexible Manufacturing System (FMS). The two manufacturing types were investigated under the following conditions: (i) dynamic problem conditions, (ii) different operation time distributions, (iii) different shop loads, (iv) seven replications per experiment with different random number streams, and (v) an approximate steady-state point determined for each replication. In the FMS, the material handling system used was automated guided vehicles (AGVs); a buffer station and a load/unload area were also used. The aim of these analyses is to assess the effectiveness of the priority rules on the selected performance criteria. The SIMAN simulation software was used for these studies.
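
    To make the first phase concrete, the short Python sketch below computes the makespan of a permutation flow-shop sequence using the standard recurrence; comparing candidate sequences this way is the core of any makespan-minimising model. The function and data layout are hypothetical illustrations, not the thesis' actual computer models or the SIMAN experiments.

        def flow_shop_makespan(sequence, proc_times):
            """Makespan of a permutation flow shop: proc_times[job][m] is the
            processing time of `job` on machine m; every job visits the machines
            in the same order. Uses C[j][m] = max(C[j][m-1], C[j-1][m]) + p[j][m]."""
            n_machines = len(next(iter(proc_times.values())))
            completion = [0] * n_machines   # completion time on each machine, updated job by job
            for job in sequence:
                for m in range(n_machines):
                    ready = completion[m - 1] if m > 0 else 0
                    completion[m] = max(completion[m], ready) + proc_times[job][m]
            return completion[-1]

        # Toy usage: three jobs on two machines, two candidate sequences.
        times = {"J1": [3, 2], "J2": [1, 4], "J3": [2, 1]}
        print(flow_shop_makespan(["J2", "J1", "J3"], times))  # -> 8
        print(flow_shop_makespan(["J3", "J1", "J2"], times))  # -> 11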

    Fluid transport through porous media: A novel application of kinetic Monte Carlo simulations

    With increasing global energy demands, unconventional formations, such as shale rocks, are becoming an important source of natural gas. Current efforts are focused on understanding fluid dynamics to maximise natural gas yields. Although shale gas is playing an increasingly important role in the global energy industry, our knowledge of the fundamentals of fluid transport through multiscale and heterogeneous porous media is incomplete, as both static and dynamic properties of confined fluids differ tremendously from those at the macroscopic scale. Transport models, derived from atomistic studies, are frequently used to bridge this gap. However, capturing and upscaling the interactions between the pore surface and fluids remains challenging. In this thesis, a computationally efficient stochastic approach is implemented to simulate fluid transport through complex porous media. One-, two-, and three-dimensional kinetic Monte Carlo (KMC) models were developed to predict methane transport in heterogeneous pore networks consisting of hydrated and water-free micro-, meso-, and macropores, representative of shale rock minerals. Molecular dynamics (MD) simulations and experimental imaging and adsorption data, which describe the surface–fluid interaction and the pore network features respectively, were utilised to inform the KMC models. The stochastic approach was used to (1) quantify the effect of the pore network characteristics (pore size, chemistry, connectivity, porosity, and anisotropy) on the transport of supercritical methane, (2) estimate the permeability of an Eagle Ford shale sample and evaluate the effect of proppants on permeability, and (3) upscale atomistic insights and predict fluid diffusivity through different-size pores. The results obtained were consistent with the analytical solutions of the diffusion equation, experimental data, and MD simulations, respectively, demonstrating the effectiveness of the stochastic approach. In addition, the applicability of less computationally intensive deterministic approaches was examined using multiple case studies; recommendations are provided on the optimal conditions under which each method can be used.
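
    The sketch below gives a minimal one-dimensional kinetic Monte Carlo random walk with site-dependent hopping rates, just to illustrate the Gillespie-style selection of waiting times and hop directions that such models build on. The lattice, rates, and boundary treatment are hypothetical and far simpler than the thesis' MD-informed, multidimensional pore-network models.

        import numpy as np

        def kmc_walk(rates_right, rates_left, n_steps, start, rng):
            """1D kinetic Monte Carlo tracer walk (illustrative only): the residence
            time at a site is drawn from the total escape rate, and the hop direction
            is chosen with probability proportional to the individual rates."""
            site, t = start, 0.0
            n_sites = len(rates_right)
            for _ in range(n_steps):
                r_right, r_left = rates_right[site], rates_left[site]
                r_total = r_right + r_left
                t += rng.exponential(1.0 / r_total)       # waiting time at the site
                if rng.random() < r_right / r_total:      # choose the hop direction
                    site = (site + 1) % n_sites           # periodic boundaries
                else:
                    site = (site - 1) % n_sites
            return site, t

        # Toy usage: hopping is ten times slower in "hydrated" sites 40-59.
        rng = np.random.default_rng(0)
        right = np.ones(100)
        left = np.ones(100)
        right[40:60] = left[40:60] = 0.1
        print(kmc_walk(right, left, n_steps=10000, start=0, rng=rng))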