
    The Integration of Process Planning and Shop Floor Scheduling in Small Batch Part Manufacturing

    In this paper we explore possibilities to cut manufacturing lead times and to improve delivery performance in a small batch part manufacturing shop by integrating process planning and shop floor scheduling. Using a set of initial process plans (one for each order in the shop), we exploit a resource decomposition procedure to determine schedules which minimize the maximum lateness, given these process plans. If the resulting schedule is still unsatisfactory, a critical path analysis is performed to select jobs as candidates for alternative process plans. In this way, an excellent due date performance can be achieved with a minimum of process planning and scheduling effort.
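    To make the maximum-lateness objective concrete, here is a minimal Python sketch. It uses the earliest-due-date (EDD) rule, which minimizes maximum lateness on a single machine; the paper's resource decomposition applies this kind of reasoning per resource in a job shop, so the single-machine setting and the job data below are illustrative assumptions, not the authors' procedure.

        from dataclasses import dataclass

        @dataclass
        class Job:
            name: str
            processing_time: int  # time the job occupies the resource
            due_date: int

        def edd_max_lateness(jobs):
            """Sequence jobs by earliest due date (EDD) and return the
            schedule together with its maximum lateness L_max."""
            schedule = sorted(jobs, key=lambda j: j.due_date)
            t, l_max = 0, float("-inf")
            for job in schedule:
                t += job.processing_time          # job completes at time t
                l_max = max(l_max, t - job.due_date)
            return schedule, l_max

        # Hypothetical orders: EDD sequences them B, C, A with L_max = 0.
        jobs = [Job("A", 4, 9), Job("B", 2, 5), Job("C", 3, 7)]
        order, l_max = edd_max_lateness(jobs)
        print([j.name for j in order], l_max)

    A schedule whose L_max is still too large would, in the paper's approach, trigger the critical path analysis to select jobs whose process plans should be replaced.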

    Organizing Multidisciplinary Care for Children with Neuromuscular Diseases

    The Academic Medical Center (AMC) in Amsterdam, The Netherlands, recently opened the 'Children's Muscle Center Amsterdam' (CMCA). The CMCA diagnoses and treats children with neuromuscular diseases. These patients require care from a variety of clinicians. Through the establishment of the CMCA, children and their parents will generally visit the hospital only once a year, while previously they visited on average six times a year. This is a major improvement, because the hospital visits are both physically and psychologically demanding for the patients. This article describes how quantitative modelling supports the design and operations of the CMCA. First, an integer linear program is presented that selects which patients to invite for a treatment day and schedules the required combination of consultations, examinations and treatments on one day. Second, the integer linear program is used as input to a simulation to estimate the capacity of the CMCA, expressed as the distribution of the number of patients that can be seen on one diagnosis day. Finally, a queueing model is formulated to predict the access time distributions under various demand scenarios, based upon the simulation outcomes.
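    The patient-selection step can be pictured as a small integer linear program. The sketch below, written with the PuLP modelling library, selects patients for one treatment day subject to clinician time budgets; the patients, durations, capacities and the maximize-patients-seen objective are illustrative assumptions, and the actual CMCA model additionally schedules the combinations of appointments within the day.

        # pip install pulp
        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        patients = ["p1", "p2", "p3", "p4"]
        clinicians = ["neurologist", "physiotherapist"]

        # Minutes patient p needs with clinician c (hypothetical numbers).
        need = {
            ("p1", "neurologist"): 30, ("p1", "physiotherapist"): 45,
            ("p2", "neurologist"): 60, ("p2", "physiotherapist"): 30,
            ("p3", "neurologist"): 45, ("p3", "physiotherapist"): 60,
            ("p4", "neurologist"): 30, ("p4", "physiotherapist"): 30,
        }
        capacity = {"neurologist": 120, "physiotherapist": 120}

        prob = LpProblem("treatment_day", LpMaximize)
        x = {p: LpVariable(f"see_{p}", cat=LpBinary) for p in patients}
        prob += lpSum(x.values())  # objective: patients seen on the day
        for c in clinicians:       # each clinician's day is limited
            prob += lpSum(need[p, c] * x[p] for p in patients) <= capacity[c]

        prob.solve()
        print([p for p in patients if x[p].value() == 1])

    Solving such a model repeatedly under sampled demand is what feeds the simulation and, in turn, the queueing analysis of access times.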

    Production planning systems for cellular manufacturing

    New product development is one of the most powerful but difficult activities in business. It is also a very important factor affecting final product quality. There are many techniques available for new product development. Experimental design is now regarded as one of the most significant techniques. In this article, we will discuss how to use the technique of experimental design in developing a new product, an extrusion press. In order to provide a better understanding of this specific process, a brief description of the extrusion press is presented. To ensure the successful development of the extrusion press, customer requirements and expectations were obtained by detailed market research. The critical and non-critical factors affecting the performance of the extrusion press were identified in preliminary experiments. Through conducting single factorial experiments, the critical factorial levels were determined. The relationships between the performance indexes of the extrusion press and the four critical factors were determined on the basis of multi-factorial experiments. The mathematical models for the performance of the extrusion press were established according to a central composite rotatable design. The best combination of the four critical factors and the optimum performance indexes were determined by optimum design. The results were verified by conducting a confirmatory experiment. Finally, a number of conclusions are drawn.
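    The modelling step described here, building performance models from a central composite rotatable design, amounts to a least-squares fit of a second-order response surface. The sketch below shows this for two coded factors (the article uses four); the design points follow the standard rotatable CCD layout, but the response values are invented for illustration.

        import numpy as np

        # Coded factor settings of a two-factor rotatable central composite
        # design: factorial points, axial points at +/- sqrt(2), and
        # replicated center runs. Responses y are hypothetical measurements.
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                      [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
                      [0, 0], [0, 0], [0, 0]])
        y = np.array([5.2, 6.8, 5.9, 8.1, 4.9, 7.5, 5.5, 7.0, 6.3, 6.4, 6.2])

        def quadratic_terms(X):
            """Expand [x1, x2] into [1, x1, x2, x1^2, x2^2, x1*x2]."""
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack(
                [np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

        A = quadratic_terms(X)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
        print(coef)  # b0, b1, b2, b11, b22, b12 of the fitted surface

    Optimizing the fitted polynomial over the feasible factor region then yields the best combination of factor levels that the abstract refers to.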

    An application of a cocitation-analysis method to find further research possibilities on the area of scheduling problems

    In this article we first give a classification scheme for scheduling problems and their solution methods. The main aspects under examination are the following: machines and secondary resources, constraints, objective functions, uncertainty, mathematical models and adapted solution methods. In a second part, based on this scheme, we examine a corpus of 60 main articles (1015 citation links were recorded in total) in the scheduling literature from 1977 to 2009. The main purpose is to discover the underlying themes within the literature and to examine how they have evolved. To identify documents likely to be closely related, we use the cocitation-based method of Greene et al. (2008). Our aim is to build a base of articles from which to extract both the well-developed research themes and the less examined ones, and then to discuss why some areas have been poorly investigated.
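    The core of any cocitation analysis is counting how often two documents are cited together by the same article. The toy Python sketch below shows only that counting step; the data are invented, and the method of Greene et al. (2008) layers document-similarity analysis on top of such counts rather than stopping here.

        from itertools import combinations
        from collections import Counter

        # References cited by each article in the corpus (toy data; the
        # study records 1015 citation links across 60 articles).
        citing = {
            "art1": {"docA", "docB", "docC"},
            "art2": {"docA", "docB"},
            "art3": {"docB", "docC", "docD"},
        }

        cocitation = Counter()
        for refs in citing.values():
            for a, b in combinations(sorted(refs), 2):
                cocitation[a, b] += 1  # the pair was cited together once more

        # Frequently co-cited pairs signal a shared research theme.
        for pair, count in cocitation.most_common(3):
            print(pair, count)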

    Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions

    In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep learning ecosystem to provide a tunable balance between performance, power consumption and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics which include the supported applications, architectural choices, design space exploration methods and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete and in-depth evaluation of CNN-to-FPGA toolflows.
    Comment: Accepted for publication at the ACM Computing Surveys (CSUR) journal, 201
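    A recurring ingredient of the design space exploration such toolflows perform is a roofline-style estimate of whether a layer is limited by the device's compute or by its external memory bandwidth. The sketch below illustrates that check; the layer shape and the device's peak figures are illustrative assumptions, not numbers from any surveyed toolflow.

        def attainable_performance(ops, bytes_moved, peak_gops, peak_gbps):
            """Roofline estimate: attainable GOp/s and the limiting resource."""
            intensity = ops / bytes_moved                  # operations per byte
            attainable = min(peak_gops, intensity * peak_gbps)
            bound = "compute" if attainable == peak_gops else "memory"
            return attainable, bound

        # 3x3 convolution, 256 -> 256 channels on a 56x56 feature map,
        # fp16 data (2 bytes); one multiply-accumulate counts as 2 ops.
        ops = 2 * (3 * 3 * 256) * 256 * 56 * 56
        weights = 3 * 3 * 256 * 256 * 2
        fmaps = 2 * (56 * 56 * 256 * 2)     # input + output feature maps

        # Hypothetical device: 1000 GOp/s peak, 10 GB/s external bandwidth.
        print(attainable_performance(ops, weights + fmaps, 1000.0, 10.0))

    At this operational intensity the layer is compute bound on the assumed device, which is why many toolflows focus on packing more arithmetic units rather than on buffering schemes for such layers.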

    The CMS Integration Grid Testbed

    The CMS Integration Grid Testbed (IGT) comprises USCMS Tier-1 and Tier-2 hardware at the following sites: the California Institute of Technology, Fermi National Accelerator Laboratory, the University of California at San Diego, and the University of Florida at Gainesville. The IGT runs jobs using the Globus Toolkit with a DAGMan and Condor-G front end. The virtual organization (VO) is managed using VO management scripts from the European Data Grid (EDG). Gridwide monitoring is accomplished using local tools such as Ganglia interfaced into the Globus Metadata Directory Service (MDS) and the agent-based MonALISA. Domain-specific software is packaged and installed using the Distribution After Release (DAR) tool of CMS, while middleware under the auspices of the Virtual Data Toolkit (VDT) is distributed using Pacman. During a continuous two month span in Fall of 2002, over 1 million official CMS GEANT-based Monte Carlo events were generated and returned to CERN for analysis while being demonstrated at SC2002. In this paper, we describe the process that led to one of the world's first continuously available, functioning grids.
    Comment: CHEP 2003 MOCT01
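    Bulk productions like this are typically fanned out as many small, independent batch jobs before a DAGMan/Condor-G front end submits them. The Python sketch below shows one way such a partitioning might look; the events-per-job figure and the job tuples are hypothetical, not the IGT's actual production tooling.

        TOTAL_EVENTS = 1_000_000   # order of the IGT's Fall 2002 production
        EVENTS_PER_JOB = 500       # hypothetical batch size

        def make_jobs(total, per_job):
            """Yield (job_id, first_event, n_events) covering the whole run."""
            first, job_id = 0, 0
            while first < total:
                n = min(per_job, total - first)
                yield job_id, first, n
                job_id += 1
                first += n

        # Each tuple would become one node in the submission DAG.
        for job_id, first, n in make_jobs(TOTAL_EVENTS, EVENTS_PER_JOB):
            if job_id < 3:  # print only the first few job specs
                print(f"job {job_id}: events {first}..{first + n - 1}")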

    PRETZEL: Opening the Black Box of Machine Learning Prediction Serving Systems

    Machine Learning models are often composed of pipelines of transformations. While this design allows single model components to be executed efficiently at training time, prediction serving has different requirements, such as low latency, high throughput and graceful performance degradation under heavy load. Current prediction serving systems consider models as black boxes, whereby prediction-time-specific optimizations are ignored in favor of ease of deployment. In this paper, we present PRETZEL, a prediction serving system introducing a novel white box architecture enabling both end-to-end and multi-model optimizations. Using production-like model pipelines, our experiments show that PRETZEL is able to introduce performance improvements over different dimensions; compared to state-of-the-art approaches, PRETZEL is on average able to reduce 99th percentile latency by 5.5x while reducing memory footprint by 25x and increasing throughput by 4.7x.
    Comment: 16 pages, 14 figures, 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 201
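    One intuition behind a white box architecture is that a serving system which can see inside pipelines may share identical stages, and their loaded state, across many deployed models instead of duplicating them per model. The toy Python sketch below illustrates that sharing idea only; the classes and numbers are hypothetical and do not reproduce PRETZEL's actual design.

        class Featurizer:
            """An expensive, read-only transformation shared across models."""
            def __init__(self, vocab):
                self.vocab = vocab  # loaded once, referenced by every pipeline

            def transform(self, text):
                return [self.vocab.get(tok, 0) for tok in text.split()]

        class Pipeline:
            def __init__(self, featurizer, weights):
                self.featurizer = featurizer  # shared reference, not a copy
                self.weights = weights        # per-model parameters stay private

            def predict(self, text):
                feats = self.featurizer.transform(text)
                return sum(w * f for w, f in zip(self.weights, feats))

        shared = Featurizer({"good": 1, "bad": 2})  # one instance in memory
        models = [Pipeline(shared, [0.5, -0.5]), Pipeline(shared, [1.0, 0.0])]
        print([m.predict("good bad") for m in models])  # [-0.5, 1.0]

    A black box server would hold one featurizer per model; sharing a single instance is one simple route to the kind of memory-footprint reduction the abstract reports.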