
    A multiple performance analysis of market-capacity integration policies

    A model that uses simulation augmented with Design of Experiments (DOE) is presented to analyse the performance of a Make-to-Order (MTO) reconfigurable manufacturing system with scalable capacity. Unlike classical capacity scaling policies, the proposed hybrid capacity scaling policy is determined using multiple performance measures that reflect cost, internal stability and responsiveness. The impact of both tactical capacity and marketing policies, and their interaction, on the overall performance was analysed using DOE techniques and real case data. In addition to offering insights into the trade-offs involved in capacity planning decisions, the presented results challenge the conventional capacity planning wisdom in MTO about the negative role of the capacity scalability delay time. Finally, the analysis demonstrated the importance of inter-functional integration between capacity and marketing policies.
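
    The kind of DOE-driven analysis described above can be sketched with a full-factorial experiment over a toy capacity-scaling simulation. This is a minimal illustration only: the factor names, demand model, thresholds and costs are assumptions made for the example, not the paper's model or data.

```python
# Minimal sketch: full-factorial DOE over a toy make-to-order capacity-scaling
# simulation. Factor names, demand model, and thresholds are illustrative
# assumptions, not taken from the paper.
import itertools
import random

def simulate(scaling_delay, scaling_step, marketing_boost, periods=200, seed=1):
    """Return (final backlog, number of capacity changes) for one policy setting."""
    rng = random.Random(seed)
    capacity, backlog, changes = 10.0, 0.0, 0
    pending = []  # capacity adjustments waiting out the scalability delay
    for t in range(periods):
        demand = rng.gauss(10.0 * (1.0 + marketing_boost), 2.0)
        backlog = max(0.0, backlog + demand - capacity)
        if backlog > 5.0:                        # order a capacity expansion
            pending.append((t + scaling_delay, +scaling_step))
        elif backlog < 1.0 and capacity > 10.0:  # release excess capacity
            pending.append((t + scaling_delay, -scaling_step))
        for _, step in [p for p in pending if p[0] == t]:
            capacity = max(1.0, capacity + step)
            changes += 1
        pending = [p for p in pending if p[0] > t]
    return backlog, changes

# Two-level factorial design over capacity and marketing factors.
levels = {"scaling_delay": (1, 5), "scaling_step": (1.0, 3.0), "marketing_boost": (0.0, 0.2)}
for combo in itertools.product(*levels.values()):
    setting = dict(zip(levels, combo))
    backlog, changes = simulate(**setting)
    # Cost, internal stability and responsiveness would each be scored here.
    print(setting, f"backlog={backlog:.1f} changes={changes}")
```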

    Dynamic modelling of reconfigurable manufacturing planning and control systems using supervisory control

    This research is concerned with studying the dynamic performance of reconfigurable Manufacturing Planning and Control (MPC) systems. This goal requires two main tasks. The first is to develop a dynamic MPC system model that can reconfigure to different MPC policies. The second is to design a supervisory control unit that takes as input the high-level strategic market decisions and constraints, together with feedback of the current manufacturing system state, and then selects the operation mode or policy best suited to these conditions. This paper addresses the first task and presents and analyses a dynamic reconfigurable MPC model. The response of the developed model to sudden demand changes under different parameter settings is analysed, and the stability limits of the system are studied. The results give a better understanding of the dynamics of reconfigurable MPC systems, the trade-off decisions required when selecting an MPC policy, and the limits on parameter settings. These results represent the first step towards designing the supervisory control unit that will be responsible for managing the reconfiguration of the whole system.
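
    The step-response analysis mentioned above can be illustrated with a very simple discrete-time production-control loop. This sketch assumes a proportional inventory correction and a fixed production lead time; the loop structure and parameter values are illustrative, not the model from the paper.

```python
# Minimal sketch: step-response of a simple production-control loop, assuming a
# proportional inventory correction and a production lead time. The structure
# and parameter values are illustrative, not the paper's MPC model.

def step_response(ti=2.0, lead_time=3, periods=40, step_at=5, step_to=120.0):
    demand, inventory, target = 100.0, 200.0, 200.0
    pipeline = [demand] * lead_time      # orders released but not yet completed
    history = []
    for t in range(periods):
        if t == step_at:
            demand = step_to             # sudden demand change
        completed = pipeline.pop(0)
        inventory += completed - demand
        # Policy: replenish demand plus a fraction of the inventory gap.
        order = max(0.0, demand + (target - inventory) / ti)
        pipeline.append(order)
        history.append((t, order, inventory))
    return history

# A small ti reacts aggressively (risking oscillation); a large ti responds
# sluggishly. Comparing runs hints at the stability limits discussed above.
for t, order, inv in step_response(ti=1.0)[:10]:
    print(f"t={t:2d} order={order:7.1f} inventory={inv:7.1f}")
```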

    Capacity management of modular assembly systems

    Companies handling large product portfolios often face challenges that stem from market dynamics. In production management, efficient planning approaches are therefore required that can cope with the variability of the order stream and maintain the desired rate of production. Modular assembly systems offer a flexible way to react to these changes; however, there is no all-encompassing methodology yet to support long- and medium-term capacity management of these systems. The paper introduces a novel method for managing product variety in assembly systems by applying a new conceptual framework that supports the periodic revision of the capacity allocation and determines the proper system configuration. The framework has a hierarchical structure that supports capacity and production planning of modular assembly systems on both the long- and medium-term horizons. On the higher level, a system configuration problem is solved to assign the product families to dedicated, flexible or reconfigurable resources, considering the uncertainty of the demand volumes. The lower level in the hierarchy ensures cost-optimal production planning of the system by optimizing the lot sizes as well as the required number of resources. The efficiency of the proposed methodology is demonstrated through the results of an industrial case study from the automotive sector. © 2017 The Society of Manufacturing Engineers.
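
    The two-level hierarchy described above can be sketched in a few lines: an upper level that assigns product families to resource types from demand volume and uncertainty, and a lower level that picks lot sizes and resource counts. The assignment rules, cost figures and EOQ-style lot sizing below are assumptions for the example, not the paper's method.

```python
# Minimal sketch of a two-level capacity-management hierarchy: the upper level
# assigns product families to resource types, the lower level picks lot sizes
# and resource counts. All rules and numbers are illustrative assumptions.
import math

FAMILIES = {            # family -> (mean demand per period, demand std dev)
    "A": (900.0, 300.0),
    "B": (400.0, 50.0),
    "C": (150.0, 140.0),
}
RESOURCE_COST = {"dedicated": 1.0, "reconfigurable": 1.3, "flexible": 1.6}

def upper_level(families):
    """Assign each family to a resource type based on volume and uncertainty."""
    plan = {}
    for name, (mean, std) in families.items():
        cv = std / mean                      # coefficient of variation
        if mean > 600 and cv < 0.5:
            plan[name] = "dedicated"         # high, stable volume
        elif cv > 0.7:
            plan[name] = "flexible"          # very uncertain demand
        else:
            plan[name] = "reconfigurable"
    return plan

def lower_level(plan, capacity_per_resource=500.0, setup=200.0, holding=0.4):
    """EOQ-style lot sizes and resource counts for a given assignment."""
    for name, rtype in plan.items():
        mean, _ = FAMILIES[name]
        lot = math.sqrt(2.0 * mean * setup / holding)
        resources = math.ceil(mean / capacity_per_resource)
        cost = resources * RESOURCE_COST[rtype]
        print(f"{name}: {rtype}, lot={lot:.0f}, resources={resources}, rel. cost={cost:.1f}")

lower_level(upper_level(FAMILIES))
```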

    Dynamic analysis of agile manufacturing planning and control (MPC) systems using control theory

    Massive Data-Centric Parallelism in the Chiplet Era

    Traditionally, massively parallel applications are executed on distributed systems, where computing nodes are distant enough that the parallelization schemes must minimize communication and synchronization to achieve scalability. Mapping communication-intensive workloads to distributed systems requires complicated problem partitioning and dataset pre-processing. With the current AI-driven trend of having thousands of interconnected processors per chip, there is an opportunity to rethink these communication-bottlenecked workloads. This bottleneck often arises from data structure traversals, which cause irregular memory accesses and poor cache locality. Recent works have introduced task-based parallelization schemes to accelerate graph traversal and other sparse workloads. Data structure traversals are split into tasks and pipelined across processing units (PUs). Dalorex demonstrated the highest scalability (up to thousands of PUs on a single chip) by keeping the entire dataset on-chip, scattered across PUs, and executing each task at the PU where its data is local. However, it also raised questions about how to scale to larger datasets when all the memory is on-chip, and at what cost. To address these challenges, we propose a scalable architecture composed of a grid of Data-Centric Reconfigurable Array (DCRA) chiplets. Package-time reconfiguration enables creating chip products that optimize for different target metrics, such as time-to-solution, energy, or cost, while software reconfigurations avoid network saturation when scaling to millions of PUs across many chip packages. We evaluate six applications and four datasets, with several configurations and memory technologies, to provide a detailed analysis of the performance, power, and cost of data-local execution at scale. Our parallelization of Breadth-First Search with RMAT-26 across a million PUs reaches 3323 GTEPS.
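
    The data-local, task-based execution model described above can be illustrated with a small software analogue: vertices are sharded across PUs and each visit task is routed to the PU that owns the destination vertex. The toy graph, the round-robin PU loop and the label-correcting update are assumptions for the sketch, not the DCRA or Dalorex design itself.

```python
# Minimal sketch of data-local, task-based BFS: each PU owns a shard of the
# vertices, and a visit task is always routed to the owner of its vertex.
# The label-correcting update makes the result independent of task ordering.
from collections import deque

NUM_PUS = 4
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}   # toy adjacency lists

def owner(v):
    return v % NUM_PUS                       # static sharding of vertices to PUs

dist = {v: None for v in graph}
queues = [deque() for _ in range(NUM_PUS)]   # one task queue per PU
queues[owner(0)].append((0, 0))              # seed: (vertex, tentative distance)

# The round-robin over PUs stands in for hardware pipelining; each task touches
# only vertex data local to its PU and spawns follow-up tasks on neighbour owners.
while any(queues):
    for pu in range(NUM_PUS):
        if not queues[pu]:
            continue
        v, d = queues[pu].popleft()          # executes on the PU owning v's data
        if dist[v] is not None and dist[v] <= d:
            continue                         # already settled; drop the task
        dist[v] = d
        for u in graph[v]:
            queues[owner(u)].append((u, d + 1))   # route task to u's owner PU

print(dist)   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
```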

    Change-ready MPC systems and progressive modeling: vision, principles, and applications

    The last couple of decades have witnessed fast-paced development of new ideas, products, manufacturing technologies, manufacturing practices, customer expectations, knowledge transition, and civilization movements as never before. In today's manufacturing world, change has become an intrinsic characteristic that must be addressed everywhere. How to deal with change, how to manage it, how to bind to it, how to steer it, and how to create value out of it were the key drivers that brought this research into existence. Change-Ready Manufacturing Planning and Control (CMPC) systems are presented as the first answer. CMPC characteristics, change drivers, and some principles of Component-Based Software Engineering (CBSE) are interwoven to present a blueprint of a new framework and mind-set in the manufacturing planning and control field: CMPC systems. To step further and make the internals of CMPC systems and components change-ready, an enabling modeling approach was needed. Progressive Modeling (PM), a forward-looking multi-disciplinary modeling approach, is developed to modernize the modeling process of today's complex industrial problems and create pragmatic solutions for them. It is designed to be pragmatic and highly sophisticated, and it revolves around many seminal principles that are either innovated or imported from many disciplines: Systems Analysis and Design, Software Engineering, Advanced Optimization Algorithms, Business Concepts, Manufacturing Strategies, Operations Management, and others. Problems are systemized, analyzed, and componentized; their logic and their solution approaches are redefined to make them progressive (ready to change, adapt, and develop further). Many innovations have been developed to enrich the modeling process and make it a well-assorted toolkit able to address today's tougher, larger, and more complex industrial problems. PM brings many novel gadgets to its toolbox: function templates, advanced notation, cascaded mathematical models, mathematical statements, a society of decision structures, and couplers, just to name a few. In this research, PM has been applied to three different applications: two variants of the Aggregate Production Planning (APP) problem and the novel Reconfiguration and Operations Planning (ROP) problem. The latter is pioneering in both the Reconfigurable Manufacturing and the Operations Management fields. All the developed models, algorithms, and results reveal the new analytical and computational power gained through PM and demonstrate its ability to address a new generation of large-scale, large-scope system problems and their integrated solutions. PM has the potential to be an instrumental toolkit in the development of Reconfigurable Manufacturing Systems. In terms of other potential application domains, PM is poised to spark a new paradigm in addressing large-scale system problems in many engineering and scientific fields in a highly pragmatic way without losing scientific rigor.
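
    For readers unfamiliar with the Aggregate Production Planning problem mentioned above, the textbook linear-programming formulation below gives its flavour. It is not the dissertation's Progressive Modeling formulation; the demand, cost and capacity figures, and the use of scipy to solve it, are assumptions for the example.

```python
# Minimal sketch of a textbook Aggregate Production Planning (APP) linear
# program, included only to illustrate the class of problem referred to above.
# Demand, cost and capacity figures are invented for the example.
import numpy as np
from scipy.optimize import linprog

demand = [120, 160, 90, 140]           # units per period
T = len(demand)
cap_reg, cap_ot = 110, 40              # regular / overtime capacity per period
c_reg, c_ot, c_hold = 10.0, 15.0, 2.0  # unit costs
inv0 = 20                              # starting inventory

# Decision vector x = [P_1..P_T, O_1..O_T, I_1..I_T]
c = [c_reg] * T + [c_ot] * T + [c_hold] * T

# Inventory balance: P_t + O_t + I_{t-1} - I_t = D_t  (I_0 fixed)
A_eq = np.zeros((T, 3 * T))
b_eq = np.array(demand, dtype=float)
for t in range(T):
    A_eq[t, t] = 1.0             # P_t
    A_eq[t, T + t] = 1.0         # O_t
    A_eq[t, 2 * T + t] = -1.0    # -I_t
    if t == 0:
        b_eq[t] -= inv0
    else:
        A_eq[t, 2 * T + t - 1] = 1.0   # +I_{t-1}

bounds = [(0, cap_reg)] * T + [(0, cap_ot)] * T + [(0, None)] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

P, O, I = res.x[:T], res.x[T:2 * T], res.x[2 * T:]
for t in range(T):
    print(f"period {t+1}: regular={P[t]:.0f} overtime={O[t]:.0f} inventory={I[t]:.0f}")
```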

    A survey of emerging architectural techniques for improving cache energy consumption

    The search goes on for another groundbreaking phenomenon to reduce the ever-increasing disparity between CPU performance and storage. There have been encouraging breakthroughs in enhancing CPU performance through fabrication technologies and changes in chip designs, but storage has not enjoyed comparable gains, resulting in a material negative impact on system performance. A lot of research effort has been put into finding techniques that can improve the energy efficiency of cache architectures. This work is a survey of energy-saving techniques, grouped by whether they save dynamic energy, leakage energy or both. Needless to say, the aim of this work is to compile a quick reference guide of energy-saving techniques from 2013 to 2016 for engineers, researchers and students.
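
    The dynamic/leakage grouping used by the survey can be made concrete with a first-order cache energy model; the formula is standard, but the per-access energy and leakage power values below are generic illustrations, not figures from the surveyed papers.

```python
# Minimal sketch of a first-order cache energy model, to make the dynamic vs.
# leakage split concrete. Per-access energy and leakage power are illustrative
# values, not measurements from the surveyed papers.

def cache_energy(accesses, e_access_nj, p_leak_mw, runtime_s):
    """Total energy (J) split into dynamic (per-access) and leakage (static) parts."""
    dynamic_j = accesses * e_access_nj * 1e-9
    leakage_j = p_leak_mw * 1e-3 * runtime_s
    return dynamic_j, leakage_j

# Example: 2e9 cache accesses at 0.5 nJ each, 50 mW leakage over a 10 s run.
dyn, leak = cache_energy(accesses=2e9, e_access_nj=0.5, p_leak_mw=50.0, runtime_s=10.0)
print(f"dynamic = {dyn:.2f} J, leakage = {leak:.2f} J")
# Techniques such as way prediction target the dynamic term, while drowsy
# caches or power gating target the leakage term.
```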

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep-submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
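
    The flavour of health-metric-driven fault isolation in a bounded number of reconfigurations can be sketched in software: candidate regions are swapped against slack resources in a divide-and-conquer fashion until the faulty region is located. This is an illustrative stand-in under assumed names (regions, health check), not the FaDReS/PURE algorithm itself.

```python
# Minimal sketch of health-metric-driven fault isolation: candidate regions are
# demoted onto slack resources half at a time, so a single faulty region is
# located in O(log N) reconfigurations. Illustrative stand-in, not FaDReS/PURE.

def health_metric(active_regions, faulty):
    """Stand-in for a runtime health observation (e.g., PSNR or a residue check)."""
    return faulty not in active_regions      # True means the output looks healthy

def isolate_fault(regions, faulty):
    """Return the faulty region and the number of fabric reconfigurations used."""
    suspects, reconfigs = list(regions), 0
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        reconfigs += 1                       # remap the first half onto slack resources
        remaining = [r for r in regions if r not in half]
        if health_metric(remaining, faulty):
            suspects = half                  # fault was in the demoted half
        else:
            suspects = suspects[len(half):]  # fault is still among the active half
    return suspects[0], reconfigs

regions = [f"PRR{i}" for i in range(8)]      # partial reconfiguration regions
found, n = isolate_fault(regions, faulty="PRR5")
print(f"isolated {found} after {n} reconfigurations")   # bounded by log2(8) = 3
```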