3,346 research outputs found

    New Shop Floor Control Approaches for Virtual Enterprises

    Get PDF
    The virtual enterprise paradigm appears to be a fitting response to market instability and the volatile nature of business opportunities, increasing enterprises' interest in such forms of networked organisation. The dynamic environment of a virtual enterprise requires that the partners in the consortium own reconfigurable shop floors. This paper presents new approaches to shop floor control that meet the requirements of the new industrial paradigms and argues for work re-organisation at the shop floor level. Keywords: virtual enterprise; networked organisations

    A methodological approach for designing and sequencing product families in Reconfigurable Disassembly Systems

    Get PDF
    Purpose: A Reconfigurable Disassembly System (RDS) represents a new paradigm of automated disassembly system that uses reconfigurable manufacturing technology for fast adaptation to changes in the quantity and mix of products to disassemble. This paper deals with a methodology for designing and sequencing product families in RDS. Design/methodology/approach: The methodology is developed in a two-phase approach, where products are first grouped into families and then the families are sequenced through the RDS, computing the required machine and module configuration for each family. Products are grouped into families based on their common features using a Hierarchical Clustering Algorithm. The optimal sequence of the product families is calculated using a Mixed-Integer Linear Programming (MILP) model minimizing reconfigurability and operational costs. Findings: This paper focuses on enabling reconfigurable manufacturing technologies to attain some degree of adaptability during disassembly automation design using modular machine tools. Research limitations/implications: The MILP model proposed for the second phase is similar to the well-known Travelling Salesman Problem (TSP), and therefore its complexity grows exponentially with the number of products to disassemble. In real-world problems with a higher number of products, it may be advisable to solve the model approximately with heuristics. Practical implications: The importance of industrial recycling and remanufacturing is growing due to increasing environmental and economic pressures. Disassembly is an important part of remanufacturing systems for reuse and recycling purposes. Automatic disassembly techniques have a growing number of applications in electronics, aerospace, construction and industrial equipment. In this paper, a design and scheduling approach is proposed for this area. Originality/value: This paper presents a new concept called the Reconfigurable Disassembly System, which represents disassembly systems with reusability, scalability, agility and reconfigurability features. These features and some specific costs are considered as part of the proposed methodology.
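    A minimal Python sketch of the two-phase idea described in this abstract, under assumed data: the binary feature vectors, the family count and the reconfiguration costs below are hypothetical, and the brute-force sequencing merely stands in for the paper's MILP model.

    import numpy as np
    from itertools import permutations
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Phase 1: group products into families by shared disassembly features
    # (assumed binary feature matrix; rows = products, columns = features).
    products = np.array([
        [1, 0, 1, 1, 0],
        [1, 0, 1, 0, 0],
        [0, 1, 0, 1, 1],
        [0, 1, 0, 0, 1],
    ])
    tree = linkage(pdist(products, metric="jaccard"), method="average")
    families = fcluster(tree, t=2, criterion="maxclust")   # two families assumed
    print("family of each product:", families)

    # Phase 2 (toy stand-in for the MILP): enumerate family sequences and keep
    # the one with the lowest total reconfiguration cost between consecutive
    # families (hypothetical, symmetric cost matrix).
    reconf_cost = {(1, 2): 3.0, (2, 1): 3.0}
    labels = sorted({int(f) for f in families})
    best_seq = min(
        permutations(labels),
        key=lambda seq: sum(reconf_cost[(a, b)] for a, b in zip(seq, seq[1:])),
    )
    print("family processing sequence:", best_seq)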

    MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor with Stochastic Spike-Driven Online Learning

    Full text link
    Recent trends in the field of neural network accelerators investigate weight quantization as a means to increase the resource- and power-efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, memory reduction requirements for weight storage pushed toward the use of binary weights, which were demonstrated to have a limited accuracy reduction on many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are explored to further reduce power when processing sparse event-based data streams, while on-chip spike-based online learning appears as a key feature for applications constrained in power and resources during the training phase. However, designing power- and area-efficient spiking neural networks still requires the development of specific techniques in order to leverage on-chip online learning on binary weights without compromising the synapse density. In this work, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses for an active silicon area of 2.86 mm² in 65nm CMOS, achieving a high density of 738k synapses/mm². MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously-proposed SNNs, while having no penalty in the energy-accuracy tradeoff. Comment: This document is the paper as accepted for publication in the IEEE Transactions on Biomedical Circuits and Systems journal (2019); the fully-edited paper is available at https://ieeexplore.ieee.org/document/876400
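    The abstract names two building blocks that can be sketched in a few lines of plain Python: leaky integrate-and-fire (LIF) neuron dynamics and a stochastic update of a binary synaptic weight. The constants and the potentiation/depression condition below are illustrative assumptions, not the S-SDSP rule as implemented on MorphIC.

    import random

    V_THRESH, LEAK, P_UPDATE = 1.0, 0.9, 0.1   # assumed values, not chip parameters

    def lif_step(v, i_in):
        """One LIF time step: leak, integrate the input current, fire and reset."""
        v = LEAK * v + i_in
        if v >= V_THRESH:
            return 0.0, True
        return v, False

    def stochastic_binary_update(w, pre_spike, v_post):
        """On a presynaptic spike, probabilistically set the binary weight to 1
        if the postsynaptic potential is high, and to 0 otherwise (assumed gate)."""
        if pre_spike and random.random() < P_UPDATE:
            return 1 if v_post >= 0.5 * V_THRESH else 0
        return w

    # Usage example: one plastic binary synapse driving one LIF neuron.
    v, w = 0.0, 0
    for _ in range(100):
        pre = random.random() < 0.5            # assumed random input spike train
        v, spiked = lif_step(v, 0.6 * w * pre + 0.3 * pre)
        w = stochastic_binary_update(w, pre, v)
    print("final binary weight:", w)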

    An Analog VLSI Deep Machine Learning Implementation

    Get PDF
    Machine learning systems provide automated data processing and see a wide range of applications. Direct processing of raw high-dimensional data such as images and video by machine learning systems is impractical both due to prohibitive power consumption and the “curse of dimensionality,” which makes learning tasks exponentially more difficult as dimension increases. Deep machine learning (DML) mimics the hierarchical presentation of information in the human brain to achieve robust automated feature extraction, reducing the dimension of such data. However, the computational complexity of DML systems limits large-scale implementations in standard digital computers. Custom analog signal processing (ASP) can yield much higher energy efficiency than digital signal processing (DSP), presenting a means of overcoming these limitations. The purpose of this work is to develop an analog implementation of a DML system. First, an analog memory is proposed as an essential component of the learning system. It uses the charge trapped on a floating gate to store an analog value in a non-volatile way. The memory is compatible with a standard digital CMOS process and allows random-accessible bi-directional updates without the need for an on-chip charge pump or high-voltage switch. Second, architecture and circuits are developed to realize an online k-means clustering algorithm in analog signal processing. It achieves automatic recognition of underlying data patterns and online extraction of data statistical parameters. This unsupervised learning system constitutes the computation node in the deep machine learning hierarchy. Third, a 3-layer, 7-node analog deep machine learning engine is designed featuring online unsupervised trainability and non-volatile floating-gate analog storage. It utilizes a massively parallel, reconfigurable, current-mode analog architecture to realize efficient computation, and algorithm-level feedback is leveraged to provide robustness to circuit imperfections in analog signal processing. At a processing speed of 8300 input vectors per second, it achieves 1×10¹² operations per second per watt of peak energy efficiency. In addition, an ultra-low-power tunable bump circuit is presented to provide similarity measures in analog signal processing. It incorporates a novel wide-input-range tunable pseudo-differential transconductor. The circuit demonstrates tunability of bump center, width and height with power consumption significantly lower than in previous works.
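    Since the computation node is described as an online k-means clustering stage, a minimal software sketch of that step may help; the centroid count, the synthetic data stream and the 1/n learning-rate schedule below are assumptions for illustration, not the chip's analog circuit behaviour.

    import numpy as np

    rng = np.random.default_rng(0)
    k, dim = 3, 4
    centroids = rng.normal(size=(k, dim))      # assumed random initialisation
    counts = np.zeros(k)

    def online_kmeans_step(x):
        """Assign x to its nearest centroid and move that centroid toward x
        with a per-cluster 1/n learning rate (standard sequential k-means)."""
        j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]
        return j

    # Usage example: stream of synthetic vectors drawn around k offsets.
    for _ in range(1000):
        x = rng.normal(size=dim) + 3.0 * rng.integers(k)
        online_kmeans_step(x)
    print(np.round(centroids, 2))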

    Energy-Efficient Inference Accelerator for Memory-Augmented Neural Networks on an FPGA

    Full text link
    Memory-augmented neural networks (MANNs) are designed for question-answering tasks. It is difficult to run a MANN effectively on accelerators designed for other neural networks (NNs), in particular on mobile devices, because MANNs require recurrent data paths and various types of operations related to external memory access. We implement an accelerator for MANNs on a field-programmable gate array (FPGA) based on a data flow architecture. Inference times are also reduced by inference thresholding, which is a data-based maximum inner-product search specialized for natural language tasks. Measurements on the bAbI data show that the energy efficiency of the accelerator (FLOPS/kJ) was higher than that of an NVIDIA TITAN V GPU by a factor of about 125, increasing to 140 with inference thresholding. Comment: Accepted to DATE 201
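    The abstract describes inference thresholding as a data-based maximum inner-product search; the Python sketch below illustrates the general early-exit idea only. The threshold value and the synthetic embeddings are assumptions, not the paper's calibrated procedure.

    import numpy as np

    def thresholded_mips(query, candidates, threshold):
        """Scan candidates, returning the first one whose inner product with the
        query clears the threshold; otherwise fall back to the exact argmax."""
        best_i, best_s = -1, float("-inf")
        for i, cand in enumerate(candidates):
            s = float(cand @ query)
            if s >= threshold:
                return i                 # early exit: score is already decisive
            if s > best_s:
                best_i, best_s = i, s
        return best_i

    # Usage example with synthetic 64-d embeddings; candidate 42 should win.
    rng = np.random.default_rng(1)
    cands = rng.normal(size=(1000, 64))
    query = cands[42] + 0.05 * rng.normal(size=64)
    print(thresholded_mips(query, cands, threshold=50.0))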

    Temperature Evaluation of NoC Architectures and Dynamically Reconfigurable NoC

    Get PDF
    Advancements in the field of chip fabrication have led to the integration of a large number of transistors in a small area, giving rise to the multi-core processor era. Massive multi-core processors facilitate innovation and research in fields such as healthcare, defense, entertainment and meteorology. The reduction in chip area and the increase in the number of on-chip cores are accompanied by power and temperature issues. In high-performance multi-core chips, power and heat are predominant constraints. High-performance massive multi-core systems suffer from thermal hotspots, exacerbating the problem of reliability in deep submicron technologies. High power consumption not only increases the chip temperature but also jeopardizes the integrity of the system. Hence, there is a need to explore holistic power and thermal optimization and management strategies for massive on-chip multi-core environments. In multi-core environments, the communication fabric plays a major role in determining the efficiency of the system. In multi-core processor chips this communication infrastructure is predominantly a Network-on-Chip (NoC). Traditional NoC designs incorporate planar interconnects; as a result, these NoCs have long, multi-hop wireline links for data exchange. Due to the presence of multi-hop planar links, such NoC architectures suffer from high latency, significant power dissipation and temperature hotspots. Networks inspired by nature are envisioned as an enabling technology for highly efficient and low-power NoC designs, and adopting wireless technology in such architectures enhances their performance. The placement of wireless interconnects (WIs) alters the behavior of the network, and hence a random deployment of WIs may not result in a thermally optimal solution. In such scenarios, the WIs, being highly efficient, attract high traffic densities, resulting in thermal hotspots. Hence, the location and utilization of the wireless links is a key factor in obtaining a thermally optimal, highly efficient Network-on-Chip. Optimization of the NoC framework alone is incapable of addressing the effects of the runtime dynamics of the system. Minimal paths optimized solely for performance may lead to excessive utilization of certain NoC components, leading to thermal hotspots. Hence, architectural innovation in conjunction with suitable power and thermal management strategies is the key to designing high-performance and energy-efficient multi-core systems. This work explores various wired and wireless NoC architectures that achieve the best trade-offs between temperature, performance and energy efficiency. It further proposes an adaptive routing scheme that factors in the thermal profile of the chip. The proposed routing mechanism dynamically reacts to the thermal profile of the chip and takes measures to avoid thermal hotspots, achieving a thermally efficient, dynamically reconfigurable Network-on-Chip architecture.
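    As an illustration of what a thermally aware adaptive routing decision can look like (not the scheme proposed in this work), the Python sketch below picks, among the minimal-path neighbours in a 2D-mesh NoC, the router currently reporting the lowest temperature; the mesh size and the temperature map are assumptions.

    def next_hop(cur, dst, temperature):
        """Among minimal-path neighbours of `cur` toward `dst`, pick the coolest."""
        x, y = cur
        dx, dy = dst[0] - x, dst[1] - y
        candidates = []
        if dx:
            candidates.append((x + (1 if dx > 0 else -1), y))
        if dy:
            candidates.append((x, y + (1 if dy > 0 else -1)))
        if not candidates:
            return cur                        # already at the destination
        return min(candidates, key=lambda node: temperature[node])

    # Usage example: 3x3 mesh with a hypothetical hotspot at router (1, 1).
    temps = {(i, j): 45.0 for i in range(3) for j in range(3)}
    temps[(1, 1)] = 80.0
    print(next_hop((0, 1), (2, 2), temps))    # -> (0, 2): routes around the hotspot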