483 research outputs found

    Reconfigurable architecture for very large scale microelectronic systems

    Get PDF

    Quantifying fault recovery in multiprocessor systems

    Get PDF
    Various aspects of reliable computing are formalized and quantified, with emphasis on efficient fault recovery. The mathematical model that proves most appropriate is provided by the theory of graphs. New measures for fault recovery are developed, and the values of the elements of the fault recovery vector are observed to depend not only on the computation graph H and the architecture graph G, but also on the specific location of a fault. In the examples, a hypercube is chosen as a representative parallel computer architecture, and a pipeline as a typical configuration for program execution. The dependability qualities of such a system are defined with or without a fault. These qualities are determined by the resiliency triple, defined by three parameters: multiplicity, robustness, and configurability. Parameters for measuring recovery effectiveness are also introduced in terms of distance, time, and the number of new, used, and moved nodes and edges.
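    The measures above can be made concrete with a small sketch. The following Python snippet is an illustration only, not the paper's model: it maps a four-stage pipeline (computation graph H) onto a 3-dimensional hypercube (architecture graph G) and, on a node fault, remaps the affected stage to the nearest unused node, reporting the recovery distance and the number of moved nodes and edges. The graph construction, cost bookkeeping, and function names are assumptions for illustration.

```python
from collections import deque
from itertools import product

# Hypothetical illustration: architecture graph G is a 3-dimensional hypercube,
# the computation graph H is a 4-stage pipeline mapped onto a path in G.

def hypercube(n):
    """Adjacency list of an n-cube; nodes are bit tuples."""
    nodes = list(product((0, 1), repeat=n))
    adj = {v: [] for v in nodes}
    for v in nodes:
        for i in range(n):
            w = list(v)
            w[i] ^= 1
            adj[v].append(tuple(w))
    return adj

def bfs_distance(adj, src):
    """Hop distance from src to every node reachable in the architecture graph."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def recover(adj, order, mapping, faulty):
    """Remap the pipeline stage on the faulty node to the nearest unused node
    and report simple recovery measures (distance, moved nodes and edges)."""
    used = set(mapping.values())
    dist = bfs_distance(adj, faulty)
    spare = min((v for v in adj if v not in used), key=dist.get)
    stage = next(s for s, v in mapping.items() if v == faulty)
    new_mapping = dict(mapping, **{stage: spare})
    moved_edges = sum(1 for a, b in zip(order, order[1:]) if stage in (a, b))
    return {"new_mapping": new_mapping, "distance": dist[spare],
            "moved_nodes": 1, "moved_edges": moved_edges}

order = ["s0", "s1", "s2", "s3"]                      # 4-stage pipeline H
G = hypercube(3)                                      # architecture graph G
mapping = {"s0": (0, 0, 0), "s1": (0, 0, 1), "s2": (0, 1, 1), "s3": (1, 1, 1)}
print(recover(G, order, mapping, faulty=(0, 1, 1)))
```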

    On Energy Efficient Computing Platforms

    Get PDF
    In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proved to be an efficient communication architecture which can further improve system performance and scalability while reducing design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend for future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance, and energy efficiency.
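    To illustrate the agent-based control idea, a minimal Python sketch follows. The operating-point table, utilization thresholds, and power estimate are invented for illustration and are not the thesis's actual design: a per-region agent monitors utilization, scales voltage and frequency down to the slowest point that still covers the observed traffic, and power-gates the region when it is essentially idle.

```python
# Illustrative sketch of a per-region DVFS / power-gating agent for a NoC.
# Operating points, thresholds and the power estimate are invented values.

OPERATING_POINTS = [      # (supply voltage in V, clock frequency in GHz)
    (1.1, 2.0),
    (0.9, 1.5),
    (0.8, 1.0),
    (0.7, 0.5),
]

class RegionAgent:
    def __init__(self, idle_gate_threshold=0.05):
        self.idle_gate_threshold = idle_gate_threshold
        self.vdd, self.freq = OPERATING_POINTS[0]
        self.gated = False

    def update(self, utilization):
        """Pick the slowest operating point whose frequency still covers the
        observed traffic; power-gate the region when it is essentially idle."""
        if utilization < self.idle_gate_threshold:
            self.gated = True
            return "power-gated"
        self.gated = False
        required_ghz = utilization * OPERATING_POINTS[0][1]
        for vdd, freq in reversed(OPERATING_POINTS):   # try slowest point first
            if freq >= required_ghz:
                self.vdd, self.freq = vdd, freq
                break
        return f"Vdd={self.vdd:.1f} V, f={self.freq:.1f} GHz"

    def dynamic_power(self, activity=0.2, cap=1.0):
        """Toy dynamic-power estimate, P ~ a*C*V^2*f, in arbitrary units."""
        return 0.0 if self.gated else activity * cap * self.vdd ** 2 * self.freq

agent = RegionAgent()
for util in (0.02, 0.30, 0.80):
    print(f"util={util:.2f} -> {agent.update(util)}, P~{agent.dynamic_power():.2f}")
```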

    VLSI architectures for high speed Fourier transform processing

    Get PDF

    The digital data processing concepts of the LOFT mission

    Full text link
    The Large Observatory for X-ray Timing (LOFT) is one of the five mission candidates that were considered by ESA for an M3 mission (with a launch opportunity in 2022 - 2024). LOFT features two instruments: the Large Area Detector (LAD) and the Wide Field Monitor (WFM). The LAD is a 10 m²-class instrument with approximately 15 times the collecting area of the largest timing mission so far (RXTE), for the first time combined with CCD-class spectral resolution. The WFM will continuously monitor the sky, recognise changes in source states, detect transient and bursting phenomena, and allow the mission to respond to them. Observing the brightest X-ray sources with the effective area of the LAD leads to enormous data rates that need to be processed at several levels, filtered, and compressed in real time already on board. The WFM data processing, on the other hand, puts rather low constraints on the data rate but requires algorithms to find the photon interaction location on the detector and then to deconvolve the detector image in order to obtain the sky coordinates of observed transient sources. In the following, we want to give an overview of the data handling concepts that were developed during the study phase. Comment: Proc. SPIE 9144, Space Telescopes and Instrumentation 2014: Ultraviolet to Gamma Ray, 91446
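    As a purely illustrative companion to the on-board processing described above, the Python sketch below shows the general shape of an event filtering and compression stage: drop flagged or out-of-band events, then delta-encode timestamps and quantize energies. The event fields, energy band, and encoding are invented and do not reflect the actual LOFT on-board design.

```python
# Toy illustration of on-board event filtering and compression for a timing
# instrument. Event fields, thresholds and encoding are invented, not LOFT's.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    time_us: int       # arrival time in microseconds
    energy_kev: float  # reconstructed energy
    flagged: bool      # e.g. marked on board as a particle/noise event

def filter_events(events: List[Event], e_min=2.0, e_max=30.0) -> List[Event]:
    """Drop flagged events and events outside the science energy band."""
    return [e for e in events
            if not e.flagged and e_min <= e.energy_kev <= e_max]

def compress(events: List[Event], bin_kev=0.5) -> List[tuple]:
    """Delta-encode timestamps and quantize energies into coarse bins so the
    telemetry stream carries small integers instead of full event records."""
    out, last_t = [], 0
    for e in sorted(events, key=lambda e: e.time_us):
        out.append((e.time_us - last_t, int(e.energy_kev / bin_kev)))
        last_t = e.time_us
    return out

raw = [Event(100, 5.2, False), Event(140, 55.0, False),
       Event(150, 7.9, True),  Event(230, 3.1, False)]
print(compress(filter_events(raw)))   # [(100, 10), (130, 6)]
```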

    Reliability-aware and energy-efficient system level design for networks-on-chip

    Get PDF
    With CMOS technology aggressively scaling into the ultra-deep sub-micron (UDSM) regime and application complexity growing rapidly in recent years, processors today are being driven to integrate multiple cores on a chip. Such chip multiprocessor (CMP) architectures offer unprecedented levels of computing performance for highly parallel emerging applications in the era of digital convergence. However, a major challenge facing the designers of these emerging multicore architectures is the increased likelihood of failure due to the rise in transient, permanent, and intermittent faults caused by a variety of factors that are becoming more and more prevalent with technology scaling. On-chip interconnect architectures are particularly susceptible to faults that can corrupt transmitted data or prevent it from reaching its destination. Reliability concerns in UDSM nodes have in part contributed to the shift from traditional bus-based communication fabrics to network-on-chip (NoC) architectures, which provide better scalability, performance, and utilization than buses. In this thesis, to overcome potential faults in NoCs, my research began by exploring fault-tolerant routing algorithms. Under the constraint of deadlock freedom, we make use of the inherent redundancy in NoCs due to multiple paths between packet sources and sinks, and propose different fault-tolerant routing schemes that achieve much better fault tolerance capabilities than possible with traditional routing schemes. The proposed schemes also use replication opportunistically to optimize the balance between energy overhead and arrival rate. As 3D integrated circuit (3D-IC) technology with wafer-to-wafer bonding has recently been proposed as a promising candidate for future CMPs, we also propose a fault-tolerant routing scheme for 3D NoCs which outperforms the existing popular routing schemes in terms of energy consumption, performance, and reliability. To quantify reliability and provide different levels of intelligent protection, we propose, for the first time, the network vulnerability factor (NVF) metric to characterize the vulnerability of NoC components to faults. NVF determines the probabilities that faults in NoC components manifest as errors in the final program output of the CMP system. With NVF-aware partial protection for NoC components, almost 50% of the energy cost can be saved compared to the traditional approach of comprehensively protecting all NoC components. Lastly, we focus on the problem of fault-tolerant NoC design, which involves many NP-hard sub-problems such as core mapping, fault-tolerant routing, and fault-tolerant router configuration. We propose a novel design-time (RESYN) and a hybrid design- and run-time (HEFT) synthesis framework to trade off energy consumption and reliability in the NoC fabric at the system level for CMPs. Together, our research in fault-tolerant NoC routing, reliability modeling, and reliability-aware NoC synthesis substantially enhances NoC reliability and energy efficiency beyond what is possible with traditional approaches and state-of-the-art strategies from prior work.
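    A tiny sketch of the NVF-guided partial-protection idea follows; the component list, NVF values, and protection energy costs are made up for illustration and do not come from the thesis. Only components whose NVF exceeds a threshold are protected, and the script reports the resulting protection-energy saving and the fraction of total vulnerability that is covered.

```python
# Sketch of NVF-guided partial protection: protect only NoC components whose
# vulnerability exceeds a threshold. NVF values and energy costs are invented.

components = {
    # name: (NVF = P(fault becomes a program-visible error), protection energy cost)
    "route_compute": (0.05, 4.0),
    "input_buffer":  (0.60, 4.0),
    "crossbar":      (0.35, 3.0),
    "link":          (0.55, 2.0),
    "arbiter":       (0.10, 3.5),
}

def partial_protection(components, nvf_threshold=0.3):
    protected = {n for n, (nvf, _) in components.items() if nvf >= nvf_threshold}
    energy_full = sum(cost for _, cost in components.values())
    energy_part = sum(cost for n, (_, cost) in components.items() if n in protected)
    # Fraction of total vulnerability covered, assuming faults are equally
    # likely to strike any component (a simplifying assumption).
    total_nvf = sum(nvf for nvf, _ in components.values())
    covered_nvf = sum(nvf for n, (nvf, _) in components.items() if n in protected)
    return {
        "protected": sorted(protected),
        "energy_saving_vs_full": round(1 - energy_part / energy_full, 3),
        "vulnerability_covered": round(covered_nvf / total_nvf, 3),
    }

print(partial_protection(components))
```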

    Fault- and Yield-Aware On-Chip Memory Design and Management

    Get PDF
    Ever-decreasing device sizes cause more frequent hard faults, which become a serious burden for processor design and yield management. This problem is particularly pronounced in on-chip memory, which consumes up to 70% of a processor's total chip area. Traditional circuit-level techniques, such as redundancy and error correction codes, become less effective in error-prevalent environments because of their large area overhead. In this work, we suggest an architectural solution to building reliable on-chip memory in the future processor environment. Our approach has two parts: a design framework and architectural techniques for on-chip memory structures. The design framework provides important architectural evaluation metrics such as yield, area, and performance based on low-level defect and process variation parameters, so processor architects can quickly evaluate the yield, area, and performance characteristics of their designs. With the framework, we develop architectural yield enhancement solutions for on-chip memory structures including the L1 cache, L2 cache, and directory memory. Our proposed solutions greatly improve yield with negligible area and performance overhead. Furthermore, we develop a decoupled yield model of compute cores and L2 caches in CMPs, which shows that there will be many more L2 caches than compute cores in a chip. We propose efficient utilization techniques for these excess caches. Evaluation results show that excess caches significantly improve the overall performance of CMPs.
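    As a rough illustration of the kind of estimate such a yield framework might produce, the sketch below uses a standard Poisson defect model with spare-row redundancy; the defect density, row area, and array size are example numbers, not the dissertation's parameters.

```python
# Minimal sketch of a Poisson-defect yield estimate with spare-row redundancy.
# Defect density and cache geometry below are example numbers for illustration.
from math import comb, exp

def block_yield(area_mm2, defect_density_per_mm2):
    """Poisson model: probability that a block of the given area has no defect."""
    return exp(-area_mm2 * defect_density_per_mm2)

def array_yield_with_spares(rows, spare_rows, row_area_mm2, d0):
    """Yield of a memory array that can repair up to `spare_rows` faulty rows."""
    p_good = block_yield(row_area_mm2, d0)
    total = rows + spare_rows
    return sum(comb(total, k) * (1 - p_good) ** k * p_good ** (total - k)
               for k in range(spare_rows + 1))

# Example: 1024-row data array, 0.002 mm^2 per row, D0 = 0.1 defects/mm^2
for spares in (0, 2, 4, 8):
    y = array_yield_with_spares(1024, spares, row_area_mm2=0.002, d0=0.1)
    print(f"{spares} spare rows -> estimated array yield {y:.3f}")
```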

    Investigation into yield and reliability enhancement of TSV-based three-dimensional integration circuits

    No full text
    Three-dimensional integrated circuits (3D ICs) have been acknowledged as a promising technology to overcome the interconnect delay bottleneck brought about by continuous CMOS scaling. Recent research shows that through-silicon vias (TSVs), which act as vertical links between layers, pose yield and reliability challenges for 3D design. This thesis presents three original contributions. The first contribution presents a grouping-based technique to improve the yield of 3D ICs under manufacturing TSV defects, where regular and redundant TSVs are partitioned into groups. In each group, signals can select good TSVs through rerouting multiplexers, avoiding defective TSVs. The grouping ratio (regular to redundant TSVs in one group) has an impact on yield and hardware overhead. Mathematical probabilistic models are presented for yield analysis under the influence of independent and clustering defect distributions. Simulation results using MATLAB show that, for a given number of TSVs and TSV failure rate, careful selection of the grouping ratio achieves 100% yield at minimal hardware cost (number of multiplexers and redundant TSVs) in comparison to a design that does not exploit TSV grouping ratios. The second contribution presents an efficient online fault tolerance technique based on redundant TSVs, to detect TSV manufacturing defects and address thermally induced reliability issues. The proposed technique accounts for both fault detection and recovery in the presence of three TSV defects: voids, delamination between TSV and landing pad, and TSV short-to-substrate. Simulations using HSPICE and ModelSim are carried out to validate fault detection and recovery. Results show that regular and redundant TSVs can be divided into groups to minimise area overhead without affecting the fault tolerance capability of the technique. Synthesis results using a 130-nm design library show that 100% repair capability can be achieved with low area overhead (4% in the best case). The last contribution proposes a technique that jointly considers temperature mitigation and fault tolerance without introducing additional redundant TSVs. This is achieved by reusing spare TSVs that are frequently deployed for improving yield and reliability in 3D ICs. The proposed technique consists of two steps: a TSV determination step, which finds an optimal partition of regular and spare TSVs into groups, and a TSV placement step, which targets temperature mitigation while optimizing total wirelength and routing difference. Simulation results show that, using the proposed technique, 100% repair capability is achieved across all (five) benchmarks with an average temperature reduction of 75.2 °C (34.1%) (best case 99.8 °C (58.5%)), while increasing wirelength only by a small amount.
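    The grouping-based yield analysis under independent TSV failures can be sketched compactly, as below; the clustering-defect model and multiplexer cost accounting are omitted, and the TSV count, failure rate, and grouping ratios are example values only.

```python
# Sketch of grouping-based TSV yield under independent failures: a group of
# g regular + r redundant TSVs survives if at most r of its g+r TSVs fail,
# and the chip survives only if every group does. Example numbers only.
from math import comb

def group_yield(g, r, p_fail):
    """P(at most r failures among g + r independent TSVs)."""
    n = g + r
    return sum(comb(n, k) * p_fail ** k * (1 - p_fail) ** (n - k)
               for k in range(r + 1))

def chip_yield(total_signal_tsvs, g, r, p_fail):
    """Partition the signal TSVs into groups of g and multiply the group yields."""
    n_groups = -(-total_signal_tsvs // g)          # ceiling division
    return group_yield(g, r, p_fail) ** n_groups

# 1024 signal TSVs, 1% TSV failure rate: compare a few grouping ratios g:r
for g, r in ((4, 1), (8, 1), (8, 2), (16, 2)):
    y = chip_yield(1024, g, r, p_fail=0.01)
    redundant = -(-1024 // g) * r
    print(f"grouping {g}:{r} -> yield {y:.4f} with {redundant} redundant TSVs")
```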

    The Honeycomb Architecture: Prototype Analysis and Design

    Get PDF
    Due to the inherent potential of parallel processing, much attention has focused on massively parallel computer architecture. To a large extent, the performance of a massively parallel architecture is a function of the flexibility of its communication network. The ability to configure the topology of the machine determines the ease with which problems are mapped onto the architecture. If the machine is sufficiently flexible, the architecture can be configured to match the natural structure of a wide range of problems. There are essentially four unique types of massively parallel architectures: (1) Cellular Arrays, (2) Lattice Architectures [21, 30], (3) Connection Architectures [19], and (4) Honeycomb Architectures [24]. All four architectures are classified as SIMD. Each, however, offers a slightly different solution to the mapping problem. The first three approaches are characterized by easily distinguishable processor, communication, and memory components. In contrast, the Honeycomb architecture contains multipurpose processing/communication/memory cells. Each cell can function as either a simple CPU, a memory cell, or an element of a communication bus. The conventional approach to massive parallelism is the cellular array. It typically consists of an array of processing elements arranged in a mesh pattern with hard-wired connections between neighboring processors. Due to their fixed topology, cellular arrays impose severe limitations upon interprocessor communication. The lattice architecture is a somewhat more flexible approach to massive parallelism. It consists of a lattice of processing elements embedded in an array of simple switching elements. The switching elements form a programmable interconnection network. A lattice architecture can be configured in a number of different topologies, but it is still only a partial solution to the mapping problem. The connection architecture offers a comprehensive solution to the mapping problem. It consists of a cellular array integrated into a packet-switched communication network. The network provides transparent communication between all processing elements. Note that the communication network is physically abstracted from the processor array, allowing the processors to evolve independently of the network. The Honeycomb architecture offers a unique solution to the mapping problem. It consists of an array of identical processing/communication/memory cells. Each cell can function as either a processor cell, a communication cell, or a memory cell. Collections of Honeycomb cells can be grouped into multi-cell CPUs, multi-cell memories, or multi-cell CPU-memory systems. Multi-cell CPU-memory systems are hereafter referred to as processing clusters. The topology of the Honeycomb is determined at compilation time: during a preprocessing phase, the Honeycomb is adjusted to the desired topology. The Honeycomb cell is extremely simple, capable of only simple arithmetic and logic operations. The simplicity of the Honeycomb cell is the key to the Honeycomb concept. As indicated in [24], there are two main research avenues to pursue in furthering the Honeycomb concept: (1) analyzing the design of a uniform Honeycomb cell, and (2) mapping algorithms onto the Honeycomb architecture. This technical report concentrates on the first issue. While alluded to throughout the report, the second issue is not addressed in any detail.
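    The compile-time configuration idea can be pictured with a short Python sketch; the grid representation, role names, and cluster call below are assumptions for illustration, not the report's hardware design. Every cell is assigned a single role (processing, memory, or communication) during a preprocessing phase, and a processing cluster is simply a set of cells frozen into CPU and memory roles.

```python
# Illustrative model of compile-time Honeycomb configuration: every cell is
# assigned one role (processing, memory, or communication) before execution.
# The grid layout and cluster description are assumptions for illustration.
from enum import Enum

class Role(Enum):
    CPU = "cpu"
    MEMORY = "memory"
    BUS = "bus"

class Honeycomb:
    def __init__(self, rows, cols):
        # Default every cell to a communication (bus) element.
        self.roles = {(r, c): Role.BUS for r in range(rows) for c in range(cols)}

    def configure_cluster(self, cpu_cells, memory_cells):
        """Preprocessing phase: freeze a multi-cell CPU-memory cluster."""
        for cell in cpu_cells:
            self.roles[cell] = Role.CPU
        for cell in memory_cells:
            self.roles[cell] = Role.MEMORY

    def summary(self):
        counts = {role: 0 for role in Role}
        for role in self.roles.values():
            counts[role] += 1
        return {role.value: n for role, n in counts.items()}

hc = Honeycomb(4, 4)
hc.configure_cluster(cpu_cells=[(1, 1), (1, 2)],
                     memory_cells=[(2, 1), (2, 2), (2, 3)])
print(hc.summary())   # {'cpu': 2, 'memory': 3, 'bus': 11}
```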