
    Investigation into yield and reliability enhancement of TSV-based three-dimensional integration circuits

    Three-dimensional integrated circuits (3D ICs) have been acknowledged as a promising technology to overcome the interconnect delay bottleneck brought by continuous CMOS scaling. Recent research shows that through-silicon vias (TSVs), which act as vertical links between layers, pose yield and reliability challenges for 3D design. This thesis presents three original contributions. The first contribution presents a grouping-based technique to improve the yield of 3D ICs under manufacturing TSV defects, where regular and redundant TSVs are partitioned into groups. In each group, signals can be rerouted through multiplexers to good TSVs, avoiding defective ones. The grouping ratio (regular to redundant TSVs in one group) has an impact on yield and hardware overhead. Mathematical probabilistic models are presented for yield analysis under the influence of independent and clustering defect distributions. Simulation results using MATLAB show that, for a given number of TSVs and TSV failure rate, careful selection of the grouping ratio achieves 100% yield at minimal hardware cost (number of multiplexers and redundant TSVs) in comparison to a design that does not exploit TSV grouping ratios. The second contribution presents an efficient online fault-tolerance technique based on redundant TSVs, which detects TSV manufacturing defects and addresses thermally induced reliability issues. The proposed technique accounts for both fault detection and recovery in the presence of three TSV defects: voids, delamination between TSV and landing pad, and TSV short-to-substrate. Simulations using HSPICE and ModelSim are carried out to validate fault detection and recovery. Results show that regular and redundant TSVs can be divided into groups to minimise area overhead without affecting the fault-tolerance capability of the technique. Synthesis results using a 130-nm design library show that 100% repair capability can be achieved with low area overhead (4% in the best case). The last contribution proposes a technique that jointly considers temperature mitigation and fault tolerance without introducing additional redundant TSVs. This is achieved by reusing spare TSVs that are frequently deployed for improving yield and reliability in 3D ICs. The proposed technique consists of two steps: a TSV determination step, which achieves an optimal partition of regular and spare TSVs into groups, and a TSV placement step, which targets temperature mitigation while optimizing total wirelength and routing difference. Simulation results show that, using the proposed technique, 100% repair capability is achieved across all (five) benchmarks with an average temperature reduction of 75.2 °C (34.1%) (best case 99.8 °C (58.5%)), while increasing wirelength by only a small amount
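
    As a rough illustration of the kind of probabilistic yield model the first contribution describes, the sketch below (Python, not the thesis's MATLAB models) assumes independent TSV defects with failure probability p: a group of m regular and r redundant TSVs is repairable when at most r of its m + r TSVs are defective, and chip-level TSV yield is the product over all groups. The function names, the TSV count, and the grouping ratios tried are illustrative assumptions; the thesis also covers clustering defect distributions, which this toy model omits.

    from math import comb

    def group_yield(m, r, p):
        """Probability that a group of m regular + r redundant TSVs is repairable,
        i.e. at most r of its m + r TSVs are defective (independent defects)."""
        n = m + r
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))

    def chip_yield(total_regular, m, r, p):
        """TSV yield of a chip whose regular TSVs are split into groups of m,
        each group protected by r redundant TSVs."""
        groups = -(-total_regular // m)          # ceiling division
        return group_yield(m, r, p) ** groups

    if __name__ == "__main__":
        # Example: 1024 regular TSVs, 0.1% TSV failure rate, compare grouping ratios.
        for m, r in [(4, 1), (8, 1), (16, 2), (32, 4)]:
            y = chip_yield(1024, m, r, p=1e-3)
            overhead = r / m                      # redundant-TSV overhead per group
            print(f"ratio {m}:{r}  yield = {y:.6f}  redundancy overhead = {overhead:.2%}")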

    VLSI design of high-speed adders for digital signal processing applications.


    A Two-Tiered Correlation of Dark Matter with Missing Transverse Energy: Reconstructing the Lightest Supersymmetric Particle Mass at the LHC

    We suggest that non-trivial correlations between the dark matter particle mass and collider-based probes of the missing transverse energy H_T^miss may facilitate a two-tiered approach to the initial discovery of supersymmetry and the subsequent reconstruction of the LSP mass at the LHC. These correlations are demonstrated via extensive Monte Carlo simulation of seventeen benchmark models, each sampled at five distinct LHC center-of-mass beam energies, spanning the parameter space of No-Scale F-SU(5). This construction is defined in turn by the union of the Flipped SU(5) Grand Unified Theory, two pairs of hypothetical TeV-scale vector-like supersymmetric multiplets with origins in F-theory, and the dynamically established boundary conditions of No-Scale Supergravity. In addition, we consider a control sample comprising a standard minimal Supergravity benchmark point. Led by a striking similarity between the H_T^miss distribution and the familiar power spectrum of a black-body radiator at various temperatures, we implement a broad empirical fit of our simulation against a Poisson distribution ansatz. We advance the resulting fit as a theoretical blueprint for deducing the mass of the LSP, utilizing only the missing transverse energy in a statistical sampling of >= 9 jet events. Cumulative uncertainties central to the method subsist at a satisfactory 12-15% level. The fact that the supersymmetric particle spectrum of No-Scale F-SU(5) has survived the withering onslaught of early LHC data that is steadily decimating the Constrained Minimal Supersymmetric Standard Model and minimal Supergravity parameter spaces is a prime motivation for augmenting more conventional LSP search methodologies with the presently proposed alternative. Comment: JHEP version, 17 pages, 9 Figures, 2 Tables
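
    The fit described above can be illustrated with a generic one-parameter shape fit of an H_T^miss histogram. The functional form below, the assumption that the >= 9 jet selection has already been applied, and the toy pseudo-data are placeholders for illustration only; they do not reproduce the paper's published ansatz or its calibration to the LSP mass.

    import numpy as np
    from scipy.optimize import curve_fit

    def poisson_like(x, amp, k, scale):
        """Poisson/black-body-like ansatz: amp * x^k * exp(-x/scale)."""
        return amp * x**k * np.exp(-x / scale)

    def fit_htmiss_spectrum(htmiss_values, bins=40):
        """Fit the missing-transverse-energy spectrum of selected (>= 9 jet) events
        and return the fitted shape parameters (amp, k, scale)."""
        counts, edges = np.histogram(htmiss_values, bins=bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        mask = counts > 0                         # avoid empty bins in the fit
        p0 = (counts.max(), 2.0, centers.mean())  # rough initial guess
        popt, _ = curve_fit(poisson_like, centers[mask], counts[mask], p0=p0, maxfev=10000)
        return popt

    if __name__ == "__main__":
        # Toy pseudo-data standing in for simulated H_T^miss (GeV) of >= 9 jet events.
        rng = np.random.default_rng(0)
        pseudo = rng.gamma(shape=3.0, scale=120.0, size=5000)
        amp, k, scale = fit_htmiss_spectrum(pseudo)
        print(f"fitted shape: k = {k:.2f}, scale = {scale:.1f} GeV")
        # The paper's method then maps the fitted spectrum back to the LSP mass via an
        # empirical calibration derived from the benchmark scan (not reproduced here).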

    An asynchronous low-power 80C51 microcontroller


    Compiler-assisted multiple instruction rollback recovery using a read buffer

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes
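
    A software toy of the read-buffer idea, assuming a rollback distance of a few instructions: operand values read from the register file are logged in a small FIFO so that, on rollback, re-executed instructions can replay the logged operands instead of re-reading registers that may have since been overwritten. The class and method names are hypothetical; the paper's scheme is a hardware design combined with compiler transformations, which this sketch does not model.

    from collections import deque

    class ReadBuffer:
        """Toy model of an operand read buffer supporting rollback of up to `depth`
        instructions: reads are logged so rollback can replay them, avoiding the
        on-path data hazards that would otherwise need compiler transformations."""

        def __init__(self, depth):
            self.depth = depth
            self.log = deque(maxlen=depth)        # (pc, reg, value) of recent reads

        def record_read(self, pc, reg, value):
            self.log.append((pc, reg, value))
            return value

        def rollback_values(self, resume_pc):
            """Logged operand values for instructions at or after resume_pc, in program
            order, so re-execution sees the original operands."""
            return [(pc, reg, value) for pc, reg, value in self.log if pc >= resume_pc]

    if __name__ == "__main__":
        buf = ReadBuffer(depth=4)
        regfile = {"r1": 10, "r2": 20}
        buf.record_read(pc=100, reg="r1", value=regfile["r1"])
        regfile["r1"] = 99                        # a later write creates a rollback hazard
        buf.record_read(pc=104, reg="r2", value=regfile["r2"])
        # A transient fault detected at pc=108 triggers rollback to pc=100:
        print(buf.rollback_values(resume_pc=100))  # replays r1's original value 10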

    An innovative two-stage data compression scheme using adaptive block merging technique

    Test data volume has increased enormously owing to the rising on-chip complexity of integrated circuits, which in turn increases test data transportation time and tester memory requirements. Non-correlated test bits also aggravate the test power problem. This paper presents a two-stage block-merging-based test data minimization scheme that reduces test bits, test time, and test power. The test data are partitioned into blocks of fixed size, which are compressed using a two-stage encoding technique. In stage one, successive compatible blocks are merged to retain a single representative block. In stage two, the retained pattern block is further encoded based on which of ten different sub-cases exists between the two sub-blocks formed by splitting the retained pattern block into halves. Non-compatible blocks are also split into two sub-blocks and, where possible, encoded using fewer bits. A decompression architecture to retrieve the original test data is presented. Simulation results for different ISCAS'89 benchmark circuits reflect the scheme's effectiveness in achieving better compression
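
    A simplified sketch of the stage-one merging described above, assuming test cubes that may contain don't-care bits 'X': two successive blocks are compatible when they agree on every specified bit, and a run of compatible blocks is replaced by one representative block plus a run length. The ten sub-cases of stage two and the exact codeword format are not reproduced here; this is only an illustration of the merging step.

    def compatible(a, b):
        """Two equal-length blocks are compatible if they agree wherever both specify a bit."""
        return all(x == y or x == 'X' or y == 'X' for x, y in zip(a, b))

    def merge(a, b):
        """Representative block of two compatible blocks (specified bits win over 'X')."""
        return ''.join(y if x == 'X' else x for x, y in zip(a, b))

    def stage_one(test_data, block_size):
        """Stage one: partition the test data into fixed-size blocks and merge runs of
        successive compatible blocks into (representative_block, run_length) pairs."""
        blocks = [test_data[i:i + block_size] for i in range(0, len(test_data), block_size)]
        encoded = []
        rep, run = blocks[0], 1
        for blk in blocks[1:]:
            if compatible(rep, blk):
                rep, run = merge(rep, blk), run + 1
            else:
                encoded.append((rep, run))
                rep, run = blk, 1
        encoded.append((rep, run))
        return encoded

    if __name__ == "__main__":
        data = "01X0" "0110" "0110" "1XX1" "1001"   # five 4-bit test blocks
        for rep, run in stage_one(data, block_size=4):
            print(f"block {rep} repeated {run}x")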

    Application of advanced technology to space automation

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly accentuate this statement and should provide further incentive for immediate development of the specific automation technology defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits

    Within-Die Delay Variation Measurement And Analysis For Emerging Technologies Using An Embedded Test Structure

    Both random and systematic within-die process variations (PV) are growing more severe with shrinking geometries and increasing die size. The escalation of variations in delay and power with reductions in feature size places higher demands on the accuracy of variation models, whose availability can be used to improve the yield, and the corresponding profitability and product quality, of fabricated integrated circuits (ICs). Sources of within-die variations include optical source limitations and layout-based systematic effects (pitch, line-width variability, and microscopic etch loading). Unfortunately, accurate models of within-die PVs are becoming more difficult to derive because of their increasing sensitivity to design context. Embedded test structures (ETS) continue to play an important role in the development of PV models and as a mechanism to improve correlations between hardware and models. Variations in path delays are increasing with scaling and are increasingly affected by 'neighborhood' interactions. In order to fully characterize within-die variations, delays must be measured in the context of actual core-logic macros. Doing so requires the use of an embedded test structure, as opposed to traditional scribe-line test structures such as ring oscillators (ROs). Accurate measurements of within-die variations can be used, e.g., to better tune models to actual hardware (model-to-hardware correlation). In this research project, I propose an embedded test structure called REBEL (Regional dELay BEhavior) that is designed to measure path delays in a minimally invasive fashion and whose architecture measures path delays more accurately. Design-for-manufacturability (DFM) analysis is performed on 90 nm ASIC chips and 28 nm Zynq 7000 series FPGA boards. I present ASIC results on within-die path delay variations in a floating-point unit (FPU) with 5 pipeline stages, fabricated in IBM's 90 nm technology and used as a test vehicle in chip experiments carried out at nine different temperature/voltage (TV) corners. Experimental data have also been analyzed for path delay variations in short versus long paths. FPGA results on within-die and die-to-die variations in an Advanced Encryption Standard (AES) design using a single pipelined stage are also presented. Other analyses performed on the calibrated path delays include flip-flop propagation delays for both rising and falling edges (tpHL and tpLH), uncertainty analysis, path distribution analysis, short versus long path variations, and mid-length path within-die variation. I also analyze the impact on delay when the chips are subjected to industrial-level temperature and voltage variations. The experimental results establish that the proposed REBEL provides capabilities similar to an off-chip logic analyzer, i.e., it is able to capture the temporal behavior of the signal on the tested path, including any static and dynamic hazards that may occur. The ASIC results further show that path delays are correlated with the launch-capture (LC) interval used to time them; therefore, calibration as proposed in this work must be carried out in order to obtain an accurate analysis of within-die variations. Results on ASIC chips show that short paths can vary by up to 35% on average, while long paths vary by up to 20% at nominal temperature and voltage. A similar trend occurs for within-die variations of mid-length paths, where the magnitudes reduce to 20% and 5%, respectively. The magnitude of delay variations in both these analyses increases as temperature and voltage are changed to increase performance. The high level of within-die delay variation is undesirable from a design perspective, but it represents a rich source of entropy for applications that make use of 'secrets', such as authentication, hardware metering, and encryption. Physical unclonable functions (PUFs) are a class of primitives that leverage within-die variations as a means of generating random bit strings for these types of applications, including hardware security and trust. The Zynq FPGA die-to-die and within-die variation study shows that there is, on average, 5% within-die variation, and that the range of die-to-die variation can reach 3 ns; the die-to-die variations can be explored in further detail to study their spatial dependence. Additionally, I carried out research in the area of data mining for big data, focusing on decision tree classification (DTC) to speed up the classification step in a hardware implementation. For this purpose, I devised a pipelined architecture for axis-parallel binary decision tree classification that meets execution-time requirements with minimal resource usage in terms of area. The motivation for this work is that the analysis of ever larger data sets has created abundant opportunities for algorithmic, architectural, and data-mining innovations, and with them a great demand for faster execution of these algorithms, leading toward improved execution time and resource utilization. Decision trees (DTs) have long been implemented in software. Although software implementations of DTC are highly accurate, their execution times and resource utilization still require improvement to meet the computational demands of the ever-growing industry. On the other hand, hardware implementation of DTs has not been thoroughly investigated or reported in detail. Therefore, I propose a pipelined hardware-acceleration architecture that acquires data in parallel, with parallel engines working independently on different partitions of the data. Each engine also processes its data in a pipelined fashion to utilize resources more efficiently and reduce the time needed to process all data records/tuples. Experimental results show that the proposed hardware acceleration of classification algorithms increases throughput by reducing the number of clock cycles required to process the data and generate the results, while requiring minimal resources, and is hence area-efficient. This architecture also enables the algorithms to scale to increasingly large and complex data sets. We developed the DTC algorithm in detail and explored techniques for successfully adapting it to a hardware implementation. This system is 3.5 times faster than the existing hardware implementation of classification.
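
    A compact sketch of the classification step that the pipelined engines accelerate, assuming an axis-parallel binary decision tree stored as a node table (feature index, threshold, child indices, leaf label): each tuple is classified by a chain of single-feature threshold comparisons, and several engines would run this loop independently over disjoint partitions of the data. The node-table layout and names are assumptions for illustration, not the implemented hardware datapath.

    from dataclasses import dataclass
    from typing import Optional, List, Sequence

    @dataclass
    class Node:
        feature: int = -1                 # feature index compared at this node
        threshold: float = 0.0            # axis-parallel split: x[feature] <= threshold
        left: int = -1                    # index of left child in the node table
        right: int = -1                   # index of right child in the node table
        label: Optional[int] = None       # class label if this is a leaf

    def classify(tree: List[Node], x: Sequence[float]) -> int:
        """Walk the node table with single-feature threshold compares (axis-parallel DTC)."""
        i = 0
        while tree[i].label is None:
            i = tree[i].left if x[tree[i].feature] <= tree[i].threshold else tree[i].right
        return tree[i].label

    def classify_partition(tree: List[Node], tuples: Sequence[Sequence[float]]) -> List[int]:
        """One 'engine': classify its partition of tuples; several such engines run
        independently (and, in hardware, in a pipelined fashion) over disjoint partitions."""
        return [classify(tree, x) for x in tuples]

    if __name__ == "__main__":
        # Tiny hand-built tree: node 0 splits on feature 0, its children are leaves.
        tree = [Node(feature=0, threshold=2.5, left=1, right=2),
                Node(label=0), Node(label=1)]
        partition = [(1.0, 7.0), (3.2, 0.5), (2.5, 9.9)]
        print(classify_partition(tree, partition))   # -> [0, 1, 0]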

    The 1991 3rd NASA Symposium on VLSI Design

    Papers from the symposium are presented from the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architecture 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2

    Aspects of Leptogenesis Scenarios at Grand Unification and Sub-TeV Scales and Their Possible Low-Energy Tests

    In the present Thesis, we investigate various aspects of leptogenesis scenarios based on the type-I seesaw extension of the Standard Model (SM) with 2 or 3 heavy Majorana neutrinos $N_j$ with masses $M_j$, $j = 1,\,...,\,3$, as well as the possibilities to test the scenarios considered by us in currently running and/or future planned low-energy experiments. We focus first on the high-scale leptogenesis framework with a strongly hierarchical mass spectrum of the heavy Majorana neutrinos, namely $M_1 \ll M_2 \ll M_3$, with $M_1$ in the range $(10^{8}-10^{14})$ GeV, concentrating on the possibility that the requisite CP violation for the generation of the baryon asymmetry of the Universe $\eta_B$ is provided solely by the low-energy Dirac and/or Majorana phases of the light neutrino mixing (PMNS) matrix. A detailed numerical analysis of the solution to the quantum density matrix equations in this scenario, performed with the powerful ULYSSES code we have developed, reveals a number of novel features: i) $\eta_B$ going through zero and changing sign at the transitions between different flavour regimes (1-to-2 and 2-to-3) in the case of vanishing initial abundance of $N_1$ and strong wash-out effects; ii) inadequate description of the transitions between different flavour regimes by the corresponding Boltzmann equations; iii) flavour effects persisting beyond $10^{12}$ GeV and making it possible to reproduce the observed value of $\eta_B$ at these high scales even though the CP violation is provided only by the Dirac and/or Majorana phases of the PMNS matrix. Considering the somewhat simpler case of just two heavy Majorana neutrinos $N_{1,\,2}$ (with the heaviest $N_3$ decoupled), we show that a relatively large part of the viable leptogenesis parameter space can be probed in low-energy neutrino experiments. We find, in particular, that when the CP violation is provided exclusively by the Dirac phase $\delta$ of the PMNS matrix, there is a correlation between the sign of $\sin\delta$ and the sign of $\eta_B$. This opens up the possibility to test part of the parameter space of this scenario in low-energy experiments on CP violation in neutrino oscillations. A measurement of the Dirac and/or Majorana phases would also constrain the range of scales for which one can have viable leptogenesis in the considered scenario. Next, we show that in the low-scale resonant leptogenesis scenario with two heavy Majorana neutrinos $N_{1,\,2}$ forming a pseudo-Dirac pair, with $M \simeq M_{1,\,2}$ and a small mass splitting $|M_2 - M_1| \ll M$, the observed $\eta_B$ can be reproduced for $M$ in the range $(0.1\sim 100)$ GeV by relying only on the decay mechanism, either during the production ("freeze-in") or the departure from equilibrium ("freeze-out") of $N_{1,\,2}$. In this context, the inclusion of flavour and thermal effects in the formalism of Boltzmann equations is crucial for predicting the observed value of $\eta_B$. We also find that the viable parameter space of this resonant scenario is compatible with values of the heavy Majorana neutrino couplings to the SM that could be probed at future colliders, such as the discussed FCC-ee facility.
    When low-scale leptogenesis with three heavy Majorana neutrinos $N_{1,\,2,\,3}$ quasi-degenerate in mass, with $M \simeq M_{1,\,2,\,3}$, is considered in the formalism of density matrix equations, in particular with both the heavy Majorana neutrino oscillation and decay mechanisms taken into account, the viable parameter space for $M$ in the range $(0.05-7\times 10^{4})$ GeV enlarges considerably and becomes accessible to direct searches at the LHC, as well as in fixed-target experiments and at future colliders. We demonstrate that planned and upcoming experiments on charged-lepton-flavour-violating processes with muons $\mu^\pm$, specifically MEG II on $\mu\to e\gamma$ decay, Mu3e on $\mu \to eee$ decay, Mu2e and COMET on $\mu - e$ conversion in aluminium, and PRISM/PRIME on $\mu - e$ conversion in titanium, can test a significant region of the viable leptogenesis parameter space and may potentially establish the first hint of such a low-scale leptogenesis scenario
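
    As a heavily simplified illustration of the Boltzmann-equation formalism mentioned above (not the thesis's density matrix treatment and not the ULYSSES code), the toy single-flavour system below evolves the $N_1$ abundance and the $B-L$ asymmetry in $z = M/T$, with a decay term that drives $N_1$ toward equilibrium, a CP asymmetry eps1 sourcing $B-L$, and an inverse-decay washout term. The rate parametrisations and the numerical values of K and eps1 are schematic placeholders.

    import numpy as np
    from scipy.special import kv
    from scipy.integrate import solve_ivp

    def n_eq(z):
        """Equilibrium N_1 abundance (normalised so that n_eq -> 3/4 as z -> 0)."""
        return 0.375 * z**2 * kv(2, z)

    def boltzmann(z, y, K, eps1):
        """Toy single-flavour Boltzmann system in z = M/T: decays/inverse decays drive
        N_1 toward equilibrium, sourcing and washing out the B-L asymmetry."""
        nN1, nBL = y
        D = K * z * kv(1, z) / kv(2, z)          # decay term
        W = 0.25 * K * z**3 * kv(1, z)           # inverse-decay washout
        dnN1 = -D * (nN1 - n_eq(z))
        dnBL = -eps1 * D * (nN1 - n_eq(z)) - W * nBL
        return [dnN1, dnBL]

    if __name__ == "__main__":
        K, eps1 = 10.0, 1e-6                      # washout strength and CP asymmetry (toy values)
        sol = solve_ivp(boltzmann, (0.1, 50.0), [0.0, 0.0],   # vanishing initial N_1 abundance
                        args=(K, eps1), method="Radau", rtol=1e-8, atol=1e-12)
        nBL_final = sol.y[1, -1]
        print(f"final B-L asymmetry ~ {nBL_final:.3e}")
        # A rough sphaleron + photon-dilution factor (~1/100) would convert this to eta_B.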