
    Hybrid CMOS-STTRAM Non-Volatile FPGA: Design Challenges and Optimization Approaches

    Abstract: Research efforts to develop a novel memory technology that combines the desired traits of non-volatility, high endurance, high speed, and low power have resulted in the emergence of Spin Torque Transfer RAM (STTRAM) as a promising next-generation universal memory. However, the prospect of developing a non-volatile FPGA framework with STTRAM that exploits its high integration density remains largely unexplored. In this paper, we propose a novel CMOS-STTRAM hybrid FPGA framework; identify the key design challenges; and propose optimization techniques at the circuit, architecture, and application-mapping levels. Simulation results show that an optimized STTRAM-based FPGA framework achieves average improvements of 48.38% in area, 22.28% in delay, and 16.1% in dynamic power for ISCAS benchmark circuits over a conventional CMOS-based FPGA design.

    A novel synthesis approach for active leakage power reduction using dynamic supply gating

    Due to the exponential increase in subthreshold leakage with technology scaling and rising temperature, leakage power is becoming a major fraction of total power in the active mode. We present a novel low-cost design methodology, with an associated synthesis flow, for reducing both switching and active leakage power using dynamic supply gating. A logic synthesis approach based on Shannon expansion is proposed that dynamically applies supply gating to idle parts of general logic circuits even while they are performing useful computation. Experimental results on a set of MCNC benchmark circuits in a predictive 70nm process show improvements of 15% to 88% in total active power compared to the results obtained by a conventional optimization flow.
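    The Shannon-expansion idea underlying this abstract can be illustrated with a small sketch. This is a hypothetical toy example, not the paper's synthesis flow: expanding f around a variable x gives f = x·f|x=1 + x'·f|x=0, so in any cycle only the cofactor selected by x does useful work and the idle cofactor block is a candidate for supply gating.

```python
# Toy illustration of Shannon expansion (hypothetical; not the paper's
# actual flow): f(x, rest) = x * f|x=1 + x' * f|x=0. Only the cofactor
# selected by x is active in a given cycle; the other can be gated.

def shannon_cofactors(f, split_var):
    """Return (f|split_var=0, f|split_var=1) for a function f of bit tuples."""
    def cofactor(value):
        def g(bits):
            full = list(bits)
            full.insert(split_var, value)  # re-insert the fixed variable
            return f(tuple(full))
        return g
    return cofactor(0), cofactor(1)

# Example function: f(a, b, c) = (a AND b) OR c, expanded on a (index 0).
f = lambda bits: (bits[0] and bits[1]) or bits[2]
f0, f1 = shannon_cofactors(f, 0)

# Check that f(a, b, c) == (f1 if a else f0)(b, c) for all inputs,
# i.e. one cofactor block alone suffices once a is known.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            chosen = f1((b, c)) if a else f0((b, c))
            assert f((a, b, c)) == chosen
```

    The multiplexer select (here the variable a) is exactly the signal the paper's approach would use to drive the supply-gating control of the idle cofactor logic.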

    Hardware Acceleration of Electronic Design Automation Algorithms

    With the advances in very large scale integration (VLSI) technology, hardware is going parallel. Software, which was traditionally designed to execute on single-core microprocessors, now faces the tough challenge of taking advantage of this parallelism made available by the scaling of hardware. The work presented in this dissertation studies the acceleration of electronic design automation (EDA) software on several hardware platforms such as custom integrated circuits (ICs), field programmable gate arrays (FPGAs), and graphics processors. This dissertation concentrates on a subset of EDA algorithms which are heavily used in the VLSI design flow and also have varying degrees of inherent parallelism in them. In particular, Boolean satisfiability, Monte Carlo based statistical static timing analysis, circuit simulation, fault simulation, and fault table generation are explored. The architectural and performance tradeoffs of implementing the above applications on these alternative platforms (in comparison to their implementation on a single-core microprocessor) are studied. In addition, this dissertation also presents an automated approach to accelerate uniprocessor code using a graphics processing unit (GPU). The key idea is to partition the software application into kernels in an automated fashion, such that multiple instances of these kernels, when executed in parallel on the GPU, can maximally benefit from the GPU's hardware resources. The work presented in this dissertation demonstrates that several EDA algorithms can be successfully rearchitected to maximally harness their performance on alternative platforms such as custom-designed ICs, FPGAs, and graphics processors, and obtain speedups of up to 800X. The approaches in this dissertation collectively aim to contribute towards enabling the computer-aided design (CAD) community to accelerate EDA algorithms on arbitrary hardware platforms.
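    One of the algorithms named above, Monte Carlo based statistical static timing analysis, illustrates why such workloads map well onto GPUs: every sample draws independent gate-delay variations and is evaluated in isolation, so samples correspond naturally to parallel threads. The sketch below is a hypothetical, simplified serial version of that structure (a single path with Gaussian delay variation), not the dissertation's implementation; the function and parameter names are illustrative.

```python
# Hypothetical sketch of the data-parallel structure of Monte Carlo
# SSTA (not the dissertation's code): each sample is independent, so on
# a GPU each iteration of the sampling loop could be one thread.

import random
import statistics

def sample_path_delay(nominal_delays, sigma, rng):
    """One Monte Carlo sample: sum of independently varied gate delays."""
    return sum(rng.gauss(d, sigma) for d in nominal_delays)

def monte_carlo_ssta(nominal_delays, sigma, n_samples, seed=0):
    """Estimate the mean and spread of a path delay under variation."""
    rng = random.Random(seed)
    # Serial loop here; the GPU version would launch n_samples threads,
    # each running sample_path_delay with its own random stream.
    samples = [sample_path_delay(nominal_delays, sigma, rng)
               for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

# Illustrative 3-gate path with nominal delays 1.0, 2.0, 1.5 (arbitrary units).
mean, std = monte_carlo_ssta([1.0, 2.0, 1.5], sigma=0.1, n_samples=10000)
```

    Because no sample depends on any other, the only cross-thread work is the final reduction into mean and standard deviation, which is the property that makes this class of EDA algorithm a strong GPU candidate.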