High Performance and Optimal Configuration of Accurate Heterogeneous Block-Based Approximate Adder
Approximate computing is an emerging paradigm that improves power and
performance efficiency for error-resilient applications. Recent approximate
adders have significantly extended the design space of accuracy-power
configurable approximate adders, and optimal designs are found by exploring
this space. In this paper, a new energy-efficient heterogeneous block-based
approximate adder (HBBA) is proposed: a generic, configurable model that can
be transformed into a particular adder by fixing its configuration. An HBBA
is, in general, composed of heterogeneous sub-adders, where each sub-adder
can have a different configuration; the set of configurations of all the
sub-adders defines the configuration of the HBBA. The block-based adders are
approximated through inexact logic configuration and truncated carry chains.
HBBA enlarges the design space with additional design points that fall on
the Pareto front and offer a better power-accuracy trade-off than other
configurations. Furthermore, to avoid Monte Carlo simulations, we propose an
analytical modelling technique to evaluate the probability of error and the
Probability Mass Function (PMF) of the error value. Moreover, an estimation
method estimates the delay, area and power of heterogeneous block-based
approximate adders. Thus, based on the analytical model and the estimation
method, the optimal configuration under a given error constraint can be
selected from the whole design space of the proposed adder model by
exhaustive search. Simulation results show that our HBBA provides improved
accuracy in terms of error metrics compared to state-of-the-art approximate
adders. A 32-bit HBBA achieves about a 15% reduction in area and up to a 17%
reduction in energy compared to state-of-the-art approximate adders.
Comment: Submitted to the IEEE-TCAD journal, 16 pages, 16 figures
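The exact HBBA configuration space is defined in the paper itself; as a rough
illustration of the underlying idea only, the sketch below (function and
parameter names are hypothetical) simulates a block-based adder whose
sub-adder carry-ins come from truncated carry chains rather than a full
ripple carry:

```python
def block_approx_add(a, b, width=32, block=8, lookahead=4):
    """Block-based approximate addition: the adder is split into
    `block`-bit sub-adders, and each sub-adder's carry-in is predicted
    from only the top `lookahead` bits of the previous block (a
    truncated carry chain) instead of a full ripple carry."""
    mask = (1 << block) - 1
    result, carry = 0, 0
    for i in range(0, width, block):
        ai, bi = (a >> i) & mask, (b >> i) & mask
        result |= ((ai + bi + carry) & mask) << i
        # Predicted carry-out: do the top `lookahead` bits alone overflow?
        top = block - lookahead
        carry = 1 if ((ai >> top) + (bi >> top)) >> lookahead else 0
    return result & ((1 << width) - 1)
```

Dropping a cross-block carry, as happens here for inputs like 0xFF + 1, is
exactly the kind of error event whose probability and PMF the abstract's
analytical model would characterize.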
X-Rel: Energy-Efficient and Low-Overhead Approximate Reliability Framework for Error-Tolerant Applications Deployed in Critical Systems
Triple Modular Redundancy (TMR) is one of the most common techniques in
fault-tolerant systems, in which the output is determined by a majority
voter. However, design diversity among the replicated modules and/or soft
errors, which are more likely to happen in the nanoscale era, may defeat the
majority voting scheme. Moreover, the significant overheads of the TMR
scheme may limit its use in energy- and area-constrained critical systems.
For most inherently error-resilient applications deployed in critical
systems, such as image processing and vision in autonomous vehicles and
robotics, achieving a given level of reliability has higher priority than
producing precise results. Therefore, these applications can benefit from
the approximate computing paradigm to achieve higher energy efficiency and
lower area. This paper proposes an energy-efficient approximate reliability
(X-Rel) framework that overcomes the aforementioned challenges of TMR
systems and exploits the full potential of approximate computing without
violating the desired reliability constraint and output quality. The X-Rel
framework relies on relaxing the precision of the voter based on a
systematic error-bounding method that leverages user-defined quality and
reliability constraints. Afterward, the size of the resulting voter is used
to approximate the TMR modules such that the overall area and energy
consumption are minimized. The effectiveness of employing the proposed X-Rel
technique in a TMR structure, for different quality constraints as well as
various reliability bounds, is evaluated in a 15-nm FinFET technology. The
results show that the X-Rel voter reduces delay, area, and energy
consumption by up to 86%, 87%, and 98%, respectively, compared to
state-of-the-art approximate TMR voters.
Comment: This paper has been published in IEEE Transactions on Very Large
Scale Integration (VLSI) Systems
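The actual X-Rel error-bounding derivation is in the paper; a minimal sketch
of the general idea of a precision-relaxed TMR voter (all names and
parameters here are invented for illustration) might look like:

```python
def approx_tmr_vote(x0, x1, x2, width=16, relaxed=4):
    """Precision-relaxed TMR voting: only the upper (width - relaxed)
    bits go through the bitwise majority voter; the `relaxed` LSBs are
    passed through from one module, so their voter logic disappears."""
    lo_mask = (1 << relaxed) - 1
    hi_mask = ((1 << width) - 1) ^ lo_mask
    h0, h1, h2 = x0 & hi_mask, x1 & hi_mask, x2 & hi_mask
    hi = (h0 & h1) | (h0 & h2) | (h1 & h2)  # bitwise 2-of-3 majority
    return hi | (x0 & lo_mask)              # approximate low bits
```

The larger `relaxed` is, the smaller the voter, at the cost of a bounded
error in the low bits; choosing that bound from quality and reliability
constraints is the trade-off the X-Rel framework formalizes.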
Performance Enhancement of Power System Operation and Planning through Advanced Advisory Mechanisms
This research develops decision support mechanisms for power system operation and planning practices. Contemporary industry practices rely on deterministic approaches to approximate system conditions and handle growing uncertainties from renewable resources. The primary purpose of this research is to identify weak spots in contemporary industry practices and to propose innovative algorithms, methodologies, and tools that improve economics and reliability in power systems.
First, this dissertation focuses on transmission thermal constraint relaxation practices. Most system operators employ constraint relaxation practices, which allow certain constraints to be relaxed for penalty prices, in their market models. A proper selection of penalty prices is imperative due to the influence that penalty prices have on generation scheduling and market settlements. However, penalty prices today are decided primarily through stakeholder negotiations or system operators' judgment; there is little to no engineered methodology behind their determination. This work proposes new methods that determine the penalty prices for thermal constraint relaxations based on the impact that overloading can have on the residual life of a line. The study evaluates the effectiveness of the proposed methods in short-term operational planning and long-term transmission expansion planning studies.
The second part of this dissertation investigates an advanced methodology to handle the uncertainties associated with high penetration of renewable resources, which poses new challenges to power system reliability and motivates the inclusion of stochastic modeling within resource scheduling applications. However, incorporating stochastic modeling into mathematical programs has been a challenge due to computational complexity, and market design issues arising in a stochastic market environment make it even more challenging. Given the importance of reliable and affordable electric power, advancing the existing deterministic resource scheduling applications is critical. This ongoing, joint research attempts to overcome these hurdles by developing a stochastic look-ahead commitment tool, a stand-alone advisory tool. This dissertation contributes the derivation of a mathematical formulation for the extensive-form two-stage stochastic programming model, the utilization of the Progressive Hedging decomposition algorithm, and the initial implementation of the Progressive Hedging subproblem along with various heuristic strategies to enhance computational performance.
Doctoral Dissertation, Electrical Engineering, 201
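The dissertation's actual unit-commitment formulation is far richer; the
mechanics of Progressive Hedging can, however, be illustrated on a toy
two-stage problem (the quadratic objective and all names below are invented
for this sketch, not taken from the dissertation):

```python
def progressive_hedging(demands, rho=1.0, iters=100):
    """Toy Progressive Hedging for min_x E[(x - d_s)^2] over equally
    likely scenarios d_s.  Each scenario subproblem
        argmin_x (x - d)^2 + w*x + (rho/2)*(x - xbar)^2
    has the closed form x = (2*d - w + rho*xbar) / (2 + rho)."""
    n = len(demands)
    xbar = demands[0]            # deliberately poor initial consensus
    w = [0.0] * n                # per-scenario dual multipliers
    for _ in range(iters):
        # Solve every scenario subproblem in closed form.
        xs = [(2*d - wi + rho*xbar) / (2 + rho)
              for d, wi in zip(demands, w)]
        xbar = sum(xs) / n       # nonanticipative consensus value
        # Dual update pushes each scenario toward the consensus.
        w = [wi + rho*(x - xbar) for wi, x in zip(w, xs)]
    return xbar
```

Here the optimum is simply the scenario mean, which the consensus variable
reaches geometrically; in the dissertation, each subproblem is a full
unit-commitment program rather than a closed-form update.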
Practical Techniques for Improving Performance and Evaluating Security on Circuit Designs
As modern semiconductor technology approaches the nanometer era, integrated circuits (ICs) face growing challenges in meeting performance demands and security requirements. With the expansion of the mobile and consumer electronics markets, increasing demand requires much faster delivery of reliable and secure IC products. To improve the performance and evaluate the security of emerging circuits, we present three practical techniques covering approximate computing, split manufacturing, and analog layout automation.
Approximate computing is a promising approach to low-power IC design. Although a few accuracy-configurable adder (ACA) designs have been developed in the past, they tend to incur large area overheads because they rely on either redundant computing or complicated carry prediction. We investigate a simple ACA design that contains no redundancy or error detection/correction circuitry and uses very simple carry prediction. Simulation results show that our design dominates the latest previous work on the accuracy-delay-power trade-off while using 39% less area. One variant of this design provides finer-grained and larger tunability than previous works. Moreover, we propose a delay-adaptive self-configuration technique to further improve the accuracy-delay-power trade-off.
Split manufacturing prevents attacks from an untrusted foundry. The untrusted foundry has the front-end-of-line (FEOL) layout and the original circuit netlist, and attempts to identify critical components in the layout for Trojan insertion. Although defense methods for this scenario have been developed, the corresponding attack techniques are not well explored; hence, the defenses are mostly evaluated with the k-security metric rather than against actual attacks. We develop a new attack technique based on structural pattern matching. Experimental comparison with an existing attack shows that the new technique achieves about the same success rate with much greater speed for cases without the k-security defense, and a much better success rate at the same runtime for cases with the k-security defense. These results offer an alternative, practical interpretation of k-security in split manufacturing.
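The structural pattern matching attack operates on full gate-level netlists;
a heavily simplified illustration of only its first filtering step (matching
a gate by its own type plus the types of its predecessors, using a made-up
netlist encoding) could look like:

```python
def signature(circ, gate):
    """Local structural signature of a gate: its own type plus the
    sorted multiset of its predecessors' gate types.  `circ` maps
    gate name -> (gate_type, tuple_of_input_gate_names)."""
    typ, inputs = circ[gate]
    return (typ, tuple(sorted(circ[i][0] for i in inputs)))

def candidate_matches(netlist, pattern, root):
    """Netlist gates whose local structure matches the pattern root;
    a real attack would extend such seeds to full subgraph matches."""
    want = signature(pattern, root)
    return [g for g in netlist if signature(netlist, g) == want]
```

Signature filtering of this kind only narrows the candidate set; the attack
described in the abstract must still verify complete structural matches
against the FEOL layout.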
Analog layout automation still lags far behind its digital counterpart. We develop a layout automation framework for analog/mixed-signal ICs, presenting a hierarchical layout synthesis flow that works in a bottom-up manner. To obtain high-quality layouts and better circuit performance, we use a constraint-driven placement and routing methodology that captures expert knowledge via design constraints. The constraint-driven placement uses a simulated annealing process to find an optimal solution; its packing, represented by sequence pairs and constraint graphs, can simultaneously handle different kinds of placement constraints. The constraint-driven routing consists of two stages: integer linear programming (ILP) based global routing and sequential detailed routing. Experimental results demonstrate that our flow can handle complicated hierarchical designs with multiple design constraints, and that placement performance can be further improved by using mixed-size block placement, which handles large blocks with priority.
Modeling and synthesis of approximate digital circuits
textEnergy minimization has become an ever more important concern in the design of very large scale integrated circuits (VLSI). In recent years, approximate computing, which is based on the idea of trading off computational accuracy for improved energy efficiency, has attracted significant attention. Applications that are both compute-intensive and error-tolerant are most suitable to adopt approximation strategies. This includes digital signal processing, data mining, machine learning or search algorithms. Such approximations can be achieved at several design levels, ranging from software, algorithm and architecture, down to logic or transistor levels. This dissertation investigates two research threads for the derivation of approximate digital circuits at the logic level: 1) modeling and synthesis of fundamental arithmetic building blocks; 2) automated techniques for synthesizing arbitrary approximate logic circuits under general error specifications. The first thread investigates elementary arithmetic blocks, such as adders and multipliers, which are at the core of all data processing and often consume most of the energy in a circuit. An optimal strategy is developed to reduce energy consumption in timing-starved adders under voltage over-scaling. This allows a formal demonstration that, under quadratic error measures prevalent in signal processing applications, an adder design strategy that separates the most significant bits (MSBs) from the least significant bits (LSBs) is optimal. An optimal conditional bounding (CB) logic is further proposed for the LSBs, which selectively compensates for the occurrence of errors in the MSB part. There is a rich design space of optimal adders defined by different CB solutions. The other thread considers the problem of approximate logic synthesis (ALS) in two-level form. 
ALS is concerned with formally synthesizing a minimum-cost approximate Boolean function, whose behavior deviates from a specified exact Boolean function in a well-constrained manner. It is established that the ALS problem un-constrained by the frequency of errors is isomorphic to a Boolean relation (BR) minimization problem, and hence can be efficiently solved by existing BR minimizers. An efficient heuristic is further developed which iteratively refines the magnitude-constrained solution to arrive at a two-level representation also satisfying error frequency constraints. To extend the two-level solution into an approach for multi-level approximate logic synthesis (MALS), Boolean network simplifications allowed by external don't cares (EXDCs) are used. The key contribution is in finding non-trivial EXDCs that can maximally approach the external BR and, when applied to the Boolean network, solve the MALS problem constrained by magnitude only. The algorithm then ensures compliance to error frequency constraints by recovering the correct outputs on the sought number of error-producing inputs while aiming to minimize the network cost increase. Experiments have demonstrated the effectiveness of the proposed techniques in deriving approximate circuits. The approximate adders can save up to 60% energy compared to exact adders for a reasonable accuracy. When used in larger systems implementing image-processing algorithms, energy savings of 40% are possible. The logic synthesis approaches generally can produce approximate Boolean functions or networks with complexity reductions ranging from 30% to 50% under small error constraints.Electrical and Computer Engineerin
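The optimal conditional-bounding adders are derived formally in the
dissertation and are not reproduced here; a well-known, simpler instance of
the same MSB/LSB-split idea is the lower-part-OR adder (LOA), sketched below
for intuition (parameter names are hypothetical):

```python
def lower_or_adder(a, b, width=16, lsb=4):
    """Lower-part-OR adder (LOA): the `lsb` least significant bits are
    approximated with a carry-free bitwise OR, while the MSBs are added
    exactly; the AND of the top pair of LSBs supplies a carry-in so
    large low parts still propagate a carry into the MSB adder."""
    lo_mask = (1 << lsb) - 1
    lo = (a | b) & lo_mask
    carry_in = ((a >> (lsb - 1)) & (b >> (lsb - 1))) & 1
    hi = ((a >> lsb) + (b >> lsb) + carry_in) << lsb
    return (hi | lo) & ((1 << width) - 1)
```

The CB logic described in the abstract generalizes this pass-through low
part: instead of a fixed OR, the LSBs are selectively forced to values that
bound the quadratic error whenever the MSB part errs.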
Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains such as multimedia
processing and machine learning has marked a new era for edge and cloud
computing. These applications involve massive data and compute-intensive
tasks, and thus typical computing paradigms in embedded systems and data
centers are stressed to meet the worldwide demand for high performance.
Concurrently, the evolution of the semiconductor field over the last 15
years has made power a first-class design concern. As a result, the
computing systems community is forced to find alternative design approaches
that facilitate high-performance and/or power-efficient computing. Among the
examined solutions, Approximate Computing has attracted ever-increasing
interest, with research works applying approximations across the entire
traditional computing stack, i.e., at the software, hardware, and
architectural levels. Over the last decade, a plethora of approximation
techniques has emerged in software (programs, frameworks, compilers,
runtimes, languages), hardware (circuits, accelerators), and architectures
(processors, memories). The current article is Part I of our comprehensive
survey on Approximate Computing: it reviews its motivation, terminology, and
principles, and classifies and presents the technical details of
state-of-the-art software and hardware approximation techniques.
Comment: Under Review at ACM Computing Surveys