Voltage noise analysis with ring oscillator clocks
Voltage noise is the main source of dynamic variability in integrated circuits and a major concern for the design of Power Delivery Networks (PDNs). Ring Oscillator Clocks (ROCs) have been proposed as an alternative to mitigate the negative effects of voltage noise as technology scales down and power density increases. However, their effectiveness depends strongly on the design parameters of the PDN, the power consumption patterns of the system, and the spatial locality of the ROCs within the clock domains. This paper analyzes the impact of the PDN parameters and ROC location on robustness to voltage noise. The capability of reacting instantaneously to unpredictable voltage droops makes ROCs an attractive solution, allowing the amount of decoupling capacitance to be reduced without degrading performance. Tolerance to voltage noise and the related benefits can be increased by using multiple ROCs and reducing the size of the clock domains. The analysis shows that the margins for voltage noise can be reduced by up to 83% and the leakage power by up to 27% by using local ROCs.
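A minimal sketch of the intuition behind this tracking behavior, using an assumed alpha-power delay model and illustrative voltage numbers that do not come from the paper: logic built from the same gates as the ring oscillator slows down under a supply droop by roughly the same factor as the ROC period stretches, so most of the guardband a fixed-frequency clock would need is absorbed automatically.

```python
# Toy sketch (assumed model and numbers, not the paper's): compares the timing
# margin a fixed-frequency clock (e.g., a PLL) needs against a ring-oscillator
# clock (ROC) whose period stretches with the same supply droop that slows the logic.

VDD_NOM = 1.0      # nominal supply voltage (V), assumed
VTH = 0.3          # threshold voltage (V), assumed
ALPHA = 1.3        # velocity-saturation exponent, assumed

def gate_delay(vdd, d0=1.0):
    """Relative gate delay under the alpha-power law: d ~ Vdd / (Vdd - Vth)^alpha."""
    return d0 * vdd / (vdd - VTH) ** ALPHA

def required_margin(droop):
    """Extra margin (fraction of nominal delay) needed to tolerate a supply droop."""
    return gate_delay(VDD_NOM - droop) / gate_delay(VDD_NOM) - 1.0

for droop_mv in (50, 100, 150):
    m = required_margin(droop_mv / 1000.0)
    # A fixed clock must absorb the full slowdown as a guardband; an ROC built
    # from the same gate type slows down by roughly the same factor, so its
    # residual margin is only the mismatch between ROC and critical-path scaling.
    print(f"{droop_mv:3d} mV droop: fixed clock needs ~{m*100:.1f}% margin; "
          f"a local ROC tracks most of it")
```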
Ring oscillator clocks and margins
How much margin must be added to the delay lines of a bundled-data circuit? This paper attempts to give a methodical answer to this question, taking into account all sources of variability and the existing EDA machinery for timing analysis and sign-off. The paper is based on a study of the margins of a ring oscillator that substitutes for a PLL as clock generator. A timing model is proposed which shows that a 12% margin for delay lines can be sufficient to cover variability in a 65nm technology. In a typical scenario, performance and energy improvements between 15% and 35% can be obtained by using a ring oscillator instead of a PLL. The paper concludes that a synchronous circuit with a ring oscillator clock shows benefits in performance and energy similar to those of bundled-data asynchronous circuits.
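As a rough illustration of how a margin figure of this kind could be assembled, the sketch below combines hypothetical variability sources into a single delay-line margin; the split between sources tracked by a co-located ring oscillator and purely random sources is an assumption for illustration, not the paper's model.

```python
# Illustrative sketch (assumed numbers): how per-source timing variations might
# combine into one delay-line margin. Sources tracked by a co-located ring
# oscillator largely cancel and contribute only a residual mismatch, added
# linearly; independent random sources combine as a root-sum-of-squares (RSS).

import math

# Hypothetical 3-sigma contributions as fractions of nominal delay (assumed).
tracked_residual = {"global process": 0.02, "temperature": 0.01, "slow Vdd drift": 0.01}
random_sources   = {"local mismatch": 0.05, "fast Vdd noise": 0.06, "jitter": 0.03}

correlated = sum(tracked_residual.values())                           # residuals add linearly
random_rss = math.sqrt(sum(v * v for v in random_sources.values()))   # independent -> RSS

margin = correlated + random_rss
print(f"estimated delay-line margin ~ {margin*100:.1f}% of nominal")
# With these example numbers the result lands near the ~12% figure quoted for a
# 65nm flow, but the source breakdown here is purely an assumption.
```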
An Experimental Study of Reduced-Voltage Operation in Modern FPGAs for Neural Network Acceleration
We empirically evaluate an undervolting technique, i.e., underscaling the circuit supply voltage below the nominal level, to improve the power-efficiency of Convolutional Neural Network (CNN) accelerators mapped to Field Programmable Gate Arrays (FPGAs). Undervolting below a safe voltage level can lead to timing faults due to an excessive increase in circuit latency. We evaluate the reliability-power trade-off for such accelerators. Specifically, we experimentally study the reduced-voltage operation of multiple components of real FPGAs, characterize the corresponding reliability behavior of CNN accelerators, propose techniques to minimize the drawbacks of reduced-voltage operation, and combine undervolting with architectural CNN optimization techniques, i.e., quantization and pruning. We also investigate the effect of environmental temperature on the reliability-power trade-off of such accelerators. We perform experiments on three identical samples of modern Xilinx ZCU102 FPGA platforms with five state-of-the-art image classification CNN benchmarks, which allows us to study the effects of our undervolting technique across both software and hardware variability. We achieve more than 3X power-efficiency (GOPs/W) gain via undervolting. 2.6X of this gain is the result of eliminating the voltage guardband region, i.e., the safe voltage region below the nominal level that is set by the FPGA vendor to ensure correct functionality under worst-case environmental and circuit conditions. 43% of the power-efficiency gain is due to further undervolting below the guardband, which comes at the cost of accuracy loss in the CNN accelerator. We evaluate an effective frequency underscaling technique that prevents this accuracy loss, and find that it reduces the power-efficiency gain from 43% to 25%. (To appear at the DSN 2020 conference.)
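A back-of-the-envelope sketch of where a GOPs/W gain of this kind comes from, assuming dynamic power scales with V²f and using illustrative voltage values rather than the paper's measured operating points:

```python
# Toy model (assumed voltages, not measurements from the paper): with throughput
# proportional to f and dynamic power proportional to C * V^2 * f, the GOPs/W
# gain from undervolting reduces to (V_nominal / V_new)^2. Static power savings,
# which push real gains higher, are ignored here.

def gops_per_watt_gain(v_nom, v_new):
    """Relative GOPs/W gain under a dynamic-power-only model."""
    return (v_nom / v_new) ** 2

V_NOM       = 0.850   # assumed nominal core voltage (V)
V_GUARDBAND = 0.570   # assumed lowest still-safe voltage at the guardband edge (V)
V_CRITICAL  = 0.540   # assumed lower voltage where timing faults begin (V)

print(f"guardband eliminated: ~{gops_per_watt_gain(V_NOM, V_GUARDBAND):.1f}x GOPs/W")
print(f"below the guardband:  ~{gops_per_watt_gain(V_NOM, V_CRITICAL):.1f}x GOPs/W")
```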
Noise shaping Asynchronous SAR ADC based time to digital converter
Time-to-digital converters (TDCs) are key elements for the digitization of timing information in modern mixed-signal circuits such as digital PLLs, DLLs, ADCs, and on-chip jitter-monitoring circuits. High-resolution TDCs in particular are increasingly employed in on-chip timing tests, such as jitter and clock-skew measurements, as advanced fabrication technologies allow fine on-chip time resolutions. A TDC's main purpose is to quantize the duration of a pulse or the interval between the rising edges of two clock signals. Similarly to ADCs, the performance of TDCs is primarily characterized by resolution, sampling rate, FOM, SNDR, dynamic range, and DNL/INL. This work proposes and demonstrates a 2nd-order noise-shaping asynchronous SAR ADC based TDC architecture with the highest resolution (0.25 ps) among current state-of-the-art designs, according to post-layout simulation results. The circuit combines a low-power, high-resolution 2nd-order noise-shaped asynchronous SAR ADC back-end with a simple time-to-amplitude converter (TAC) front-end, and is implemented in a 40nm CMOS technology. Additionally, special emphasis is given to the discussion of various current state-of-the-art TDC architectures.
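A conceptual sketch of the TAC-plus-ADC signal chain described above, with assumed component values rather than the paper's 40nm design; the raw LSB computed here is in the picosecond range, and sub-picosecond figures such as 0.25 ps rely on the noise shaping and oversampling that this toy model omits.

```python
# Conceptual sketch (assumed parameters): a time-to-amplitude converter (TAC)
# charges a hold capacitor with a constant current for the duration of the input
# interval, and an N-bit ADC quantizes the resulting voltage. The raw time
# resolution is the full-scale interval divided by the number of ADC codes.

I_CHARGE = 100e-6      # TAC charging current (A), assumed
C_HOLD   = 1e-12       # hold capacitor (F), assumed
V_FS     = 1.0         # ADC full-scale voltage (V), assumed
N_BITS   = 10          # ADC resolution (bits), assumed

def tac_voltage(t_interval):
    """Voltage on the hold capacitor after integrating for t_interval seconds."""
    return min(I_CHARGE * t_interval / C_HOLD, V_FS)

def tdc_code(t_interval):
    """Digital output code for the measured time interval."""
    return round(tac_voltage(t_interval) / V_FS * (2 ** N_BITS - 1))

t_full_scale = V_FS * C_HOLD / I_CHARGE          # 10 ns with the numbers above
lsb_time = t_full_scale / (2 ** N_BITS - 1)      # ~9.8 ps raw resolution
print(f"full scale = {t_full_scale*1e9:.1f} ns, raw LSB = {lsb_time*1e12:.2f} ps")
print("code for a 3.3 ns pulse:", tdc_code(3.3e-9))
```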
Power Side Channels in Security ICs: Hardware Countermeasures
Power side-channel attacks are a very effective cryptanalysis technique that can infer the secret keys of security ICs by monitoring their power consumption. Since the emergence of practical attacks in the late 90s, they have been a major threat to many cryptographic devices, including smart cards, encrypted FPGA designs, and mobile phones. Designers and manufacturers of cryptographic devices have in response developed various countermeasures for protection. Attacking methods have also evolved to counteract resistant implementations. This paper reviews foundational power analysis attack techniques and examines a variety of hardware design mitigations. The aim is to highlight exposed vulnerabilities in hardware-based countermeasures to guide future, more secure implementations.
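For readers unfamiliar with the attack class being surveyed, the following is a minimal, fully synthetic correlation power analysis (CPA) sketch: a toy random S-box stands in for a real cipher, and the "traces" are simulated Hamming-weight leakage with Gaussian noise, so nothing here reflects a specific device or the paper's case studies.

```python
# Minimal synthetic CPA sketch: correlate measured (here: simulated) power traces
# against predicted leakage for every key-byte guess; the correct guess correlates best.

import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)                 # toy substitution table, NOT the AES S-box

def hw(x):
    """Hamming weight of an 8-bit value."""
    return bin(int(x)).count("1")

SECRET_KEY = 0x3C                           # the byte the attack tries to recover
N_TRACES = 2000

plaintexts = rng.integers(0, 256, N_TRACES)
# Simulated power samples: Hamming-weight leakage of SBOX[p ^ k] plus Gaussian noise.
traces = np.array([hw(SBOX[p ^ SECRET_KEY]) for p in plaintexts], dtype=float)
traces += rng.normal(0.0, 1.0, N_TRACES)

correlations = []
for guess in range(256):
    predicted = np.array([hw(SBOX[p ^ guess]) for p in plaintexts], dtype=float)
    correlations.append(abs(np.corrcoef(predicted, traces)[0, 1]))

best = int(np.argmax(correlations))
print(f"recovered key byte: {best:#04x} (secret was {SECRET_KEY:#04x})")
```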
Dynamic Voltage Scaling Aware Delay Fault Testing
The application of Dynamic Voltage Scaling (DVS) to reduce energy consumption may have a detrimental impact on the quality of the manufacturing tests employed to detect permanent faults. This paper analyses the influence of different voltage/frequency settings on fault detection in a DVS application; in particular, the effect of supply voltage on different types of delay faults is considered. The study is supported by simulation results. We demonstrate that the test application time increases as the test voltage is reduced, and show that for newer technologies it is not necessary to go to very low voltage levels for delay fault testing. We conclude that it is necessary to test at more than one operating voltage and that the lowest operating voltage does not necessarily give the best fault coverage.
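The sketch below illustrates, with assumed device parameters and a hypothetical resistive-open defect model rather than the paper's simulations, why the lowest voltage is not automatically the best test point: the fault-free path slows down faster than the defect's extra delay, so the defect can hide inside the larger low-voltage slack.

```python
# Illustrative sketch (assumed models and numbers): fault-free path delay follows
# the alpha-power law, while a resistive-open defect is assumed to add a roughly
# voltage-independent extra delay. As Vdd drops, the test period (and absolute
# slack) grows faster than the defect delay, so the defect can escape detection.

VTH, ALPHA = 0.35, 1.3                     # assumed device parameters

def path_delay(vdd):
    """Fault-free critical-path delay, normalized to 1.0 at Vdd = 1.2 V."""
    ref = 1.2 / (1.2 - VTH) ** ALPHA
    return (vdd / (vdd - VTH) ** ALPHA) / ref

def defect_delay(vdd, rc=0.07):
    """Extra delay from a resistive-open defect; assumed roughly voltage-independent."""
    return rc

for vdd in (1.2, 1.0, 0.8):
    period = 1.05 * path_delay(vdd)        # test period = fault-free delay + 5% slack
    detected = path_delay(vdd) + defect_delay(vdd) > period
    print(f"Vdd={vdd:.1f} V: test period {period:.2f} (longer test time), "
          f"defect {'DETECTED' if detected else 'missed'}")
```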