Presentation of a fault tolerance algorithm for design of quantum-dot cellular automata circuits
A novel algorithm is introduced for computing the kink energy of quantum-dot cellular automata (QCA) circuits and evaluating their fault tolerance. The algorithm first takes the input values of a specified design and computes the intercell interactions using the physical relations for kink energy, thereby setting the polarization of every cell and, consequently, of the output cell. Missing cells are then introduced into the circuit under study, the polarization of the output cell is recomputed, and comparing it with the fault-free state (or with a software simulation) establishes whether the design is fault tolerant. The proposed algorithm was applied to a novel, advanced fault-tolerant full adder, whose performance has been demonstrated; it can be implemented on any QCA circuit. Its advantages over simulation and traditional manual methods include noticeably higher speed, extensibility to circuit variants beyond the four-dot square QCA cell, and the ability to investigate several damaged cells rather than just one specific cell.
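As a rough illustration of the quantities such an algorithm manipulates, the sketch below computes pairwise kink energies and cell polarizations using the standard two-state (bistable) approximation familiar from tools such as QCADesigner; the 18 nm dot spacing, vacuum permittivity, and tunnelling energy gamma are illustrative assumptions, not parameters taken from the paper.

import math

Q = 1.602e-19            # electron charge (C)
K = 8.988e9              # Coulomb constant 1/(4*pi*eps0), vacuum assumed
D = 18e-9                # dot spacing within a four-dot cell (assumed)

# Diagonal dot sites occupied by the two electrons for P = +1 and P = -1
P_PLUS = [(-D / 2, D / 2), (D / 2, -D / 2)]
P_MINUS = [(-D / 2, -D / 2), (D / 2, D / 2)]

def coulomb(c1, c2, dots1, dots2):
    """Electrostatic energy between the electrons of cells centred at c1, c2."""
    e = 0.0
    for ax, ay in dots1:
        for bx, by in dots2:
            r = math.hypot(c1[0] + ax - (c2[0] + bx), c1[1] + ay - (c2[1] + by))
            e += K * Q * Q / r
    return e

def kink_energy(c1, c2):
    """Energy cost of cell c2 holding the opposite polarization to c1."""
    return coulomb(c1, c2, P_PLUS, P_MINUS) - coulomb(c1, c2, P_PLUS, P_PLUS)

def polarization(neighbours, gamma=1e-22):
    """Two-state approximation: P = x / sqrt(1 + x^2),
    with x = sum(E_kink_j * P_j) / (2 * gamma)."""
    x = sum(e * p for e, p in neighbours) / (2 * gamma)
    return x / math.sqrt(1 + x * x)

# Usage: a driver cell 20 nm to the left of a free cell forces P towards +1
ek = kink_energy((0.0, 0.0), (20e-9, 0.0))
print(polarization([(ek, +1.0)]))

Propagating these polarizations cell by cell, with selected cells deleted, is what lets the output polarization be compared against the fault-free case.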
Design of variation-tolerant synchronizers for multiple clock and voltage domains
PhD Thesis. Parametric variability increasingly affects the performance of electronic circuits as
the fabrication technology has reached the level of 32nm and beyond. These
parameters may include transistor Process parameters (such as threshold
voltage), supply Voltage and Temperature (PVT), all of which could have a
significant impact on the speed and power consumption of the circuit, particularly
if the variations exceed the design margins. As systems are designed with more
asynchronous protocols, there is a need for highly robust synchronizers and
arbiters. These components are often used as interfaces between communication
links of different timing domains as well as sampling devices for asynchronous
inputs coming from external components. These applications have created a need
for new robust designs of synchronizers and arbiters that can tolerate process,
voltage and temperature variations.
The aim of this study was to investigate how synchronizers and arbiters should be
designed to tolerate parametric variations. All investigations focused mainly on
circuit-level and transistor-level designs, which were modeled and simulated in the
UMC90nm CMOS technology process. Analog simulations were used to measure
timing parameters and power consumption, along with Monte Carlo statistical
analysis to account for process variations.
Two main components of synchronizers and arbiters were primarily investigated:
the flip-flop and the mutual-exclusion element (MUTEX). Both components can violate
their input timing conditions (the setup and hold window), which can cause
metastability inside their bistable elements and possibly end in failures. The
mean time between failures (MTBF) is an important reliability measure of any
synchronizer; it grows exponentially with the resolution time afforded by the
delay through the synchronizer.
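For reference, the relationship invoked here is the standard synchronizer reliability model used throughout the metastability literature; the sketch below illustrates it with assumed, not measured, parameter values.

import math

def mtbf(t_res, tau, t_w, f_clk, f_data):
    """MTBF = exp(t_res / tau) / (t_w * f_clk * f_data), where t_res is
    the time available for metastability resolution, tau the latch's
    resolution time constant, and t_w its metastability window."""
    return math.exp(t_res / tau) / (t_w * f_clk * f_data)

# Illustrative only: 1 ns of resolution time at a 1 GHz clock and a
# 100 MHz data rate; a 50% increase in tau costs seven orders of magnitude
print(mtbf(1e-9, 20e-12, 30e-12, 1e9, 100e6))   # ~1.7e15 s
print(mtbf(1e-9, 30e-12, 30e-12, 1e9, 100e6))   # ~9.6e7 s

The exponential dependence on tau is why variation-induced shifts in tau dominate synchronizer reliability.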
The MUTEX study focused on the classical circuit, in addition to a number of
modified designs aimed at improving variation tolerance, based on increasing
internal gain by adding current sources, reducing
the capacitive loading, boosting the transconductance of the latch, compensating
the existing Miller capacitance, and adding asymmetry to maneuver the metastable
point. The results showed that some circuits offered little or no improvement,
while five techniques showed significant improvements, reducing τ while
maintaining high tolerance.
Three design approaches are proposed to provide variation-tolerant
synchronizers. First, the wagging synchronizer is proposed to significantly
increase reliability over that of the conventional two flip-flop synchronizer. The
robustness of the wagging technique can be enhanced by using robust τ latches or
adding one more cycle of synchronization. The second approach is the
Metastability Auto-Detection and Correction (MADAC) latch, which relies on swiftly
detecting a metastable event and correcting it by enforcing the previously stored
logic value. This technique significantly reduces the resolution time, down from
an uncertain, potentially unbounded delay to a deterministic correction delay. The third
synchronization technique is proposed to transfer signals between Multiple-
Voltage Multiple-Clock Domains (MVD/MCD) without requiring conventional
level-shifters between the domains or multiple power supplies within each
domain. This interface circuit uses a synchronous set and feedback reset protocol
which provides level-shifting and synchronization of all signals between the
domains, across a wide range of supply voltages and clock frequencies.
Overall, synchronizer circuits can tolerate variations to a greater extent by
employing the wagging technique or using a MADAC latch, while adequate MUTEX
tolerance can be achieved with small circuit modifications. Communication between
MVD/MCD can be achieved by an asynchronous handshake, without the need for
added level-shifters.
Applications of Physical Unclonable Functions on ASICs and FPGAs
With the ever-increasing demand for security in embedded systems and wireless sensor networks, security primitives for authentication must be integrated into these devices. One such primitive is the Physically Unclonable Function (PUF). PUFs can provide security at low cost: a key or digital signature can be generated by dedicating a small part of the silicon die to these primitives, producing a fingerprint unique to each device. The fingerprint produced by a PUF is called its response, and it depends upon the process variation that occurs during manufacturing. In embedded systems, and especially in wireless sensor networks, there is a need to secure the data collected from the sensors.
To tackle this problem, we propose the use of SRAM-based PUFs to detect the temperature of the system. This is done by using the PUF response to generate temperature-based keys, which act as proofs of the system's temperature. In SRAM PUFs, it has been experimentally determined that at varying temperatures some cells' responses flip from zero to one and vice versa. This variation can be exploited to generate random but repeatable keys at different temperatures.
To evaluate our approach, we first analyze the key metrics of a PUF, namely reliability and uniqueness. To test the idea of using the PUF as a temperature-based key generator, we collect data from a total of ten SRAM chips at fixed temperature steps. We first calculate the reliability, which is related to the bit error rate (an important parameter with respect to error correction), at various temperatures to verify the stability of the responses. We then identify the temperature of the system using a temperature sensor and encode the key, offset by the PUF response at that temperature, using BCH codes. This key-temperature pair can then be used to establish secure communication between the nodes. The scheme thus helps establish secure keys, since key generation involves an extra variable that adds confusion for an attacker.
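A minimal sketch of the code-offset construction described here: the helper data is the XOR of a codeword with the enrolled PUF response, so a later noisy reading recovers the codeword up to correctable errors. For brevity it uses a 5x repetition code in place of the BCH codes used in this work; all names and parameters are illustrative.

import secrets

REP = 5  # repetition factor (illustrative stand-in for a BCH code)

def enroll(response_bits, key_bits):
    """Code-offset enrollment: helper = codeword XOR PUF response."""
    codeword = [b for b in key_bits for _ in range(REP)]        # encode
    return [c ^ r for c, r in zip(codeword, response_bits)]     # helper data

def reconstruct(helper, noisy_response):
    """Recover the key from a noisy re-reading of the same PUF."""
    codeword = [h ^ r for h, r in zip(helper, noisy_response)]
    # majority-decode each group of REP bits
    return [int(sum(codeword[i:i + REP]) > REP // 2)
            for i in range(0, len(codeword), REP)]

# Usage: the response enrolled at one temperature binds a key; a later
# reading at the same temperature with a few flipped bits still recovers it
key = [secrets.randbelow(2) for _ in range(16)]
resp = [secrets.randbelow(2) for _ in range(16 * REP)]
helper = enroll(resp, key)
noisy = list(resp); noisy[3] ^= 1; noisy[40] ^= 1
assert reconstruct(helper, noisy) == key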
We developed a novel PUF for Xilinx FPGAs and evaluated its quality metrics; it is very compact and has high uniqueness and reliability. We also implemented two different PUF configurations to allow per-device selection of the best PUFs, reducing the area and power required for key generation. Finally, we evaluated the temperature response of this PUF and showed that per-device selection improves the response.
Fault-tolerant Algorithms for Tick-Generation in Asynchronous Logic: Robust Pulse Generation
Today's hardware technology presents a new challenge in designing robust
systems. Deep submicron VLSI technology introduced transient and permanent
faults that were never considered in low-level system designs in the past.
Still, robustness of that part of the system is crucial and needs to be
guaranteed for any successful product. Distributed systems, on the other hand,
have been dealing with similar issues for decades. However, neither the basic
abstractions nor the complexity of contemporary fault-tolerant distributed
algorithms match the peculiarities of hardware implementations. This paper is
part of an attempt to overcome this gap between theory and practice for the
clock synchronization problem. Solving this task sufficiently well will make it
possible to build a very robust high-precision clocking
system for hardware designs like systems-on-chips in critical applications. As
our first building block, we describe and prove correct a novel Byzantine
fault-tolerant self-stabilizing pulse synchronization protocol, which can be
implemented using standard asynchronous digital logic. Despite the strict
limitations introduced by hardware designs, it offers optimal resilience and
smaller complexity than all existing protocols.
Study of Single-Event Transient Effects on Analog Circuits
Radiation in space is potentially hazardous to microelectronic circuits and systems such as spacecraft electronics. Transient effects from highly energetic particles can interrupt the operation of electronics or crash entire systems. This phenomenon is particularly serious in complementary metal-oxide-semiconductor (CMOS) integrated circuits (ICs), since most modern ICs are implemented in CMOS technologies, and the problem worsens as the technology scales down. Radiation-hardening-by-design (RHBD) is a popular method for building CMOS devices and systems that meet performance criteria in radiation environments.
Single-event transient (SET) effects in digital circuits have been studied extensively in the radiation-effects community. In recent years, analog RHBD has received increasing attention, since analog circuits have begun to show vulnerability to SETs due to dramatic process scaling; analog RHBD is still at the research stage. This work further studies the effects of SETs on analog CMOS circuits and introduces cost-effective RHBD approaches to mitigate them.
The analog circuits considered in this study include operational amplifiers (op amps), comparators, voltage-controlled oscillators (VCOs), and phase-locked loops (PLLs). The op amp is used to study SET effects on signal amplitude, while the comparator, the VCO, and the PLL are used to study SET effects on signal state during transitions. Approaches at multiple levels, from transistor to circuit to system, are presented to mitigate SET effects in the aforementioned circuits. Specifically, at the circuit level, the RHBD op amp adopts an auto-zeroing cancellation technique; at the transistor level, RHBD comparators implemented in dual-well and triple-well processes are studied and compared; in an LC-tank oscillator, SET effects are mitigated by inserting a decoupling resistor; and at the system level, the RHBD PLL is implemented using a triple modular redundancy (TMR) approach. This demonstrates that multi-level RHBD can be a cost-effective way to mitigate SETs in analog circuits. In addition, SET detection approaches are provided in this dissertation so that the various mitigation approaches can be deployed more effectively. The performance and effectiveness of the proposed RHBD techniques are validated through SPICE simulations of the schematics and pulsed-laser experiments on the fabricated circuits. The proposed and tested RHBD techniques can be applied to other relevant analog circuits in industry to achieve radiation tolerance.
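The TMR idea at the heart of the PLL hardening can be stated in a few lines: three copies of the block run in parallel and a majority voter masks a transient upset in any one copy. The sketch below is a generic bitwise voter, not the dissertation's circuit.

def tmr_vote(a, b, c):
    """Bitwise 2-of-3 majority: a transient fault in one copy is masked."""
    return (a & b) | (b & c) | (a & c)

# An SET corrupting one of three redundant output words is voted out
assert tmr_vote(0b1010, 0b1010, 0b0110) == 0b1010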
Low phase noise 2 GHz Fractional-N CMOS synthesizer IC
Low-noise, low-division 2 GHz RF synthesizer integrated circuits (ICs) are conventionally implemented in some form of HBT process such as SiGe or GaAs. The research in this dissertation differs from convention, aiming to implement a synthesizer IC in a more convenient, low-cost Si-based CMOS process. A collection of techniques for pushing towards the noise and frequency limits of CMOS processes, and possibly other IC processes, is thus one of the research outcomes. In a synthesizer, low N-divider ratios are important, as high division ratios amplify in-band phase noise. The design methods deployed in this research achieve low division ratios (4 ≤ N ≤ 33) and a high phase-comparison frequency (>100 MHz). The synthesizer IC employs a first-order fractional-N topology to achieve increased frequency-tuning resolution. The primary N-divider was implemented utilising current-mode logic (CML) and the fractional accumulator utilising conventional CMOS. Both a conventional CMOS phase frequency detector (PFD) and a CML PFD were implemented for benchmarking purposes. A custom-built 4.4 GHz synthesizer circuit employing the IC was used to validate the research; in it, the prototype IC achieved a measured in-band phase-noise plateau of L(f) = -113 dBc/Hz at a 100 kHz frequency offset, which equates to a figure of merit (FOM) of -225 dBc/Hz. This FOM compares well with those of existing, but expensive, SiGe and GaAs HBT processes. Total IC power dissipation was 710 mW, considerably less than that of commercially available GaAs designs. The complete synthesizer IC was implemented in Austriamicrosystems' (AMS) 0.35 µm CMOS process and occupies an area of 3.15 × 2.18 mm². Dissertation (MEng), University of Pretoria, 2010.
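The first-order fractional-N topology mentioned above works by letting an accumulator of modulus M overflow periodically, swapping the divider between N and N+1 so that the average ratio becomes N + K/M. A behavioural sketch, with illustrative values rather than the IC's actual parameters:

def frac_n_divider(n, k, m, cycles):
    """First-order fractional-N: an accumulator of modulus m adds k each
    reference cycle; on overflow the prescaler divides by n+1 instead
    of n, giving an average division ratio of n + k/m."""
    acc, ratios = 0, []
    for _ in range(cycles):
        acc += k
        if acc >= m:
            acc -= m
            ratios.append(n + 1)
        else:
            ratios.append(n)
    return sum(ratios) / len(ratios)

# e.g. n=16, k=3, m=8 averages 16.375 over one full accumulator period
print(frac_n_divider(16, 3, 8, 8))   # 16.375

The periodic overflow pattern is also what produces the fractional spurs that higher-order (sigma-delta) modulators are designed to shape away.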
Development, Optimisation and Characterisation of a Radiation Hard Mixed-Signal Readout Chip for LHCb
The Beetle chip is a radiation-hard, 128-channel pipelined readout chip for silicon strip detectors. The front-end consists of a charge-sensitive preamplifier followed by a CR-RC pulse shaper. The analogue pipeline memory is implemented as a switched-capacitor array with a maximum latency of 4 µs. The 128 analogue channels are multiplexed and transmitted off chip in 900 ns via four current output drivers. Besides the pipelined readout path, the Beetle provides fast discrimination of the front-end pulse. Within this doctoral thesis, parts of the radiation-hard Beetle readout chip for the LHCb experiment have been developed. The overall chip performance, including noise, power consumption, and input charge rate, has been optimised, and failures have been eliminated, so that the Beetle fulfils the requirements of the experiment. Furthermore, characterisation of the chip was a major part of this thesis: besides detailed measurement of the chip performance, several irradiation tests and a Single Event Upset (SEU) test were performed. A long-term measurement with a silicon strip detector was also part of this work, as was the development and testing of a first mass-production test setup. The Beetle chip showed no functional failure and only slight degradation in analogue performance under irradiation of up to 130 Mrad total dose. It fulfils all requirements of the vertex detector (VELO), the trigger tracker (TT) and the inner tracker (IT), and is ready for the start of LHCb at the end of 2007.
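The CR-RC shaper in such a front-end has the textbook semi-Gaussian impulse response h(t) ∝ (t/τ)·exp(−t/τ), peaking at t = τ. A small sketch of the normalized pulse; the 25 ns time constant is an assumption for illustration, not the Beetle's measured peaking time.

import math

def crrc_pulse(t, tau=25e-9):
    """Ideal CR-RC shaper response (equal differentiation and integration
    time constants), normalized so the peak equals 1 at t = tau."""
    return (t / tau) * math.exp(1.0 - t / tau) if t >= 0 else 0.0

# The pulse rises to its peak at one time constant and decays afterwards
print(crrc_pulse(25e-9))   # 1.0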
Intrinsic Functions for Securing CMOS Computation: Variability, Modeling and Noise Sensitivity
A basic premise behind modern secure computation is the demand for lightweight cryptographic primitives, such as identifier or key generators. From a circuit perspective, the development of cryptographic modules has also been driven by the aggressive scaling of complementary metal-oxide-semiconductor (CMOS) technology. As designs advance into the nanometer regime, one significant characteristic of today's CMOS design is the random nature of process variability, which limits nominal circuit design. With the continuous scaling of CMOS technology, leveraging such properties, rather than mitigating the physical variability, becomes a promising approach. One well-known product adhering to this double-edged-sword philosophy is the Physically Unclonable Function (PUF), which extracts secret keys from uncontrollable manufacturing variability on integrated circuits (ICs). However, because PUFs take advantage of microscopic process variations, many specialized issues, including variability, modeling attacks and noise sensitivity, need to be considered and addressed.
In this dissertation, we present our recent work on PUF-based secure computation from three aspects: variability, modeling and noise sensitivity, which we regard as the foundations of our study. Moreover, we found that the three factors interact with each other: modeling techniques can be used to improve the unsatisfactory reliability caused by noise sensitivity, quantifying the variability can effectively eliminate the impact of noise, and modeling can help characterize the physical variability precisely.
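As an example of the modeling aspect, the standard additive-delay model of an arbiter PUF is linear in a parity-transformed challenge, which is what makes machine-learning modeling (and model-assisted reliability analysis) possible. The sketch below simulates such a PUF and fits it with a perceptron-style learner; all parameters are illustrative, and real modeling attacks typically use logistic regression or CMA-ES.

import numpy as np

def phi(challenge):
    """Parity feature map of the additive-delay arbiter-PUF model:
    phi_i = product over j >= i of (1 - 2*c_j), plus a constant bias."""
    c = 1 - 2 * np.asarray(challenge)
    return np.append(np.cumprod(c[::-1])[::-1], 1.0)

def response(w, challenge):
    return int(np.dot(w, phi(challenge)) > 0)

rng = np.random.default_rng(0)
n = 32
w_true = rng.normal(size=n + 1)          # the device's secret stage delays
w_hat = np.zeros(n + 1)                  # the attacker's model
for _ in range(20000):                   # train on random CRPs
    ch = rng.integers(0, 2, n)
    f, r = phi(ch), response(w_true, ch)
    if (np.dot(w_hat, f) > 0) != r:      # perceptron update on mistakes
        w_hat += (2 * r - 1) * f

# The learned model now predicts unseen challenges with high accuracy
test = [rng.integers(0, 2, n) for _ in range(1000)]
acc = np.mean([response(w_hat, c) == response(w_true, c) for c in test])
print(f"model accuracy: {acc:.3f}")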