Multiple bit error correcting architectures over finite fields
This thesis proposes techniques to mitigate multiple bit errors in Galois field (GF) arithmetic circuits. GF arithmetic circuits such as multipliers constitute complex and important functional units of a crypto-processor; making them fault tolerant improves the reliability of circuits employed in safety-critical applications, where uncorrected errors can be catastrophic.
First, a thorough literature review was carried out. The merits of existing efficient schemes are carefully analyzed to identify the room for improvement in error correction, area, and power consumption.
The proposed error correction schemes include bit-parallel ones using optimized BCH codes, which are useful in applications where power and area are not prime concerns. This scheme is also extended to a dynamically correcting scheme to reduce decoder delay. Another method, based on cross parity codes, is proposed for low-power, low-area applications such as RFIDs and smart cards. Experimental evaluation shows that the proposed techniques can mitigate single and multiple bit errors with wider error coverage than existing methods, at lower area and power consumption. The proposed schemes mask errors appearing at the output of the circuit irrespective of their cause.
This thesis also investigates error mitigation schemes in emerging technologies (QCA, CNTFET) to compare their area, power, and delay with existing CMOS equivalents. Though the proposed multiple error correcting techniques cannot ensure 100% error mitigation, incorporating them into an actual design can improve the reliability of the circuits or increase the difficulty of hacking crypto-devices. The proposed schemes can also be extended to non-GF digital circuits.
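The cross-parity idea above can be illustrated with a minimal sketch: row and column parities over a block of output bits locate a single flipped bit at their intersection. This is a simplified textbook analogue for illustration only, not the thesis's actual coding scheme, and the 4x4 block size is an arbitrary assumption.

```python
# Illustrative cross-parity (row/column parity) correction of a
# single-bit error in a 4x4 bit block. A simplified analogue of the
# cross parity codes mentioned above, not the thesis's exact scheme.

def encode(bits):
    """bits: 4x4 list of 0/1 values. Returns (row_parity, col_parity)."""
    rows = [sum(r) % 2 for r in bits]
    cols = [sum(bits[i][j] for i in range(4)) % 2 for j in range(4)]
    return rows, cols

def correct(bits, rows, cols):
    """Locate a single-bit error via parity mismatches and flip it."""
    bad_rows = [i for i in range(4) if sum(bits[i]) % 2 != rows[i]]
    bad_cols = [j for j in range(4)
                if sum(bits[i][j] for i in range(4)) % 2 != cols[j]]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        i, j = bad_rows[0], bad_cols[0]
        bits[i][j] ^= 1  # the error sits at the row/column intersection
    return bits

data = [[1, 0, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 1, 1]]
rp, cp = encode(data)
data[2][1] ^= 1              # inject a single-bit error
fixed = correct(data, rp, cp)
```

The storage overhead is one parity bit per row and per column, which is why such codes suit area- and power-constrained targets like RFIDs.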
A Performance-Efficient and Practical Processor Error Recovery Framework
Continued reduction in transistor size has affected the reliability of the processors built from these transistors. This is primarily due to factors such as manufacturing inaccuracies and non-ideal operating conditions, which cause transistors to slow down progressively, eventually leading to permanent breakdown and erroneous operation of the processor. Permanent transistor breakdowns, or faults, can occur at any point in the processor's lifetime. Errors are the discrepancies in the output of faulty circuits. This dissertation shows that components containing faults can continue operating if the errors they cause are within certain bounds. Further, the lifetime of a processor can be increased by adding supportive structures that start working once the processor develops these hard errors.
This dissertation has three major contributions: REPAIR, FaultSim, and PreFix. REPAIR is a fault-tolerant system requiring minimal changes to the processor design. It uses an external Instruction Re-execution Unit (IRU) to perform operations that the faulty processor might have executed erroneously. Instructions found to use faulty hardware are re-executed on the IRU. REPAIR shows that the performance overhead of such targeted re-execution is low for a limited number of faults.
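The REPAIR idea of targeted re-execution can be sketched in a few lines. Everything here is a hypothetical illustration under stated assumptions: the fault map, the two-operand instruction format, and the IRU interface are invented for the sketch and are not the dissertation's actual design.

```python
# Toy sketch of REPAIR-style targeted re-execution: only instructions
# that touch a unit diagnosed as faulty are diverted to a slow but
# correct Instruction Re-execution Unit (IRU). All names here are
# hypothetical illustrations, not the dissertation's design.

FAULTY_UNITS = {"mul"}  # units diagnosed as containing hard faults

def faulty_core_execute(op, a, b):
    """Main core: fast, but its multiplier drops the low result bit."""
    if op == "add":
        return a + b
    if op == "mul":
        return (a * b) & ~1  # erroneous result from the faulty unit
    raise ValueError(op)

def iru_execute(op, a, b):
    """External IRU: slower, but produces correct results."""
    return {"add": a + b, "mul": a * b}[op]

def execute(op, a, b):
    """Divert an instruction to the IRU only when it uses faulty
    hardware; all other instructions run on the fast main core."""
    if op in FAULTY_UNITS:
        return iru_execute(op, a, b)
    return faulty_core_execute(op, a, b)
```

Because only the (typically small) fraction of instructions mapped to faulty units pays the re-execution cost, the average overhead stays low, which is the effect the abstract describes.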
FaultSim is a fast fault simulator capable of simulating large circuits at the transistor level. It was developed in this dissertation to understand the effect of faults on different circuits. It performs digital-logic-based simulations, trading analogue accuracy for speed, while still supporting most fault models. A 32-bit addition takes under 15 microseconds while simulating more than 1500 transistors. It can also be integrated into an architectural simulator, adding a performance overhead of 10 to 26 percent to a simulation. The results obtained show that single faults cause an error in an adder for less than 10 percent of the inputs.
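The kind of experiment described above can be sketched at gate level: inject a stuck-at fault into an adder's carry chain and exhaustively count how many input pairs produce a wrong sum. Note this is a simplified gate-level illustration (FaultSim itself works at the transistor level), and the 4-bit width and fault site are arbitrary assumptions.

```python
# Minimal gate-level stuck-at fault simulation on a 4-bit
# ripple-carry adder, in the spirit of FaultSim (which models
# transistors; this sketch stays at the logic-gate level).

def ripple_add(a, b, width=4, stuck=None):
    """Add two `width`-bit values bit by bit. `stuck=(pos, val)`
    optionally forces the carry out of bit `pos` to `val`,
    modelling a stuck-at fault on that carry line."""
    carry, out = 0, 0
    for i in range(width):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s = ai ^ bi ^ carry                      # sum bit
        carry = (ai & bi) | (carry & (ai ^ bi))  # carry out
        if stuck and stuck[0] == i:
            carry = stuck[1]  # inject the stuck-at fault
        out |= s << i
    return out  # result modulo 2**width

# Exhaustively count inputs for which a stuck-at-0 fault on the
# carry out of bit 1 produces a wrong sum.
errors = sum(
    ripple_add(a, b) != ripple_add(a, b, stuck=(1, 0))
    for a in range(16)
    for b in range(16)
)
error_rate = errors / 256
```

Exhaustive enumeration is feasible only for tiny circuits; a tool like FaultSim gets the same kind of error-rate statistics for realistic designs by fast sampling over inputs and fault sites.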
PreFix brings together the fault models created using FaultSim and the design directions found using REPAIR. PreFix re-executes instructions on a remote core, which picks up instructions to execute from a global instruction buffer. Error prediction and detection are used to reduce the number of re-executed instructions. PreFix has an area overhead of 3.5 percent in the setup used, and its performance overhead is within 5 percent of the fault-free case. This dissertation shows that faults in processors can be tolerated without explicitly switching off any component, and that minimal redundancy is sufficient to achieve this.
Miniaturized Transistors, Volume II
In this book, we aim to address the ever-advancing progress in microelectronic device scaling. Complementary Metal-Oxide-Semiconductor (CMOS) devices continue to undergo miniaturization, irrespective of the seeming physical limitations, helped by advancing fabrication techniques. We observe that miniaturization does not always refer to the latest technology node for digital transistors. Rather, by applying novel materials and device geometries, a significant reduction in the size of microelectronic devices for a broad set of applications can be achieved. The achievements made in the scaling of devices for applications beyond digital logic (e.g., high power, optoelectronics, and sensors) are taking the forefront in microelectronic miniaturization. Furthermore, all these achievements are assisted by improvements in the simulation and modeling of the involved materials and device structures. In particular, process and device technology computer-aided design (TCAD) has become indispensable in the design cycle of novel devices and technologies. It is our sincere hope that the results provided in this Special Issue prove useful to scientists and engineers who find themselves at the forefront of this rapidly evolving and broadening field. Now, more than ever, it is essential to look for solutions to find the next disruptive technologies which will allow for transistor miniaturization well beyond silicon's physical limits and the current state of the art. This requires a broad attack, including studies of novel and innovative designs as well as emerging materials, which are becoming more application-specific than ever before.
Low Power Memory/Memristor Devices and Systems
This reprint focuses on achieving low-power computation using memristive devices. The topic was designed as a convenient reference point: it contains a mix of techniques, starting from the fundamental manufacturing of memristive devices all the way to applications such as physically unclonable functions, and also covers perspectives on, e.g., in-memory computing, which is inextricably linked with emerging memory devices such as memristors. Finally, the reprint contains a few articles representing how other communities (from typical CMOS design to photonics) are fighting on their own fronts in the quest towards low-power computation, as a comparison with the memristor literature. We hope that readers will enjoy discovering the articles within.
Reports to the President
A compilation of annual reports for the 1981-1982 academic year, including a report from the President of the Massachusetts Institute of Technology, as well as reports from the academic and administrative units of the Institute. The reports outline the year's goals, accomplishments, honors and awards, and future plans.
Design of Multi-Gigabit Network Interconnect Elements and Protocols for a Data Acquisition System in Radiation Environments
Modern High Energy Physics (HEP) experiments explore the fundamental nature of matter in more depth than ever before and thereby benefit greatly from advances in the field of communication technology. The huge data volumes generated by increasingly precise detector setups pose severe problems for the Data Acquisition Systems (DAQ) used to process and store this information. In addition, detector setups and their read-out electronics need to be synchronized precisely to allow a later accurate correlation of experiment events in time. Moreover, the substantial presence of charged particles from accelerator-generated beams results in strong ionizing radiation levels, which have a severe impact on the electronic systems.
This thesis recommends an architecture of unified network protocol IP cores with custom-developed physical interfaces for use in reliable data acquisition systems in strong radiation environments. Specially configured serial bidirectional point-to-point interconnects are proposed to realize high-speed data transmission, slow control access, synchronization, and global clock distribution over unified links, reducing costs and yielding compact and efficient read-out setups. Special features are the developed radiation-hardened functional units, which protect against single and multiple bit upsets, and the common interface for statistical error and diagnosis information, which integrates well into the protocol capabilities and eases error handling in large experiment setups. Many innovative designs for several custom FPGA and ASIC platforms have been implemented and are described in detail. Special focus is placed on the physical layers and network interface elements, from high-speed serial LVDS interconnects up to 20 Gb/s SSTL links in state-of-the-art process technology.
The developed IP cores are fully tested, both by a verification environment adapted to electronic design automation tools and in live application. They are available in a global repository, allowing broad usage within further HEP experiments.
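A standard building block for hardening logic against the single-bit upsets mentioned above is triple modular redundancy (TMR) with a bitwise majority voter. The sketch below illustrates that general principle only, under the assumption of three redundant register copies; it is not the thesis's specific radiation-hardened design.

```python
# Triple modular redundancy (TMR): keep three copies of a register
# and vote bitwise. A single event upset flipping bits in one copy
# is outvoted by the other two. Illustration of the general
# technique, not the thesis's specific hardened units.

def majority_vote(a, b, c):
    """Bitwise majority of three redundant register copies."""
    return (a & b) | (b & c) | (a & c)

reg = 0b10110010
copy_a, copy_b, copy_c = reg, reg, reg
copy_b ^= 0b00000100  # single event upset flips one bit in copy b
assert majority_vote(copy_a, copy_b, copy_c) == reg
```

TMR masks any upset confined to one copy; multiple-bit upsets spanning two copies defeat a plain voter, which is why designs for strong radiation environments typically combine redundancy with error-detecting codes and periodic scrubbing.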
UTPA Undergraduate Catalog 1998-2000