A Survey on the Best Choice for Modulus of Residue Code
Nowadays, the development of technology and the growing need for dense and complex chips have led the chip industry to pay increasing attention to circuit testability. Moreover, the use of electronic chips in certain industries, such as the space industry, makes the design of fault-tolerant circuits a challenging issue. Coding is one of the most suitable methods for error detection and correction. The residue code, one of the best choices for error detection purposes, is widely used in large arithmetic circuits such as multipliers and also finds a wide range of applications in processors and digital filters. The modulus value in this technique directly affects the area overhead. A large area overhead is one of the most important disadvantages, especially when testing small circuits. The purpose of this paper is to study and investigate the best choice of residue-code check base for simple and small circuits such as a simple ripple carry adder. Performance is evaluated by applying stuck-at faults and transition faults in simulation, and efficiency is defined in terms of fault coverage and normalized area overhead. The results show that modulus 3, with 95% efficiency, provides the best result: a residue code with this modulus checking a ripple carry adder improves efficiency by 30% compared with a duplex circuit.
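To make the checking scheme concrete, here is a minimal sketch of residue-code checking with modulus 3 applied to an adder; the function names are illustrative and not taken from the paper. The check side predicts the sum's residue from the operands' residues, and a mismatch flags a fault.

```python
# Minimal sketch of residue-code checking for an adder, assuming modulus 3.
# Names are illustrative, not from the paper.

M = 3  # check base (modulus); the survey finds 3 most efficient for small adders

def residue(x: int) -> int:
    """Residue (check symbol) of an operand."""
    return x % M

def checked_add(a: int, b: int) -> tuple[int, bool]:
    """Add a and b; flag an error if the result's residue disagrees
    with the residue predicted from the operands."""
    s = a + b                                   # main (possibly faulty) adder
    predicted = (residue(a) + residue(b)) % M   # cheap check-side prediction
    ok = residue(s) == predicted
    return s, ok

# A single-bit error changes the sum by +/- 2**i, and 2**i mod 3 is never 0,
# so any single-bit error changes the residue and is detected:
s, ok = checked_add(13, 29)
print(s, ok)                       # 42 True
faulty = s ^ (1 << 2)              # inject a single-bit error
print(faulty, residue(faulty) == (residue(13) + residue(29)) % M)  # 46 False
```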
Design of a High-Speed Architecture for Stabilization of Video Captured Under Non-Uniform Lighting Conditions
Video captured under shaky conditions suffers from unwanted vibrations. A robust algorithm that stabilizes the video by compensating for vibrations arising from the physical settings of the camera is presented in this dissertation. A very high performance hardware architecture on Field Programmable Gate Array (FPGA) technology is also developed for the implementation of the stabilization system. Stabilization of video sequences captured under non-uniform lighting conditions begins with a nonlinear enhancement process. This improves the visibility of scenes captured by physical sensing devices with limited dynamic range, a limitation that causes saturated regions of the image to shadow out the rest of the scene. It is therefore desirable to recover a more uniform scene that eliminates the shadows to a certain extent. Stabilization of video requires the estimation of global motion parameters. By obtaining reliable background motion, the video can be spatially transformed to align with the reference sequence, thereby eliminating the unintended motion of the camera.
A reflectance-illuminance model for video enhancement is used in this research work to improve the visibility and quality of the scene. With fast color space conversion, the computational complexity is reduced to a minimum. The basic video stabilization model is formulated and configured for hardware implementation. Such a model involves evaluation of reliable features for tracking, motion estimation, and affine transformation to map the display coordinates of a stabilized sequence. The multiplications, divisions and exponentiations are replaced by simple arithmetic and logic operations using improved log-domain computations in the hardware modules. On Xilinx's Virtex II 2V8000-5 FPGA platform, the prototype system consumes 59% logic slices, 30% flip-flops, 34% lookup tables, 35% embedded RAMs and two ZBT frame buffers. The system is capable of rendering 180.9 million pixels per second (mpps) and consumes approximately 30.6 watts of power at 1.5 volts. With a 1024×1024 frame, the throughput is equivalent to 172 frames per second (fps).
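As a rough illustration of the log-domain idea mentioned above, the sketch below replaces a multiplication with an addition of approximate base-2 logarithms, using Mitchell's classic linear approximation as a stand-in for the dissertation's improved variant; all names here are illustrative assumptions.

```python
# Illustrative sketch of log-domain arithmetic: operands are converted to
# approximate base-2 logs, multiplication becomes addition, and the result
# is converted back. Mitchell's approximation is used here; the result is
# approximate (worst-case error around 11%), which the improved schemes
# referenced in the dissertation are designed to reduce.

def log2_mitchell(x: int) -> float:
    """Approximate log2(x) as k + m, where x = 2**k * (1 + m)."""
    k = x.bit_length() - 1         # position of the leading one
    m = x / (1 << k) - 1.0         # fractional part in [0, 1)
    return k + m                   # linear (Mitchell) approximation of log2

def mul_logdomain(a: int, b: int) -> int:
    """Multiply via addition in the log domain."""
    return round(2 ** (log2_mitchell(a) + log2_mitchell(b)))

print(37 * 53, mul_logdomain(37, 53))   # exact vs. approximate product
```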
Future work will optimize the performance-resource trade-off to meet the specific needs of the applications. The model can further be extended to the extraction and tracking of moving objects, as it inherently encapsulates the attributes of spatial distortion and motion prediction to reduce complexity. With these parameters narrowing down the processing range, it is possible to achieve a minimum of 20 fps on desktop computers with Intel Core 2 Duo or Quad Core CPUs and 2GB DDR2 memory, without dedicated hardware.
Low-cost duplication for separable error detection in computer arithmetic
Low-cost arithmetic error detection will be necessary in the future to ensure correct and safe system operation. However, current error detection mechanisms for arithmetic either have high area and energy overheads or are complex and offer incomplete protection against errors. Full duplication is simple, strong, and separable, but often prohibitively costly. Alternative techniques such as arithmetic error coding require lower hardware and energy overheads than full duplication, but at the expense of high design effort and error coverage holes. The goal of this research is to mitigate the deficiencies of duplication and arithmetic error coding to form an error detection scheme that may be readily employed in future systems. The techniques described in this work use a general duplication technique that employs an alternate number system in the duplicate arithmetic unit. These novel dual modular redundancy organizations are referred to as low-cost duplication, and they provide compelling efficiency and coverage advantages over prior arithmetic error detection mechanisms.
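A minimal sketch of the low-cost duplication idea follows, assuming a two-modulus residue number system (RNS) for the duplicate unit; the moduli and function names are illustrative choices, not taken from the thesis. The main adder computes in binary, the duplicate computes carry-free in RNS, and a separable checker compares the two.

```python
# Sketch of "low-cost duplication": the main unit computes in ordinary binary
# while a duplicate unit computes the same operation in an alternate number
# system (here, a two-modulus RNS), and a checker compares the results.
# Moduli are assumptions for illustration.

MODULI = (7, 9)  # pairwise-coprime moduli for the duplicate RNS unit

def to_rns(x: int) -> tuple[int, ...]:
    return tuple(x % m for m in MODULI)

def rns_add(a, b) -> tuple[int, ...]:
    """Duplicate adder: digit-wise modular addition, no long carry chain."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def checked_add(a: int, b: int) -> tuple[int, bool]:
    s = a + b                             # main binary adder
    dup = rns_add(to_rns(a), to_rns(b))   # independent duplicate in RNS
    ok = to_rns(s) == dup                 # separable comparison
    return s, ok

print(checked_add(100, 23))               # (123, True)
```

Because the duplicate never represents the full-width sum, it avoids the main adder's carry logic, which is one intuition for why such a duplicate can be cheaper than a second binary copy.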
VLSI Architectures for WIMAX Channel Decoders
This chapter describes the main architectures proposed in the literature to implement the channel decoders required by the WiMax standard, namely convolutional codes, turbo codes (both block and convolutional), and LDPC codes. Then it shows a complete design of a convolutional turbo code encoder/decoder system for WiMax.
Comment: To appear in the book "WIMAX, New Developments", M. Upena, D. Dalal, Y. Kosta (Ed.), ISBN 978-953-7619-53-
Configurable and Scalable Turbo Decoder for 4G Wireless Receivers
The increasing requirements of high data rates and quality of service (QoS) in fourth-generation (4G) wireless communication require the implementation of practical capacity-approaching codes. In this chapter, the Turbo coding schemes recently adopted in the IEEE 802.16e WiMax standard and the 3GPP Long Term Evolution (LTE) standard are reviewed. In order to process several 4G wireless standards with a common hardware module, a reconfigurable and scalable Turbo decoder architecture is presented. A parallel Turbo decoding scheme with scalable parallelism tailored to the target throughput is applied to support high data rates in 4G applications. High-level decoding parallelism is achieved by employing contention-free interleavers. A multi-banked memory structure and a routing network among memories and MAP decoders are designed to operate at full speed with parallel interleavers. A new on-line address generation technique is introduced to support multiple Turbo interleaving patterns, avoiding the interleaver address memory that is typically necessary in traditional designs. Design trade-offs in terms of area and power efficiency are analyzed for different parallelism and clock frequency goals.
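The on-line address generation idea can be illustrated with the LTE contention-free QPP (quadratic permutation polynomial) interleaver, pi(i) = (f1*i + f2*i^2) mod K, whose addresses can be produced one per step using only additions. This is a hedged sketch of the general recursion for the LTE case only; the chapter's technique additionally supports WiMax patterns, which this sketch does not cover.

```python
# On-line QPP interleaver address generation, pi(i) = (f1*i + f2*i^2) mod K,
# computed recursively with additions mod K instead of a stored address table:
#   pi(i+1) = pi(i) + g(i) mod K, where g(0) = f1 + f2 and g steps by 2*f2.

def qpp_addresses(K: int, f1: int, f2: int):
    """Yield pi(0), pi(1), ..., pi(K-1) using only additions mod K."""
    pi = 0                       # pi(0) = 0
    g = (f1 + f2) % K            # g(i) = pi(i+1) - pi(i) mod K
    for _ in range(K):
        yield pi
        pi = (pi + g) % K
        g = (g + 2 * f2) % K     # increment advances by 2*f2 each step

# K = 40, f1 = 3, f2 = 10 is the smallest block size in the LTE QPP tables:
addrs = list(qpp_addresses(40, 3, 10))
assert sorted(addrs) == list(range(40))   # a valid permutation
print(addrs[:8])
```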
Concurrent error detection
Concurrent error detection (CED) is the detection of errors or faults in a circuit or data path concurrent with normal operation of that circuit. The general approach for CED is to calculate a check symbol for the inputs to the circuit under operation, predict the check symbol that will result for the output of the circuit for those inputs, and compare the predicted check symbol to the one actually calculated for the output. If the predicted and actual check symbols differ, an error or fault has been detected. The alternative to this check symbol prediction is to use a second copy of the circuit under operation and compare the results of the two circuits. For some classes of circuits, predicting the output check symbol can require less circuitry than a second copy of the circuit being tested. Four examples of these types of circuits are examined in this dissertation: Arithmetic Logic Units (ALUs), array multipliers, self-synchronous scrambler-descrambler pairs with their intervening data path, and switch fabrics. Faults in integrated circuits tend to produce unidirectional errors, in which all of the errors within the block of data covered by a given check symbol are in the same direction (e.g., 0-to-1 errors). For this reason, codes optimized for unidirectional errors are the focus of investigation for most of the applications. In particular, Bose-Lin codes are examined for those applications where unidirectional errors are expected to be typical. In order to examine the performance of Bose-Lin codes in one of these applications, it was necessary to determine their theoretical performance at error rates beyond what had previously been studied. This analysis of Bose-Lin codes with large numbers of "burst" errors also included a further generalization of the codes.
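The CED pattern described above can be sketched with a Bose-Lin-style check symbol, here the count of 1 bits modulo 4, which detects up to three unidirectional errors (up to three 1-to-0 flips change the count by 1 to 3 mod 4, never 0). In this simplified illustration the "predicted" symbol is recomputed directly from the fault-free value rather than produced by dedicated prediction logic, which is an assumption for brevity.

```python
# Minimal sketch of CED with a Bose-Lin-style check symbol: the count of
# 1 bits modulo 2**k. With k = 2 this detects up to 3 unidirectional errors.

WIDTH = 8

def bose_lin_check(word: int, k: int = 2) -> int:
    """Check symbol: number of 1 bits modulo 2**k."""
    return bin(word & ((1 << WIDTH) - 1)).count("1") % (1 << k)

def ced_compare(output: int, predicted_check: int) -> bool:
    """Flag an error when the actual check symbol disagrees with prediction."""
    return bose_lin_check(output) == predicted_check

good = 0b1011_0010
pred = bose_lin_check(good)        # stands in for the predictor circuit
assert ced_compare(good, pred)

# A unidirectional fault clears two 1-bits (1 -> 0 only); the 1s-count drops
# by 2 mod 4, so the mismatch is caught:
faulty = good & ~0b0011_0000
print(ced_compare(faulty, pred))   # False: error detected
```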
Multiple bit error correcting architectures over finite fields
This thesis proposes techniques to mitigate multiple bit errors in GF arithmetic circuits. Since GF arithmetic circuits such as multipliers constitute complex and important functional units of a crypto-processor, making them fault tolerant improves the reliability of circuits employed in safety applications, where unmitigated errors may cause catastrophic failures.
Firstly, a thorough literature review has been carried out. The merits of efficient schemes are carefully analyzed to identify the scope for improvement in error correction, area, and power consumption.
The proposed error correction schemes include bit-parallel ones using optimized BCH codes, which are useful in applications where power and area are not prime concerns. The scheme is also extended to a dynamically correcting scheme to reduce decoder delay. Another method, based on cross-parity codes, is proposed for low-power and low-area applications such as RFIDs and smart cards. The experimental evaluation shows that the proposed techniques can mitigate single and multiple bit errors with wider error coverage than existing methods, at lower area and power consumption. The proposed schemes mask errors appearing at the output of the circuit irrespective of their cause.
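The cross-parity principle behind the low-power scheme can be sketched as follows, assuming a standalone row/column parity grid that corrects one bit error; the thesis's actual designs integrate such checks into GF multipliers, and all names here are illustrative.

```python
# Illustrative sketch of single-bit correction with cross (row/column) parity:
# a mismatched row parity and a mismatched column parity intersect at the
# erroneous bit, which is then flipped back.

def parities(bits):
    """Row and column parities of a 2D bit grid (list of rows of 0/1)."""
    rows = [sum(r) % 2 for r in bits]
    cols = [sum(c) % 2 for c in zip(*bits)]
    return rows, cols

def correct_single_error(bits, rows_ref, cols_ref):
    """Locate and flip a single erroneous bit from parity mismatches."""
    rows, cols = parities(bits)
    bad_r = [i for i, (a, b) in enumerate(zip(rows, rows_ref)) if a != b]
    bad_c = [j for j, (a, b) in enumerate(zip(cols, cols_ref)) if a != b]
    if len(bad_r) == 1 and len(bad_c) == 1:   # intersection = error position
        bits[bad_r[0]][bad_c[0]] ^= 1
    return bits

word = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
r_ref, c_ref = parities(word)                 # check bits from fault-free value
word[1][2] ^= 1                               # inject one bit error
assert correct_single_error(word, r_ref, c_ref) == [[1,0,1,1],[0,1,1,0],[1,1,0,0]]
```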
This thesis also investigates error mitigation schemes in emerging technologies (QCA, CNTFET) to compare area, power, and delay with existing CMOS equivalents. Though the proposed novel multiple error correcting techniques cannot ensure 100% error mitigation, their inclusion in actual designs can improve the reliability of the circuits or increase the difficulty of hacking crypto-devices. The proposed schemes can also be extended to non-GF digital circuits.