4 research outputs found

    PROTECTING ERROR-PRONE DIGITAL FILTERS BY APPLYING CODING FORMULATIONS

    Get PDF
    Soft errors are an important issue in digital circuits, and many techniques have been proposed over the years to mitigate them. The protection of parallel filters, however, has only recently been considered. In an existing ECC-based scheme, each filter is treated as a bit in an error correction code (ECC), and redundant filters are added to play the role of parity check bits; the coding of the redundant filters relies on simple additions that replace the XOR operations of traditional ECCs, and the scheme reduces the protection overhead compared with triple modular redundancy (TMR). However, since both the inputs and the outputs of the filters are sequences of numbers rather than bits, a more general coding can be used. This brief studies the protection of parallel filters using such general coding techniques. Unlike the ECC approach, here the inputs are encoded but the processing performed by the filters is not modified: the input signals are encoded using a matrix with arbitrary coefficients to generate the signals that enter the four original and two redundant filters. To simplify the implementation, the rows of that matrix should take values that minimize the complexity of the multiplications and the growth of the dynamic range in the redundant filters. The practical implementation is illustrated with two case studies that were evaluated for an FPGA implementation and compared with the previously proposed ECC technique in which each filter is treated as a bit. The results show that the proposed scheme outperforms the ECC technique, achieving similar fault-tolerance capability at lower cost, and is therefore useful for implementing fault-tolerant parallel filters.
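    As a concrete illustration of this kind of encoding, the Python sketch below uses the linearity of filtering: two redundant filters built from the rows [1, 1, 1, 1] and [1, 2, 3, 4] detect, locate, and correct a single erroneous filter output. The impulse response, the coefficient rows, and the injected error are illustrative assumptions, not values taken from the brief.

    ```python
    import numpy as np

    h = np.array([0.25, 0.5, 0.25])               # shared FIR impulse response (assumed)
    xs = [np.random.randn(64) for _ in range(4)]  # four parallel input signals

    # Original filters plus two redundant filters; the rows [1,1,1,1] and
    # [1,2,3,4] keep the extra multiplications trivial, as the brief suggests.
    ys = [np.convolve(x, h) for x in xs]
    z1 = np.convolve(sum(xs), h)
    z2 = np.convolve(sum((k + 1) * x for k, x in enumerate(xs)), h)

    ys[2] = ys[2] + 0.7                           # inject a soft error into filter 3

    # By linearity both check signals are zero when no filter is faulty.
    c1 = sum(ys) - z1
    c2 = sum((k + 1) * y for k, y in enumerate(ys)) - z2
    i = int(np.argmax(np.abs(c1)))
    if abs(c1[i]) > 1e-9:
        if np.max(np.abs(c2)) > 1e-9:             # both checks fire: a data filter failed
            k = int(round(c2[i] / c1[i])) - 1     # c2/c1 = k+1 locates filter k
            ys[k] = ys[k] - c1                    # c1 equals the error, so subtract it
        # else only c1 fired: redundant filter 1 itself failed; the data is intact
    ```

    Because the filters are never modified, only the cheap encode and check stages are added, which is where the savings over the bit-oriented ECC approach come from.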

    Protection of Fault-Tolerant Parallel Filters by Hamming Code with Reversible Logic

    Get PDF
    Digital filters are widely used in signal processing and communication systems. In some cases the reliability of those systems is critical, and fault-tolerant filter implementations are needed. Over the years, many techniques that exploit the filters' structure and properties to achieve fault tolerance have been proposed. As technology scales, it enables more complex systems that incorporate many filters, and in those systems it is common for some of the filters to operate in parallel, for example by applying the same filter to different input signals. The difficulty lies in decoding the received encoded data: the transmitted data is often subject to channel noise that corrupts the original signal, and many error correction codes (ECCs) have been introduced to overcome this problem. Recently, a simple technique that exploits the presence of parallel filters to achieve fault tolerance has been presented. In this paper we propose an error detection and correction code, the Hamming code, which not only detects errors as conventional codes do but is also able to correct the data. In addition, the process is implemented with reversible gate logic, an updated design methodology that reduces power consumption and complexity. Reversible computing also improves energy efficiency, which fundamentally affects the speed of circuits such as nanocircuits and therefore the speed of most computing applications, and it is likewise required to increase the portability of devices. The idea is generalized to show that parallel filters can be protected using ECCs in which each filter is the equivalent of a bit in a traditional ECC. This new scheme allows more efficient protection when the number of parallel filters is large. The technique is evaluated using a case study of parallel finite impulse response (FIR) filters, showing its effectiveness in terms of protection and implementation cost.
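    A minimal sketch of the filter-as-bit idea, assuming a Hamming(7,4) parity structure over four parallel FIR filters with the XORs replaced by additions (the impulse response and the injected error are illustrative, and the reversible-logic implementation is not modeled here):

    ```python
    import numpy as np

    h = np.array([0.25, 0.5, 0.25])               # shared FIR response (assumed)
    xs = [np.random.randn(64) for _ in range(4)]  # four parallel inputs d1..d4

    fir = lambda x: np.convolve(x, h)
    ys = [fir(x) for x in xs]                     # the data "bits" of the code
    # Hamming(7,4) parity equations with XOR replaced by addition:
    r1 = fir(xs[0] + xs[1] + xs[3])               # p1 checks d1, d2, d4
    r2 = fir(xs[0] + xs[2] + xs[3])               # p2 checks d1, d3, d4
    r3 = fir(xs[1] + xs[2] + xs[3])               # p3 checks d2, d3, d4

    ys[1] = ys[1] + 0.5                           # inject a soft error into d2

    s = [ys[0] + ys[1] + ys[3] - r1,              # syndromes: zero when fault-free
         ys[0] + ys[2] + ys[3] - r2,
         ys[1] + ys[2] + ys[3] - r3]
    hit = tuple(int(np.max(np.abs(si)) > 1e-9) for si in s)
    # Syndrome pattern -> faulty data filter; a single firing syndrome means a
    # parity filter itself failed, so the data outputs need no correction.
    locate = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3}
    k = locate.get(hit)
    if k is not None:
        e = s[0] if hit[0] else s[1]              # any firing syndrome equals the error
        ys[k] = ys[k] - e                         # correct the located output
    ```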

    High-Efficiency Soft Error-Tolerant Digital Signal Processing Using Fine-Grain Subword-Detection Processing

    No full text
    The soft error problem in digital circuits is becoming increasingly important as IC fabrication technology progresses from the deep-submicrometer scale to the nanometer scale. This paper proposes a subword-detection processing (SDP) technique and a fine-grain soft-error-tolerance (FGSET) architecture to improve the performance of digital signal processing circuits. In the SDP technique, the logic-masking property of soft errors in combinational circuits is utilized to mask single-event upsets (SEUs) caused by disturbing particles in the inactive area. To further improve the performance, the masked portion of the datapath can be used as the estimation redundancy in the algorithmic soft-error-tolerance (ASET) technique; this combination is called subword-detection and redundant processing (SDRP). In the FGSET architecture, the soft error in each processing element (fine grain) can be recovered by the arithmetic datapath-level ASET technique. Analysis of a fast Fourier transform processor example shows that the proposed FGSET architecture can improve the performance of the coarse-grain SET (CGSET) approach by 8.5 dB. The low-cost SDP technique (1.03×) yields a noise reduction of 5.3 dB over the CGSET approach (1.40×), while the efficient SDRP I (1.57×) and SDRP II (1.88×) techniques outperform the CGSET approach by 24.5 and 30.5 dB, respectively.
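    The abstract gives no implementation details, but the ASET idea it builds on can be sketched as reduced-precision redundancy: a cheap low-precision replica bounds the main output, and samples that fall outside the bound (e.g., because an SEU flipped a high-order bit) are replaced by the estimate. The word lengths, the error bound, and the function name aset_fir below are illustrative assumptions, not the paper's design.

    ```python
    import numpy as np

    def quantize(x, bits):
        """Round to a fixed-point subword with the given number of fractional bits."""
        step = 2.0 ** -bits
        return np.round(x / step) * step

    def aset_fir(x, h, est_bits=4, full_bits=16):
        """Reduced-precision-redundancy sketch of the ASET idea (assumed parameters):
        a low-precision replica of the FIR datapath produces an estimate, and any
        main-path sample that deviates beyond the replica's quantization bound is
        assumed to be hit by a soft error and replaced by the estimate."""
        y_main = np.convolve(quantize(x, full_bits), quantize(h, full_bits))
        y_est = np.convolve(quantize(x, est_bits), quantize(h, est_bits))
        bound = 2.0 ** -est_bits * (np.sum(np.abs(h)) + 1)  # generous quantization gap
        soft_error = np.abs(y_main - y_est) > bound         # an SEU pushes y_main outside
        return np.where(soft_error, y_est, y_main)
    ```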

    Hardware Considerations for Signal Processing Systems: A Step Toward the Unconventional.

    Full text link
    As we progress into the future, signal processing algorithms are becoming more computationally intensive and power hungry, while the desire for mobile products and low-power devices keeps increasing. An integrated ASIC solution is one of the primary ways chip developers can improve performance and add functionality while keeping the power budget low. This work discusses ASIC hardware for both conventional and unconventional signal processing systems, and how integration, error resilience, emerging devices, and new algorithms can be leveraged by signal processing systems to further improve performance and enable new applications. Specifically, this work presents three case studies: 1) a conventional and highly parallel mixed-signal cross-correlator ASIC for a weather satellite performing real-time synthetic aperture imaging, 2) an unconventional native stochastic-computing architecture enabled by memristors, and 3) two unconventional sparse neural network ASICs for feature extraction and object classification. As improvements from technology scaling alone slow down, and the demand for energy-efficient mobile electronics increases, such optimization techniques at the device, circuit, and system level will become more critical to advance signal processing capabilities in the future.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/116685/1/knagphil_1.pd
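    For readers unfamiliar with the premise of the second case study, here is a generic sketch of the stochastic-computing principle (not code from the dissertation): a value in [0, 1] is encoded as the probability of a 1 in a random bitstream, so a single AND gate multiplies two independent streams, which is what makes the hardware so small. The stream length and seed are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000                                   # bitstream length (illustrative)

    def to_stream(p):
        """Encode a probability p in [0, 1] as a unipolar stochastic bitstream."""
        return rng.random(N) < p

    a, b = 0.6, 0.3
    product = np.mean(to_stream(a) & to_stream(b))   # AND gate = multiplier
    print(f"stochastic {product:.3f} vs exact {a * b:.3f}")
    ```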