14 research outputs found

    A Survey on the Best Choice for Modulus of Residue Code

    Get PDF
    Nowadays, the development of technology and the growing need for dense, complex chips have led the chip industry to pay increasing attention to circuit testability. Moreover, the use of electronic chips in certain fields, such as the space industry, makes the design of fault-tolerant circuits a challenging issue. Coding is one of the most suitable methods for error detection and correction. The residue code, one of the best choices for error detection, is widely used in large arithmetic circuits such as multipliers and also finds a wide range of applications in processors and digital filters. The modulus value in this technique directly affects the area overhead, and a large area overhead is one of its most important disadvantages, especially when testing small circuits. The purpose of this paper is to investigate the best choice of residue-code check base for simple, small circuits such as a ripple carry adder. Performance is evaluated by injecting stuck-at and transition faults in simulation, and efficiency is defined in terms of fault coverage and normalized area overhead. The results show that modulus 3 provides the best result, with 95% efficiency; a residue code with this modulus checking a ripple carry adder improves efficiency by 30% compared with a duplex circuit.
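    The paper's circuits are not reproduced here, but the checking principle is easy to state: a mod-3 residue checker predicts the residue of the adder's output from the residues of its operands and flags any mismatch. A minimal software sketch of that check, with hypothetical function names (a real checker would be a small mod-3 reduction tree in hardware, not the % operator):

        #include <stdint.h>
        #include <stdio.h>

        /* Residue of x modulo 3; stands in for a hardware mod-3 tree. */
        static uint32_t mod3(uint32_t x) { return x % 3; }

        /* Residue check for an adder: a fault-free sum must satisfy
           (a + b) mod 3 == ((a mod 3) + (b mod 3)) mod 3. */
        static int residue_check_add(uint32_t a, uint32_t b, uint32_t sum)
        {
            uint32_t predicted = mod3(mod3(a) + mod3(b));
            return mod3(sum) == predicted; /* 1 = consistent, 0 = fault */
        }

        int main(void)
        {
            uint32_t a = 1234, b = 5678, good = a + b;
            uint32_t faulty = good ^ 0x10; /* inject a single-bit error */
            printf("%d %d\n", residue_check_add(a, b, good),    /* 1 */
                              residue_check_add(a, b, faulty)); /* 0 */
            return 0;
        }

    Because 2^k mod 3 is never 0, flipping any single bit changes the result's residue, so every single-bit error is detected; this is one reason a check base of 3 performs so well at low area cost.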

    Analyzing the impact of supporting out-of-order communication on in-order performance with iWARP

    Full text link

    Read Bulk Data From Computational RFIDs

    Full text link

    Computation of cyclic redundancy checks via table look-up

    No full text
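    The text of this item is unavailable, but the technique its title names is standard: instead of clocking a CRC register one bit at a time, a 256-entry table advances it a whole byte at a time. A sketch for the reflected CRC-32 polynomial 0xEDB88320 (function names are illustrative):

        #include <stdint.h>
        #include <stddef.h>

        static uint32_t crc_table[256];

        /* Build the table once: crc_table[b] is the CRC state change
           caused by the single byte b. */
        static void crc32_init(void)
        {
            for (uint32_t b = 0; b < 256; b++) {
                uint32_t r = b;
                for (int i = 0; i < 8; i++)
                    r = (r >> 1) ^ ((r & 1) ? 0xEDB88320u : 0);
                crc_table[b] = r;
            }
        }

        /* Consume one byte per iteration instead of one bit. */
        static uint32_t crc32(const uint8_t *p, size_t n)
        {
            uint32_t crc = 0xFFFFFFFFu;
            while (n--)
                crc = (crc >> 8) ^ crc_table[(crc ^ *p++) & 0xFF];
            return crc ^ 0xFFFFFFFFu;
        }

    As a sanity check, crc32 over the ASCII bytes of "123456789" yields the well-known CRC-32 check value 0xCBF43926.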

    A Study on Techniques for Handling Transmission Error of IPv6 Packets over Fiber Optic Links

    Get PDF
    Identifying the problems of the existing error control mechanism is essential for finding a new, suitable design that solves the problem of ineffective error control. The identification results form the main basis for designing the new mechanism; hence, the resulting design genuinely addresses the problem accurately.

    HARDWARE INSTRUCTION BASED CRC32C, A BETTER ALTERNATIVE TO THE TCP ONE'S COMPLEMENT CHECKSUM

    Get PDF
    End-to-end data integrity is of utmost importance when sending data through a communication network, and a common way to ensure this is by appending a few bits for error detection (e.g., a checksum or cyclic redundancy check) to the data sent. Data can be corrupted at the sending or receiving hosts, in one of the intermediate systems (e.g., routers and switches), in the network interface card, or on the transmission link. The Internet's Transmission Control Protocol (TCP) uses a 16-bit one's complement checksum for end-to-end error detection of each TCP segment [1]. The TCP protocol specification dates back to the 1970s, and better error detection alternatives exist (e.g., Fletcher checksum, Adler checksum, Cyclic Redundancy Check (CRC)) that provide higher error detection efficiency; nevertheless, the one's complement checksum is still in use today as part of the TCP standard. The TCP checksum has low computational complexity compared to software implementations of the other algorithms. Some of the original reasons for selecting the 16-bit one's complement checksum are its simple calculation and the property that its computation on big- and little-endian machines yields the same checksum, merely byte-swapped; this latter characteristic is not true for a two's complement checksum. A negative characteristic of one's and two's complement checksums is that changing the order of the data does not affect the checksum. In [2], the authors collected two years of data and concluded after analysis that the TCP checksum "will fail to detect errors for roughly one in 16 million to 10 billion packets." While some of the sources responsible for TCP checksum errors have decreased in the nearly 20 years since that study was published (e.g., the ACK-of-FIN TCP software bug), it is not clear what we would find if the study were repeated; it would also be difficult to repeat today because of privacy concerns. The advent of hardware CRC32C instructions on Intel x86 and ARM CPUs offers the promise of significantly improved error detection (probability of undetected errors proportional to 2^-32 versus 2^-16) at a CPU time comparable to the one's complement checksum. The goal of this research is to compare the execution time of the following error detection algorithms: CRC32C (using generator polynomial 0x1EDC6F41), Adler checksum, Fletcher checksum, and one's complement checksum, using both software and special hardware instructions. For CRC32C, the software implementations tested were bit-wise, nibble-wise, byte-wise, slicing-by-4, and slicing-by-8 algorithms. Intel's CRC32 and PCLMULQDQ instructions and ARM's CRC32C instruction were also used as part of testing hardware instruction implementations. A comparative study of all these algorithms on an Intel Core i3-2330M shows that the CRC32C hardware instruction implementation is approximately 38% faster than the 16-bit TCP one's complement checksum at 1500 bytes, and the 16-bit TCP one's complement checksum is roughly 11% faster than the hardware instruction based CRC32C at 64 bytes. On the ARM Cortex-A53, the hardware CRC32C algorithm is approximately 20% faster than the 16-bit TCP one's complement checksum at 64 bytes, and the 16-bit TCP one's complement checksum is roughly 13% faster than the hardware instruction based CRC32C at 1500 bytes.
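    The thesis's benchmark code is not shown here; as a rough illustration of the two primitives being timed, the following sketch pairs the 16-bit one's complement checksum with a CRC32C routine built on Intel's SSE4.2 CRC32 instruction (assumes an x86 CPU with SSE4.2, compile with -msse4.2; function names are ours):

        #include <stdint.h>
        #include <stddef.h>
        #include <string.h>
        #include <nmmintrin.h> /* SSE4.2 intrinsics: _mm_crc32_u64/_u8 */

        /* 16-bit one's complement checksum over big-endian words,
           with RFC 1071 style end-around-carry folding. */
        static uint16_t ones_complement_csum(const uint8_t *p, size_t n)
        {
            uint64_t sum = 0;
            for (; n >= 2; p += 2, n -= 2)
                sum += (uint32_t)((p[0] << 8) | p[1]);
            if (n)
                sum += (uint32_t)(p[0] << 8);
            while (sum >> 16)
                sum = (sum & 0xFFFF) + (sum >> 16);
            return (uint16_t)~sum;
        }

        /* CRC32C (polynomial 0x1EDC6F41) using the hardware
           instruction, eight bytes per step. */
        static uint32_t crc32c_hw(const uint8_t *p, size_t n)
        {
            uint64_t crc = 0xFFFFFFFFu;
            for (; n >= 8; p += 8, n -= 8) {
                uint64_t v;
                memcpy(&v, p, 8); /* avoid unaligned-access UB */
                crc = _mm_crc32_u64(crc, v);
            }
            while (n--)
                crc = _mm_crc32_u8((uint32_t)crc, *p++);
            return (uint32_t)crc ^ 0xFFFFFFFFu;
        }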
Because the hardware CRC32C instruction is now commonly available on most Intel processors and a growing number of ARM processors, we argue that it is time to reconsider adding a TCP Option to use hardware CRC32C. The primary impediments to replacing the TCP one's complement checksum with CRC32C are Network Address Translation (NAT) and TCP checksum offload. NAT requires recalculation of the TCP checksum in the NAT device because the IPv4 address, and possibly the TCP port number, change when packets move through a NAT device. These NAT devices are able to compute the new checksum incrementally due to the properties of the one's complement checksum. The eventual transition to IPv6 will hopefully eliminate the need for NAT. Most Ethernet Network Interface Cards (NICs) support TCP checksum offload, where the TCP checksum is computed in the NIC rather than on the host CPU. There is a risk of undetected errors with this approach, since the error detection is no longer end-to-end; nevertheless, it is the default configuration in many operating systems, including Windows 10 [3] and macOS. CRC32C has been implemented in some NICs to support the iSCSI protocol, so it is possible that TCP CRC32C offload could be supported in the future. In the near term, our proposal is to include a TCP Option for CRC32C in addition to the one's complement checksum, for higher reliability.
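    The incremental recalculation mentioned above follows from the algebra of the one's complement sum, standardized in RFC 1624: when a 16-bit field such as a port changes, the checksum can be patched in constant time. A sketch (function name is ours):

        #include <stdint.h>

        /* Incremental one's complement checksum update (RFC 1624,
           Eqn. 3): HC' = ~(~HC + ~m + m') when a 16-bit field changes
           from m to m'. This is how a NAT device rewrites an address
           or port without rescanning the whole TCP segment. */
        static uint16_t csum_update(uint16_t csum, uint16_t old_w,
                                    uint16_t new_w)
        {
            uint32_t sum = (uint16_t)~csum;
            sum += (uint16_t)~old_w;
            sum += new_w;
            while (sum >> 16)
                sum = (sum & 0xFFFF) + (sum >> 16);
            return (uint16_t)~sum;
        }

    A CRC can also be updated incrementally, since CRCs are linear, but the fix-up depends on the changed field's offset within the segment, which makes it costlier for NAT devices.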

    High-Performance Hardware and Software Implementations of the Cyclic Redundancy Check Computation

    Get PDF
    The Cyclic Redundancy Check (CRC) is an error detection code used in many digital transmission and storage systems. The two major research areas surrounding CRCs concern developing computation approaches and studying error detection properties. This thesis aims to explore the various aspects of the CRC computation, with the primary objective being to propose novel computation approaches which outperform the existing ones. The work begins with a thorough examination of the formulations found throughout the literature. Then, their subsequent realizations as hardware architectures and software algorithms are investigated. During this investigation, some improvements are presented, including optimizations of the state-space transformed and primitive architectures. Afterward, novel formulations are derived; the most significant contribution is a matrix decomposition that gives rise to a high-performance software algorithm. Simulation and implementation results are gathered for both hardware and software deployments of the investigated computation approaches. The theoretical results obtained by simulations are validated with implementation experiments. The proposed algorithm is shown to outperform the existing comparable low-memory algorithm in terms of time complexity.
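    For reference alongside this abstract, the baseline against which all faster formulations are measured is the primitive serial computation, one message bit per step; table-driven and matrix-based approaches accelerate exactly this state transition. A sketch for the reflected CRC-32 polynomial (illustrative, not the thesis's algorithm):

        #include <stdint.h>
        #include <stddef.h>

        /* Primitive serial formulation: shift the message through a
           32-bit LFSR one bit per step (reflected CRC-32, polynomial
           0xEDB88320). Byte-wise tables and matrix decompositions
           compute the same transition for eight or more bits at once. */
        static uint32_t crc32_bitwise(const uint8_t *p, size_t n)
        {
            uint32_t crc = 0xFFFFFFFFu;
            while (n--) {
                crc ^= *p++;
                for (int i = 0; i < 8; i++)
                    crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0);
            }
            return crc ^ 0xFFFFFFFFu;
        }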

    A novel travelling-wave Zeeman decelerator for production of cold radicals

    Get PDF
    Recent advances in producing samples of molecules at very low temperatures have been motivated by the prospects of studying collisions and chemical reactions with controllable collision energies, performing high-resolution spectroscopy and precision measurements for fundamental physics, quantum information processing, and quantum simulation. Methods based on the deceleration of supersonic molecular beams are particularly well suited for collision experiments, since the final longitudinal velocity of the sample can be tuned over a wide range with narrow velocity spreads. Zeeman deceleration relies on the state-dependent interaction of neutral paramagnetic atoms or molecules with time-dependent, inhomogeneous magnetic fields; for this reason, the technique is especially effective for open-shell systems such as molecular radicals or metastable atoms and molecules. Here, an experimental realization of a novel travelling-wave Zeeman decelerator based on a double-helix wire geometry is presented. The decelerator is capable of decelerating samples of paramagnetic atoms and molecules from a forward velocity of 560 m/s down to an arbitrary final velocity. Compared to conventional Zeeman or Stark decelerators, the presented decelerator exhibits full three-dimensional confinement of the molecules over the entire range of velocities, from the initial forward velocity down to the chosen final velocity, leading to an improvement in the overall phase-space acceptance. Operation of the decelerator is demonstrated by decelerating a molecular beam of OH radicals from an initial velocity of 445 m/s down to a final velocity of 350 m/s. The experimental results are accompanied by numerical trajectory simulations confirming stable operation and showing the phase-space stability of the decelerator. These results pave the way for future cold-collision experiments, in which the travelling-wave Zeeman decelerator will serve as a source of cold paramagnetic molecules for hybrid trapping experiments.

    BioVault : a protocol to prevent replay in biometric systems

    Get PDF
    D.Com. (Informatics). Please refer to the full text to view the abstract.