
    Partitioned Successive-Cancellation List Decoding of Polar Codes

    Successive-cancellation list (SCL) decoding is an algorithm that provides very good error-correction performance for polar codes. However, its hardware implementation requires a large amount of memory, mainly to store intermediate results. In this paper, a partitioned SCL algorithm is proposed to reduce the large memory requirements of the conventional SCL algorithm. The decoder tree is broken into partitions that are decoded separately. We show that with careful selection of list sizes and number of partitions, the proposed algorithm can outperform conventional SCL while requiring less memory. Comment: 4 pages, 6 figures, to appear at IEEE ICASSP 201

    Software Implementation Details of CRC32 Checksum Computation, Using PKZIP, WinZIP, and ETHERNET as Examples

    Algorithms for computing the CRC32 checksum are discussed, and the algorithms described in the literature are analyzed. From freely available source code, the CRC32 checksum algorithm used in practice (in PKZIP, WinZIP, and ETHERNET) is reconstructed. An example shows that the algorithm described in the literature and the one used in practice produce different checksums
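The discrepancy this abstract describes is easy to reproduce. A minimal sketch: the CRC32 actually used by PKZIP/WinZIP/Ethernet is the *reflected* form of the polynomial 0x04C11DB7 with initial value 0xFFFFFFFF and a final XOR, whereas a "textbook" description often presents plain MSB-first polynomial division with neither reflection, initial value, nor final XOR. The two produce different checksums for the same input (the textbook variant here is our illustrative reading, not the specific literature algorithm the paper reconstructs):

```python
def crc32_practical(data: bytes) -> int:
    """Bitwise CRC32 as used by PKZIP/WinZIP/Ethernet (reflected form)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0xEDB88320 is the bit-reversed form of 0x04C11DB7
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def crc32_textbook(data: bytes) -> int:
    """Plain MSB-first polynomial division: zero init, no reflection, no final XOR."""
    crc = 0
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            crc = ((crc << 1) ^ 0x04C11DB7 if crc & 0x80000000 else crc << 1) & 0xFFFFFFFF
    return crc

msg = b"123456789"
assert crc32_practical(msg) == 0xCBF43926       # standard CRC-32 check value
assert crc32_practical(msg) != crc32_textbook(msg)
```

The check value 0xCBF43926 for the ASCII string "123456789" is the widely published reference result for the practical algorithm, which makes it a convenient way to tell which variant an implementation actually follows.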

    A Study of Software Implementations of Table-Based and Matrix-Based Algorithms for CRC32 Checksum Computation

    A comparison is presented, in terms of computation time and executable-file size, of software implementations of the table-based and matrix-based algorithms for computing a CRC32 checksum compatible with that of the PKZIP and WinZIP archivers and the ETHERNET protocol. A complete study of the various usage variants was carried out, both with the original buffer and with a buffer compatible with microcontroller implementations. Recommendations are given on the use of the table-based and matrix-based algorithms
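For reference, the table-based algorithm benchmarked here trades memory for speed: a 256-entry lookup table precomputed from the reflected polynomial 0xEDB88320 lets the CRC advance one byte per step instead of eight bit-by-bit iterations. A minimal sketch of the standard construction (not the paper's exact code):

```python
def make_crc32_table() -> list:
    """Precompute the 256-entry byte-at-a-time table for reflected poly 0xEDB88320."""
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
        table.append(crc)
    return table

CRC32_TABLE = make_crc32_table()

def crc32_table(data: bytes) -> int:
    """PKZIP/WinZIP/Ethernet-compatible CRC32, one table lookup per byte."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc = (crc >> 8) ^ CRC32_TABLE[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFFFFFF

assert crc32_table(b"123456789") == 0xCBF43926  # standard check value
```

The table costs 256 × 4 = 1024 bytes, which is exactly the kind of code-size versus speed trade-off the abstract's executable-size comparison is measuring, and why a microcontroller-friendly buffer variant matters.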

    Dependable Dynamic Partial Reconfiguration with minimal area & time overheads on Xilinx FPGAS

    Thanks to their flexibility, FPGAs are nowadays widely used to implement prototypes of digital systems and, increasingly, their final releases. Reconfiguration traditionally required an external controller to upload contents into the FPGA. Dynamic Partial Reconfiguration (DPR) opens new horizons in FPGA applications and enables many new utilization paradigms, as it allows an FPGA to reconfigure itself: no external controller is required, since the controller can be included in the FPGA. However, DPR also introduces reliability issues related to errors in the partial reconfiguration bitstreams, and the solutions FPGA manufacturers currently provide are not efficient. In this paper, new Design-for-Dependability (DfD) techniques are proposed. By exploiting the information density of the configuration data, they improve performance while providing the same reliability characteristics as existing solutions

    New Heuristic Model for Optimal CRC Polynomial

    Cyclic Redundancy Codes (CRCs) are important for maintaining integrity in data transmissions, and CRC performance is mainly determined by the chosen polynomial. Recent increases in data throughput motivate a search for optimal polynomials through software or hardware implementations. Most CRC implementations in use offer less-than-optimal performance or are inferior to their newer published counterparts. Classical approaches to determining optimal polynomials rely on brute-force search over the set of all possible polynomials. This paper evaluates the performance of CRC polynomials generated with Genetic Algorithms. It then compares the resulting polynomials, both with and without encryption headers, against a benchmark polynomial
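The paper does not spell out its fitness function, but one plausible ingredient of any such search (whether brute-force or genetic) is a per-candidate quality test. A hedged sketch, assuming the common criterion of guaranteed 2-bit-error detection: a 2-bit error escapes a CRC exactly when the two flipped positions yield the same remainder, so a polynomial achieves Hamming distance ≥ 3 over a given length iff all single-bit syndromes are distinct:

```python
def crc_remainder(value: int, width: int, poly: int, nbits: int) -> int:
    """MSB-first remainder of an nbits-bit message divided by the degree-`width`
    generator whose low bits are `poly` (implicit leading x^width term)."""
    reg = 0
    for i in reversed(range(nbits)):
        reg = (reg << 1) | ((value >> i) & 1)
        if reg >> width:                      # leading term set: reduce
            reg ^= (1 << width) | poly
    return reg

def detects_all_2bit_errors(width: int, poly: int, nbits: int) -> bool:
    """True iff no two single-bit error positions share a syndrome (HD >= 3)."""
    syndromes = [crc_remainder(1 << i, width, poly, nbits) for i in range(nbits)]
    return len(set(syndromes)) == len(syndromes)

# The standard CRC-8 generator x^8+x^2+x+1 (0x07) detects all 2-bit errors
# over 64 bit positions; the weak generator x^8+1 (0x01) does not, because
# x^8 + 1 divides x^i + x^j whenever the positions are 8 apart.
assert detects_all_2bit_errors(8, 0x07, 64)
assert not detects_all_2bit_errors(8, 0x01, 64)
```

A genetic search would use a score like this (extended to burst and odd-bit coverage) as fitness, mutating and recombining candidate `poly` values instead of enumerating all of them.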

    Analysis Of The Effectiveness Of Error Detection In Data Transmission Using Polynomial Code Method

    Data transmitted from one location to another has to be transferred reliably. Usually, an error-control coding algorithm provides the means to protect data from errors. In many cases the physical link cannot guarantee that all bits will be transferred without errors; it is then the responsibility of the error-control algorithm to detect those errors and, in some cases, correct them so that upper layers receive error-free data. The polynomial code, also known as the Cyclic Redundancy Code (CRC), is a very powerful and easily implemented technique for obtaining data reliability. As data transfer rates and the amount of data stored increase, the need for simple and robust error-detection codes increases as well. It is therefore important to be sure that the CRCs in use are as effective as possible. Unfortunately, standardized CRC polynomials such as the CRC-32 polynomial used in the Ethernet network standard are known to be grossly suboptimal for important applications (Koopman, 2002). This research investigates the effectiveness of error-detection methods designed years ago, when the amounts of data transferred and stored were small compared with what we deal with today. A demonstration of erroneous bits in data frames that may not be detected by the CRC method will be shown, and a corrective method for detecting errors in very large data transmissions will also be given
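The class of undetectable errors the abstract alludes to has a clean algebraic form: any error pattern that is a multiple of the generator polynomial leaves the CRC remainder unchanged. A minimal illustration (our own, not the paper's demonstration) using a plain MSB-first CRC-8 with generator x^8 + x^2 + x + 1:

```python
def crc8(data: bytes) -> int:
    """Plain MSB-first CRC-8, generator x^8 + x^2 + x + 1 (0x07), zero init."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07 if crc & 0x80 else crc << 1) & 0xFF
    return crc

msg = bytearray(b"hello world!")

# Inject g(x) * x^7 across two adjacent bytes: the 16-bit window
# 0x83 0x80 has bits 15, 9, 8, 7 set, i.e. x^15 + x^9 + x^8 + x^7.
# Being a multiple of g(x), this 4-bit error is invisible to the CRC.
undetected = bytearray(msg)
undetected[3] ^= 0x83
undetected[4] ^= 0x80
assert undetected != msg
assert crc8(bytes(undetected)) == crc8(bytes(msg))   # error slips through

# By contrast, any single-bit error is always caught, because no lone
# term x^k is divisible by a generator with a nonzero constant term.
single_bit = bytearray(msg)
single_bit[3] ^= 0x01
assert crc8(bytes(single_bit)) != crc8(bytes(msg))
```

The same construction scales up: multiples of the CRC-32 generator are exactly the 2^-32 fraction of error patterns Ethernet's FCS can never detect, which is why polynomial choice (Koopman, 2002) matters so much at large data volumes.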

    Selecting A Cyclic Redundancy Check (CRC) Generator Polynomial for CEH (CRC Extension Header).

    Computation and regeneration of CRC code in each router may cause slower IPv6 packet transmission

    Evaluating Hamming Distance as a Metric for the Detection of CRC-based Side-channel Communications in MANETs

    Side-channel communication is a form of traffic in which malicious parties communicate secretly over a wireless network. It is often established through the modification of Ethernet frame header fields, such as the Frame Check Sequence (FCS). The FCS is responsible for determining whether or not a frame has been corrupted in transmission, and contains a value calculated with a predetermined polynomial. A malicious party may send messages that appear to be nothing more than naturally corrupted noise to anyone who is not the intended recipient. We use the Hamming distance metric in an attempt to differentiate purposely corrupted frames from naturally corrupted ones. In theory, purposely corrupted frames should be recognizable by how high this Hamming distance value is, as it counts how many bits differ between the expected and received FCS values. It is hypothesized that a range of threshold values based on this metric exists that may allow the detection of side-channel communication across all scenarios. We ran an experiment with human subjects in a foot-platoon formation and analyzed the data using a support vector machine. Our results show promise for the use of Hamming distance for side-channel detection in MANETs
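The detection metric, as we read the abstract, reduces to the popcount of the XOR between the FCS a receiver recomputes over the frame and the FCS the frame actually carries. A hedged sketch (frame contents and the 0xDEADBEEF covert value are illustrative, not from the paper), using Python's `binascii.crc32`, which implements the same CRC-32 that Ethernet's FCS uses:

```python
import binascii

def fcs_hamming_distance(frame: bytes, received_fcs: int) -> int:
    """Bits differing between the recomputed CRC-32 and the received FCS."""
    expected = binascii.crc32(frame) & 0xFFFFFFFF
    return bin(expected ^ received_fcs).count("1")

frame = b"example frame payload"
good_fcs = binascii.crc32(frame) & 0xFFFFFFFF

# An intact frame has distance 0; a covert sender that overwrites the
# FCS with hidden data produces a large, atypical distance.
assert fcs_hamming_distance(frame, good_fcs) == 0
covert_fcs = good_fcs ^ 0xDEADBEEF          # hypothetical hidden payload
assert fcs_hamming_distance(frame, covert_fcs) == 24

# A natural single-bit channel error, by contrast, yields distance 1.
assert fcs_hamming_distance(frame, good_fcs ^ 0x1) == 1
```

The paper's hypothesis is then a thresholding question: naturally corrupted frames tend toward small distances, so a classifier (here an SVM) can flag frames whose FCS distance is implausibly high as likely side-channel traffic.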

    Security protocols for networks and Internet: a global vision

    This work was supported by the MINECO grant TIN2013-46469-R (SPINY: Security and Privacy in the Internet of You), by the CAM grant S2013/ICE-3095 (CIBERDINE: Cybersecurity, Data, and Risks), which is co-funded by European Funds (FEDER), and by the MINECO grant TIN2016-79095-C2-2-R (SMOG-DEV—Security mechanisms for fog computing: advanced security for devices)