    Bit-by-bit iterative decoding expedites the convergence of Repeat Accumulate decoders

    In this paper, we propose bit-by-bit iterative decoding for expediting the convergence of Repeat Accumulate (RA) decoders. In a conventional RA decoder, the repeat and accumulate component decoders are operated iteratively, in order to facilitate near-capacity communication. However, whenever one decoder is activated, the other is kept idle. The outputs of the active component decoder are stored until its operation is completed, whereupon the outputs are forwarded to the other decoder all at once and the activation of the decoders is swapped. The proposed bit-by-bit RA decoder expedites this process by allowing both component decoders to operate simultaneously, continuously exchanging outputs without buffering. We present both EXtrinsic Information Transfer (EXIT) charts and Bit Error Ratio (BER) results, which demonstrate that the proposed bit-by-bit RA decoder requires fewer decoding iterations to converge, at the cost of a slightly increased complexity per decoding iteration. Overall, we demonstrate that in a range of practical scenarios, the proposed bit-by-bit RA decoder offers gains of up to 0.86 dB, without imposing any additional decoding complexity and without requiring any additional transmission energy, bandwidth or duration.
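
    The scheduling difference can be sketched in a few lines of Python. The toy model below is not the authors' RA decoder: a coupled per-bit reliability update stands in for the accumulator's chain structure and for the extrinsic soft information that real component decoders exchange. Under this proxy, buffered exchange behaves like a Jacobi-style sweep and bit-by-bit exchange like a Gauss-Seidel-style sweep, which is why the latter converges in fewer sweeps.

        import numpy as np

        N, CH, TOL = 128, 0.5, 1e-3   # toy block length, channel term, tolerance

        def sweep(r, source):
            """One decoding sweep: refresh each bit's reliability from the
            channel term and its neighbours, which are read from `source`."""
            old = r.copy()
            for i in range(N):
                left = source[i - 1] if i > 0 else 0.0
                right = source[i + 1] if i < N - 1 else 0.0
                r[i] = CH + 0.25 * left + 0.25 * right
            return np.max(np.abs(r - old))

        def count_sweeps(bit_by_bit):
            r = np.zeros(N)
            sweeps = 0
            while True:
                sweeps += 1
                # Block-wise: read from a frozen copy of the previous sweep.
                # Bit-by-bit: read from r itself, so fresh values are used
                # as soon as they are produced.
                if sweep(r, r if bit_by_bit else r.copy()) <= TOL:
                    return sweeps

        print("block-wise sweeps:", count_sweeps(False))   # Jacobi-style
        print("bit-by-bit sweeps:", count_sweeps(True))    # Gauss-Seidel-style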

    Dynamic consolidation of superhard materials

    Shock consolidation experiments were conducted via flyer impact on synthetic diamond (6–12 μm) and cubic boron nitride (c-BN) (4–8 μm) admixed with SiC whisker (SCW), Si₃N₄ whisker (SNW), SiC powder, and Si powder, contained in stainless steel capsules, over the shock pressure range of 10–30 GPa. Scanning electron microscopy and transmission electron microscopy imaging of the samples revealed no plastic deformation or melting of diamond and virtually no deformation of c-BN, whereas the SCW and SNW were extensively melted and recrystallized into bundle-shaped crystallites. SiC powder mixed with diamond was also melted but, in contrast, recrystallized with equant grain growth. A new method of calculating the shock temperature and melt fraction is formulated on the basis of Milewski's sphere-rod packing data. The new method assigns excess bulk volume to the zone around the whiskers and yields a better description of the energy-deposition mechanism in the consolidation of powder-whisker systems. Some of the experiments employed Sawaoka's post-shock annealing technique, in which the sample is sandwiched between two layers of a mixture of titanium powder plus carbon. Very well consolidated samples were obtained with post-shock heating under shock pressures of only about 11 GPa. Micro-Vickers hardness values up to 27 GPa were obtained for c-BN plus SCW at a low impact velocity of 1.45 km/s with post-shock heating; this hardness is similar to that obtained at a higher impact velocity of 1.95 km/s without post-shock heating. To understand the post-shock heating process, one-dimensional, time-dependent temperature-profile calculations were conducted for the sample and the Ti + C layers. Post-shock heating appears to be very important in the consolidation of powder-whisker admixtures. The calculated optimum Ti + C thickness is about 0.8–1.7 mm at a porosity of 40% for a typical sample thickness of 2 mm, and the heating and cooling time is a few milliseconds. Good compacts with micro-Vickers hardness values up to 28 GPa were also obtained upon shock consolidation of diamond plus Si admixtures.
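
    The flavour of the temperature-profile calculation can be sketched as an explicit finite-difference solution of the one-dimensional heat equation, with the sample sandwiched between two exothermically heated Ti + C layers and the steel capsule idealised as a heat sink. Every parameter below is an illustrative placeholder rather than a value from the paper; the sketch only shows that millimetre-scale layers with a plausible diffusivity heat and cool the sample centre on roughly the millisecond timescale reported above.

        import numpy as np

        # Illustrative placeholders, not values from the paper:
        alpha = 2.0e-4    # effective thermal diffusivity of the stack, m^2/s
        L_tic = 1.2e-3    # Ti + C layer thickness, m (within the 0.8-1.7 mm range)
        L_smp = 2.0e-3    # sample thickness, m (typical value quoted above)
        T_tic, T_smp, T_wall = 2800.0, 900.0, 300.0   # initial temperatures, K

        dx = 2.0e-5
        x = np.arange(0.0, 2 * L_tic + L_smp, dx)
        T = np.where((x < L_tic) | (x > L_tic + L_smp), T_tic, T_smp)

        dt = 0.4 * dx * dx / alpha   # explicit (FTCS) stability: dt < dx^2 / (2 alpha)
        c = alpha * dt / dx**2
        mid, t, peak = len(T) // 2, 0.0, (0.0, 0.0)

        for _ in range(int(0.02 / dt)):                       # simulate 20 ms
            T[1:-1] += c * (T[2:] - 2.0 * T[1:-1] + T[:-2])   # one FTCS step
            T[0] = T[-1] = T_wall                             # capsule as ideal heat sink
            t += dt
            if T[mid] > peak[1]:
                peak = (t, T[mid])

        print(f"sample-centre peak: {peak[1]:.0f} K at {peak[0] * 1e3:.2f} ms")

    Varying the Ti + C layer thickness in a model of this kind is how an optimum such as the 0.8–1.7 mm range quoted above can be estimated.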

    Adaptive iterative decoding for expediting the convergence of unary error correction codes

    Multimedia encoders typically generate symbols having a wide range of legitimate values. In practical mobile wireless scenarios, the transmission of these symbols is required to be bandwidth efficient and error resilient, motivating both source coding and channel coding. However, Separate Source and Channel Coding (SSCC) schemes are typically unable to exploit the residual redundancy in the source symbols, which cannot be entirely removed by finite-delay, finite-complexity schemes, hence resulting in a capacity loss. Until recently, none of the existing Joint Source and Channel Codes (JSCCs) were suitable for this application, since their decoding complexity increases rapidly with the size of the symbol alphabet. Motivated by this, we previously proposed a novel JSCC referred to as the Unary Error Correction (UEC) code, which is capable of exploiting all residual redundancy and eliminating any capacity loss, while imposing only a moderate decoding complexity. In this paper, we show that the operation of the UEC decoder can be dynamically adapted, in order to strike an attractive trade-off between its decoding complexity and its error correction capability. Furthermore, we conceive the corresponding Three-Dimensional (3D) EXtrinsic Information Transfer (EXIT) charts for controlling this dynamic adaptation, as well as the decoder activation order, when the UEC code is serially concatenated with a turbo code. In this way, we expedite iterative decoding convergence, facilitating a gain of up to 1.2 dB compared to both SSCC and non-adaptive UEC benchmarkers, while maintaining the same transmission bandwidth, duration, energy and decoding complexity.
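
    The unary code underlying the UEC scheme is simple enough to sketch directly. The fragment below assumes one common convention, in which a non-negative integer n maps to n ones followed by a terminating zero; the UEC decoder itself operates on soft information over a trellis rather than on hard bits. The sketch illustrates why unary codes suit alphabets of unbounded cardinality: every symbol value has a codeword, whose length grows linearly with the value.

        def unary_encode(symbols):
            """Concatenate the unary codewords of non-negative integer symbols."""
            return [bit for n in symbols for bit in [1] * n + [0]]

        def unary_decode(bits):
            """Invert unary_encode: count the ones before each terminating zero."""
            symbols, run = [], 0
            for bit in bits:
                if bit:
                    run += 1
                else:
                    symbols.append(run)
                    run = 0
            return symbols

        assert unary_encode([3, 0, 2]) == [1, 1, 1, 0, 0, 1, 1, 0]
        assert unary_decode(unary_encode([3, 0, 2])) == [3, 0, 2]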

    Adaptive iterative detection for expediting the convergence of a serially concatenated unary error correction decoder, turbo decoder and an iterative demodulator

    Unary Error Correction (UEC) codes constitute a recently proposed Joint Source and Channel Code (JSCC) family, conceived for alphabets having an infinite cardinality, whilst outperforming previously used Separate Source and Channel Codes (SSCCs). UEC-based schemes rely on an iterative decoding process, which involves three decoding blocks when the UEC code is concatenated with a turbo code. Hence, following the activation of one of the three blocks, the next block to be activated must be chosen from the remaining two. Furthermore, the UEC decoder offers a number of decoding options, allowing its complexity and error correction capability to be dynamically adjusted. It has been shown that iterative decoding convergence can be expedited by activating the specific decoding option that offers the highest ratio of Mutual Information (MI) improvement to computational complexity. This paper introduces an iterative demodulator, which is shown to improve the associated error correction performance, while reducing the overall iterative decoding complexity. The challenge is that the iterative demodulator has to forward its soft information to the other two iterative decoding blocks, and hence the corresponding MI improvements cannot be compared on a like-for-like basis. Additionally, we propose a method of eliminating the logarithmic calculations from the adaptive iterative decoding algorithm, hence further reducing its implementational complexity without impacting its error correction performance.
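
    The activation rule described above reduces to a one-line comparison, sketched below with invented numbers: the candidate names, predicted MI gains and complexity costs are hypothetical placeholders, whereas in practice such predictions are obtained from EXIT-chart projections. The paper's contribution lies in making the demodulator's MI gains comparable with those of the two decoders on a like-for-like basis and in removing the logarithmic calculations; the base rule itself is simply:

        def next_activation(candidates):
            """Pick the option offering the best predicted mutual-information
            gain per unit of computational complexity.

            candidates: iterable of (name, predicted_delta_mi, complexity)."""
            return max(candidates, key=lambda c: c[1] / c[2])

        options = [                                     # placeholder values
            ("UEC decoder, low-complexity option",  0.05, 1.0),
            ("UEC decoder, high-complexity option", 0.12, 3.0),
            ("turbo decoder",                       0.20, 4.0),
            ("iterative demodulator",               0.06, 0.5),
        ]

        name, dmi, cost = next_activation(options)
        print(f"activate: {name} (ratio = {dmi / cost:.3f})")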