
    Improved Decoding of Staircase Codes: The Soft-aided Bit-marking (SABM) Algorithm

    Staircase codes (SCCs) are typically decoded using iterative bounded-distance decoding (BDD) and hard decisions. In this paper, a novel decoding algorithm is proposed, which partially uses soft information from the channel. The proposed algorithm is based on marking a certain number of highly reliable and highly unreliable bits. These marked bits are used to improve the miscorrection-detection capability of the SCC decoder and the error-correcting capability of BDD. For SCCs with 2-error-correcting Bose-Chaudhuri-Hocquenghem component codes, our algorithm improves upon standard SCC decoding by up to 0.30 dB at a bit-error rate (BER) of 10^{-7}. The proposed algorithm is shown to achieve almost half of the gain achievable by an idealized decoder with this structure. A complexity analysis based on the number of additional calls to the component BDD decoder shows that the relative complexity increase is only around 4% at a BER of 10^{-4}, and this additional complexity decreases as the channel quality improves. Our algorithm is also extended (with minor modifications) to product codes. The simulation results show that in this case, the algorithm offers gains of up to 0.44 dB at a BER of 10^{-8}.
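    The bit-marking idea lends itself to a brief illustration. The sketch below is only a hedged reading of the description above, not the authors' implementation: the thresholds t_hrb and t_hub, the function names, and the simple accept/reject rule for BDD outputs are illustrative assumptions.

```python
# Hedged sketch of SABM-style bit marking from channel LLRs (illustrative only).
import numpy as np

def mark_bits(llrs, t_hrb=6.0, t_hub=1.0):
    """Mark each bit from its reliability |LLR|: highly reliable (HRB),
    highly unreliable (HUB), or uncertain. Thresholds are assumptions."""
    mag = np.abs(llrs)
    marks = np.full(llrs.shape, "uncertain", dtype=object)
    marks[mag >= t_hrb] = "HRB"
    marks[mag <= t_hub] = "HUB"
    return marks

def accept_bdd_output(hd_in, bdd_out, marks):
    """Reject a BDD correction that flips any HRB: in the bit-marking logic,
    such a flip is treated as a likely miscorrection."""
    flipped = hd_in != bdd_out
    return not np.any(flipped & (marks == "HRB"))

# Toy usage with random LLRs and a fake BDD output that flips bit 0
rng = np.random.default_rng(0)
llrs = rng.normal(0.0, 4.0, size=16)
hd_in = (llrs < 0).astype(int)
marks = mark_bits(llrs)
bdd_out = hd_in.copy()
bdd_out[0] ^= 1
print(marks[0], accept_bdd_output(hd_in, bdd_out, marks))
```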

    A Soft-Aided Staircase Decoder Using Three-Level Channel Reliabilities

    The soft-aided bit-marking (SABM) algorithm is based on the idea of marking bits as highly reliable bits (HRBs), highly unreliable bits (HUBs), and uncertain bits to improve the performance of hard-decision (HD) decoders. The HRBs and HUBs are used to assist the HD decoders to prevent miscorrections and to decode originally uncorrectable cases via bit flipping (BF), respectively. In this paper, an improved SABM algorithm (called iSABM) is proposed for staircase codes (SCCs). Like the SABM, the iSABM marks bits with the help of channel reliabilities, i.e., using the absolute values of the log-likelihood ratios. The improvements offered by the iSABM include: (i) HUBs being classified using a reliability threshold, (ii) BF randomly selecting HUBs, and (iii) soft-aided decoding over multiple SCC blocks. The decoding complexity of the iSABM is comparable to that of the SABM: on the one hand, no sorting is required (lower complexity) because a threshold is used for the HUBs, while on the other hand, multiple SCC blocks use soft information (higher complexity). Additional gains of up to 0.53 dB with respect to the SABM and 0.91 dB with respect to standard SCC decoding at a bit error rate of 10^{-6} are reported. Furthermore, it is shown that using 1-bit reliability marking, i.e., only having HRBs and HUBs, causes a gain penalty of only up to 0.25 dB with a significantly reduced memory requirement.
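    Two of the listed ingredients, threshold-based HUB classification and random HUB selection for bit flipping, can be sketched as follows. This is an assumed illustration rather than the iSABM code; the threshold t_hub, the flip budget n_flips, and the function names are hypothetical.

```python
# Hedged sketch of threshold-based HUB marking and random bit flipping.
import numpy as np

def mark_hubs(llrs, t_hub=1.5):
    """A bit is a HUB when its channel reliability |LLR| is below the
    threshold; only one comparison per bit, no sorting needed."""
    return np.abs(llrs) <= t_hub

def flip_random_hubs(hard_bits, hub_mask, n_flips, rng):
    """Randomly choose up to n_flips HUB positions and flip them, yielding a
    new candidate word for another bounded-distance decoding attempt."""
    candidates = np.flatnonzero(hub_mask)
    if candidates.size == 0:
        return hard_bits
    chosen = rng.choice(candidates, size=min(n_flips, candidates.size),
                        replace=False)
    out = hard_bits.copy()
    out[chosen] ^= 1
    return out

# Toy usage
rng = np.random.default_rng(1)
llrs = rng.normal(0.0, 3.0, size=12)
bits = (llrs < 0).astype(int)
print(flip_random_hubs(bits, mark_hubs(llrs), n_flips=2, rng=rng))
```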

    Improving Group Integrity of Tags in RFID Systems

    Checking the integrity of groups containing radio frequency identification (RFID) tagged objects or recovering the tag identifiers of missing objects is important in many activities. Several autonomous checking methods have been proposed for increasing the capability of recovering missing tag identifiers without external systems. This has been achieved by treating a group of tag identifiers (IDs) as packet symbols encoded and decoded in a way similar to that in binary erasure channels (BECs). Redundant data must be written into the limited memory space of RFID tags in order to enable the decoding process. In this thesis, the group integrity of passive tags in RFID systems is specifically targeted, with novel mechanisms being proposed to improve upon the current state of the art. Owing to the sparseness of low-density parity-check (LDPC) codes and the ability of the progressive edge-growth (PEG) method to mitigate short cycles, the research begins by using the PEG method to construct the parity-check matrix of LDPC codes in RFID systems, in order to increase the recovery capability with reduced memory consumption. It is shown that the PEG-based method achieves significant recovery enhancements compared to other methods with the same or less memory overhead. The decoding complexity of the PEG-based LDPC codes is optimised using an improved hybrid iterative/Gaussian decoding algorithm which includes an early stopping criterion. The complexity of the improved algorithm is extensively analysed and evaluated, both in terms of decoding time and the number of operations required. It is demonstrated that the improved algorithm considerably reduces the operational complexity, and thus the decoding time, of the full Gaussian decoding algorithm for small to medium amounts of missing tags. The joint use of the two decoding components is also adapted so as to avoid iterative decoding when the amount of missing tags is larger than a threshold, whose optimum value is investigated through empirical analysis. It is shown that the adaptive algorithm is very efficient in decreasing the average decoding time of the improved algorithm for large amounts of missing tags, where the iterative decoding fails to recover any missing tag. The recovery performance of various short-length irregular PEG-based LDPC codes constructed with different variable degree sequences is analysed and evaluated. It is demonstrated that the irregular codes exhibit significant recovery enhancements compared to the regular ones in the region where the iterative decoding is successful; however, their performance is degraded in the region where the iterative decoding can recover some missing tags. Finally, a novel protocol called the Redundant Information Collection (RIC) protocol is designed to filter and collect redundant tag information. It is based on a Bloom filter (BF) that efficiently filters the redundant tag information at the tag's side, thereby considerably decreasing the communication cost and, consequently, the collection time. It is shown that the novel protocol outperforms existing possible solutions by saving from 37% to 84% of the collection time, which is nearly four times the lower bound. This characteristic makes the RIC protocol a promising candidate for collecting redundant tag information for the group integrity of tags in RFID systems and other similar applications.
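    The adaptive hybrid iterative/Gaussian strategy can be illustrated with a minimal erasure-decoding sketch. This is not the thesis implementation: the toy parity-check matrix, the threshold value, and the helper names are assumptions, and the Gaussian-elimination fallback is only indicated by a placeholder.

```python
# Hedged sketch: iterative (peeling) erasure decoding with an adaptive switch
# to Gaussian elimination when too many tag IDs are missing (illustrative).
import numpy as np

def peel_erasures(H, values, known):
    """Repeatedly solve parity checks that contain exactly one erased bit."""
    values, known = values.copy(), known.copy()
    progress = True
    while progress and not known.all():
        progress = False
        for row in H:
            mask = row.astype(bool)
            unknown = np.flatnonzero(mask & ~known)
            if unknown.size == 1:
                j = unknown[0]
                # The erased bit equals the XOR (parity) of the known bits
                # participating in this check.
                values[j] = int(values[mask & known].sum() % 2)
                known[j] = True
                progress = True
    return values, known

def decode_missing(H, values, known, threshold=0.3):
    """Adaptive strategy: if the erased fraction exceeds the threshold, skip
    peeling (it rarely helps there) and go straight to full Gaussian
    elimination over GF(2); otherwise try peeling first."""
    if (~known).mean() > threshold:
        return "gaussian elimination"      # placeholder for the GF(2) solver
    values, known = peel_erasures(H, values, known)
    return "recovered" if known.all() else "gaussian elimination"

# Toy parity-check matrix and a word with one erased position (index 2)
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
values = np.array([1, 0, 0, 0, 1])
known = np.array([True, True, False, True, True])
print(decode_missing(H, values, known))    # peeling alone recovers the erasure
```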

    Very low bit rate parametric audio coding

    [no abstract]

    Signal Design and Machine Learning Assisted Nonlinearity Compensation for Coherent Optical Fibre Communication Links

    This thesis investigates low-complexity digital signal processing (DSP) for signal design and nonlinearity compensation strategies to improve the performance of single-mode optical fibre links over different distance scales. The performance of a novel ML-assisted inverse regular perturbation technique that mitigates fibre nonlinearities was investigated numerically for a dual-polarization 64-ary quadrature amplitude modulation (QAM) link over an 800 km distance. The model outperformed the heuristically optimised digital backpropagation approach with fewer than 5 steps per span and mitigated the gain-expansion issue, which limits the accuracy of an untrained model when the balance between the nonlinear and linear components becomes considerable. For short-reach links, the phase noise due to low-cost, high-linewidth lasers is a more significant channel impairment. A novel constellation optimisation algorithm was therefore proposed to design modulation formats that are robust against both additive white Gaussian noise (AWGN) and the residual laser phase noise (i.e., the phase noise remaining after carrier phase estimation). These constellations were subsequently validated numerically in the context of a 400ZR standard system and achieved gains of up to 1.2 dB in comparison with modulation formats optimised only for the AWGN channel. The thesis concludes by examining a joint strategy to modulate and demodulate signals over a partially-coherent AWGN (PCAWGN) channel. Using a low-complexity PCAWGN demapper, 8- to 64-ary modulation formats were designed and validated through numerical simulations. The bit-wise achievable information rates (AIRs) and post-forward-error-correction (FEC) bit error rates (BERs) of the designed constellations were numerically validated with the theoretically optimum, Euclidean (conventional), and low-complexity PCAWGN demappers. The resulting constellations demonstrated post-FEC BER shaping gains of up to 2.59 dB and 2.19 dB versus uniform 64-QAM and 64-ary constellations shaped for the purely AWGN channel model, respectively. The described geometric shaping strategies can be used either to relax linewidth and/or carrier phase estimator requirements, or to increase the signal-to-noise ratio (SNR) tolerance of a system in the presence of residual phase noise.
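    The partially-coherent AWGN channel mentioned above is commonly modelled as an AWGN channel with an additional random carrier-phase rotation representing the residual phase noise after carrier phase estimation. The sketch below uses that common model as an assumption, simply to illustrate the kind of channel the constellations were designed for; it is not the thesis code and the parameter values are arbitrary.

```python
# Hedged sketch of a partially-coherent AWGN (PCAWGN) channel model.
import numpy as np

def pcawgn(symbols, snr_db, phase_std_rad, rng):
    """y = x * exp(j*theta) + n, with theta ~ N(0, phase_std^2) modelling
    residual phase noise and n circular complex AWGN at the given SNR."""
    es = np.mean(np.abs(symbols) ** 2)
    n0 = es / (10.0 ** (snr_db / 10.0))
    theta = rng.normal(0.0, phase_std_rad, size=symbols.shape)
    noise = np.sqrt(n0 / 2.0) * (rng.normal(size=symbols.shape)
                                 + 1j * rng.normal(size=symbols.shape))
    return symbols * np.exp(1j * theta) + noise

# Toy usage: pass square 16-QAM symbols through the channel
rng = np.random.default_rng(2)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
constellation = (levels[:, None] + 1j * levels[None, :]).ravel()
tx = rng.choice(constellation, size=1000)
rx = pcawgn(tx, snr_db=18.0, phase_std_rad=0.05, rng=rng)
print(np.round(rx[:3], 3))
```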

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book that collects peer-reviewed research on various advanced technologies related to applications and theories of signal processing for multimedia systems using machine learning (ML) or other advanced methods. Multimedia signals include image, video, audio, and character recognition, as well as the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues who are interested in these topics will find this book of interest.

    Deep Space Telecommunications Systems Engineering

    Descriptive and analytical information useful for the optimal design, specification, and performance evaluation of deep space telecommunications systems is presented. Telemetry, tracking, and command systems; receiver design; spacecraft antennas; frequency selection; interference; and modulation techniques are addressed.

    Digital audio watermarking for broadcast monitoring and content identification

    Copyright legislation was prompted exactly 300 years ago by a desire to protect authors against exploitation of their work by others. For modern content owners, Digital Rights Management (DRM) issues have become very important since the advent of the Internet. Piracy, or illegal copying, costs content owners billions of dollars every year. DRM is just one tool that can assist content owners in exercising their rights. Two categories of DRM technologies have evolved in digital signal processing recently, namely digital fingerprinting and digital watermarking. One area of copyright that is consistently overlooked in DRM developments is 'Public Performance'. The research described in this thesis analysed the administration of public performance rights within the music industry in general, with specific focus on the collective rights and broadcasting sectors in Ireland. Limitations in the administration of artists' rights were identified, and the impact of these limitations on the careers of developing artists was evaluated. A digital audio watermarking scheme is proposed that would meet the requirements of both the broadcast and collective rights sectors. The goal of the scheme is to embed a standard identifier within an audio signal via modification of its spectral properties in such a way that it is robust and perceptually transparent. Modification of the audio signal spectrum was attempted in a variety of ways, and a method based on a super-resolution frequency identification technique was found to be most effective. The watermarking scheme was evaluated for robustness and found to be extremely effective in recovering embedded watermarks in music signals using a semi-blind decoding process. The final proposed digital audio watermarking algorithm facilitates broadcast monitoring for the purpose of equitable royalty distribution, along with additional applications and extensions to other domains.
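    As a purely generic illustration of embedding data by modifying an audio signal's spectral properties, the sketch below hides one bit per frame by biasing the magnitude relationship of two FFT bins. It is emphatically not the super-resolution scheme proposed in the thesis; the bin indices, the embedding strength, and the function names are assumptions.

```python
# Hedged, generic sketch of spectral-domain watermark embedding (one bit per frame).
import numpy as np

def embed_bit(frame, bit, k1=40, k2=41, strength=1.2):
    """Embed a single bit by making bin k1 louder than k2 (bit = 1) or
    bin k2 louder than k1 (bit = 0), keeping the original phases."""
    spec = np.fft.rfft(frame)
    target = max(np.abs(spec[k1]), np.abs(spec[k2])) * strength
    k = k1 if bit else k2
    spec[k] = target * np.exp(1j * np.angle(spec[k]))
    return np.fft.irfft(spec, n=frame.size)

def extract_bit(frame, k1=40, k2=41):
    """Recover the bit by comparing the magnitudes of the two bins."""
    spec = np.fft.rfft(frame)
    return int(np.abs(spec[k1]) > np.abs(spec[k2]))

# Toy usage on a noise frame
rng = np.random.default_rng(3)
audio = rng.normal(0.0, 0.1, size=1024)
print(extract_bit(embed_bit(audio, bit=1)))   # prints 1
```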