
    Stochastic Digital Backpropagation with Residual Memory Compensation

    Stochastic digital backpropagation (SDBP) is an extension of digital backpropagation (DBP) and is based on the maximum a posteriori principle. SDBP takes into account noise from the optical amplifiers in addition to handling deterministic linear and nonlinear impairments. Decisions in SDBP are taken on a symbol-by-symbol (SBS) basis, ignoring any residual memory that may be present due to suboptimal processing in SDBP. In this paper, we extend SDBP to account for memory between symbols. In particular, two different methods are proposed: a Viterbi algorithm (VA) and a decision-directed approach. The symbol error rate (SER) for memory-based SDBP is significantly lower than for the previously proposed SBS-SDBP. For inline dispersion-managed links, VA-SDBP has up to 10 and 14 times lower SER than DBP for QPSK and 16-QAM, respectively. Comment: 7 pages, accepted for publication in the Journal of Lightwave Technology (JLT).
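    For context, conventional DBP (the deterministic baseline that SDBP extends) inverts the fiber channel with a split-step Fourier loop run with negated dispersion and nonlinearity parameters. The sketch below illustrates only that baseline; the span count, fiber parameters and sign convention are illustrative assumptions, and the amplifier noise that SDBP additionally models is not represented.

```python
import numpy as np

def digital_backpropagation(rx, fs, n_spans=10, span_len=80e3,
                            beta2=-21.7e-27, gamma=1.3e-3, steps_per_span=20):
    """Undo chromatic dispersion and Kerr nonlinearity span by span by passing
    the received samples through a 'fiber' with negated beta2 and gamma."""
    n = rx.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)           # angular frequency grid
    dz = span_len / steps_per_span                               # backpropagation step [m]
    # Linear (dispersion) step with the sign of beta2 flipped; the exact sign
    # convention depends on how the forward NLSE is written.
    lin_inv = np.exp(0.5j * (-beta2) * omega**2 * dz)
    sig = np.asarray(rx, dtype=complex).copy()
    for _ in range(n_spans):
        for _ in range(steps_per_span):
            sig = np.fft.ifft(np.fft.fft(sig) * lin_inv)         # undo dispersion
            sig *= np.exp(1j * (-gamma) * np.abs(sig)**2 * dz)   # undo Kerr phase rotation
    return sig
```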

    Reduced Complexity Sequential Monte Carlo Algorithms for Blind Receivers

    Monte Carlo algorithms can be used to estimate the state of a system given relative observations. In this dissertation, these algorithms are applied to physical-layer communication system models to estimate channel state information, to obtain soft information about transmitted symbols or multiple-access interference, or to obtain estimates of all of these by joint estimation. Initially, we develop and analyze a multiple-access technique utilizing mutually orthogonal complementary sets (MOCS) of sequences. These codes deliberately introduce inter-chip interference, which is naturally eliminated during processing at the receiver. However, channel impairments can destroy their orthogonality properties and additional processing becomes necessary. We utilize Monte Carlo algorithms to perform joint channel and symbol estimation for systems utilizing MOCS sequences as spreading codes, and we apply Rao-Blackwellization to reduce the required number of particles. However, dense signaling constellations, multiuser environments, and the inter-channel interference introduced by the spreading codes all increase the dimensionality of the symbol state space significantly. A full maximum likelihood solution is computationally expensive and generally not practical, yet obtaining the optimum solution is critical, and looking at only a part of the symbol space is generally not a good solution. We have sought algorithms that guarantee that the correct transmitted symbol is considered while only sampling a portion of the full symbol space. The performance of the proposed method is comparable to the maximum likelihood (ML) algorithm. While the computational complexity of ML increases exponentially with the dimensionality of the problem, the complexity of our approach increases only quadratically. Markovian structures such as the one imposed by MOCS spreading sequences can be seen in other physical-layer structures as well; we have applied this partitioning approach, with some modification, to blind equalization of frequency-selective fading channels and to multiple-input multiple-output receivers that track channel changes. Additionally, we develop a method that obtains a metric for quantifying the convergence rate of Monte Carlo algorithms. Our approach yields an eigenvalue-based method that is useful in identifying sources of slow convergence and estimation inaccuracy. Ph.D. Committee Chair: Douglas B. Williams; Committee Member: Brani Vidakovic; Committee Member: G. Tong Zhou; Committee Member: Gordon Stuber; Committee Member: James H. McClellan.
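    The receivers in this dissertation build on sequential Monte Carlo (particle) filtering. The sketch below shows only that generic building block, a bootstrap particle filter tracking a scalar random-walk state from noisy observations; it is not the Rao-Blackwellized, reduced-complexity receiver described above, and all model parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n_particles=500, proc_std=0.1, meas_std=0.5):
    """Bootstrap (sequential importance resampling) filter for a random-walk state."""
    particles = rng.normal(0.0, 1.0, n_particles)                  # initial particle cloud
    estimates = []
    for y in obs:
        particles += rng.normal(0.0, proc_std, n_particles)        # propagate through the dynamics
        weights = np.exp(-0.5 * ((y - particles) / meas_std)**2)   # Gaussian measurement likelihood
        weights /= weights.sum()
        estimates.append(np.dot(weights, particles))               # weighted-mean (MMSE) estimate
        idx = rng.choice(n_particles, n_particles, p=weights)      # multinomial resampling
        particles = particles[idx]
    return np.array(estimates)

# Toy usage: track a slowly drifting quantity (e.g. a channel gain) from noisy observations.
true_state = np.cumsum(rng.normal(0.0, 0.1, 200))
observations = true_state + rng.normal(0.0, 0.5, 200)
tracked = particle_filter(observations)
```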

    On the Impact of Phase Noise in Communication Systems – Performance Analysis and Algorithms

    The mobile industry is preparing to scale up network capacity by a factor of 1000 in order to cope with the staggering growth in mobile traffic. As a consequence, there is tremendous pressure on the network infrastructure, where more cost-effective, flexible, high-speed connectivity solutions are being sought. In this regard, massive multiple-input multiple-output (MIMO) systems and millimeter-wave communication systems are new physical-layer technologies that promise to facilitate the 1000-fold increase in network capacity. However, these technologies are extremely prone to hardware impairments like phase noise caused by noisy oscillators. Furthermore, wireless backhaul networks are an effective solution to transport data by using high-order signal constellations, which are also susceptible to phase noise impairments. Analyzing the performance of wireless communication systems impaired by oscillator phase noise, and designing systems to operate efficiently in strong phase noise conditions, are critical problems in communication theory. The criticality of these problems is accentuated by the growing interest in new physical-layer technologies and the deployment of wireless backhaul networks. This forms the main motivation for this thesis, where we analyze the impact of phase noise on system performance and design algorithms to mitigate phase noise and its effects. First, we address the problem of maximum a posteriori (MAP) detection of data in the presence of strong phase noise in single-antenna systems. This is achieved by designing a low-complexity joint phase-estimator data-detector. We show that the proposed method outperforms existing detectors, especially when high-order signal constellations are used. Then, in order to further improve system performance, we consider the problem of optimizing signal constellations for transmission over channels impaired by phase noise. Specifically, we design signal constellations such that the error-rate performance of the system is minimized and the information rate of the system is maximized. We observe that these optimized constellations significantly improve system performance when compared to conventional constellations and those proposed in the literature. Next, we derive the MAP symbol detector for a MIMO system where each antenna at the transceiver has its own oscillator. We propose three suboptimal, low-complexity algorithms for approximately implementing the MAP symbol detector, which involve joint phase noise estimation and data detection. We observe that the proposed techniques significantly outperform the algorithms in prior works. Finally, we study the impact of phase noise on the performance of a massive MIMO system, where we analyze both uplink and downlink performance. Based on rigorous analyses of the achievable rates, we provide interesting insights into the following question: how should oscillators be connected to the antennas at a base station that employs a large number of antennas?
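    As a toy illustration of joint phase tracking and data detection under oscillator phase noise, the sketch below runs a first-order decision-directed phase tracker on QPSK corrupted by Wiener phase noise. It is a crude stand-in for the MAP detectors derived in the thesis, not the proposed algorithm; the phase-noise increment, noise level and loop gain are assumptions, and cycle slips are ignored.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym = 2000
# QPSK symbols on the unit circle, Wiener phase noise, and a little additive noise.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_sym)))
phase_noise = np.cumsum(rng.normal(0.0, 0.02, n_sym))
rx = qpsk * np.exp(1j * phase_noise) + 0.05 * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))

phi_hat, mu = 0.0, 0.05                    # running phase estimate and loop gain
decisions = np.empty(n_sym, dtype=complex)
for k in range(n_sym):
    derot = rx[k] * np.exp(-1j * phi_hat)                         # derotate by the current estimate
    decisions[k] = (np.sign(derot.real) + 1j * np.sign(derot.imag)) / np.sqrt(2)  # hard QPSK decision
    err = np.angle(derot * np.conj(decisions[k]))                 # residual phase error
    phi_hat += mu * err                                           # first-order loop update

ser = np.mean(np.abs(decisions - qpsk) > 1e-3)                    # symbol error rate of this toy run
```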

    Enhanced coding, clock recovery and detection for a magnetic credit card

    This thesis describes the background, investigation and construction of a system for storing data on the magnetic stripe of a standard three-inch plastic credit card. Investigation shows that the information storage limit within a 3.375 in by 0.11 in rectangle of the stripe is bounded to about 20 kBytes. Practical issues limit the data storage to around 300 Bytes with a low raw error rate: a four-fold density increase over the standard. Removal of the timing jitter (that is probably caused by the magnetic medium particle size) would increase the limit to 1500 Bytes with no other system changes. This is enough capacity for either a small digital passport photograph or a digitized signature, making it possible to remove printed versions from the surface of the card. To achieve even these modest gains has required the development of a new variable-rate code that is more resilient to timing errors than other codes in its efficiency class. The tabulation of the effects of timing errors required the construction of a new code metric and self-recovering decoders. In addition, a new method of timing recovery, based on signal 'snatches', has been invented to increase the rapidity with which a Bayesian decoder can track the changing velocity of a hand-swiped card. The timing recovery and Bayesian detector have been integrated into one computational (software) unit that is self-contained and can decode a general class of (d, k) constrained codes. Additionally, the unit has a signal truncation mechanism to alleviate some of the effects of non-linear distortion that are present when a magnetic card is read with a magneto-resistive sensor that has been driven beyond its bias magnetization. While the storage density is low and the total storage capacity is meagre in comparison with contemporary storage devices, the high-density card may still have a niche role to play in society. Nevertheless, in the face of the smart card, its long-term outlook is uncertain. However, several areas of coding and detection under short-duration extreme conditions have brought new decoding methods to light. The scope of these methods is not limited just to the credit card.
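    The (d, k) run-length constraint mentioned above requires at least d and at most k zeros between consecutive ones in the recorded sequence. The small checker below illustrates just that constraint; it is not the thesis's variable-rate code or its decoder.

```python
def satisfies_dk(bits, d, k):
    """Check the (d, k) constraint: every run of 0s between consecutive 1s has
    length in [d, k], and no run of 0s anywhere exceeds k."""
    run = 0             # zeros seen since the last 1 (or since the start)
    seen_one = False
    for b in bits:
        if b == 1:
            if seen_one and not (d <= run <= k):
                return False
            seen_one = True
            run = 0
        else:
            run += 1
            if run > k:
                return False
    return True

# (1, 3)-constrained sequences, as used for example by MFM recording codes.
assert satisfies_dk([1, 0, 1, 0, 0, 1, 0, 0, 0, 1], 1, 3)
assert not satisfies_dk([1, 1, 0, 1], 1, 3)   # adjacent 1s violate d = 1
```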

    On Coding and Detection Techniques for Two-Dimensional Magnetic Recording

    The areal density growth of magnetic recording systems is fast approaching the superparamagnetic limit for conventional magnetic disks. This is due to the increasing demand for high data storage capacity. Two-Dimensional Magnetic Recording (TDMR) is a new technology aimed at increasing the areal density of magnetic recording systems beyond the limit of current disk technology using conventional disk media. However, it relies on advanced coding and signal processing techniques to achieve areal density gains. Current state-of-the-art signal processing for the TDMR channel employs iterative decoding with Low-Density Parity-Check (LDPC) codes, coupled with 2D equalisers and full 2D Maximum Likelihood (ML) detectors. The shortcoming of these algorithms is their computational complexity, especially that of the ML detectors, which is exponential in the number of bits involved. Therefore, robust low-complexity coding, equalisation and detection algorithms are crucial for successful future deployment of the TDMR scheme. The present work is aimed at finding efficient, low-complexity coding, equalisation, detection and decoding techniques for improving the performance of the TDMR channel and of magnetic recording channels in general. A forward error correction (FEC) scheme of two concatenated single-parity-bit systems along track, separated by an interleaver, is presented for channels with perpendicular magnetic recording (PMR) media. A joint detection-decoding algorithm using a constrained MAP detector for simultaneous detection and decoding of data with a single-parity-bit system is proposed. It is shown that using the proposed FEC scheme with the constrained MAP detector/decoder can achieve a gain of up to 3 dB over an uncoded MAP detector for a 1D interference channel. A further gain of 1.5 dB is achieved by concatenating two interleavers with an extra parity bit when the data density along track is high. The use of a single-parity-bit code as both a run-length-limited code and an error correction code is demonstrated to simplify detection complexity and improve system performance. A low-complexity 2D detection technique for a TDMR system with Shingled Magnetic Recording (SMR) media is also proposed. The technique uses the concatenation of a 2D MAP detector along track with a regular MAP detector across tracks to reduce the complexity order of full 2D detection from exponential to linear. It is shown that this technique can improve track density with limited complexity. Two methods of FEC for the TDMR channel using two single-parity-bit systems are discussed: one uses two concatenated single parity bits along track only, separated by a Dithered Relative Prime (DRP) interleaver, and the other uses the single parity bits in both directions without the DRP interleaver. Consequent to the FEC coding on the channel, a 2D multi-track MAP joint detector-decoder is proposed for simultaneous detection and decoding of the coded single-parity-bit data. A gain of up to 5 dB is achieved using the FEC scheme with the 2D multi-track MAP joint detector-decoder over an uncoded 2D multi-track MAP detector in the TDMR channel. In a situation with high density in both directions, it is shown that FEC coding using two concatenated single parity bits along track separated by a DRP interleaver performs better than using the single parity bits in both directions without the DRP interleaver.
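    As a one-dimensional illustration of the MAP detection building block used throughout, the sketch below runs a forward-backward (BCJR-style) bit detector on a binary-input ISI channel with memory one (the dicode target h = [1, -1]). It is only a 1D sketch under assumed parameters, not the constrained or 2D multi-track MAP detector-decoders proposed in the thesis.

```python
import numpy as np

def map_detect(r, h=(1.0, -1.0), sigma=0.5):
    """Per-bit posteriors P(x_k = 1 | r) for the channel y_k = h0*x_k + h1*x_{k-1} + noise."""
    n = len(r)

    def lik(y, s_prev, x):
        # Gaussian branch likelihood for the transition (previous bit s_prev, current bit x).
        mean = h[0] * x + h[1] * s_prev
        return np.exp(-0.5 * ((y - mean) / sigma) ** 2)

    # Forward recursion over the 2-state trellis (state = previous bit).
    alpha = np.zeros((n + 1, 2))
    alpha[0, 0] = 1.0                                  # assume the channel starts in state 0
    for k in range(n):
        for s in (0, 1):
            for x in (0, 1):
                alpha[k + 1, x] += alpha[k, s] * lik(r[k], s, x)
        alpha[k + 1] /= alpha[k + 1].sum()             # normalize to avoid underflow

    # Backward recursion.
    beta = np.ones((n + 1, 2))
    for k in range(n - 1, -1, -1):
        for s in (0, 1):
            beta[k, s] = sum(lik(r[k], s, x) * beta[k + 1, x] for x in (0, 1))
        beta[k] /= beta[k].sum()

    # Combine forward, branch and backward terms into per-bit posteriors.
    post1 = np.zeros(n)
    for k in range(n):
        p = np.zeros(2)
        for s in (0, 1):
            for x in (0, 1):
                p[x] += alpha[k, s] * lik(r[k], s, x) * beta[k + 1, x]
        post1[k] = p[1] / p.sum()
    return post1

# Toy usage: noisy dicode channel output, then per-bit posteriors.
rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 50)
prev = np.concatenate(([0], bits[:-1]))
rx = bits - prev + rng.normal(0.0, 0.5, 50)
posteriors = map_detect(rx)
```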

    Proceedings of the 2009 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    The joint workshop of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, and the Vision and Fusion Laboratory (Institute for Anthropomatics, Karlsruhe Institute of Technology (KIT)) has been organized annually since 2005 with the aim of reporting on the latest research and development findings of the doctoral students of both institutions. This book provides a collection of 16 technical reports on the research results presented at the 2009 workshop.