38 research outputs found

    The Error-Pattern-Correcting Turbo Equalizer

    The error-pattern correcting code (EPCC) is incorporated into the design of a turbo equalizer (TE) with the aim of correcting dominant error events of the inter-symbol interference (ISI) channel at the output of its matching Viterbi detector. By targeting the low-Hamming-weight interleaved errors of the outer convolutional code, which are responsible for low-Euclidean-weight errors in the Viterbi trellis, the turbo equalizer with an error-pattern correcting code (TE-EPCC) exhibits a much lower bit-error-rate (BER) floor than the conventional non-precoded TE, especially in high-rate applications. A maximum-likelihood upper bound on the BER floor of the TE-EPCC is developed for a generalized two-tap ISI channel, in order to study the TE-EPCC's signal-to-noise ratio (SNR) gain under various channel conditions and design parameters. In addition, the SNR gain of the TE-EPCC is compared with that of an existing precoded TE to demonstrate the present TE's superiority for short interleaver lengths and high coding rates. Comment: This work has been submitted to the special issue of the IEEE Transactions on Information Theory titled "Facets of Coding Theory: From Algorithms to Networks". This work was supported in part by NSF Theoretical Foundation Grant 0728676.
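    How the two channel taps shape the dominant error events can be seen with a small numerical sketch (an illustration, not the paper's method; the tap values and error patterns below are arbitrary example choices). The squared Euclidean weight a bit-error pattern accumulates between two Viterbi trellis paths equals the energy of the BPSK-domain error sequence filtered by the channel response:

```python
import numpy as np

def euclidean_weight(error, h):
    # Squared Euclidean distance between two trellis paths = energy of
    # the error pattern convolved with the channel impulse response.
    return float(np.sum(np.convolve(error, h) ** 2))

single = np.array([2.0])        # single-bit error event (+1 -> -1 flip)
dibit = np.array([2.0, -2.0])   # alternating two-bit error event

for a in (0.5, 1.0):            # generalized two-tap channel 1 + a*D
    h = np.array([1.0, a])
    print(a, euclidean_weight(single, h), euclidean_weight(dibit, h))
```

For a = 0.5 the single-bit event has lower weight (5 vs. 6), while for a = 1 (the dicode channel) both events tie at weight 8, which is why the set of dominant error patterns an EPCC should target depends on the channel condition.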

    Capacity-Achieving Coding Mechanisms: Spatial Coupling and Group Symmetries

    The broad theme of this work is the construction of optimal transmission mechanisms for a wide variety of communication systems. In particular, this dissertation provides a proof of threshold saturation for spatially-coupled codes, low-complexity capacity-achieving coding schemes for side-information problems, a proof that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels, and a mathematical framework for designing delay-sensitive communication systems. Spatially-coupled codes are a class of codes on graphs that achieve capacity universally over binary memoryless symmetric (BMS) channels under belief-propagation decoding. The underlying phenomenon, known as "threshold saturation via spatial coupling", turns out to be general, and the technique has been applied to a wide variety of systems. In this work, a proof of the threshold saturation phenomenon is provided for irregular low-density parity-check (LDPC) and low-density generator-matrix (LDGM) ensembles on BMS channels. This proof is far simpler than published alternatives and remains the only technique that handles irregular and LDGM codes. Also, low-complexity capacity-achieving codes are constructed via spatial coupling for three coding problems: 1) rate distortion with side information, 2) channel coding with side information, and 3) write-once memory systems. All these schemes are based on spatially coupling compound LDGM/LDPC ensembles. Reed-Muller and Bose-Chaudhuri-Hocquenghem (BCH) codes are well-known algebraic codes introduced more than 50 years ago. While these codes have been studied extensively in the literature, it was not known whether they achieve capacity. This work introduces a technique showing that Reed-Muller and primitive narrow-sense BCH codes achieve capacity on erasure channels under maximum a posteriori (MAP) decoding.
Instead of relying on the weight enumerators or other precise details of these codes, the technique requires only that the codes have highly symmetric permutation groups. In fact, any sequence of linear codes with increasing blocklengths, whose rates converge to a number between 0 and 1 and whose permutation groups are doubly transitive, achieves capacity on erasure channels under bit-MAP decoding. This provides a rare example in information theory where symmetry alone is sufficient to achieve capacity. While channel capacity provides a useful benchmark for practical design, today's communication systems also demand small latency and good performance on other link-layer metrics. Such delay-sensitive communication systems are studied in this work, where a mathematical framework is developed to provide insights into their optimal design.
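    The fact that MAP decoding on an erasure channel reduces to linear algebra can be made concrete with a small sketch (an illustration using the [7,4] Hamming code, not the dissertation's proof technique): a set of erased codeword positions is block-MAP recoverable exactly when the corresponding columns of the parity-check matrix are linearly independent over GF(2).

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (minimum distance 3).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def gf2_rank(A):
    # Row-reduce over GF(2) and count the pivots.
    A = A.copy() % 2
    rank = 0
    for c in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(A.shape[0]):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]
        rank += 1
    return rank

def map_recoverable(H, erased):
    # On the BEC, block-MAP decoding fills in the erasures iff the
    # erased columns of H are linearly independent over GF(2).
    erased = sorted(erased)
    return gf2_rank(H[:, erased]) == len(erased)

print(map_recoverable(H, {0, 1}))     # any 2 erasures: recoverable (d = 3)
print(map_recoverable(H, {0, 1, 2}))  # columns 0 + 1 = 2 over GF(2): not recoverable
```

Because the Hamming code has minimum distance 3, every pattern of two erasures is correctable while some three-erasure patterns are not; for a capacity-achieving sequence of codes, the recoverable fraction of erasure patterns approaches the channel limit as the blocklength grows.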

    Practical interference management strategies in Gaussian networks

    Increasing demand for bandwidth-intensive activities on high-penetration wireless hand-held personal devices, combined with their processing power and advanced radio features, has necessitated a new look at the problems of resource provisioning and distributed management of coexistence in wireless networks. Information theory, as the science of the ultimate limits of communication efficiency, plays an important role in outlining guiding principles for the design and analysis of such communication schemes. Network information theory, the branch of information theory that investigates multiuser and distributed problems in information transmission, is ideally poised to answer questions about the design and analysis of multiuser communication systems. In the past few years there have been major advances in network information theory, in particular the generalized degrees-of-freedom framework for asymptotic analysis and interference alignment, which have led to constant-gap-to-capacity results for Gaussian interference channels. Unfortunately, practical adoption of these results has been slowed by their reliance on unrealistic assumptions, such as perfect channel state information at the transmitter, and on intricate constructions based on alignment over transcendental dimensions of real numbers. It is therefore necessary to devise transmission methods and coexistence schemes that fall under the umbrella of the existing interference-management and cognitive-radio toolbox while delivering close-to-optimal performance. In this thesis we design conceptually simple transmission schemes that are robust and perform close to optimally, and characterize their performance. Our work is broadly divided into two parts. In the first part, looking at cognitive radio networks, we seek to relax the assumption of non-causal knowledge of the primary user's message at the secondary user's transmitter.
We study a cognitive channel model based on the Gaussian interference channel that assumes nothing about the users other than the primary user's priority over the secondary user in reaching its desired quality of service (QoS). We characterize this QoS requirement as a minimum rate that the primary user should be able to achieve. Studying the achievable performance of simple encoding and decoding in this scenario, we propose several simple encoding schemes and explore different decoder designs. We show that, surprisingly, all these schemes achieve the same rate region. Next, we study the rate-maximization problem faced by the secondary user subject to the primary's QoS constraint. We show that this problem is in general neither convex nor smooth. We then use the symmetry properties of the problem to reduce its solution to a feasibly implementable line search, and provide numerical results to demonstrate the performance of the scheme. Continuing the theme of simple yet well-performing schemes for wireless networks, in the second part of the thesis we turn from two-user cognitive networks to smart interference management in large wireless networks. Here we study interference-aware wireless link scheduling: the problem of allocating a set of transmission requests to as few time slots as possible such that all transmissions satisfy some feasibility condition. The feasibility criterion has traditionally been the absence of any pair of links that interfere too strongly, which makes the problem amenable to graph-theoretic tools.
Inspired by the recent results that the simple approach of treating interference as noise achieves the maximal generalized degrees of freedom (a measure that roughly captures how many equivalent single-user channels are contained in a given multi-user channel), and by the generalization that it attains rates within a constant gap of capacity for a large class of Gaussian interference networks, we study the problem of scheduling links under a set signal-to-interference-plus-noise ratio (SINR) constraint. We show that for nodes distributed in a metric space and obeying a path-loss channel model, a refined framework combining geometric and graph-theoretic results can be devised to analyze the problem of finding the feasible sets of transmissions for a given level of desired SINR. We use this general framework to give a link-scheduling algorithm that is provably within a logarithmic factor of the best possible schedule. Numerical simulations confirm that this approach outperforms other recently proposed SINR-based approaches. Finally, we conclude by identifying open problems and possible directions for extending these results.
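    The SINR feasibility test at the heart of such scheduling can be sketched as follows (a toy illustration, not the thesis's algorithm: the transmit power, path-loss exponent `alpha`, noise level, and SINR threshold `beta` are arbitrary example values, and the first-fit greedy heuristic stands in for the provably near-optimal scheduler):

```python
import math

def sinr_feasible(active, positions, power=1.0, alpha=3.0, noise=1e-9, beta=2.0):
    # A set of links (tx, rx) is feasible if every receiver sees
    # SINR >= beta under a path-loss model: received power = P / d^alpha.
    for tx, rx in active:
        d = math.dist(positions[tx], positions[rx])
        signal = power / d ** alpha
        interference = sum(power / math.dist(positions[t], positions[rx]) ** alpha
                           for t, _ in active if t != tx)
        if signal / (noise + interference) < beta:
            return False
    return True

def greedy_schedule(links, positions):
    # First-fit greedy: put each link into the first slot that stays feasible.
    slots = []
    for link in links:
        for slot in slots:
            if sinr_feasible(slot + [link], positions):
                slot.append(link)
                break
        else:
            slots.append([link])
    return slots

far = {'a': (0.0, 0.0), 'b': (1.0, 0.0), 'c': (100.0, 0.0), 'd': (101.0, 0.0)}
print(len(greedy_schedule([('a', 'b'), ('c', 'd')], far)))  # widely separated links share one slot
```

Two links placed close together instead (so that each receiver is swamped by the other transmitter) would be split into two slots; the scheduling algorithm in the thesis achieves a slot count within a logarithmic factor of the optimum rather than the unbounded gap a naive greedy can suffer.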

    Error-correcting coding with applications to information transmission and storage systems

    Thesis (DCI)--FCEFN-UNC, 2013. Presents a design technique for low-density parity-check (LDPC) codes and a new post-processing algorithm for reducing the error floor.

    Proceedings of the Eindhoven FASTAR Days 2004 : Eindhoven, The Netherlands, September 3-4, 2004

    The Eindhoven FASTAR Days (EFD) 2004 were organized by the Software Construction group of the Department of Mathematics and Computer Science at the Technische Universiteit Eindhoven. On September 3rd and 4th 2004, over thirty participants, hailing from the Czech Republic, Finland, France, The Netherlands, Poland and South Africa, gathered at the Department to attend the EFD. The EFD were organized in connection with the research on finite automata by the FASTAR Research Group, which is centered in Eindhoven and at the University of Pretoria, South Africa. FASTAR (Finite Automata Systems: Theoretical and Applied Research) is an international research group that aims to lead in all areas related to finite state systems. The work in FASTAR includes both core and applied parts of this field. The EFD therefore focused on the field of finite automata, with an emphasis on practical aspects and applications. Eighteen presentations, mostly on subjects within this field, were given by researchers as well as students from participating universities and industrial research facilities. This report contains the proceedings of the conference, in the form of papers for twelve of the presentations at the EFD. Most of them were initially reviewed and distributed as handouts during the EFD. After the EFD took place, the papers were revised for publication in these proceedings. We would like to thank the participants for their attendance and presentations, which made the EFD 2004 as successful as they were. Based on this success, it is our intention to make the EFD into a recurring event. Eindhoven, December 2004. Loek Cleophas, Bruce W. Watson

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application in which information must be extracted from images. The analysis requires highly sophisticated numerical and analytical methods, particularly in applications such as medicine and security, where the results of the processing consist of data of vital importance. This is evident throughout the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
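    As a minimal illustration of the entropy measures that recur throughout the volume (a generic sketch, not any particular article's method), the Shannon entropy of an image's grey-level histogram quantifies how much information its pixel values carry:

```python
import numpy as np

def shannon_entropy(image, bins=256):
    # Shannon entropy of the grey-level histogram, in bits per pixel.
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

flat = np.zeros((8, 8))                 # constant image: no information
uniform = np.arange(256).reshape(16, 16)  # every grey level once
print(shannon_entropy(flat))     # 0.0 bits/pixel
print(shannon_entropy(uniform))  # 8.0 bits/pixel, the maximum for 256 levels
```

Entropy-based criteria of this kind underpin many of the thresholding, registration, and encryption-quality measures discussed in the Special Issue.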

    Marine Toxins from Harmful Algae and Seafood Safety

    The rapid expansion of aquaculture around the world is increasingly being impacted by toxins produced by harmful marine microalgae, which threaten the safety of seafood. In addition, ocean climate change is leading to changing patterns in the distribution of the toxic dinoflagellates and diatoms that produce these toxins. New approaches are being developed to monitor harmful species and the toxins they produce. This Special Issue covers pioneering research on harmful marine microalgae and their toxins, including the identification of species and toxins; the development of new chemical and biological techniques to identify and monitor species and toxins; the uptake of marine biotoxins in seafood and marine ecosystems; and the distribution and abundance of toxins, particularly in relation to climate change.

    Diffeomorphic Transformations for Time Series Analysis: An Efficient Approach to Nonlinear Warping

    The proliferation and ubiquity of temporal data across many disciplines has sparked interest in similarity, classification, and clustering methods specifically designed to handle time series data. A core issue when dealing with time series is determining their pairwise similarity, i.e., the degree to which a given time series resembles another. Traditional distance measures such as the Euclidean distance are not well suited due to the time-dependent nature of the data. Elastic metrics such as dynamic time warping (DTW) offer a promising approach, but are limited by their computational complexity, non-differentiability, and sensitivity to noise and outliers. This thesis proposes novel elastic alignment methods that use parametric and diffeomorphic warping transformations to overcome the shortcomings of DTW-based metrics. The proposed method is differentiable and invertible, well suited for deep learning architectures, robust to noise and outliers, computationally efficient, and expressive and flexible enough to capture complex patterns. Furthermore, a closed-form solution was developed for the gradient of these diffeomorphic transformations, which allows an efficient search in the parameter space, leading to better solutions at convergence. Leveraging the benefits of these closed-form diffeomorphic transformations, this thesis proposes a suite of advancements that include: (a) an enhanced temporal transformer network for time series alignment and averaging, (b) a deep-learning-based time series classification model that simultaneously aligns and classifies signals with high accuracy, (c) an incremental time series clustering algorithm that is warping-invariant, scalable, and able to operate under limited computational and time resources, and finally (d) a normalizing flow model that enhances the flexibility of affine transformations in coupling and autoregressive layers. Comment: PhD Thesis, defended at the University of Navarra on July 17, 2023. 277 pages, 8 chapters, 1 appendix.
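    For contrast with the proposed diffeomorphic approach, the classic DTW baseline the thesis improves on can be sketched in a few lines (a textbook O(nm) dynamic-programming implementation with a squared-error local cost; illustrative only):

```python
import numpy as np

def dtw(a, b):
    # Classic dynamic time warping: fill an (n+1) x (m+1) cost table where
    # each cell extends the cheapest of the three monotone predecessors.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

x = [0.0, 1.0, 2.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # same shape, shifted in time
print(dtw(x, y))  # 0.0: the warping absorbs the time shift entirely
```

This table-filling recursion is quadratic in the series lengths and not differentiable in its inputs; the parametric warps proposed in the thesis replace it with a transformation whose gradient has a closed form.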