49 research outputs found

    Efficient Lattice Decoders for the Linear Gaussian Vector Channel: Performance & Complexity Analysis

    The theory of lattices, a mathematical framework for representing infinite sets of discrete points in Euclidean space, has become a powerful tool for analyzing many point-to-point digital and wireless communication systems, particularly systems that are well described by the linear Gaussian vector channel model. This is mainly due to three facts about channel codes constructed from lattices: they have a simple structure, they can achieve the fundamental limit (the capacity) of the channel, and, most importantly, they can be decoded using efficient decoders called lattice decoders. Since their introduction to multiple-input multiple-output (MIMO) wireless communication systems, sphere decoders have become an attractive and efficient implementation of lattice decoders, especially for small signal dimensions and/or moderate to large signal-to-noise ratios (SNRs). In the first part of this dissertation, we consider sphere decoding algorithms that implement lattice decoding. The exact complexity analysis of the basic sphere decoder for general space-time codes applied to the MIMO wireless channel is known to be difficult. Characterizing and understanding the complexity distribution is important, especially when the sphere decoder is used under practically relevant runtime constraints. In this work, we shed light on the (average) computational complexity of sphere decoding for the quasi-static, LAttice Space-Time (LAST) coded MIMO channel. Sphere decoders are efficient only in the high-SNR regime and at low signal dimensions, and exhibit exponential (average) complexity for low-to-moderate SNR and large signal dimensions. At the other extreme, linear and non-linear receivers such as the minimum mean-square error (MMSE) and MMSE decision-feedback equalization (DFE) receivers are considered attractive alternatives to sphere decoders in MIMO channels.
Unfortunately, the very low decoding complexity that these receivers provide comes at the expense of poor performance, especially for large signal dimensions. Designing low-complexity receivers for the MIMO channel that achieve near-optimal performance is a challenging problem that has driven much research in recent years. The problem can be solved through the use of lattice sequential decoding, which is capable of bridging the gap between sphere decoders and low-complexity linear decoders (e.g., the MMSE-DFE decoder). In the second part of this thesis, the asymptotic performance of the lattice sequential decoder for the LAST coded MIMO channel is analyzed. We determine the rates achievable by lattice coding and sequential decoding applied to such a channel. The diversity-multiplexing tradeoff under such a decoder is derived as a function of its parameter, the bias term. In this work, we analyze both the computational complexity distribution and the average complexity of such a decoder in the high-SNR regime. We show that there exists a cut-off multiplexing gain for which the average computational complexity of the decoder remains bounded. Our analysis reveals that, even at high SNR, there is a finite probability that the number of computations performed by the decoder becomes excessive during periods of high channel noise. This probability is usually referred to as the probability of decoding failure. It limits the performance of the lattice sequential decoder, especially in a one-way communication system. In a two-way communication system, such as a MIMO Automatic Repeat reQuest (ARQ) system, the feedback channel can be used to eliminate the decoding failure probability. In this work, we modify the lattice sequential decoder for the MIMO ARQ channel to predict the occurrence of decoding failure in advance and thus avoid wasting time trying to decode the message, resulting in a large saving in decoding complexity.
In particular, we study the throughput-performance-complexity tradeoffs of sequential decoding algorithms and the effect of preprocessing and termination strategies. We show, analytically and via simulation, that with a lattice sequential decoder implementing a simple yet efficient time-out algorithm for joint error detection and correction, the optimal tradeoff of the MIMO ARQ channel can be achieved with a significant reduction in decoding complexity.
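The tree search underlying sphere decoding can be sketched briefly. The snippet below is a minimal illustration, not the dissertation's algorithm: a depth-first search over the QR decomposition of a real-valued lattice generator, pruning any branch whose partial distance already exceeds the best squared radius found so far. The generator matrix, the integer enumeration window, and the initial radius are all illustrative assumptions.

```python
# Minimal sphere-decoder sketch for a real-valued lattice (toy example).
# Finds the integer vector x minimising ||y - G x|| by depth-first search
# over G = Q R (R upper triangular), pruning branches whose accumulated
# partial distance exceeds the best squared radius found so far.
import numpy as np

def sphere_decode(G, y, radius):
    n = G.shape[1]
    Q, R = np.linalg.qr(G)          # rotate: minimising ||y - Gx|| == ||z - Rx||
    z = Q.T @ y
    best = {"x": None, "d2": radius ** 2}

    def search(level, x, partial_d2):
        if partial_d2 >= best["d2"]:
            return                  # prune: already outside the current sphere
        if level < 0:
            best["x"], best["d2"] = x.copy(), partial_d2
            return                  # full lattice point inside the sphere
        # centre of the candidate interval at this level (R is upper triangular,
        # so x[level+1:] is already fixed when we reach this level)
        c = (z[level] - R[level, level + 1:] @ x[level + 1:]) / R[level, level]
        for k in range(-3, 4):      # small window around the centre; widen for larger radii
            x[level] = round(c) + k
            inc = (z[level] - R[level, level:] @ x[level:]) ** 2
            search(level - 1, x, partial_d2 + inc)

    search(n - 1, np.zeros(n, dtype=int), 0.0)
    return best["x"], float(np.sqrt(best["d2"]))
```

Even in this toy form, the number of nodes visited grows with the noise level and the signal dimension, which is precisely the complexity behaviour the dissertation analyzes.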

    Soft-output detection for transmit antenna index modulation-based schemes.

    Master of Science in Electronic Engineering. University of KwaZulu-Natal, Durban, 2016. Abstract available in PDF file.

    Computational problems of analysis of short next generation sequencing reads

    Short-read next generation sequencing (NGS) has had a significant impact on modern genomics, genetics, cell biology and medicine, especially on metagenomics, comparative genomics, polymorphism detection, mutation screening, transcriptome profiling, methylation profiling, chromatin remodelling and many other applications. However, NGS is prone to errors, which complicates scientific conclusions. NGS technologies consist of shearing DNA molecules into a collection of numerous small fragments, called a ‘library’, and their subsequent extensive parallel sequencing. The sequenced overlapping fragments are called ‘reads’; they are assembled into contiguous strings, and these contiguous sequences are in turn assembled into genomes for further analysis. Computational sequencing problems are those arising from the numerical processing of sequenced samples, which involves procedures such as quality scoring, mapping/assembly and, perhaps surprisingly, error correction of the data. This paper reviews post-processing errors and the computational methods used to discern them, and also includes a sequencing dictionary. We present quality control of raw data and the errors arising during alignment of sequencing reads to a reference genome and during assembly. Finally, this work presents the identification of mutations (“variant calling”) in sequencing data and its quality control.
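As a concrete illustration of the raw-data quality control step mentioned above, the sketch below filters reads by mean Phred score. The record layout, the Phred+33 encoding offset, and the threshold value are illustrative assumptions, not details taken from the review.

```python
# Minimal read quality-control sketch: drop reads whose mean Phred score
# falls below a cutoff. Assumes Phred+33 ASCII encoding (standard in
# modern FASTQ files) and a simple (header, sequence, quality) record.
def mean_phred(quality_line, offset=33):
    """Average Phred score of an ASCII-encoded quality string."""
    return sum(ord(c) - offset for c in quality_line) / len(quality_line)

def filter_reads(records, min_mean_q=20):
    """Keep (header, seq, qual) records whose mean quality passes the cutoff."""
    return [r for r in records if mean_phred(r[2]) >= min_mean_q]
```

For example, a read whose quality string is all `'I'` characters (Phred 40) passes a cutoff of 20, while one of all `'!'` characters (Phred 0) is discarded; real pipelines add per-base trimming and adapter removal on top of this.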

    Computational Approaches for Predicting Drug Targets

    This thesis reports the development of several computational approaches to predict human disease proteins and to assess their value as drug targets, using in-house domain functional families (CATH-FunFams). CATH-FunFams comprise evolutionarily related protein domains with high structural and functional similarity. External resources were used to identify proteins associated with disease and their genetic variations. These were then mapped to the CATH-FunFams, together with information on drugs bound to any relatives within the FunFam. A number of novel approaches were then used to predict the proteins likely to be driving disease and to assess whether drugs could be repurposed within the FunFams for targeting these putative driver proteins. The first work chapter of this thesis reports the mapping of drugs to CATH-FunFams to identify druggable FunFams, based on statistical overrepresentation of drug targets within the FunFam. Eighty-one druggable CATH-FunFams were identified, and the dispersion of their relatives on a human protein interaction network was analysed to assess their propensity to be associated with side effects. In the second work chapter, putative drug targets for bladder cancer were identified using a novel computational protocol that expands a set of known bladder cancer genes with genes highly expressed in bladder cancer and highly associated with known bladder cancer genes in a human protein interaction network. Thirty-five new bladder cancer targets were identified in druggable FunFams, for some of which FDA-approved drugs could be repurposed from other protein domains in the FunFam. In the final work chapter, protein kinases and kinase inhibitors, an important class of human drug targets, were analysed. A novel classification protocol was applied to give a comprehensive classification of the kinases, which was benchmarked and compared with other widely used kinase classifications.
Drug information from ChEMBL was mapped to the kinase FunFams, and analyses of the protein network characteristics of the kinase relatives in each FunFam were used to identify those families likely to be associated with side effects.
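The statistical overrepresentation of drug targets within a family, as described above, is commonly assessed with a hypergeometric tail test: given N proteins of which K are known drug targets, how unlikely is it that a family of n members contains k or more targets by chance? The thesis does not specify its exact statistic here, so the following is an illustrative sketch of that standard approach.

```python
# Hypergeometric tail p-value for target overrepresentation in a family.
# N: total proteins, K: known drug targets among them,
# n: family size, k: observed drug targets in the family.
from math import comb

def enrichment_pvalue(N, K, n, k):
    """P(X >= k) where X ~ Hypergeometric(N, K, n)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total
```

A family whose p-value falls below a chosen significance threshold (after multiple-testing correction across all families) would be flagged as druggable under this scheme.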

    Bioinformatics of genome evolution: from ancestral to modern metabolism

    Bioinformatics, the interdisciplinary field that blends computer science and biostatistics with the biological and biomedical sciences, is expected to gain a central role in the near future. Indeed, it has already affected several fields of biology, providing crucial hints for the understanding of biological systems and allowing a more accurate design of wet-lab experiments. In this work, the analysis of sequence data has been used in different fields, such as evolution (e.g. the assembly and evolution of metabolism), infection control (e.g. the horizontal flow of antibiotic resistance), and ecology (bacterial bioremediation).

    Nevada Test Site-Directed Research and Development: FY 2006 Report
