229 research outputs found

    Iterative Decoding of Trellis-Constrained Codes inspired by Amplitude Amplification (Preliminary Version)

    We propose a decoder for Trellis-Constrained Codes, a super-class of Turbo and LDPC codes. Inspired by amplitude amplification from quantum computing, we attempt to amplify the relative likelihood of the most likely codeword until it stands out from all other codewords.

    Reconfigurable architectures for beyond 3G wireless communication systems


    Exploratory multivariate longitudinal data analysis and models for multivariate longitudinal binary data

    Longitudinal data arise when repeated measurements from the same subject are observed over time. In this thesis, exploratory data analysis and models are used jointly to analyze longitudinal data, which leads to stronger and better justified conclusions. The complex structure of longitudinal data with covariates requires new visual methods that enable interactive exploration. Here we catalog the general principles of exploratory data analysis for multivariate longitudinal data and illustrate the use of the linked-brushing approach for studying the mean structure over time. These methods make it possible to reveal the unexpected, to explore the interaction between responses and covariates, to observe individual variation, to understand structure in multiple dimensions, and to diagnose and fix models. We also propose models for multivariate longitudinal binary data that directly model marginal covariate effects while accounting for the dependence across time via a transition structure and across responses within a subject at a given time via random effects. Markov chain Monte Carlo methods, specifically Gibbs sampling with Hybrid steps, are used to sample from the posterior distribution of the parameters. Graphical and quantitative checks are used to assess model fit. The methods are illustrated on several real datasets, primarily the Iowa Youth and Families Project. (This dissertation is a compound document, containing both a paper copy and a CD.)
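
    The samplers used for these models are specific to the thesis, but the alternating-conditional idea behind Gibbs sampling can be illustrated with a minimal, hypothetical sketch for a zero-mean bivariate normal; the function name, correlation parameter, and sample count below are assumptions made for the example, not part of the dissertation.

```python
import numpy as np

def gibbs_bivariate_normal(rho=0.6, num_samples=5000, seed=0):
    """Minimal Gibbs sampler for a zero-mean, unit-variance bivariate normal.

    Each coordinate is drawn in turn from its full conditional given the
    current value of the other; iterating yields draws from the joint
    distribution. Posterior samplers for model parameters use the same
    alternating-conditional mechanism on a much larger scale.
    """
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    cond_sd = np.sqrt(1.0 - rho ** 2)     # sd of x given y (and of y given x)
    draws = np.empty((num_samples, 2))
    for t in range(num_samples):
        x = rng.normal(rho * y, cond_sd)  # x | y ~ N(rho * y, 1 - rho^2)
        y = rng.normal(rho * x, cond_sd)  # y | x ~ N(rho * x, 1 - rho^2)
        draws[t] = x, y
    return draws
```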

    Measurement based fault tolerant error correcting quantum codes on foliated cluster states


    Origin of the torsional oscillation pattern of solar rotation

    A model is presented that explains the 'torsional oscillation' pattern of deviations in the solar rotation rate as a geostrophic flow. The flow is driven by temperature variations near the surface due to the enhanced emission of radiation by the small-scale magnetic field. The model explains the sign of the flow, its amplitude, and the fact that the maxima occur near the boundaries of the main activity belts. The amplitude of the flow decreases with depth from its maximum at the surface but penetrates over much of the depth of the convection zone, in agreement with the data from helioseismology. The model predicts that the flow is axisymmetric only on average and in reality consists of a superposition of circulations around areas of enhanced magnetic activity. It must be accompanied by a meridional flow component, which declines more rapidly with depth. Comment: Expanded version, as accepted by Solar Physics. Fig. 1 is in colour.

    Network Training for Continuous Speech Recognition

    Spoken language processing is one of the oldest and most natural modes of information exchange between human beings. For centuries, people have tried to develop machines that can understand and produce speech the way humans do naturally. The biggest obstacle to modeling speech with computer programs and mathematics is that language is instinctive, whereas the vocabulary and dialect used in communication are learned. Human beings are genetically equipped with the ability to learn languages, and culture imprints the vocabulary and dialect on each member of society. This thesis examines the role of pattern classification in the recognition of human speech, i.e., machine learning techniques that are currently being applied to the spoken language processing problem. The primary objective of this thesis is to create a network training paradigm that allows for direct training of multi-path models and alleviates the need for complicated systems and training recipes. A traditional trainer uses an expectation-maximization (EM) based supervised training framework to estimate the parameters of a spoken language processing system. EM-based parameter estimation for speech recognition is performed using several complicated stages of iterative reestimation, which are typically prone to human error. The network training paradigm reduces the complexity of the training process while retaining the robustness of the EM-based supervised training framework. The hypothesis of this thesis is that the network training paradigm can achieve recognition performance comparable to a traditional trainer while alleviating the need for complicated systems and training recipes for spoken language processing systems.
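
    The network training paradigm itself is not shown here; as context for the EM-style reestimation that traditional trainers iterate, the loop can be illustrated with a toy one-dimensional Gaussian mixture. The sketch below is hypothetical and is not the thesis's trainer or any particular speech toolkit.

```python
import numpy as np

def em_gmm_1d(x, num_components=2, num_iters=20, seed=0):
    """Toy EM reestimation loop for a one-dimensional Gaussian mixture.

    Alternates an expectation step (posterior responsibilities) with a
    maximization step (parameter reestimation), the same pattern that
    EM-based speech trainers apply to far larger acoustic models.
    """
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(x)
    means = rng.choice(x, num_components, replace=False)
    variances = np.full(num_components, np.var(x))
    weights = np.full(num_components, 1.0 / num_components)
    for _ in range(num_iters):
        # E-step: responsibility of each component for each sample
        lik = np.exp(-0.5 * (x[:, None] - means) ** 2 / variances)
        lik /= np.sqrt(2.0 * np.pi * variances)
        resp = weights * lik
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: reestimate weights, means, and variances
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp * x[:, None]).sum(axis=0) / nk
        variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk
    return weights, means, variances
```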

    Direct-form adaptive equalization for underwater acoustic communication

    Submitted in partial fulfillment of the requirements for the degree of Master of Science at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2012. Adaptive equalization is an important aspect of communication systems in various environments. It is particularly important in underwater acoustic communication systems, as the channel has a long delay spread and is subject to the effects of time-varying multipath fading and Doppler spreading. The design of the adaptation algorithm has a profound influence on the performance of the system. In this thesis, we explore this aspect of the system. The emphasis of the work presented is on applying concepts from inference, decision theory, and information theory to provide an approach to deriving and analyzing adaptation algorithms. Limited work has been done so far on rigorously devising adaptation algorithms to suit a particular situation, and the aim of this thesis is to concretize such efforts and possibly to provide a mathematical basis for expanding them to other applications. We derive an algorithm for the adaptation of the coefficients of an equalizer when the receiver has limited or no information about the transmitted symbols, which we term the Soft-Decision Directed Recursive Least Squares algorithm. We demonstrate connections between the Expectation-Maximization (EM) algorithm and the Recursive Least Squares algorithm, and show how to derive a computationally efficient, purely recursive algorithm from the optimal EM algorithm. Then, we use our understanding of Markov processes to analyze the performance of the RLS algorithm in hard-decision directed mode, as well as of the Soft-Decision Directed RLS algorithm. We demonstrate scenarios in which the adaptation procedures fail catastrophically, and discuss why this happens. The lessons from the analysis guide the choice of models for the adaptation procedure. We then demonstrate how to use the derived algorithm in a practical system for underwater communication using turbo equalization. As the algorithm naturally incorporates soft information into the adaptation process, it fits easily into a turbo equalization framework. We thus provide an instance of how to use the information of a turbo equalizer in an adaptation procedure, which has not been well explored in the past. Experimental data are used to demonstrate the value of the algorithm in a practical context. Support from the agencies that funded this research is acknowledged: the Academic Programs Office at WHOI and the Office of Naval Research (through ONR Grants #N00014-07-10738 and #N00014-10-10259).
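
    The Soft-Decision Directed RLS algorithm derived in the thesis is not reproduced here; for context, a minimal sketch of the standard exponentially weighted RLS update for a linear equalizer, the baseline the thesis builds on and analyzes in decision-directed mode, might look as follows. The tap count, forgetting factor, and training-symbol interface are assumptions made for the illustration.

```python
import numpy as np

def rls_equalizer(received, training, num_taps=8, lam=0.99, delta=1e-2):
    """Exponentially weighted RLS adaptation of a linear equalizer.

    received : complex baseband samples at the equalizer input
    training : known transmitted symbols aligned with `received`
               (decision-directed operation would substitute symbol
               decisions for the training symbols)
    lam      : forgetting factor, 0 < lam <= 1
    """
    w = np.zeros(num_taps, dtype=complex)          # equalizer taps
    P = np.eye(num_taps, dtype=complex) / delta    # inverse correlation estimate
    outputs = []
    for n in range(num_taps - 1, len(training)):
        u = received[n - num_taps + 1:n + 1][::-1]  # regressor, most recent first
        y = np.vdot(w, u)                           # equalizer output w^H u
        e = training[n] - y                         # a priori error
        k = (P @ u) / (lam + np.vdot(u, P @ u))     # gain vector
        w = w + k * np.conj(e)                      # tap update
        P = (P - np.outer(k, np.conj(u) @ P)) / lam # inverse-correlation update
        outputs.append(y)
    return w, np.array(outputs)
```

    In decision-directed operation the training symbols would be replaced by hard or soft symbol decisions, which is the regime whose failure modes the thesis analyzes.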

    Trellis Decoding And Applications For Quantum Error Correction

    Compact, graphical representations of error-correcting codes called trellises are a crucial tool in classical coding theory, establishing both theoretical properties and performance metrics for practical use. The idea was extended to quantum error-correcting codes by Ollivier and Tillich in 2005. Here, we use their foundation to establish a practical decoder able to compute the most likely error for any stabilizer code over a finite field of prime dimension. We define a canonical form for the stabilizer group and use it to classify the internal structure of the graph. Similarities and differences between the classical and quantum theories are discussed throughout. Numerical results are presented which match or outperform current state-of-the-art decoding techniques. New construction techniques for large trellises are developed and practical implementations discussed. We then define a dual trellis and use algebraic graph theory to solve the most-likely-coset problem for any stabilizer code over a finite field of prime dimension at minimum added cost. Classical trellis theory makes occasional theoretical use of a graph product called the trellis product. We establish the relationship between the trellis product and the standard graph products and use it to provide a closed-form expression for the resulting graph, allowing it to be used in practice. We explore its properties and classify all idempotents. The special structure of the trellis allows us to present a factorization procedure for the product, which is much simpler than that of the standard products. Finally, we turn to an algorithmic study of the trellis and explore what coding-theoretic information can be extracted assuming no other information about the code is available. In the process, we present a state-of-the-art algorithm for computing the minimum distance of any stabilizer code over a finite field of prime dimension. We also define a new weight enumerator for stabilizer codes over F_2 incorporating the phases of each stabilizer and provide a trellis-based algorithm to compute it. Ph.D. dissertation.
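
    The stabilizer-code decoder is specific to the dissertation, but the classical operation it generalizes, maximum-likelihood decoding as a shortest-path search over a code trellis, can be sketched as follows. The edge-list trellis representation and the Gaussian-channel branch metric are assumptions made for this illustration.

```python
def viterbi_ml(trellis, received, noise_var=1.0):
    """Maximum-likelihood sequence decoding as a shortest-path search.

    trellis  : list of sections; each section is a list of edges
               (from_state, to_state, output_symbol)
    received : one noisy real-valued observation per section
    Returns the most likely output sequence under an AWGN metric.
    """
    cost = {0: 0.0}   # path metric into each state; start in state 0 by convention
    back = []         # per-section backpointers: to_state -> (from_state, symbol)
    for section, r in zip(trellis, received):
        new_cost, pointers = {}, {}
        for frm, to, sym in section:
            if frm not in cost:
                continue
            branch = (r - sym) ** 2 / (2.0 * noise_var)   # AWGN branch metric
            metric = cost[frm] + branch
            if to not in new_cost or metric < new_cost[to]:
                new_cost[to], pointers[to] = metric, (frm, sym)
        back.append(pointers)
        cost = new_cost
    # trace back from the best terminal state
    state = min(cost, key=cost.get)
    decoded = []
    for pointers in reversed(back):
        frm, sym = pointers[state]
        decoded.append(sym)
        state = frm
    return decoded[::-1]
```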

    Metrics to evaluate compression algorithms for raw SAR data

    Modern synthetic aperture radar (SAR) systems have size, weight, power and cost (SWAP-C) limitations, since platforms are becoming smaller while SAR operating modes are becoming more complex. Due to the computational complexity of the SAR processing required for modern SAR systems, performing the processing on board the platform is not a feasible option. Thus, SAR systems are producing an ever-increasing volume of data that needs to be transmitted to a ground station for processing. Compression algorithms are utilised to reduce the data volume of the raw data. However, these algorithms can cause degradation and losses that may reduce the effectiveness of the SAR mission. This study addresses the lack of standardised quantitative performance metrics to objectively quantify the performance of SAR data-compression algorithms. Metrics were therefore established in two domains, namely the data domain and the image domain. The data-domain metrics are used to determine the performance of the quantisation and the associated losses or errors it induces in the raw data samples. The image-domain metrics evaluate the quality of the SAR image after SAR processing has been performed. In this study, three well-known SAR compression algorithms were implemented and applied to three real SAR data sets obtained from a prototype airborne SAR system. The performance of these algorithms was evaluated using the proposed metrics. Important metrics in the data domain were found to be the compression ratio, the entropy, statistical parameters such as the skewness and kurtosis (which measure the deviation from the original distributions of the uncompressed data), and the dynamic range. The data histograms are an important visual representation of the effects of the compression algorithm on the data. Important error measures in the data domain are the signal-to-quantisation-noise ratio (SQNR) and, for applications where phase information is required to produce the output, the phase error. Important metrics in the image domain include the dynamic range, the impulse response function, the image contrast, and the error measure, the signal-to-distortion-noise ratio (SDNR). The metrics suggested that all three algorithms performed well and are thus well suited to the compression of raw SAR data. The fast Fourier transform block adaptive quantiser (FFT-BAQ) algorithm had the best overall performance, but analysis of the computational complexity of its compression steps indicated that it has the highest level of complexity of the three algorithms. Since different levels of degradation are acceptable for different SAR applications, a trade-off can be made between the data reduction and the degradation caused by the algorithm. Due to SWAP-C limitations, there also remains a trade-off between the performance and the computational complexity of the compression algorithm. Dissertation (MEng)--University of Pretoria, 2019. Electrical, Electronic and Computer Engineering. MEng. Unrestricted.
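
    As a small illustration of the data-domain error measures described above, the compression ratio, SQNR, and RMS phase error can be computed from the original and reconstructed samples roughly as follows. The function names and interfaces are assumptions made for the example, not the dissertation's implementation.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """Ratio of raw data volume to compressed data volume, both in bits."""
    return original_bits / compressed_bits

def sqnr_db(raw, reconstructed):
    """Signal-to-quantisation-noise ratio in dB for complex raw SAR samples.

    raw           : original complex samples
    reconstructed : samples after compression and decompression
    """
    signal_power = np.mean(np.abs(raw) ** 2)
    noise_power = np.mean(np.abs(raw - reconstructed) ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

def phase_error_rms(raw, reconstructed):
    """RMS phase error in radians, relevant when phase must be preserved."""
    dphi = np.angle(raw * np.conj(reconstructed))
    return np.sqrt(np.mean(dphi ** 2))
```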