
    Maximum likelihood sequence estimation from the lattice viewpoint.

    by Mow Wai Ho. Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. Bibliographies: leaves 98-104.
    Contents:
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Channel Model and Other Basic Assumptions --- p.5
    Chapter 1.2 --- Complexity Measure --- p.8
    Chapter 1.3 --- Maximum Likelihood Sequence Estimator --- p.9
    Chapter 1.4 --- The Viterbi Algorithm - An Implementation of MLSE --- p.11
    Chapter 1.5 --- Error Performance of the Viterbi Algorithm --- p.14
    Chapter 1.6 --- Suboptimal Viterbi-like Algorithms --- p.17
    Chapter 1.7 --- Trends of Digital Transmission and MLSE --- p.19
    Chapter 2 --- New Formulation of MLSE --- p.21
    Chapter 2.1 --- The Truncated Viterbi Algorithm --- p.21
    Chapter 2.2 --- Choice of Truncation Depth --- p.23
    Chapter 2.3 --- Decomposition of MLSE --- p.26
    Chapter 2.4 --- Lattice Interpretation of MLSE --- p.29
    Chapter 3 --- The Closest Vector Problem --- p.34
    Chapter 3.1 --- Basic Definitions and Facts About Lattices --- p.37
    Chapter 3.2 --- Lattice Basis Reduction --- p.40
    Chapter 3.2.1 --- Weakly Reduced Bases --- p.41
    Chapter 3.2.2 --- Derivation of the LLL-reduction Algorithm --- p.43
    Chapter 3.2.3 --- Improved Algorithm for LLL-reduced Bases --- p.52
    Chapter 3.3 --- Enumeration Algorithm --- p.57
    Chapter 3.3.1 --- Lattice and Isometric Mapping --- p.58
    Chapter 3.3.2 --- Enumerating Points in a Parallelepiped --- p.59
    Chapter 3.3.3 --- Enumerating Points in a Cube --- p.63
    Chapter 3.3.4 --- Enumerating Points in a Sphere --- p.64
    Chapter 3.3.5 --- Comparisons of Three Enumeration Algorithms --- p.66
    Chapter 3.3.6 --- Improved Enumeration Algorithm for the CVP and the SVP --- p.67
    Chapter 3.4 --- CVP Algorithm Using the Reduce-and-Enumerate Approach --- p.71
    Chapter 3.5 --- CVP Algorithm with Improved Average-Case Complexity --- p.72
    Chapter 3.5.1 --- CVP Algorithm for Norms Induced by Orthogonalization --- p.73
    Chapter 3.5.2 --- Improved CVP Algorithm using Norm Approximation --- p.76
    Chapter 4 --- MLSE Algorithm --- p.79
    Chapter 4.1 --- MLSE Algorithm for PAM Systems --- p.79
    Chapter 4.2 --- MLSE Algorithm for Unimodular Channel --- p.82
    Chapter 4.3 --- Reducing the Boundary Effect for PAM Systems --- p.83
    Chapter 4.4 --- Simulation Results and Performance Investigation for Example Channels --- p.86
    Chapter 4.5 --- MLSE Algorithm for Other Lattice-Type Modulation Systems --- p.91
    Chapter 4.6 --- Some Potential Applications --- p.92
    Chapter 4.7 --- Further Research Directions --- p.94
    Chapter 5 --- Conclusion --- p.96
    Bibliography --- p.10
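    The thesis builds its CVP and MLSE algorithms on LLL-reduced bases (Chapter 3.2). As a rough textbook-style sketch of what LLL reduction does, not the thesis's improved algorithm of Chapter 3.2.3, with the standard Lovász parameter delta = 3/4:

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """LLL-reduce the rows of an integer basis matrix B: a plain textbook
    Lenstra-Lenstra-Lovasz sketch with Lovasz parameter delta."""
    B = np.array(B, dtype=float)
    n = B.shape[0]

    def gram_schmidt():
        Bs = np.zeros_like(B)               # orthogonalized vectors b*_i
        mu = np.zeros((n, n))               # Gram-Schmidt coefficients
        for i in range(n):
            Bs[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Bs[j] / (Bs[j] @ Bs[j])
                Bs[i] -= mu[i, j] * Bs[j]
        return Bs, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):      # size-reduce b_k against b_j
            _, mu = gram_schmidt()
            B[k] -= round(mu[k, j]) * B[j]
        Bs, mu = gram_schmidt()
        # Lovasz condition: keep the order of b_{k-1}, b_k or swap them
        if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]
            k = max(k - 1, 1)
    return B.astype(int)
```

    The reduced basis spans the same lattice (the determinant is preserved up to sign) but consists of shorter, nearly orthogonal vectors, which is what makes the enumeration steps of Chapter 3.3 affordable.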

    Statistical Pruning for Near-Maximum Likelihood Decoding

    In many communications problems, maximum-likelihood (ML) decoding reduces to finding the closest (skewed) lattice point in N dimensions to a given point x ∈ C^N. In its full generality, this problem is known to be NP-complete. Recently, the expected complexity of the sphere decoder, a particular algorithm that solves the ML problem exactly, has been computed. An asymptotic analysis of this complexity has also been done, showing that the required computations grow exponentially in N for any fixed SNR. At the same time, numerical computations of the expected complexity show that there are certain ranges of rates, SNRs, and dimensions N for which the expected computation (counted as the number of scalar multiplications) involves no more than N^3 operations. However, when the dimension of the problem grows too large, the required computations become prohibitively large, as expected from the asymptotic exponential complexity. In this paper, we propose an algorithm that, for large N, offers substantial computational savings over the sphere decoder, while maintaining performance arbitrarily close to ML. We statistically prune the search space to a subset that, with high probability, contains the optimal solution, thereby reducing the complexity of the search. Bounds on the error performance of the new method are proposed. The complexity of the new algorithm is analyzed through an upper bound. The asymptotic behavior of the upper bound for large N is also analyzed and shows that it is likewise exponential, but much lower than that of the sphere decoder. Simulation results show that the algorithm is much more efficient than the original sphere decoder for smaller dimensions as well, and does not sacrifice much in terms of performance.
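    As a minimal illustration of the search being pruned, here is a textbook Fincke-Pohst-style depth-first sphere decoder over a real lattice (an illustrative sketch with a hard radius cutoff, not the paper's statistical pruning):

```python
import numpy as np

def sphere_decode(H, y, symbols, radius):
    """Find the symbol vector s minimizing ||y - H s|| by depth-first search,
    visiting only candidates whose partial distance stays inside `radius`."""
    Q, R = np.linalg.qr(H)                  # ||y - H s|| == ||Q.T y - R s||
    z = Q.T @ y
    n = H.shape[1]
    best = [radius ** 2, None]              # [best squared distance, point]

    def search(level, s, dist2):
        if level < 0:                       # full candidate inside the sphere
            if dist2 < best[0]:
                best[0], best[1] = dist2, s.copy()
            return
        for sym in symbols:
            s[level] = sym
            e = z[level] - R[level, level:] @ s[level:]
            d2 = dist2 + e ** 2
            if d2 < best[0]:                # prune branches leaving the sphere
                search(level - 1, s, d2)

    search(n - 1, np.zeros(n), 0.0)
    return best[1]                          # None if the sphere was empty
```

    The statistical pruning of the paper tightens the acceptance test at each level so that, with high probability, the optimal point is still retained while far fewer branches are visited.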

    Sphere-constrained ML detection for frequency-selective channels

    The maximum-likelihood (ML) sequence detection problem for channels with memory is investigated. The Viterbi algorithm (VA) provides an exact solution. Its computational complexity is linear in the length of the transmitted sequence, but exponential in the channel memory length. On the other hand, the sphere decoding (SD) algorithm also solves the ML detection problem exactly, and has an expected complexity that is a low-degree polynomial (often cubic) in the length of the transmitted sequence over a wide range of signal-to-noise ratios. We combine the sphere-constrained search strategy of SD with the dynamic programming principles of the VA. The resulting algorithm has its worst-case complexity determined by the VA, but often significantly lower expected complexity.
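    The VA side of the combination can be sketched as a textbook MLSE trellis search for a real ISI channel (a generic illustration, not the authors' combined algorithm); the exponential growth in the channel memory is visible in the size of the `states` list:

```python
import itertools

def viterbi_mlse(y, h, alphabet):
    """Viterbi MLSE for an ISI channel y[k] = sum_i h[i] * s[k-i] + noise.
    Trellis states are the last len(h)-1 symbols, so the complexity is
    linear in len(y) but exponential in the channel memory."""
    L = len(h) - 1                                  # channel memory
    states = list(itertools.product(alphabet, repeat=L))
    cost = {st: 0.0 for st in states}               # unknown initial state
    back = []
    for yk in y:
        new_cost, bp = {}, {}
        for old in states:                          # old = (s[k-1], ..., s[k-L])
            for s in alphabet:
                pred = h[0] * s + sum(h[i + 1] * old[i] for i in range(L))
                c = cost[old] + (yk - pred) ** 2    # accumulated branch metric
                new = (s,) + old[:-1]
                if c < new_cost.get(new, float("inf")):
                    new_cost[new], bp[new] = c, (old, s)
        cost = new_cost
        back.append(bp)
    st = min(cost, key=cost.get)                    # best surviving path
    seq = []
    for bp in reversed(back):                       # trace the path backwards
        st, s = bp[st]
        seq.append(s)
    return seq[::-1]
```

    The sphere-constrained variant keeps this trellis but discards partial paths whose accumulated metric already exceeds the search radius.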

    On the sphere-decoding algorithm II. Generalizations, second-order statistics, and applications to communications

    In Part I, we found a closed-form expression for the expected complexity of the sphere-decoding algorithm, both for the infinite and the finite lattice. We continue the discussion in this paper by generalizing the results to the complex version of the problem and using the expected-complexity expressions to determine situations where sphere decoding is practically feasible. In particular, we consider applications of sphere decoding to detection in multiantenna systems. We show that, for a wide range of signal-to-noise ratios (SNRs), rates, and numbers of antennas, the expected complexity is polynomial, in fact, often roughly cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can, in fact, be implemented in real time, a result with many practical implications. To provide complexity information beyond the mean, we derive a closed-form expression for the variance of the complexity of the sphere-decoding algorithm in a finite lattice. Furthermore, we consider the expected complexity of sphere decoding for channels with memory, where the lattice-generating matrix has a special Toeplitz structure. Results indicate that the expected complexity in this case is again polynomial over a wide range of SNRs, rates, data block lengths, and channel impulse response lengths.
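    The Toeplitz structure mentioned for channels with memory can be made concrete with a small sketch (an illustrative helper, not the paper's code): the lattice-generating matrix is the banded convolution matrix of the channel impulse response.

```python
import numpy as np

def channel_toeplitz(h, n):
    """Banded Toeplitz matrix H such that H @ s is the noiseless output of
    a channel with impulse response h driven by n input symbols s, i.e.
    H @ s equals np.convolve(s, h)."""
    L = len(h) - 1                  # channel memory
    H = np.zeros((n + L, n))
    for j in range(n):
        H[j:j + L + 1, j] = h       # each column is a shifted copy of h
    return H
```

    Every column is a shifted copy of h, and it is this banded structure that the complexity analysis for channels with memory exploits.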

    Nonparametric Bayesian Double Articulation Analyzer for Direct Language Acquisition from Continuous Speech Signals

    Human infants can discover words directly from unsegmented speech signals without any explicitly labeled data. In this paper, we develop a novel machine learning method called the nonparametric Bayesian double articulation analyzer (NPB-DAA) that can directly acquire language and acoustic models from observed continuous speech signals. For this purpose, we propose an integrative generative model that combines a language model and an acoustic model into a single generative model called the "hierarchical Dirichlet process hidden language model" (HDP-HLM). The HDP-HLM is obtained by extending the hierarchical Dirichlet process hidden semi-Markov model (HDP-HSMM) proposed by Johnson et al. An inference procedure for the HDP-HLM is derived using the blocked Gibbs sampler originally proposed for the HDP-HSMM. This procedure enables the simultaneous and direct inference of language and acoustic models from continuous speech signals. Based on the HDP-HLM and its inference procedure, we developed a novel double articulation analyzer. By assuming the HDP-HLM as a generative model of the observed time-series data, and by inferring the latent variables of the model, the method can analyze the latent double articulation structure, i.e., hierarchically organized latent words and phonemes, of the data in an unsupervised manner. This novel unsupervised double articulation analyzer, the NPB-DAA, can automatically estimate the double articulation structure embedded in speech signals. We also carried out two evaluation experiments using synthetic data and actual human continuous speech signals representing Japanese vowel sequences. In the word acquisition and phoneme categorization tasks, the NPB-DAA outperformed a conventional double articulation analyzer (DAA) and a baseline automatic speech recognition system whose acoustic model was trained in a supervised manner.
    Comment: 15 pages, 7 figures. Draft submitted to IEEE Transactions on Autonomous Mental Development (TAMD).
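    To make the "double articulation" idea concrete, here is a deliberately toy generative sketch of the two-level structure (words over phonemes over acoustic frames); the fixed lexicon and Gaussian emissions are invented stand-ins for the HDP-HLM's nonparametric components, which place priors over unbounded word and phoneme inventories:

```python
import random

PHONEME_MEANS = {"a": 0.0, "i": 2.0, "u": -2.0}     # 1-D "acoustic" centers
LEXICON = [("a", "i"), ("u", "a"), ("i",)]          # words as phoneme strings

def generate_utterance(n_words, frames_per_phoneme=3, noise=0.1, seed=0):
    """Sample an utterance: pick words (language-model level), expand each
    into phonemes, and emit noisy frames per phoneme (acoustic level)."""
    rng = random.Random(seed)
    words, frames = [], []
    for _ in range(n_words):
        word = rng.choice(LEXICON)                  # language-model level
        words.append(word)
        for ph in word:                             # acoustic-model level
            frames.extend(rng.gauss(PHONEME_MEANS[ph], noise)
                          for _ in range(frames_per_phoneme))
    return words, frames
```

    The NPB-DAA solves the inverse of this process: given only the frames, it infers the phoneme and word boundaries and inventories jointly.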

    Extraction of Spectral Functions from Dyson-Schwinger Studies via the Maximum Entropy Method

    It is shown how to apply the Maximum Entropy Method (MEM) to numerical Dyson-Schwinger studies for the extraction of the spectral functions of correlators from their corresponding Euclidean propagators. Differences from the application in lattice QCD are emphasized and, as an example, the spectral functions of massless quarks in cold and dense matter are presented.
    Comment: 16 pages, 7 figures.
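    Why a regularized method such as MEM is needed at all can be seen from a small numerical sketch of the forward problem (an illustrative exponential kernel and mock spectral function, not the paper's Dyson-Schwinger setup): the Euclidean propagator is a smeared image of the spectral function, and the smearing kernel is far too ill-conditioned for naive inversion.

```python
import numpy as np

# Forward problem behind the extraction: the Euclidean correlator G(tau) is
# the spectral function rho(omega) smeared by a kernel, here the illustrative
# choice K(tau, omega) = exp(-omega * tau).
tau = np.linspace(0.1, 3.0, 30)             # Euclidean "times"
omega = np.linspace(0.0, 10.0, 200)         # spectral frequencies
K = np.exp(-np.outer(tau, omega))

rho = np.exp(-((omega - 3.0) ** 2) / 0.5)   # mock spectral peak at omega = 3
G = K @ rho * (omega[1] - omega[0])         # discretized G = integral of K*rho

# Direct inversion of G for rho is hopeless: the kernel is severely
# ill-conditioned, which is what the entropy prior in MEM regularizes.
print(f"condition number of K: {np.linalg.cond(K):.2e}")
```

    The condition number is astronomically large, so many very different spectral functions reproduce G within any realistic numerical accuracy; MEM selects among them with an entropy prior.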