
    Partially Coupled Codes for TB-based Transmission

    In this thesis, we investigate the design of partially coupled codes for the transport block (TB) based transmission protocol adopted in the 4G/5G mobile network standards. In this protocol, an information sequence in a TB is segmented into multiple code blocks (CBs) and each CB is protected by a channel codeword independently. This is inefficient in terms of transmit power and spectral efficiency because any erroneous CB in a TB leads to the retransmission of the whole TB. An important research problem related to TB-based transmission is therefore how to improve the TB error rate (TBER) performance so that the number of retransmissions is reduced. To tackle this challenge, we present a class of spatial coupling techniques for the TB encoding operation called partial coupling, which has two subclasses: partial information coupling (PIC) and partial parity coupling (PPC). Specifically, the coupling is performed such that a fraction of the information/parity sequence of the component code at the current CB is used as input to the component encoder at the next CB, leading to improved TBER performance. One of the appealing features of partial coupling (both PIC and PPC) is that it can be applied to any component code without changing its encoding and decoding architectures, making the coupled codes compatible with the TB-based transmission protocol. The main body of this thesis consists of two parts. In the first part, we apply both PIC and PPC to turbo codes. We investigate various coupling designs and analyse the performance of the partially coupled turbo codes over the binary erasure channel via density evolution (DE). Both simulation results and DE analysis show that this class of codes can approach channel capacity with large blocklength. In the second part, we construct PIC polar codes. We show that PIC can effectively improve the error performance of finite-length polar codes by exploiting the channel polarization phenomenon. The DE-based performance analysis is also conducted. For both turbo codes and polar codes, we show that the partially coupled codes have significant performance gains over their uncoupled counterparts, demonstrating the effectiveness of partial coupling.
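The coupling step described above can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual construction: the component code here is a toy systematic single-parity code, and the coupling fraction and block sizes are arbitrary choices. The key point it shows is that the component encoder itself is unchanged; only its input is extended with bits carried over from the previous CB.

```python
import numpy as np

rng = np.random.default_rng(0)

def component_encode(info):
    """Toy systematic component code: append one overall parity bit.
    Stands in for the turbo/polar component encoder, which PIC leaves unchanged."""
    parity = np.array([info.sum() % 2], dtype=np.uint8)
    return np.concatenate([info, parity])

def pic_encode_tb(cbs, frac=0.25):
    """Partial information coupling across the code blocks of one TB:
    a fraction `frac` of the information bits of CB t is prepended to the
    encoder input of CB t+1 (the fraction and names are illustrative)."""
    coupled = np.empty(0, dtype=np.uint8)  # bits carried over from previous CB
    codewords = []
    for info in cbs:
        enc_input = np.concatenate([coupled, info])
        codewords.append(component_encode(enc_input))
        n_couple = int(len(info) * frac)
        coupled = info[-n_couple:] if n_couple else np.empty(0, dtype=np.uint8)
    return codewords

tb = [rng.integers(0, 2, 8, dtype=np.uint8) for _ in range(3)]
cws = pic_encode_tb(tb)
print([len(c) for c in cws])  # first CB has no incoming coupled bits
```

Because each coupled fraction is protected by two component codewords, an erased or erroneous segment in one CB can be recovered through its neighbour, which is the source of the TBER gain.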

    Rate-Equivocation Optimal Spatially Coupled LDPC Codes for the BEC Wiretap Channel

    We consider transmission over a wiretap channel where both the main channel and the wiretapper's channel are Binary Erasure Channels (BEC). We use convolutional LDPC ensembles based on the coset encoding scheme. More precisely, we consider regular two-edge-type convolutional LDPC ensembles. We show that such a construction achieves the whole rate-equivocation region of the BEC wiretap channel. Convolutional LDPC ensembles were introduced by Felström and Zigangirov and are known to have excellent thresholds. Recently, Kudekar, Richardson, and Urbanke proved that the phenomenon of "spatial coupling" converts the MAP threshold into the BP threshold for transmission over the BEC. The phenomenon of spatial coupling has been observed to hold for general binary memoryless symmetric channels. Hence, we conjecture that our construction is a universal rate-equivocation-achieving construction when the main channel and the wiretapper's channel are binary memoryless symmetric channels and the wiretapper's channel is degraded with respect to the main channel.
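Coset encoding, the primitive used above, can be illustrated with a toy example: the secret message is embedded as the syndrome of the transmitted word, so the legitimate receiver recovers it from the syndrome while the eavesdropper's uncertainty about which coset element was sent provides equivocation. The small dense parity-check matrix below is only a stand-in for the regular two-edge-type convolutional LDPC ensembles of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy parity-check matrix H over GF(2); illustrative stand-in only.
H = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1]], dtype=np.uint8)

def coset_encode(msg, H, rng):
    """Coset encoding: transmit a uniformly random x with H @ x = msg (mod 2),
    i.e. the secret message indexes a coset of the code ker(H)."""
    n = H.shape[1]
    while True:  # rejection sampling; fine for a toy blocklength
        x = rng.integers(0, 2, n, dtype=np.uint8)
        if np.array_equal(H @ x % 2, msg):
            return x

msg = np.array([1, 0], dtype=np.uint8)
x = coset_encode(msg, H, rng)
print(H @ x % 2)  # the receiver recovers the message as the syndrome of x
```

In practice the random coset element is produced by encoding the message together with uniformly random auxiliary bits through a generator-style map, not by rejection sampling; the sampling loop here is only for brevity.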

    Parallel Anisotropic Unstructured Grid Adaptation

    Computational Fluid Dynamics (CFD) has become critical to the design and analysis of aerospace vehicles. Parallel grid adaptation that resolves multiple scales with anisotropy is identified as one of the challenges in the CFD Vision 2030 Study to increase the capacity and capability of CFD simulation. The Study also cautions that computer architectures are undergoing a radical change and that dramatic increases in algorithm concurrency will be required to exploit full performance. This paper reviews four different methods for parallel anisotropic grid generation. They cover both ends of the spectrum: (i) taking existing state-of-the-art software optimized for a single core and modifying it for parallel platforms, and (ii) designing and implementing scalable software with incomplete, but rapidly maturing, functionality. A brief overview of each grid adaptation system is presented in the context of a telescopic approach for multilevel concurrency. These methods employ different approaches to enable parallel execution, which provides a unique opportunity to illustrate the relative behavior of each approach. Qualitative and quantitative metric evaluations are used to draw lessons for future developments in this critical area for parallel CFD simulation.

    High-Girth Matrices and Polarization

    The girth of a matrix is the least number of linearly dependent columns, in contrast to the rank, which is the largest number of linearly independent columns. This paper considers the construction of {\it high-girth} matrices, whose probabilistic girth is close to their rank. Random matrices can be used to show the existence of high-girth matrices with constant relative rank, but the construction is non-explicit. This paper uses a polar-like construction to obtain a deterministic and efficient construction of high-girth matrices for arbitrary fields and relative ranks. Applications to coding and sparse recovery are discussed.
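The defining quantity can be checked by brute force on a small example. The sketch below only illustrates the definition over GF(2); it does not reproduce the paper's polar-like construction. For a parity-check matrix, the girth in this sense coincides with the minimum distance of the code it defines.

```python
import itertools
import numpy as np

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]    # swap pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                # eliminate column c elsewhere
        rank += 1
    return rank

def girth(M):
    """Least number of linearly dependent columns of M over GF(2), by brute
    force; returns None if all columns are linearly independent."""
    n = M.shape[1]
    for g in range(1, n + 1):
        for cols in itertools.combinations(range(n), g):
            if gf2_rank(M[:, list(cols)]) < g:
                return g
    return None

# Parity-check matrix of the [7,4] Hamming code: its girth equals the
# code's minimum distance.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
print(girth(H))  # → 3
```

The "probabilistic girth" of the paper relaxes this worst-case notion to hold for most column subsets; the brute-force search above is exponential and only practical at toy sizes.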

    Approaching the Rate-Distortion Limit with Spatial Coupling, Belief propagation and Decimation

    We investigate an encoding scheme for lossy compression of a binary symmetric source based on simple spatially coupled Low-Density Generator-Matrix codes. The degree of the check nodes is regular and that of the code-bits is Poisson distributed, with an average depending on the compression rate. The performance of a low-complexity Belief Propagation Guided Decimation algorithm is excellent. The algorithmic rate-distortion curve approaches the optimal curve of the ensemble as the width of the coupling window grows. Moreover, as the check degree grows, both curves approach the ultimate Shannon rate-distortion limit. The Belief Propagation Guided Decimation encoder is based on the posterior measure of a binary symmetric test-channel. This measure can be interpreted as a random Gibbs measure at a "temperature" directly related to the "noise level" of the test-channel. We investigate the links between the algorithmic performance of the Belief Propagation Guided Decimation encoder and the phase diagram of this Gibbs measure. The phase diagram is investigated thanks to the cavity method of spin glass theory, which predicts a number of phase transition thresholds. In particular, the dynamical and condensation "phase transition temperatures" (equivalently, test-channel noise thresholds) are computed. We observe that: (i) the dynamical temperature of the spatially coupled construction saturates towards the condensation temperature; (ii) for large degrees the condensation temperature approaches the temperature (i.e. noise level) related to the information-theoretic Shannon test-channel noise parameter of rate-distortion theory. This provides heuristic insight into the excellent performance of the Belief Propagation Guided Decimation algorithm. The paper contains an introduction to the cavity method.
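The decimation loop can be sketched at toy scale. The sketch below replaces Belief Propagation with exact enumeration of the test-channel posterior P(u) ∝ exp(−β·d_H(Gu, y)), which is feasible only for tiny codes; the generator matrix G, source word y, and temperature β are arbitrary illustrative choices, not taken from the paper.

```python
import itertools
import numpy as np

# Toy LDGM code: the reconstruction is x_hat = G @ u mod 2. The real
# ensemble is spatially coupled with Poisson code-bit degrees.
G = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1],
              [1, 0, 0],
              [0, 0, 1]], dtype=np.uint8)
y = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)  # source word to compress
beta = 2.0  # inverse "temperature" of the binary symmetric test channel

def gibbs_marginals(G, y, beta, frozen):
    """Exact marginals of P(u) ~ exp(-beta * d_H(G u mod 2, y)), enumerating
    all u consistent with the already-decimated bits in `frozen`.
    (The enumeration stands in for BP, which estimates these marginals.)"""
    k = G.shape[1]
    probs, Z = np.zeros(k), 0.0
    for bits in itertools.product([0, 1], repeat=k):
        u = np.array(bits, dtype=np.uint8)
        if any(u[i] != v for i, v in frozen.items()):
            continue
        w = np.exp(-beta * np.sum((G @ u) % 2 != y))
        Z += w
        probs += w * u
    return probs / Z

# Decimation loop: repeatedly freeze the most biased free bit to its
# favoured value, then recompute the marginals.
frozen = {}
while len(frozen) < G.shape[1]:
    m = gibbs_marginals(G, y, beta, frozen)
    free = [i for i in range(G.shape[1]) if i not in frozen]
    pick = max(free, key=lambda i: abs(m[i] - 0.5))
    frozen[pick] = int(m[pick] > 0.5)

u_hat = np.array([frozen[i] for i in range(G.shape[1])], dtype=np.uint8)
distortion = np.mean((G @ u_hat) % 2 != y)
print(u_hat, distortion)
```

On this toy instance, 6 source bits are compressed to 3 (rate 1/2) and the loop lands on the codeword closest to y. At realistic blocklengths the graph is sparse, the marginals come from BP message passing, and the temperature is tuned to the target distortion.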

    Simulating planet migration in globally evolving disks

    Numerical simulations of planet-disk interactions are usually performed with hydro-codes that -- because they consider only an annulus of the disk, over a 2D grid -- cannot take into account the global evolution of the disk. However, the latter governs type II planetary migration, so the accuracy of the computed planetary evolution can be questioned. To develop an algorithm that models the local planet-disk interactions together with the global viscous evolution of the disk, we surround the usual 2D grid with a 1D grid ranging over the real extent of the disk. The 1D and 2D grids are coupled at their common boundaries via ghost rings, paying particular attention to the fluxes at the interface, especially the flux of angular momentum carried by waves. The computation is done in the frame centered on the center of mass to ensure angular momentum conservation. The global evolution of the disk and the local planet-disk interactions are both well described, and the feedback of one on the other can be studied with this algorithm, at a negligible additional computing cost with respect to usual algorithms.
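The ghost-ring exchange can be sketched as a data-flow pattern. This is a schematic only: all grid sizes, the density profile, and the function names below are invented, and the real code evolves the viscous disk equations rather than just copying values.

```python
import numpy as np

# 1D radial grid spans the whole disk; the 2D (r, phi) patch covers
# radial cells [i0, i1). Sizes are arbitrary illustrative choices.
nr1d, i0, i1, nphi = 20, 6, 14, 8
sigma1d = np.linspace(1.0, 0.1, nr1d)               # global azimuthally averaged density
sigma2d = np.tile(sigma1d[i0:i1, None], (1, nphi))  # local 2D patch

def fill_ghost_rings(sigma1d, sigma2d):
    """Copy the 1D solution into one ghost ring on each radial side of the
    2D patch; the 2D solver would use these as boundary conditions."""
    inner = np.full(sigma2d.shape[1], sigma1d[i0 - 1])
    outer = np.full(sigma2d.shape[1], sigma1d[i1])
    return np.vstack([inner, sigma2d, outer])

def sync_1d_from_2d(sigma1d, sigma2d):
    """Overwrite the overlapping 1D cells with the azimuthal average of the
    2D patch, so the global grid sees the local evolution."""
    out = sigma1d.copy()
    out[i0:i1] = sigma2d.mean(axis=1)
    return out

padded = fill_ghost_rings(sigma1d, sigma2d)  # before each 2D substep
sigma1d = sync_1d_from_2d(sigma1d, sigma2d)  # after each 2D substep
print(padded.shape)                          # 2D patch plus two ghost rings
```

The paper's care with interface fluxes (in particular the wave-carried angular momentum flux) is precisely about making this exchange conservative, which a plain copy-and-average like the above does not guarantee.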

    On privacy amplification, lossy compression, and their duality to channel coding

    We examine the task of privacy amplification from information-theoretic and coding-theoretic points of view. In the former, we give a one-shot characterization of the optimal rate of privacy amplification against classical adversaries in terms of the optimal type-II error in asymmetric hypothesis testing. This formulation can be easily computed to give finite-blocklength bounds and turns out to be equivalent to the smooth min-entropy bounds of Renner and Wolf [Asiacrypt 2005] and Watanabe and Hayashi [ISIT 2013], as well as a bound in terms of the E_\gamma divergence by Yang, Schaefer, and Poor [arXiv:1706.03866 [cs.IT]]. In the latter, we show that protocols for privacy amplification based on linear codes can be easily repurposed for channel simulation. Combined with known relations between channel simulation and lossy source coding, this implies that privacy amplification can be understood as a basic primitive for both channel simulation and lossy compression. Applied to symmetric channels or lossy compression settings, our construction leads to protocols of optimal rate in the asymptotic i.i.d. limit. Finally, appealing to the notion of channel duality recently detailed by us in [IEEE Trans. Info. Theory 64, 577 (2018)], we show that linear error-correcting codes for symmetric channels with quantum output can be transformed into linear lossy source coding schemes for classical variables arising from the dual channel. This explains a "curious duality" in these problems for the (self-dual) erasure channel observed by Martinian and Yedidia [Allerton 2003; arXiv:cs/0408008] and partly anticipates recent results on optimal lossy compression by polar and low-density generator matrix codes.
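A concrete instance of linear-code-based privacy amplification: a random binary matrix acts as a 2-universal hash that compresses a partially secret string into a shorter, nearly uniform key. The lengths below are illustrative; the leftover hash lemma guarantees near-uniformity only when the output length is below the adversary's smooth min-entropy, which is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(7)

def privacy_amplify(x, k, rng):
    """Linear privacy amplification: hash the raw string x down to k bits
    with a random binary matrix M (a 2-universal hash family). M plays the
    role of the public seed and can be revealed to the adversary."""
    n = len(x)
    M = rng.integers(0, 2, (k, n), dtype=np.uint8)
    return (M @ x) % 2

x = rng.integers(0, 2, 32, dtype=np.uint8)  # raw string, partly known to Eve
key = privacy_amplify(x, 8, rng)
print(key)
```

The linearity of the hash is what allows such protocols to be repurposed, as the abstract notes: the same matrix can be read as the parity-check map of a linear code, connecting extraction to syndrome-based coding constructions.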

    [Report of] Specialist Committee V.4: ocean, wind and wave energy utilization

    The committee's mandate was: concern for the structural design of ocean energy utilization devices, such as offshore wind turbines, their support structures, and fixed or floating wave and tidal energy converters. Attention shall be given to the interaction between the load and the structural response, including due consideration of the stochastic nature of waves, current, and wind.

    Spatial noise filtering through error correction for quantum sensing

    Quantum systems can be used to measure various quantities in their environment with high precision. Often, however, their sensitivity is limited by the decohering effects of this same environment. Dynamical decoupling schemes are widely used to filter environmental noise from signals, but their performance is limited by the spectral properties of the signal and noise at hand. Quantum error correction schemes have therefore emerged as a complementary technique without the same limitations. To date, however, they have failed to correct the dominant noise type in many quantum sensors, which couples to each qubit in a sensor in the same way as the signal. Here we show how quantum error correction can correct for such noise, which dynamical decoupling can only partially address. Whereas dynamical decoupling exploits temporal noise correlations in signal and noise, our scheme exploits spatial correlations. We give explicit examples in small quantum devices and demonstrate a method by which error-correcting codes can be tailored to their noise.

    New advances in photoionisation codes: How and what for?

    The study of photoionised gas in planetary nebulae (PNe) has played a major role in achieving, over the years, a better understanding of a number of physical processes pertinent to a broader range of fields than PNe studies alone, spanning from atomic physics to stellar evolution theories. Whilst empirical techniques are routinely employed for the analysis of the emission line spectra of these objects, the accurate interpretation of the observational data often requires the solution of a set of coupled equations, via the application of a photoionisation/plasma code. A number of large-scale codes have been developed since the late sixties, using various analytical or statistical techniques for the transfer of continuum radiation, mainly under the assumption of spherical symmetry and a few in 3D. These codes have proved to be powerful and in many cases essential tools, but a clear idea of the underlying physical processes and assumptions is necessary in order to avoid reaching misleading conclusions. A brief review of the field of photoionisation today is given here, with emphasis on recent developments, including the expansion of the models to the 3D domain. Attention is given to the identification of newly available observational constraints and how these can be used to extract useful information from realistic models. (abridged)