
    Bit-wise Unequal Error Protection for Variable Length Block Codes with Feedback

    The bit-wise unequal error protection problem, for the case when the number of groups of bits ℓ is fixed, is considered for variable-length block codes with feedback. An encoding scheme based on fixed-length block codes with erasures is used to establish inner bounds on the achievable performance for finite expected decoding time. A new technique for bounding the performance of variable-length block codes is used to establish outer bounds on the performance for a given expected decoding time. The inner and outer bounds match asymptotically and completely characterize the achievable region of rate-exponent vectors. The single-message message-wise unequal error protection problem for variable-length block codes with feedback is also solved as a necessary step along the way. Comment: 41 pages, 3 figures

    Analysis and Decoding of Linear Lee-Metric Codes with Application to Code-Based Cryptography

    Lee-metric codes are defined over integer residue rings endowed with the Lee metric. Even though this is one of the oldest metrics considered in coding theory and has interesting applications in, for instance, DNA storage and code-based cryptography, it has received relatively little attention compared to other distances such as the Hamming metric or the rank metric, and codes in the Lee metric are therefore less studied than codes in other metrics. Recently, interest in the Lee metric has increased due to its similarities with the Euclidean norm used in lattice-based cryptosystems. Additionally, it is a promising metric for reducing key sizes or signature sizes in code-based cryptosystems. However, basic coding-theoretic concepts, such as a tight Singleton-like bound or the construction of optimal codes, are still open problems. Thus, in this thesis we focus on some open problems concerning the Lee metric and Lee-metric codes. Firstly, we introduce generalized weights for the Lee metric in different settings by adapting the existing theory for the Hamming metric over finite rings. We discuss their utility and derive new Singleton-like bounds in the Lee metric. Finally, we abandon the classical idea of generalized weights and introduce generalized distances based on the algebraic structure of integer residue rings. This allows us to provide a novel and improved Singleton-like bound in the Lee metric over integer residue rings. For all the bounds we discuss the density of their optimal codes. Originally, the Lee metric was introduced over a q-ary alphabet to cope with phase shift modulation. We consider two channel models in the Lee metric. The first is a memoryless channel matched to the Lee metric under the decoding rule ``decode to the nearest codeword''. The second model is a block-wise channel introducing an error of fixed Lee weight, motivated by code-based cryptography, where errors of fixed weight are added intentionally.
We show that both channels coincide in the limit of large block length, meaning that their marginal distributions match. This distribution enables us to derive bounds on the asymptotic growth rate of the surface and volume spectra of spheres and balls in the Lee metric, and to bound the block error probability of the two channel models via random coding union bounds. As vectors of fixed Lee weight are also of interest for cryptographic applications, we discuss the problem of scalar multiplication in the Lee metric in the asymptotic regime and in a finite-length setting. The Lee weight of a vector may be increased or decreased by multiplication with a nontrivial scalar. From a cryptographic viewpoint this problem is interesting, since an attacker may be able to reduce the weight of the error and hence the complexity of the underlying problem. The construction of a vector of constant Lee weight using integer partitions is analyzed, and an efficient method is given for drawing vectors of constant Lee weight uniformly at random from the set of all such vectors. We then focus on regular LDPC code families defined over integer residue rings and analyze their performance with respect to the Lee metric. We determine the expected Lee weight enumerator for a random code in a fixed regular LDPC code ensemble and analyze its asymptotic growth rate, which allows us to estimate the expected decoding error probability. Finally, we estimate the error-correction performance of selected LDPC code families under belief propagation decoding and symbol message passing decoding and compare the two. The thesis concludes with an application of the derived results to code-based cryptography: we apply the marginal distribution to improve the fastest known information set decoding algorithm in the Lee metric.
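The Lee weight mentioned above, and the effect of scalar multiplication on it, can be illustrated with a minimal sketch (this is an illustration of the standard definition, not the thesis's actual code; the values chosen are hypothetical):

```python
# Lee weight over Z_q: each entry x contributes min(x, q - x),
# i.e. its distance to 0 around the cycle of the residue ring.
def lee_weight(vec, q):
    return sum(min(x % q, q - (x % q)) for x in vec)

q = 7
v = [1, 1, 2]                   # Lee weight 1 + 1 + 2 = 4
w = [(3 * x) % q for x in v]    # scalar multiple: [3, 3, 6]
# lee_weight(w, q) = 3 + 3 + 1 = 7 > 4: the weight grew, so conversely
# multiplying w by the inverse of 3 mod 7 would *reduce* its weight,
# which is exactly the attack surface discussed above.
```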

    A single-photon sampling architecture for solid-state imaging

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low-photon-flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization result of a 120x120 photodiode sensor on a 30um x 30um pitch with a 40ps time resolution and an estimated fill factor of approximately 70%, using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings. Comment: 24 pages, 3 figures, 5 tables
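The group-testing multiplexing idea can be sketched at toy scale: each pixel is wired to the TDC lines marked in its column of a binary matrix, the readout observes the OR (union) of the fired columns, and decoding searches for the pixel set explaining that union. The 6x4 matrix below is a hand-picked toy example, not one of the paper's optimized instances; it happens to give unique unions for all pixel sets of size at most 2.

```python
from itertools import combinations

# Rows = TDC lines, columns = pixels. A firing pixel triggers every
# TDC line whose row has a 1 in that pixel's column.
M = [
    [1, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]

def readout(fired_pixels):
    """TDC lines triggered by a set of firing pixels (OR of their columns)."""
    return frozenset(r for r, row in enumerate(M)
                     for p in fired_pixels if row[p])

def decode(lines, max_hits=2):
    """Brute force: every pixel set of size <= max_hits matching the readout.
    A single match means unique recovery of the arrival pattern."""
    n_pixels = len(M[0])
    return [set(c) for k in range(max_hits + 1)
            for c in combinations(range(n_pixels), k)
            if readout(c) == lines]
```

With this matrix, 6 TDC lines serve 4 pixels while still resolving any 2 simultaneous arrivals; the paper's contribution is constructing much larger matrices (e.g. 161 TDCs for 120x120 pixels, up to 4 arrivals) with a fast, non-brute-force decoder.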

    Coding for Parallel Channels: Gallager Bounds for Binary Linear Codes with Applications to Repeat-Accumulate Codes and Variations

    This paper is focused on the performance analysis of binary linear block codes (or ensembles) whose transmission takes place over independent and memoryless parallel channels. New upper bounds on the maximum-likelihood (ML) decoding error probability are derived. These bounds are applied to various ensembles of turbo-like codes, focusing especially on repeat-accumulate codes and their recent variations, which possess low encoding and decoding complexity and exhibit remarkable performance under iterative decoding. The framework of the second version of the Duman and Salehi (DS2) bounds is generalized to the case of parallel channels, along with the derivation of their optimized tilting measures. The connection between the generalized DS2 and the 1961 Gallager bounds, addressed by Divsalar and by Sason and Shamai for a single channel, is explored in the case of an arbitrary number of independent parallel channels. The generalization of the DS2 bound for parallel channels enables us to re-derive specific bounds which were originally derived by Liu et al. as special cases of the Gallager bound. In the asymptotic case where we let the block length tend to infinity, the new bounds are used to obtain improved inner bounds on the attainable channel regions under ML decoding. The tightness of the new bounds for independent parallel channels is exemplified for structured ensembles of turbo-like codes. The improved bounds with their optimized tilting measures show, irrespective of the block length of the codes, an improvement over the union bound and other previously reported bounds for independent parallel channels; this improvement is especially pronounced for moderate to large block lengths. Comment: Submitted to IEEE Trans. on Information Theory, June 2006 (57 pages, 9 figures)
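For context, the union bound that the abstract uses as its baseline can be sketched in its classic single-channel union-Bhattacharyya form, P_e ≤ Σ_{d≥1} A_d γ^d with γ = 2√(p(1-p)) for a BSC with crossover probability p (a standard textbook baseline, not the paper's parallel-channel DS2/Gallager machinery):

```python
import math

def union_bhattacharyya(weight_enum, p):
    """Union-Bhattacharyya bound on ML block error probability over a BSC.
    weight_enum maps Hamming weight d -> number of codewords A_d."""
    gamma = 2 * math.sqrt(p * (1 - p))  # Bhattacharyya parameter of the BSC
    return sum(a_d * gamma ** d for d, a_d in weight_enum.items() if d > 0)

# Weight enumerator of the (7,4) Hamming code: A_0=1, A_3=7, A_4=7, A_7=1
hamming74 = {0: 1, 3: 7, 4: 7, 7: 1}
bound = union_bhattacharyya(hamming74, p=0.01)  # roughly 0.066
```

The paper's point is that tilted DS2/Gallager-type bounds are tighter than this kind of union bound, especially at moderate-to-large block lengths and over parallel channels.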

    The Price of Uncertain Priors in Source Coding

    We consider the problem of one-way communication when the recipient does not know exactly the distribution that the messages are drawn from, but has a "prior" distribution that is known to be close to the source distribution, a problem first considered by Juba et al. We consider the question of how much longer the messages need to be in order to cope with the uncertainty about the receiver's prior and the source distribution, respectively, as compared to the standard source coding problem. We consider two variants of this uncertain priors problem: the original setting of Juba et al., in which the receiver is required to correctly recover the message with probability 1, and a setting introduced by Haramaty and Sudan, in which the receiver is permitted to fail with some probability ε. In both settings, we obtain lower bounds that are tight up to logarithmically smaller terms. In the latter setting, we furthermore present a variant of the coding scheme of Juba et al. with an overhead of log α + log(1/ε) + 1 bits, thus also establishing the nearly tight upper bound. Comment: To appear in IEEE Transactions on Information Theory
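A quick arithmetic instance of the overhead formula above (the parameter values are hypothetical, chosen only to make the numbers round):

```python
import math

def overhead_bits(alpha, eps):
    """Extra bits over baseline source coding: log2(alpha) + log2(1/eps) + 1,
    where alpha bounds the prior's multiplicative closeness to the source
    and eps is the allowed failure probability."""
    return math.log2(alpha) + math.log2(1 / eps) + 1

# A prior within a factor alpha = 8 of the source, failure probability 1/16:
# 3 + 4 + 1 = 8 extra bits.
overhead_bits(8, 1 / 16)
```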