
    Distance Properties of Short LDPC Codes and their Impact on the BP, ML and Near-ML Decoding Performance

    Parameters of LDPC codes, such as the minimum distance, stopping distance, stopping redundancy, and girth of the Tanner graph, and their influence on the frame error rate of BP, ML, and near-ML decoding over the BEC and the AWGN channel are studied. Both random and structured LDPC codes are considered. In particular, BP decoding is applied to code parity-check matrices with an increasing number of redundant rows, and the convergence of its performance to that of ML decoding is analyzed. The simulated BP, ML, and near-ML performance is compared with improved theoretical bounds on the error probability based on the exact weight spectrum coefficients and the exact stopping size spectrum coefficients. It is observed that, for some codes, performance very close to that of ML decoding can be achieved with a relatively small number of redundant rows, on both the BEC and the AWGN channel.
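    Over the BEC, BP decoding reduces to iterative erasure peeling: any check equation with exactly one erased position determines that position. The minimal Python sketch below (function and variable names are illustrative, not taken from the paper) applies this rule to a parity-check matrix H that may contain redundant rows:

        import numpy as np

        def peel_decode(H, y, erased):
            """Peeling (BP) decoder for the binary erasure channel.

            H      : binary parity-check matrix, possibly with redundant rows
            y      : received word; erased positions may hold arbitrary values
            erased : boolean mask marking the erased positions
            Returns the (partially) recovered word and a success flag.
            """
            H = np.asarray(H)
            y, erased = y.copy(), erased.copy()
            progress = True
            while erased.any() and progress:
                progress = False
                for row in H:
                    unknown = np.where((row == 1) & erased)[0]
                    if len(unknown) == 1:                # check with a single erasure
                        known = (row == 1) & ~erased
                        y[unknown[0]] = y[known].sum() % 2  # parity resolves the bit
                        erased[unknown[0]] = False
                        progress = True
            return y, not erased.any()

    Peeling fails exactly when the remaining erasures contain a stopping set of H, which is why appending redundant rows (sums of existing rows) can make BP succeed where the minimal parity-check matrix stalls, and why its performance can converge toward ML.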

    Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes

    We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the trade-off between performance and complexity of iterative decoding for the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via LovĂĄsz's Local Lemma and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations of an automorphism group decoder needed to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum likelihood decoding.

    Comment: 40 pages, 6 figures, 10 tables; submitted to the IEEE Transactions on Information Theory.
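    The control loop of such a decoder can be sketched as follows, reusing the peel_decode routine from the sketch above; this is only an assumed rendering of the idea, not the paper's algorithm. Each permutation is represented as an index array drawn from the code's automorphism group (for a cyclic code of length n, the n cyclic shifts qualify):

        import numpy as np

        def automorphism_group_decode(H, y, erased, perms):
            """Sketch: retry erasure peeling under code automorphisms.

            Each p in perms is a permutation (index array) from the code's
            automorphism group, so y[p] is again a noisy codeword and can be
            decoded against the same H; a success is mapped back afterwards.
            """
            for p in perms:
                cand, ok = peel_decode(H, y[p], erased[p])
                if ok:
                    return cand[np.argsort(p)], True  # undo the permutation
            return y, False

        # For a length-n cyclic code, the cyclic shifts are automorphisms:
        # perms = [np.roll(np.arange(n), k) for k in range(n)]

    Because an automorphism maps codewords to codewords, each permuted word is a valid input to the same peeling decoder, and a different permutation may move a stopping set into a resolvable configuration.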

    On the size of identifying codes in triangle-free graphs

    In an undirected graph $G$, a subset $C \subseteq V(G)$ that is a dominating set of $G$ and such that each vertex in $V(G)$ is dominated by a distinct subset of vertices from $C$ is called an identifying code of $G$. The concept of identifying codes was introduced by Karpovsky, Chakrabarty and Levitin in 1998. For a given identifiable graph $G$, let $M(G)$ be the minimum cardinality of an identifying code in $G$. In this paper, we show that for any connected identifiable triangle-free graph $G$ on $n$ vertices having maximum degree $\Delta \geq 3$, $M(G) \leq n - \tfrac{n}{\Delta + o(\Delta)}$. This bound is asymptotically tight up to constants for various classes of graphs, including $(\Delta-1)$-ary trees, which are known to have minimum identifying codes of size $n - \tfrac{n}{\Delta - 1 + o(1)}$. We also provide improved bounds for restricted subfamilies of triangle-free graphs, and conjecture that there exists some constant $c$ such that the bound $M(G) \leq n - \tfrac{n}{\Delta} + c$ holds for any nontrivial connected identifiable graph $G$.
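    Concretely, under the standard closed-neighbourhood convention, $C$ is an identifying code iff every vertex $v$ has a nonempty and pairwise-distinct identifying set $N[v] \cap C$. A small Python checker, with illustrative names (adj maps each vertex to its neighbour set; these are assumptions, not notation from the paper):

        def is_identifying_code(adj, C):
            """Check whether C identifies every vertex of the graph.

            adj : dict mapping each vertex to the set of its neighbours
            C   : candidate identifying code (a set of vertices)
            """
            seen = set()
            for v in adj:
                id_set = frozenset((adj[v] | {v}) & C)  # N[v] intersected with C
                if not id_set:        # v is not dominated by C
                    return False
                if id_set in seen:    # two vertices share an identifying set
                    return False
                seen.add(id_set)
            return True

        # Example: on the path 0-1-2-3, C = {0, 1, 2} is identifying;
        # the identifying sets are {0,1}, {0,1,2}, {1,2}, {2} -- all distinct.

    The checker also returns False for non-identifiable graphs (those containing twin vertices), since twins can never receive distinct identifying sets.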

    A single-photon sampling architecture for solid-state imaging

    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low-photon-flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization result: a 120×120 photodiode sensor on a 30 ”m × 30 ”m pitch, with a 40 ps time resolution and an estimated fill factor of approximately 70%, requires only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design recovered the spatio-temporal location of 98.6% of all photons that caused pixel firings.

    Comment: 24 pages, 3 figures, 5 tables.
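    The multiplexing scheme can be read as classical non-adaptive group testing: each pixel is wired to the TDCs given by its column of the binary interconnection matrix, so the set of TDCs that fire is the OR of the columns of all firing pixels. If the matrix is d-disjunct, up to d simultaneous arrivals are uniquely identifiable. The naive cover decoder below illustrates the principle only; the paper's fast decoding algorithm is not reproduced here, and all names are illustrative:

        import numpy as np

        def decode_firings(A, hits):
            """Naive cover decoder for a superimposed-code pixel readout.

            A    : boolean interconnection matrix, shape (num_tdcs, num_pixels);
                   column j lists the TDCs wired to pixel j
            hits : boolean vector of TDCs that registered a photon, i.e. the
                   OR of the columns of all simultaneously firing pixels
            If A is d-disjunct and at most d pixels fired, the pixels whose
            columns are fully contained in hits are exactly the firing ones.
            """
            A = np.asarray(A, dtype=bool)
            hits = np.asarray(hits, dtype=bool)
            # a pixel is ruled out if any of its TDCs stayed silent
            contained = ~(A & ~hits[:, None]).any(axis=0)
            return np.where(contained)[0]

    The d-disjunct property guarantees that no column is covered by the union of any d others, which is what makes this containment test unambiguous.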
    • 
