
    Reed-Muller codes for random erasures and errors

    This paper studies the parameters for which Reed-Muller (RM) codes over GF(2) can correct random erasures and random errors with high probability, and in particular when they can achieve capacity for these two classical channels. Necessarily, the paper also studies properties of evaluations of multivariate GF(2) polynomials on random sets of inputs. For erasures, we prove that RM codes achieve capacity in both the very high rate and very low rate regimes. For errors, we prove that RM codes achieve capacity in the very low rate regime, and for very high rates we show that they can uniquely decode at about the square root of the number of errors at capacity. The proofs of these four results are based on different techniques, which we find interesting in their own right. In particular, we study the following questions about E(m, r), the matrix whose rows are the truth tables of all monomials of degree ≤ r in m variables. What is the most (resp. least) number of random columns in E(m, r) that define a submatrix having full column rank (resp. full row rank) with high probability? We obtain tight bounds for very small (resp. very large) degrees r, which we use to show that RM codes achieve capacity for erasures in these regimes. Our decoding from random errors follows from a novel reduction: for every linear code C of sufficiently high rate we construct a new code C′, also of very high rate, such that for every subset S of coordinates, if C can recover from erasures in S, then C′ can recover from errors in S. Specializing this to RM codes and using our results for erasures implies our result on unique decoding of RM codes at high rate. Finally, two of our capacity-achieving results require tight bounds on the weight distribution of RM codes; we obtain such bounds by extending the recent bounds of \cite{KLP} from constant-degree to linear-degree polynomials.
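
    For concreteness, here is a minimal sketch (in Python; the function name is ours, not the paper's) of the matrix E(m, r) defined above, with one row per monomial of degree at most r and one column per evaluation point:

```python
# Sketch: rows of E(m, r) are truth tables of all monomials of degree <= r
# in m variables over GF(2); columns are the 2^m evaluation points.
from itertools import combinations, product
import numpy as np

def build_E(m, r):
    points = list(product([0, 1], repeat=m))           # all 2^m inputs
    rows = []
    for deg in range(r + 1):
        for subset in combinations(range(m), deg):     # one monomial per subset
            rows.append([int(all(x[i] for i in subset)) for x in points])
    return np.array(rows, dtype=np.uint8)

E = build_E(4, 2)
print(E.shape)  # (11, 16): 1 + 4 + 6 monomials by 16 evaluation points
```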

    Optimal and Efficient Decoding of Concatenated Quantum Block Codes

    We consider the problem of optimally decoding a quantum error correction code -- that is, of finding the optimal recovery procedure given the outcomes of partial "check" measurements on the system. In general, this problem is NP-hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message passing algorithm. We compare the performance of the message passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the 5-qubit code and the Steane code on a depolarizing channel demonstrate significant advantages of message passing in two respects. 1) Optimal decoding increases the error threshold, below which the error correction procedure can be used to reliably send information over a noisy channel, by as much as 94%. 2) For noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead. Comment: Published version
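
    As a reference point for the blockwise hard decoding baseline mentioned above, the sketch below (our own construction, not the paper's message passing algorithm) iterates the standard failure map of a distance-3 block of 5 qubits under depolarizing noise and bisects for its fixed point, giving a rough threshold estimate; it ignores error degeneracy, so the numbers are purely illustrative.

```python
# A distance-3 block fails (in this crude model) iff >= 2 of its 5 qubits err.
def level_error(p):
    return 1 - (1 - p)**5 - 5 * p * (1 - p)**4

def logical_error(p, levels):
    # effective error rate after `levels` rounds of concatenation
    for _ in range(levels):
        p = level_error(p)
    return p

# Bisect for the nontrivial fixed point p*: below p*, concatenation helps.
lo, hi = 1e-6, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if level_error(mid) < mid:
        lo = mid
    else:
        hi = mid
print(f"hard-decoding threshold estimate: p* ~ {lo:.4f}")
```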

    Hidden Markov Models and their Application for Predicting Failure Events

    We show how Markov mixed membership models (MMMM) can be used to predict the degradation of assets. We model the degradation path of individual assets to predict overall failure rates. Instead of a separate distribution for each hidden state, we use hierarchical mixtures of distributions in the exponential family. In our approach, the observation distribution of each state is a finite mixture of a small set of (simpler) distributions shared across all states. Using tied-mixture observation distributions offers several advantages. The mixtures act as a regularizer for typically very sparse problems, and they reduce the computational effort of the learning algorithm since there are fewer distributions to be found. Using shared mixtures enables sharing of statistical strength between the Markov states and thus transfer learning. We determine for individual assets the trade-off between the risk of failure and extended operating hours by combining an MMMM with a partially observable Markov decision process (POMDP) to dynamically optimize the policy for when and how to maintain the asset. Comment: To be published in the proceedings of ICCS 2020; also available as EasyChair Preprint no. 3183 (2020)
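
    To make the tied-mixture idea concrete, here is a small illustrative sketch (all names and parameter values are ours): an HMM whose three states share one pool of two Gaussian components and differ only in their mixture weights, with the likelihood computed by the scaled forward algorithm.

```python
import numpy as np
from scipy.stats import norm

means, sds = np.array([0.0, 5.0]), np.array([1.0, 1.0])  # shared components
W = np.array([[0.9, 0.1],      # per-state weights over the shared pool
              [0.5, 0.5],
              [0.1, 0.9]])
K = W.shape[0]                 # number of hidden states
A = np.full((K, K), 1.0 / K)   # flat transition matrix (illustrative)
pi = np.full(K, 1.0 / K)       # flat initial distribution

def loglik(x):
    """Scaled forward algorithm with tied-mixture emissions."""
    comp = norm.pdf(x[:, None], means, sds)  # (T, J): shared component densities
    B = comp @ W.T                           # (T, K): state emission densities
    alpha, ll = pi * B[0], 0.0
    for t in range(1, len(x)):
        c = alpha.sum(); ll += np.log(c); alpha /= c   # rescale, keep log mass
        alpha = (alpha @ A) * B[t]
    return ll + np.log(alpha.sum())

print(loglik(np.array([0.2, 0.1, 4.8, 5.3])))
```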

    Quantifying the Performance of Quantum Codes

    We study the properties of error correcting codes for noise models in the presence of asymmetries and/or correlations by means of the entanglement fidelity and the code entropy. First, we consider a dephasing Markovian memory channel and characterize the performance of both a repetition code and an error avoiding code in terms of the entanglement fidelity. We also consider the concatenation of such codes and show that it is especially advantageous in the regime of partial correlations. We then characterize the effectiveness of the codes and their concatenation by means of the code entropy and find, in particular, that the effort required for recovering such codes decreases when the error probability decreases and the memory parameter increases. Second, we consider both symmetric and asymmetric depolarizing noisy quantum memory channels and perform quantum error correction via the five qubit stabilizer code. We characterize this code by means of the entanglement fidelity and the code entropy as a function of the asymmetric error probabilities and the degree of memory. Specifically, we uncover that while the asymmetry in the depolarizing errors does not affect the entanglement fidelity of the five qubit code, it becomes a relevant feature when the code entropy is used as a performance quantifier. Comment: 21 pages, 10 figures
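
    As a toy analogue of these two quantifiers (the paper's setting is the five qubit code and memory channels; this sketch is not that), one can compute the entanglement fidelity of the 3-qubit phase-flip repetition code under independent dephasing, together with the Shannon entropy of the error-pattern distribution as a crude stand-in for a code-entropy-style measure.

```python
import numpy as np

def fidelity(p):
    # recovery succeeds iff at most one of the three qubits dephases
    return (1 - p)**3 + 3 * p * (1 - p)**2

def error_entropy(p):
    # distribution over the 8 possible Z-error patterns on 3 qubits
    probs = np.array([p**k * (1 - p)**(3 - k) for k in (0, 1, 1, 1, 2, 2, 2, 3)])
    return -(probs * np.log2(probs)).sum()

for p in (0.01, 0.05, 0.10):
    print(f"p={p:.2f}  F={fidelity(p):.4f}  H={error_entropy(p):.3f} bits")
```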

    An iterative algorithm for parametrization of shortest length shift registers over finite rings

    The construction of shortest feedback shift registers for a finite sequence S_1,...,S_N is considered over the finite ring Z_{p^r}. A novel algorithm is presented that yields a parametrization of all shortest feedback shift registers for the sequence of numbers S_1,...,S_N, thus solving an open problem in the literature. The algorithm iteratively processes each number, starting with S_1, and constructs at each step a particular type of minimal Gröbner basis. The construction involves a simple update rule at each step, which leads to computational efficiency. It is shown that the algorithm simultaneously computes a similar parametrization for the reciprocal sequence S_N,...,S_1. Comment: Submitted
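
    For orientation, the field case r = 1 is handled by the classical Berlekamp-Massey algorithm; a compact sketch over GF(p) follows. Note that it returns a single shortest register, not the full parametrization that the paper's Gröbner-basis algorithm provides over Z_{p^r}.

```python
def berlekamp_massey(s, p):
    """Shortest LFSR for sequence s over GF(p); returns (connection poly, length)."""
    C, B = [1], [1]            # current and previous connection polynomials
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p   # discrepancy
        if d == 0:
            m += 1
            continue
        coef = (d * pow(b, -1, p)) % p
        T = C[:]
        C += [0] * max(0, len(B) + m - len(C))
        for i, bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1], L

# Fibonacci mod 7 satisfies s_n = s_{n-1} + s_{n-2}: expect register length 2.
print(berlekamp_massey([1, 1, 2, 3, 5, 1, 6], 7))  # ([1, 6, 6], 2)
```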

    Good Quantum Convolutional Error Correction Codes And Their Decoding Algorithm Exist

    Quantum convolutional codes were introduced recently as an alternative way to protect vital quantum information. To complete the analysis of quantum convolutional codes, I report a way to decode certain quantum convolutional codes based on the classical Viterbi decoding algorithm. This decoding algorithm is optimal for a memoryless channel. I also report three simple criteria to test whether decoding errors in a quantum convolutional code will terminate after a finite number of decoding steps whenever the Hilbert space dimension of each quantum register is a prime power. Finally, I show that certain quantum convolutional codes are in fact stabilizer codes; hence, these quantum stabilizer convolutional codes have fault-tolerant implementations. Comment: Minor changes, to appear in PR
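
    For readers unfamiliar with the classical ingredient, the sketch below shows hard-decision Viterbi decoding for the standard rate-1/2 convolutional code with octal generators (7, 5); it illustrates the trellis search the quantum decoder builds on, not the quantum construction itself.

```python
G = [(1, 1, 1), (1, 0, 1)]                     # generator taps, (7, 5) octal

def encode(bits):
    state, out = (0, 0), []
    for b in bits:
        window = (b,) + state                  # current bit + 2-bit memory
        out += [sum(g * w for g, w in zip(gen, window)) % 2 for gen in G]
        state = (b, state[0])
    return out

def viterbi(received):
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    cost = {s: 0.0 if s == (0, 0) else float("inf") for s in states}
    path = {s: [] for s in states}
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_cost = {s: float("inf") for s in states}
        new_path = {s: [] for s in states}
        for prev in states:
            for b in (0, 1):                   # extend every survivor path
                nxt = (b, prev[0])
                window = (b,) + prev
                out = [sum(g * w for g, w in zip(gen, window)) % 2 for gen in G]
                c = cost[prev] + sum(o != x for o, x in zip(out, r))
                if c < new_cost[nxt]:
                    new_cost[nxt], new_path[nxt] = c, path[prev] + [b]
        cost, path = new_cost, new_path
    best = min(states, key=lambda s: cost[s])  # no termination bits; pick best
    return path[best]

msg = [1, 0, 1, 1, 0, 0]
rx = encode(msg)
rx[3] ^= 1                                     # one channel bit flipped
print(viterbi(rx) == msg)                      # True: the flip is corrected
```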

    Mixed quantum state detection with inconclusive results

    We consider the problem of designing an optimal quantum detector with a fixed rate of inconclusive results that maximizes the probability of correct detection, when distinguishing between a collection of mixed quantum states. We develop a sufficient condition for the scaled inverse measurement to maximize the probability of correct detection for the case in which the rate of inconclusive results exceeds a certain threshold. Using this condition we derive the optimal measurement for linearly independent pure-state sets, and for mixed-state sets with a broad class of symmetries. Specifically, we consider geometrically uniform (GU) state sets and compound geometrically uniform (CGU) state sets with generators that satisfy a certain constraint. We then show that the optimal measurements corresponding to GU and CGU state sets with arbitrary generators are also GU and CGU, respectively, with generators that can be computed very efficiently in polynomial time, to within any desired accuracy, by solving a semidefinite programming problem. Comment: Submitted to Phys. Rev.
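
    The underlying semidefinite program is easy to write down; in the sketch below (the states, priors, and inconclusive rate are illustrative, and the cvxpy package is assumed available) we maximize the probability of correct detection over a three-outcome POVM whose third element collects the inconclusive results.

```python
import numpy as np
import cvxpy as cp

psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(0.3), np.sin(0.3)])      # two nonorthogonal pure states
rhos = [np.outer(v, v) for v in (psi0, psi1)]
q, beta, d = [0.5, 0.5], 0.2, 2                  # priors, inconclusive rate, dim

Pi = [cp.Variable((d, d), hermitian=True) for _ in range(3)]  # Pi[2] = "?"
p_correct = sum(q[i] * cp.real(cp.trace(rhos[i] @ Pi[i])) for i in range(2))
constraints = [P >> 0 for P in Pi]                            # POVM positivity
constraints.append(Pi[0] + Pi[1] + Pi[2] == np.eye(d))        # completeness
constraints.append(                                           # fixed "?" rate
    sum(q[i] * cp.real(cp.trace(rhos[i] @ Pi[2])) for i in range(2)) == beta)
cp.Problem(cp.Maximize(p_correct), constraints).solve()
print(f"max P(correct) = {p_correct.value:.4f} at inconclusive rate {beta}")
```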

    Composition codes

    In this paper we introduce a special class of 2D convolutional codes, called composition codes, which admit encoders G(d1,d2) that can be decomposed as the product of two 1D encoders, i.e., G(d1,d2) = G2(d2)G1(d1). Taking into account this decomposition, we obtain syndrome formers of the code directly from G1(d1) and G2(d2), in case G1(d1) and G2(d2) are right prime. Moreover, we consider 2D state-space realizations, by means of a separable Roesser model, of the encoders and syndrome formers of a composition code, and we investigate the minimality of such realizations. In particular, we obtain minimal realizations for composition codes which admit an encoder G(d1,d2) = G2(d2)G1(d1) with G2(d2) a systematic 1D encoder. Finally, we investigate the minimality of 2D separable Roesser state-space realizations for syndrome formers of these codes.
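
    The defining factorization is easy to exhibit symbolically; in the sketch below the matrices are illustrative, and arithmetic is over the rationals purely for readability (actual codes live over a finite field).

```python
import sympy as sp

d1, d2 = sp.symbols("d1 d2")
G1 = sp.Matrix([[1, d1, 0],
                [0, 1, d1]])        # 1D encoder in d1 (2 x 3)
G2 = sp.Matrix([[1, d2],
                [0, 1]])            # 1D encoder in d2 (2 x 2)
G = sp.expand(G2 * G1)              # composite 2D encoder G(d1, d2)
sp.pprint(G)                        # [[1, d1 + d2, d1*d2], [0, 1, d1]]
```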

    Dentine Oxygen Isotopes (δ18O) as a Proxy for Odontocete Distributions and Movements

    Spatial variation in marine oxygen isotope ratios (δ18O) resulting from differential evaporation rates and precipitation inputs is potentially useful for characterizing marine mammal distributions and tracking movements across δ18O gradients. Dentine hydroxyapatite contains carbonate and phosphate that precipitate in oxygen isotopic equilibrium with body water, which in odontocetes closely tracks the isotopic composition of ambient water. To test whether dentine oxygen isotope composition reliably records that of ambient water and can therefore serve as a proxy for odontocete distribution and movement patterns, we measured δ18O values of dentine structural carbonate (δ18OSC) and phosphate (δ18OP) of seven odontocete species (n = 55 individuals) from regional marine water bodies spanning a surface water δ18O range of several per mil. Mean dentine δ18OSC (range +21.2 to +25.5‰ VSMOW) and δ18OP (+16.7 to +20.3‰) values were strongly correlated with marine surface water δ18O values, with lower dentine δ18OSC and δ18OP values in high-latitude regions (Arctic and Eastern North Pacific) and higher values in the Gulf of California, Gulf of Mexico, and Mediterranean Sea. Correlations of dentine δ18OSC and δ18OP values with marine surface water δ18O values indicate that sequential δ18O measurements along dentine, which grows incrementally and archives intra- and interannual isotopic composition over the lifetime of the animal, would be useful for characterizing residency within and movements among water bodies with strong δ18O gradients, particularly between polar and lower latitudes, or between oceans and marginal basins.

    Complexity of Discrete Energy Minimization Problems

    Discrete energy minimization is widely used in computer vision and machine learning for problems such as MAP inference in graphical models. The problem is, in general, notoriously intractable, and finding the globally optimal solution is known to be NP-hard. However, is it possible to approximate this problem with a reasonable ratio bound on the solution quality in polynomial time? We show in this paper that the answer is no. Specifically, we show that general energy minimization, even in the 2-label pairwise case, and planar energy minimization with three or more labels are exp-APX-complete. This finding rules out the existence of any approximation algorithm with a sub-exponential approximation ratio in the input size for these two problems, including constant factor approximations. Moreover, we collect and review the computational complexity of several subclass problems and arrange them on a complexity scale consisting of three major complexity classes -- PO, APX, and exp-APX, corresponding to problems that are solvable, approximable, and inapproximable in polynomial time. Problems in the first two complexity classes can serve as alternative tractable formulations to the inapproximable ones. This paper can help vision researchers select an appropriate model for an application or guide them in designing new algorithms. Comment: ECCV'16 accepted
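
    To fix notation, the sketch below spells out a 2-label pairwise energy (the potentials are illustrative) and minimizes it by exhaustive search on a toy 4-node chain. A chain with this submodular pairwise term is in fact in the tractable class PO (dynamic programming or graph cuts solve it exactly); the exp-APX-completeness result above concerns general graphs and arbitrary pairwise terms.

```python
from itertools import product

n_nodes, labels = 4, (0, 1)
unary = [[0.0, 1.2], [0.5, 0.4], [1.0, 0.1], [0.3, 0.6]]   # theta_i(x_i)

def pairwise(a, b):
    return 0.0 if a == b else 0.8    # Potts-style smoothness term theta_ij

def energy(x):
    e = sum(unary[i][x[i]] for i in range(n_nodes))
    e += sum(pairwise(x[i], x[i + 1]) for i in range(n_nodes - 1))
    return e

# Exhaustive search over 2^n labelings -- exponential, hence the hardness story.
best = min(product(labels, repeat=n_nodes), key=energy)
print(best, energy(best))
```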