709 research outputs found
Perfect Mannheim, Lipschitz and Hurwitz weight codes
In this paper, upper bounds on codes over Gaussian integers, Lipschitz integers and Hurwitz integers with respect to the Mannheim, Lipschitz and Hurwitz metrics, respectively, are given. Comment: 21 pages
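As background for the Mannheim metric mentioned above: the Mannheim weight of a Gaussian integer modulo a Gaussian prime is the sum of the absolute values of the real and imaginary parts of its minimal-norm residue. A minimal sketch (illustrative only, not the paper's construction; function names are hypothetical):

```python
def gaussian_mod(x, pi):
    """Reduce the Gaussian integer x = (a, b), i.e. a + bi, modulo
    pi = (c, d) to a minimal-norm representative: subtract q*pi where
    q is x*conj(pi)/N(pi) rounded componentwise to the nearest integer."""
    a, b = x
    c, d = pi
    p = c * c + d * d                     # norm of pi
    u, v = a * c + b * d, b * c - a * d   # x * conj(pi)
    qu, qv = round(u / p), round(v / p)   # nearest Gaussian integer quotient
    ra = a - (qu * c - qv * d)            # x - q*pi, real part
    rb = b - (qu * d + qv * c)            # x - q*pi, imaginary part
    return ra, rb

def mannheim_weight(x, pi):
    """Mannheim weight of the residue class of x modulo pi."""
    ra, rb = gaussian_mod(x, pi)
    return abs(ra) + abs(rb)
```

For example, modulo the Gaussian prime 2 + i (norm 5), the integer 3 reduces to i, which has Mannheim weight 1.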
Perfect 1-error-correcting Lipschitz weight codes
Let be a Lipschitz prime and . Perfect 1-error-correcting codes in are constructed for every prime number . This completes a result of the authors in an earlier work, "Perfect Mannheim, Lipschitz and Hurwitz weight codes" (Mathematical Communications, Vol. 19, No. 2, pp. 253--276, 2014), where a construction is given in the case
Perfect 1-error-correcting Hurwitz weight codes
Let ( ) be a Hurwitz prime and ( ). In this paper, we construct perfect 1-error-correcting codes in ( ) for every prime number ( ), where ( ) denotes the set of Hurwitz integers.
Codes over Hurwitz integers
In this study, we obtain new classes of linear codes over Hurwitz integers equipped with a new metric, which we call the Hurwitz metric. Codes with respect to the Hurwitz metric are useful in coded modulation schemes based on quadrature amplitude modulation (QAM)-type constellations, for which neither the Hamming metric nor the Lee metric is appropriate. We also define decoding algorithms for these codes when up to two coordinates of a transmitted code vector are affected by errors of arbitrary Hurwitz weight. Comment: 11 pages
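As background on the alphabet these codes are built over: a Hurwitz integer is a quaternion a + bi + cj + dk whose coefficients are either all integers or all half-integers, and its norm a^2 + b^2 + c^2 + d^2 is multiplicative. A minimal sketch of these definitions (illustrative only, not taken from the paper):

```python
from fractions import Fraction

def is_hurwitz(q):
    """q = (a, b, c, d): coefficients of a + bi + cj + dk.
    Hurwitz integers have all-integer or all-half-integer coefficients,
    i.e. the doubled coefficients are integers of equal parity."""
    doubled = [2 * x for x in q]
    if any(x != int(x) for x in doubled):
        return False
    return len({int(x) % 2 for x in doubled}) == 1

def quat_mul(p, q):
    """Hamilton product of two quaternions given as coefficient 4-tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def norm(q):
    """Quaternion norm N(q) = a^2 + b^2 + c^2 + d^2 (multiplicative)."""
    return sum(x * x for x in q)
```

For instance, (1/2)(1 + i + j + k) is a Hurwitz integer of norm 1, and N(pq) = N(p)N(q) for any quaternions p, q.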
Online Learning of Quantum States
Suppose we have many copies of an unknown n-qubit state ρ. We measure some copies of ρ using a known two-outcome measurement E_1, then other copies using a measurement E_2, and so on. At each stage t, we generate a current hypothesis σ_t about the state ρ, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that |Tr(E_t σ_t) - Tr(E_t ρ)|, the error in our prediction for the next measurement, is at least ε at most O(n/ε²) times. Even in the "non-realizable" setting, where there could be arbitrary noise in the measurement outcomes, we show how to output hypothesis states that do significantly worse than the best possible states at most O(√(Tn)) times on the first T measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states to the online and regret-minimization settings. We give three different ways to prove our results (using convex optimization, quantum postselection, and sequential fat-shattering dimension), which have different advantages in terms of parameters and portability. Comment: 18 pages
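The convex-optimization route can be illustrated in the commuting (diagonal) special case, where the hypothesis density matrix is just a probability vector and matrix multiplicative weights reduces to classical exponentiated gradient. A toy sketch under those assumptions (the setup, function names, and learning rate are illustrative, not the paper's algorithm):

```python
import math

def exp_gradient_online(measurements, outcomes, eta=0.1):
    """Online prediction of Tr(E sigma) for a sequence of diagonal
    two-outcome measurements E (vectors with entries in [0, 1]), using
    exponentiated-gradient updates on the loss (prediction - outcome)^2.
    The hypothesis "state" sigma is a probability vector, i.e. the
    diagonal of a density matrix."""
    n = len(measurements[0])
    logw = [0.0] * n                              # log-weights of the hypothesis
    preds = []
    for E, b in zip(measurements, outcomes):
        z = [math.exp(w) for w in logw]
        total = sum(z)
        sigma = [x / total for x in z]            # current hypothesis state
        p = sum(e * s for e, s in zip(E, sigma))  # predicted Tr(E sigma)
        preds.append(p)
        g = 2.0 * (p - b)                         # gradient of (p - b)^2 along E
        logw = [w - eta * g * e for w, e in zip(logw, E)]
    return preds
```

Feeding it outcomes generated by a fixed diagonal state, the predictions drift toward the true expectation values, mirroring the mistake-bound guarantee in the abstract for this toy case.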