28 research outputs found
Moderate Deviation Analysis for Classical Communication over Quantum Channels
© 2017, Springer-Verlag GmbH Germany. We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes, we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as by Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
Comparative Study of Inference Methods for Bayesian Nonnegative Matrix Factorisation
In this paper, we study the trade-offs of different inference approaches for
Bayesian matrix factorisation methods, which are commonly used for predicting
missing values, and for finding patterns in the data. In particular, we
consider Bayesian nonnegative variants of matrix factorisation and
tri-factorisation, and compare non-probabilistic inference, Gibbs sampling,
variational Bayesian inference, and a maximum-a-posteriori approach. The
variational approach is new for the Bayesian nonnegative models. We compare
their convergence, and robustness to noise and sparsity of the data, on both
synthetic and real-world datasets. Furthermore, we extend the models with the
Bayesian automatic relevance determination prior, allowing the models to
perform automatic model selection, and demonstrate its efficiency.
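The non-probabilistic baseline compared in this study can be sketched with the classic multiplicative-update rules of Lee and Seung for nonnegative matrix factorisation. This is a minimal NumPy illustration; the matrix sizes, rank, and iteration count are arbitrary choices for the sketch, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonnegative data matrix V and an assumed factorisation rank.
V = rng.random((20, 15))
rank, eps = 4, 1e-9  # eps guards against division by zero

# Random nonnegative initialisation of the factors W and H.
W = rng.random((20, rank))
H = rng.random((rank, 15))

def frobenius_error(V, W, H):
    return np.linalg.norm(V - W @ H)

err_start = frobenius_error(V, W, H)

# Multiplicative updates: each step keeps W and H nonnegative and
# does not increase the Frobenius reconstruction error.
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err_end = frobenius_error(V, W, H)
print(err_start, err_end)  # the reconstruction error should have decreased
```

Gibbs sampling, variational Bayes, and the maximum-a-posteriori approach compared in the paper replace these point updates with inference over posterior distributions on W and H.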
Hub-Centered Gene Network Reconstruction Using Automatic Relevance Determination
Network inference deals with the reconstruction of biological networks from experimental data. A variety of different reverse engineering techniques are available; they differ in the underlying assumptions and mathematical models used. One common problem for all approaches stems from the complexity of the task, due to the combinatorial explosion of different network topologies for increasing network size. To handle this problem, constraints are frequently used, for example on the node degree, number of edges, or constraints on regulation functions between network components. We propose to exploit topological considerations in the inference of gene regulatory networks. Such systems are often controlled by a small number of hub genes, while most other genes have only limited influence on the network's dynamics. We model gene regulation using a Bayesian network with discrete, Boolean nodes. A hierarchical prior is employed to identify hub genes. The first layer of the prior is used to regularize weights on edges emanating from one specific node. A second prior on hyperparameters controls the magnitude of the former regularization for different nodes. The net effect is that central nodes tend to form in reconstructed networks. Network reconstruction is then performed by maximization of or sampling from the posterior distribution. We evaluate our approach on simulated and real experimental data, indicating that we can reconstruct main regulatory interactions from the data. We furthermore compare our approach to other state-of-the-art methods, showing superior performance in identifying hubs. Using a large publicly available dataset of over 800 cell cycle regulated genes, we are able to identify several main hub genes. Our method may thus provide a valuable tool to identify interesting candidate genes for further study. Furthermore, the approach presented may stimulate further developments in regularization methods for network reconstruction from data.
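The hub-promoting effect of such a hierarchical prior can be illustrated with a small numerical sketch. Assume Gaussian edge weights w_jk ~ N(0, 1/α_j) for each source node j and a Gamma(a, b) hyperprior on each precision α_j; the conditional posterior of α_j given node j's n outgoing weights w_j is then Gamma(a + n/2, b + ‖w_j‖²/2). This is a generic automatic-relevance-determination construction, not the paper's exact model, and all numbers below are illustrative:

```python
import numpy as np

a, b = 1.0, 1.0  # assumed Gamma hyperprior parameters
n = 10           # outgoing edges per node

# Outgoing edge weights for a hub-like node (large weights) and a
# peripheral node (weights near zero).
w_hub = np.full(n, 2.0)
w_leaf = np.full(n, 0.1)

def posterior_mean_precision(w, a, b):
    # Mean of the conditional posterior Gamma(a + n/2, b + ||w||^2 / 2)
    # for the precision alpha_j governing node j's outgoing weights.
    return (a + len(w) / 2) / (b + w @ w / 2)

alpha_hub = posterior_mean_precision(w_hub, a, b)
alpha_leaf = posterior_mean_precision(w_leaf, a, b)
print(alpha_hub, alpha_leaf)
```

A small α_j means weak shrinkage, so the hub-like node keeps strong outgoing edges, while the much larger precision for the peripheral node drives its edges toward zero. This is the mechanism by which central nodes tend to emerge in the reconstructed networks.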
A tight upper bound for the third-order asymptotics of discrete memoryless channels
This paper shows that the logarithm of the ε-error capacity (average error probability) for n uses of a discrete memoryless channel with positive conditional information variance at every capacity-achieving input distribution is upper bounded by the normal approximation plus a term that does not exceed 1/2 log n + O(1). © 2013 IEEE
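The normal approximation referred to here is log M*(n, ε) ≈ nC − √(nV) Q⁻¹(ε) + (1/2) log n, where C is the capacity, V the channel dispersion, and Q⁻¹ the inverse Gaussian tail function. A stdlib-only sketch for a binary symmetric channel follows; the crossover probability, blocklength, and ε are arbitrary illustrative choices:

```python
import math

def q_inv(eps, lo=-10.0, hi=10.0):
    # Inverse of the Gaussian tail Q(x) = 0.5 * erfc(x / sqrt(2)),
    # computed by bisection (Q is strictly decreasing).
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > eps:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def h2(p):
    # Binary entropy in bits.
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p, n, eps = 0.11, 1000, 1e-3
C = 1 - h2(p)                                  # BSC capacity (bits/use)
V = p * (1 - p) * math.log2((1 - p) / p) ** 2  # BSC dispersion

# Normal approximation to log2 of the largest code size, including
# the (1/2) log n third-order term that the paper's bound concerns.
log_M = n * C - math.sqrt(n * V) * q_inv(eps) + 0.5 * math.log2(n)
print(log_M / n)  # approximate achievable rate per channel use
```

The gap between log_M / n and C shrinks like O(1/√n), which is the dispersion-driven backoff from capacity at finite blocklength.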
Error exponent of the common-message broadcast channel with variable-length feedback
© 2017 IEEE. We derive upper and lower bounds on the reliability function for the discrete memoryless broadcast channel with common message and variable-length feedback. We show that the bounds are tight when the broadcast channel is stochastically degraded. For the direct part we adapt Yamamoto and Itoh's two-phase coding scheme, and for the converse part we build on Burnashev's proof technique, supplementing both with new ideas.
The Reliability Function of Variable-Length Lossy Joint Source-Channel Coding with Feedback
We consider transmission of discrete memoryless sources (DMSes) across discrete memoryless channels (DMCs) using variable-length lossy source-channel codes with feedback. The reliability function (optimum error exponent) is shown to be equal to max{0, B(1 − R(D)/C)}, where R(D) is the rate-distortion function of the source, B is the maximum relative entropy between output distributions of the DMC, and C is the Shannon capacity of the channel. We show that in this asymptotic regime, separate source-channel coding is, in fact, optimal.
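Once B, R(D), and C are known, the closed-form reliability function can be evaluated directly. The numbers below are purely illustrative placeholders, not parameters from the paper:

```python
def reliability(B, R_D, C):
    # E = max{0, B * (1 - R(D)/C)}: positive when R(D) is below
    # capacity, and zero once R(D) meets or exceeds C.
    return max(0.0, B * (1 - R_D / C))

# Illustrative values: B (max relative entropy between output
# distributions of the DMC), C (Shannon capacity), R(D) varied.
B, C = 2.0, 0.5
print(reliability(B, 0.25, C))  # R(D) at half of capacity
print(reliability(B, 0.60, C))  # R(D) above capacity: exponent is 0
```

The exponent thus degrades linearly in R(D)/C and vanishes exactly at the point where lossy transmission at distortion D is no longer possible.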
Second order refinements for the classical capacity of quantum channels with separable input states
We study the non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e., we investigate the amount of information that can be transmitted when the channel is used a finite number of times and a finite average decoding error is permissible. We show that, if we restrict the encoder to use ensembles of separable states, the non-asymptotic fundamental limit admits a Gaussian approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the number of channel uses tends to infinity. To do so, several important properties of quantum information quantities, such as the capacity-achieving output state, the divergence radius, and the channel dispersion, are generalized from their classical counterparts. Further, we exploit a close relation between classical-quantum channel coding and quantum binary hypothesis testing and rely on recent progress in the non-asymptotic characterization of quantum hypothesis testing and its Gaussian approximation. © 2014 IEEE
On the Gaussian MAC with stop-feedback
© 2017 IEEE. We characterize the information-theoretic limits of the Gaussian multiple access channel (MAC) when variable-length stop-feedback is available at the encoder and a non-vanishing error probability is permitted. Because of the continuous nature of the channel and the presence of expected power constraints, we need to develop new achievability and converse techniques. Moreover, the multi-terminal nature of the channel model forces us to bound the asymptotic behavior of the expected value of the maximum of several stopping times, which we do by leveraging tools from renewal theory developed by Gut (1974) and Lai and Siegmund (1979).
Moderate deviation asymptotics for variable-length codes with feedback
We consider data transmission across discrete memoryless channels (DMCs) using variable-length codes with feedback. We study the family of such codes whose rates are ρ_N below the channel capacity C, where ρ_N is a positive sequence that tends to zero more slowly than the reciprocal of the square root of the expected (random) blocklength N. This is known as the moderate deviations regime, and we establish the optimal moderate deviations constant. We show that in this scenario, the error probability decays sub-exponentially with speed exp(−(B/C)Nρ_N), where B is the maximum relative entropy between output distributions of the DMC.
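To see the sub-exponential decay concretely, take the valid moderate-deviations sequence ρ_N = N^(−1/3): it vanishes, yet more slowly than N^(−1/2), and the resulting bound exp(−(B/C)Nρ_N) decays like exp(−(B/C)N^(2/3)). A quick sketch with illustrative channel constants (B and C below are placeholders, not values from the paper):

```python
import math

B, C = 2.0, 0.5  # illustrative: max output relative entropy and capacity

def error_bound(N, B, C):
    rho_N = N ** (-1 / 3)                  # moderate-deviations rate backoff
    return math.exp(-(B / C) * N * rho_N)  # decays like exp(-(B/C) N^(2/3))

for N in (10, 100, 1000, 10000):
    print(N, error_bound(N, B, C))
```

The decay is strictly slower than exponential in N (which would require a constant rate backoff) but still drives the error probability to zero, which is exactly the tradeoff the moderate deviations regime captures.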