Computation Over Gaussian Networks With Orthogonal Components
Function computation of arbitrarily correlated discrete sources over Gaussian
networks with orthogonal components is studied. Two classes of functions are
considered: the arithmetic sum function and the type function. The arithmetic
sum function in this paper is defined as a set of multiple weighted arithmetic
sums, which includes averaging of the sources and estimating each of the
sources as special cases. The type or frequency histogram function counts the
number of occurrences of each argument, which yields many important statistics
such as mean, variance, maximum, minimum, median, and so on. The proposed
computation coding first abstracts Gaussian networks into the corresponding
modulo sum multiple-access channels via nested lattice codes and linear network
coding and then computes the desired function by using linear Slepian-Wolf
source coding. For orthogonal Gaussian networks (with no broadcast and
multiple-access components), the computation capacity is characterized for a
class of networks. For Gaussian networks with multiple-access components (but
no broadcast), an approximate computation capacity is characterized for a class
of networks.
Comment: 30 pages, 12 figures, submitted to IEEE Transactions on Information Theory
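To illustrate why the type (frequency histogram) function is so useful, here is a minimal Python sketch, illustrative rather than part of the paper's coding scheme, that recovers mean, variance, minimum, maximum, and median from the type alone, without access to the raw samples:

```python
from collections import Counter

def type_function(samples):
    """Frequency histogram: count occurrences of each source symbol."""
    return Counter(samples)

def stats_from_type(hist):
    """Recover common statistics from the type alone (no raw samples needed)."""
    n = sum(hist.values())
    mean = sum(x * c for x, c in hist.items()) / n
    var = sum(c * (x - mean) ** 2 for x, c in hist.items()) / n
    lo, hi = min(hist), max(hist)
    # Median (lower median for even n): walk the sorted support
    # accumulating counts until half the mass is covered.
    target, acc, median = (n + 1) // 2, 0, None
    for x in sorted(hist):
        acc += hist[x]
        if acc >= target:
            median = x
            break
    return mean, var, lo, hi, median

hist = type_function([1, 2, 2, 3, 3, 3, 5])
print(stats_from_type(hist))
```

Because every statistic above is a function of the histogram, a network that computes the type function implicitly computes all of them at once.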
Computation Alignment: Capacity Approximation without Noise Accumulation
Consider several source nodes communicating across a wireless network to a
destination node with the help of several layers of relay nodes. Recent work by
Avestimehr et al. has approximated the capacity of this network up to an
additive gap. The communication scheme achieving this capacity approximation is
based on compress-and-forward, resulting in noise accumulation as the messages
traverse the network. As a consequence, the approximation gap increases
linearly with the network depth.
This paper develops a computation alignment strategy that can approach the
capacity of a class of layered, time-varying wireless relay networks up to an
approximation gap that is independent of the network depth. This strategy is
based on the compute-and-forward framework, which enables relays to decode
deterministic functions of the transmitted messages. Alone, compute-and-forward
is insufficient to approach the capacity as it incurs a penalty for
approximating the wireless channel with complex-valued coefficients by a
channel with integer coefficients. Here, this penalty is circumvented by
carefully matching channel realizations across time slots to create
integer-valued effective channels that are well-suited to compute-and-forward.
Unlike prior constant-gap results, the approximation gap obtained in this paper
depends closely on the fading statistics, which are assumed to be i.i.d.
Rayleigh.
Comment: 36 pages, to appear in IEEE Transactions on Information Theory
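The integer-approximation penalty mentioned above can be made concrete with a small illustrative sketch (not the paper's alignment scheme): a brute-force search for the integer coefficient vector closest to a real channel vector. A generic fading realization leaves a residual mismatch, while an integer-valued effective channel, which is what computation alignment engineers, is matched exactly:

```python
import itertools, math

def best_integer_approx(h, max_coeff=3):
    """Search small nonzero integer vectors a minimizing the mismatch
    ||h - a|| between the channel h and its integer proxy."""
    best_a, best_err = None, math.inf
    coeffs = range(-max_coeff, max_coeff + 1)
    for a in itertools.product(coeffs, repeat=len(h)):
        if all(c == 0 for c in a):
            continue
        err = math.sqrt(sum((hi - ai) ** 2 for hi, ai in zip(h, a)))
        if err < best_err:
            best_a, best_err = a, err
    return best_a, best_err

# A generic fading realization is poorly matched to every integer vector...
print(best_integer_approx([1.37, -0.52]))
# ...while an integer-valued effective channel is matched with zero error.
print(best_integer_approx([1.0, -1.0]))
```

The residual in the first case acts as extra effective noise at the relay; matching channel realizations across time slots to produce the second case is what removes the penalty.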
Robust Successive Compute-and-Forward over Multi-User Multi-Relay Networks
This paper develops efficient compute-and-forward (CMF) schemes in multi-user
multi-relay networks. To solve the rank failure problem in CMF setups and to
achieve full diversity of the network, we introduce two novel CMF methods,
namely, extended CMF and successive CMF. The former, having low complexity, is
based on recovering multiple equations at relays. The latter utilizes
successive interference cancellation (SIC) to enhance the system performance
compared to state-of-the-art schemes. Both methods can be utilized in
networks with different numbers of users, relays, and relay antennas, with
negligible feedback or signaling overhead. We derive new concise
formulations and an explicit framework for the successive CMF method, as well as an
approach to reduce its computational complexity. Our theoretical analysis and
computer simulations demonstrate the superior performance of our proposed CMF
methods over the conventional schemes. Furthermore, based on our simulation
results, the successive CMF method yields additional signal-to-noise ratio
gains and shows considerable robustness against channel estimation error,
compared to the extended CMF method.
Comment: 44 pages, 10 figures, 1 table, accepted for publication in IEEE Transactions on Vehicular Technology
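The rank failure problem this abstract targets has a simple linear-algebra core: the destination can only invert the relays' decoded integer equations if their coefficient vectors are linearly independent over the underlying finite field. A minimal illustrative sketch (the field size and vectors are made up for the example, not taken from the paper):

```python
def rank_mod_p(rows, p):
    """Gaussian elimination over GF(p): rank of integer coefficient vectors."""
    rows = [[x % p for x in r] for r in rows]
    rank, col, n = 0, 0, len(rows[0])
    while rank < len(rows) and col < n:
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # modular inverse (p prime)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank, col = rank + 1, col + 1
    return rank

p = 7
# Rank failure: two relays decode proportional equations (row 2 = 3 * row 1 mod 7).
print(rank_mod_p([[1, 2], [3, 6]], p))            # rank 1: messages unrecoverable
# One extra recovered equation restores full rank, as in the extended CMF idea.
print(rank_mod_p([[1, 2], [3, 6], [0, 1]], p))    # rank 2: messages recoverable
```

Recovering multiple equations per relay (extended CMF) or cancelling decoded equations before recovering the next one (successive CMF) are two ways of ensuring the final system reaches full rank.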
Continuous-variable quantum neural networks
We introduce a general method for building neural networks on quantum
computers. The quantum neural network is a variational quantum circuit built in
the continuous-variable (CV) architecture, which encodes quantum information in
continuous degrees of freedom such as the amplitudes of the electromagnetic
field. This circuit contains a layered structure of continuously parameterized
gates which is universal for CV quantum computation. Affine transformations and
nonlinear activation functions, two key elements in neural networks, are
enacted in the quantum network using Gaussian and non-Gaussian gates,
respectively. The non-Gaussian gates provide both the nonlinearity and the
universality of the model. Due to the structure of the CV model, the CV quantum
neural network can encode highly nonlinear transformations while remaining
completely unitary. We show how a classical network can be embedded into the
quantum formalism and propose quantum versions of various specialized models
such as convolutional, recurrent, and residual networks. Finally, we present
numerous modeling experiments built with the Strawberry Fields software
library. These experiments, including a classifier for fraud detection, a
network which generates Tetris images, and a hybrid classical-quantum
autoencoder, demonstrate the capability and adaptability of CV quantum neural
networks.
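The layer structure the abstract describes, an affine transformation (enacted by Gaussian gates) followed by a nonlinear activation (enacted by a non-Gaussian gate), mirrors the classical pattern below. This is a pure-Python classical sketch of that pattern, not the quantum circuit itself; the tanh activation is an illustrative stand-in rather than a specific non-Gaussian gate:

```python
import math

def cv_style_layer(x, W, b, nonlinearity=math.tanh):
    """One layer in the pattern the CV architecture mirrors: an affine map
    (the role played by interferometers, squeezing, and displacement, i.e.
    Gaussian gates) followed by an elementwise nonlinearity (the role played
    by a non-Gaussian gate)."""
    affine = [sum(Wij * xj for Wij, xj in zip(Wi, x)) + bi
              for Wi, bi in zip(W, b)]
    return [nonlinearity(v) for v in affine]

# Two-mode example; W and b stand in for the Gaussian block's parameters.
x = [0.5, -1.0]
W = [[1.0, 0.2], [-0.3, 0.8]]
b = [0.1, 0.0]
print(cv_style_layer(x, W, b))
```

In the quantum circuit the affine part acts on continuous phase-space variables and the whole layer remains unitary, which is what lets the CV network stack many such layers without leaving the quantum formalism.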