Closed queueing networks under congestion: non-bottleneck independence and bottleneck convergence
We analyze the behavior of closed product-form queueing networks as the
number of customers grows to infinity while the proportion of customers on
each route (or class) remains fixed. First, we focus on the stationary
behavior and prove the conjecture
that the stationary distribution at non-bottleneck queues converges weakly to
the stationary distribution of an ergodic, open product-form queueing network.
This open network is obtained by replacing bottleneck queues with per-route
Poissonian sources whose rates are determined by the solution of a strictly
concave optimization problem. Then, we focus on the transient behavior of the
network and use fluid limits to prove that the amount of fluid, or customers,
on each route eventually concentrates on the bottleneck queues only, and that
the long-term proportions of fluid in each route and in each queue solve the
dual of the concave optimization problem that determines the throughputs of the
previous open network.
Comment: 22 pages
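The bottleneck-concentration phenomenon stated above can be illustrated numerically for a simple cyclic network of single-server exponential queues, using Buzen's convolution algorithm to evaluate the product-form stationary distribution exactly. This is a generic illustration, not the paper's construction; the service rates and network topology below are made-up examples:

```python
# Illustrative sketch (not from the paper): in a closed cyclic network of
# single-server exponential queues, customers pile up at the bottleneck
# (slowest) queue as the population N grows, while non-bottleneck queues
# keep bounded mean queue lengths, as in an open M/M/1 queue fed at the
# bottleneck's rate.

def buzen_normalization(x, N):
    """Normalization constants G(0..N) for relative utilizations x_i."""
    g = [0.0] * (N + 1)
    g[0] = 1.0
    for xi in x:                     # fold in one station at a time
        for n in range(1, N + 1):
            g[n] += xi * g[n - 1]
    return g

def closed_network_stats(mu, N):
    """Throughput and mean queue lengths for a cyclic closed network
    with service rates mu and N circulating customers."""
    x = [1.0 / m for m in mu]        # visit ratio 1 at every station
    g = buzen_normalization(x, N)
    throughput = g[N - 1] / g[N]
    # E[n_i] = sum_k P(n_i >= k), with P(n_i >= k) = x_i^k G(N-k)/G(N)
    mean_lengths = [
        sum(xi ** k * g[N - k] for k in range(1, N + 1)) / g[N]
        for xi in x
    ]
    return throughput, mean_lengths

# Station 0 (rate 1) is the bottleneck; stations 1 and 2 serve twice as fast.
X, L = closed_network_stats([1.0, 2.0, 2.0], N=50)
```

For N = 50, the throughput is already essentially the bottleneck rate, almost all customers sit at station 0, and each fast station holds about rho/(1 - rho) = 1 customer on average, consistent with the weak-convergence result described in the abstract.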
Decoding of Non-Binary LDPC Codes Using the Information Bottleneck Method
Recently, a novel lookup table based decoding method for binary low-density
parity-check codes has attracted considerable attention. In this approach,
mutual-information maximizing lookup tables replace the conventional operations
of the variable nodes and the check nodes in message passing decoding.
Moreover, the exchanged messages are represented by integers with very small
bit width. A machine learning framework termed the information bottleneck
method is used to design the corresponding lookup tables. In this paper, we
extend this decoding principle from binary to non-binary codes. This is not a
straightforward extension, but requires a more sophisticated lookup table
design to cope with the arithmetic in higher-order Galois fields. Bit
error rate simulations show that our proposed scheme outperforms the log-max
decoding algorithm and operates close to sum-product decoding.
Comment: This paper has been presented at the IEEE International Conference on Communications (ICC'19) in Shanghai
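The core idea of mutual-information-maximizing quantization can be sketched with a greedy agglomerative merge of adjacent channel outputs. The paper's actual lookup-table design for variable and check nodes over Galois fields is considerably more elaborate; the channel distribution below is an illustrative assumption:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits for a joint distribution given as rows over x."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    info = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                info += p * math.log2(p / (px[i] * py[j]))
    return info

def ib_quantize(joint, num_clusters):
    """Greedily merge adjacent output columns, always keeping the merge
    that loses the least mutual information, until num_clusters remain.
    This is the agglomerative flavor of the information bottleneck for
    channels with ordered outputs."""
    clusters = [list(col) for col in zip(*joint)]   # columns of p(x, t)
    while len(clusters) > num_clusters:
        best = None
        for j in range(len(clusters) - 1):
            merged = [a + b for a, b in zip(clusters[j], clusters[j + 1])]
            trial = clusters[:j] + [merged] + clusters[j + 2:]
            info = mutual_information([list(r) for r in zip(*trial)])
            if best is None or info > best[0]:
                best = (info, trial)
        clusters = best[1]
    return [list(r) for r in zip(*clusters)]

# Binary input through a noisy channel quantized to 8 soft levels
# (columns ordered from "strongly 0" to "strongly 1"); compress to 4.
p_y_given_0 = [0.30, 0.25, 0.18, 0.12, 0.08, 0.04, 0.02, 0.01]
p_y_given_1 = list(reversed(p_y_given_0))
joint = [[0.5 * p for p in p_y_given_0],
         [0.5 * p for p in p_y_given_1]]
compressed = ib_quantize(joint, 4)
```

By the data processing inequality the compressed alphabet can only lose mutual information; the design goal is to make that loss as small as possible while messages fit in a few bits.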
Improved scaling of Time-Evolving Block-Decimation algorithm through Reduced-Rank Randomized Singular Value Decomposition
When the amount of entanglement in a quantum system is limited, the relevant
dynamics of the system is restricted to a very small part of the state space.
When restricted to this subspace, the description of the system becomes
efficient in the system size. A class of algorithms, exemplified by the
Time-Evolving Block-Decimation (TEBD) algorithm, make use of this observation
by selecting the relevant subspace through a decimation technique relying on
the Singular Value Decomposition (SVD). In these algorithms, the complexity of
each time-evolution step is dominated by the SVD. Here we show that, by
applying a randomized version of the SVD routine (RRSVD), the power law
governing the computational complexity of TEBD is lowered by one degree,
resulting in a considerable speed-up. We exemplify the potential gains in
efficiency using some real-world examples to which TEBD can be
successfully applied, and demonstrate that for those systems RRSVD delivers
results as accurate as state-of-the-art deterministic SVD routines.
Comment: 14 pages, 5 figures
Generalization Error in Deep Learning
Deep learning models have lately shown great performance in various fields
such as computer vision, speech recognition, speech translation, and natural
language processing. However, alongside their state-of-the-art performance,
it is still unclear what the source of their generalization ability is.
Thus, an important question is what makes deep neural networks able to
generalize well from the training set to new data. In this article, we provide
an overview of the existing theory and bounds for the characterization of the
generalization error of deep neural networks, combining both classical and more
recent theoretical and empirical results.
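One of the classical results such overviews cover is the Hoeffding-style confidence bound on the gap between empirical and true error for a fixed classifier evaluated on held-out data. A minimal sketch (the sample sizes and confidence level below are illustrative):

```python
import math

def hoeffding_gap(n, delta):
    """With probability at least 1 - delta, the true 0/1 error of a FIXED
    classifier deviates from its empirical error on n i.i.d. held-out
    samples by at most this amount (two-sided Hoeffding inequality):
        gap = sqrt(log(2/delta) / (2 n))."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# With 10,000 held-out samples and 95% confidence, the test error pins
# down the true error to within about 1.4 percentage points.
gap = hoeffding_gap(10_000, 0.05)
```

Bounds of this type apply to a single pre-chosen model; characterizing the error of a model *selected by training*, which is the harder question the surveyed theory addresses, requires uniform-convergence, stability, or PAC-Bayes arguments instead.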