
    When is a bottleneck a bottleneck?

    Bottlenecks, i.e. local reductions of capacity, are one of the most relevant scenarios in traffic systems. The asymmetric simple exclusion process (ASEP) with a defect is a minimal model for such a bottleneck scenario. One crucial question is "What is the critical strength of the defect that is required to create global effects, i.e. traffic jams localized at the defect position?" Intuitively one would expect that even an arbitrarily small bottleneck strength leads to global effects in the system, e.g. a reduction of the maximal current. It therefore came as a surprise when, based on computer simulations, it was claimed that the system's reaction depends on the defect strength in a non-continuous way and that weak defects have no global influence on the system. Here we reconcile intuition and simulations by showing that the critical defect strength is indeed zero. We discuss the implications for the analysis of empirical and numerical data.
    Comment: 8 pages, to appear in the proceedings of Traffic and Granular Flow '1
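    As a rough illustration of the bottleneck scenario described above, here is a minimal Monte Carlo sketch of a TASEP on a ring with a single slow bond. The lattice size, density, defect rate, and the absence of an equilibration phase are illustrative choices for a sketch, not the authors' actual simulation setup.

```python
import random

def tasep_ring_current(L=200, density=0.5, defect_rate=0.95,
                       defect_site=0, sweeps=20000, seed=1):
    """Random-sequential TASEP on a ring with one slow bond.

    Particles hop to the right with rate 1, except across the bond
    leaving `defect_site`, where the rate is `defect_rate` <= 1.
    Returns the average current per bond (no equilibration phase,
    for brevity).
    """
    rng = random.Random(seed)
    occupied = [False] * L
    for i in rng.sample(range(L), int(L * density)):
        occupied[i] = True
    hops, attempts = 0, sweeps * L
    for _ in range(attempts):
        i = rng.randrange(L)
        j = (i + 1) % L
        if occupied[i] and not occupied[j]:
            rate = defect_rate if i == defect_site else 1.0
            if rng.random() < rate:
                occupied[i], occupied[j] = False, True
                hops += 1
    return hops / attempts

# Compare the homogeneous ring with a weak bottleneck; per the abstract's
# claim, even a weak defect should depress the maximal current.
print(tasep_ring_current(defect_rate=1.0))   # homogeneous reference
print(tasep_ring_current(defect_rate=0.95))  # weak defect
```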

    Nonlinear Information Bottleneck

    Information bottleneck (IB) is a technique for extracting the information in one random variable X that is relevant for predicting another random variable Y. IB works by encoding X in a compressed "bottleneck" random variable M from which Y can be accurately decoded. However, finding the optimal bottleneck variable involves a difficult optimization problem, which until recently has been considered for only two limited cases: discrete X and Y with small state spaces, and continuous X and Y with a Gaussian joint distribution (in which case the optimal encoding and decoding maps are linear). We propose a method for performing IB on arbitrarily distributed discrete and/or continuous X and Y, while allowing for nonlinear encoding and decoding maps. Our approach relies on a novel non-parametric upper bound for mutual information. We describe how to implement our method using neural networks. We then show that it achieves better performance than the recently proposed "variational IB" method on several real-world datasets.
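    The PyTorch sketch below shows the general shape of such a method: a deterministic encoder plus fixed Gaussian noise forms the bottleneck variable M, the decoder cross-entropy bounds I(M;Y) up to a constant, and a pairwise kernel term stands in for a non-parametric upper bound on I(X;M). The network sizes, the noise model, and the exact form of the compression term are assumptions for illustration, not the paper's estimator.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonlinearIB(nn.Module):
    """Encoder f(x) + fixed Gaussian noise -> bottleneck M -> decoder."""
    def __init__(self, x_dim, n_classes, m_dim=2, noise_std=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                     nn.Linear(128, m_dim))
        self.decoder = nn.Sequential(nn.Linear(m_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_classes))
        self.noise_std = noise_std

    def forward(self, x):
        mu = self.encoder(x)                            # deterministic map
        m = mu + self.noise_std * torch.randn_like(mu)  # bottleneck sample
        return mu, self.decoder(m)

def ib_loss(model, x, y, beta=0.1):
    mu, logits = model(x)
    # Cross-entropy: up to the constant H(Y), its negative lower-bounds
    # I(M;Y) when the decoder is read as a variational distribution.
    pred = F.cross_entropy(logits, y)
    # Illustrative kernel term: log-mean-exp of the pairwise KL between
    # the Gaussian encoder components, a stand-in for a non-parametric
    # upper bound on I(X;M) (NOT the paper's exact estimator).
    d2 = (mu.unsqueeze(1) - mu.unsqueeze(0)).pow(2).sum(-1)
    kl_ij = d2 / (2 * model.noise_std ** 2)
    compress = (math.log(x.shape[0])
                - torch.logsumexp(-kl_ij, dim=1)).mean()
    return pred + beta * compress
```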

    Closed queueing networks under congestion: non-bottleneck independence and bottleneck convergence

    We analyze the behavior of closed product-form queueing networks when the number of customers grows to infinity while remaining in fixed proportion on each route (or class). First, we focus on the stationary behavior and prove the conjecture that the stationary distribution at non-bottleneck queues converges weakly to the stationary distribution of an ergodic, open product-form queueing network. This open network is obtained by replacing the bottleneck queues with per-route Poissonian sources whose rates are determined by the solution of a strictly concave optimization problem. Then, we focus on the transient behavior of the network and use fluid limits to prove that the amount of fluid, or customers, on each route eventually concentrates on the bottleneck queues only, and that the long-term proportions of fluid in each route and in each queue solve the dual of the concave optimization problem that determines the throughputs of the aforementioned open network.
    Comment: 22 pages
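    A standard way to see this bottleneck saturation numerically is exact Mean Value Analysis for a single-class closed network. The sketch below is the textbook MVA recursion, not the paper's multi-route analysis, and the rates are illustrative: as the population grows, throughput approaches min_i mu[i]/v[i] and the customers concentrate at the bottleneck queue.

```python
def mva_throughput(mu, v, N):
    """Exact Mean Value Analysis for a single-class closed network of
    FCFS exponential queues with service rates mu[i] and visit ratios
    v[i]. Returns (throughput, mean queue lengths) at population N."""
    M = len(mu)
    q = [0.0] * M                       # queue lengths at population n-1
    for n in range(1, N + 1):
        # Arrival theorem: an arriving customer sees the network in the
        # steady state it has with n-1 customers.
        r = [v[i] * (1.0 + q[i]) / mu[i] for i in range(M)]
        x = n / sum(r)                  # network throughput
        q = [x * r[i] for i in range(M)]
    return x, q

# Toy network: queue 0 is the bottleneck (smallest mu[i]/v[i]).
x, q = mva_throughput(mu=[1.0, 2.0, 3.0], v=[1.0, 1.0, 1.0], N=500)
print(x)  # approaches min_i mu[i]/v[i] = 1.0
print(q)  # almost all customers pile up at queue 0
```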

    The information bottleneck method

    We define the relevant information in a signal $x \in X$ as the information that this signal provides about another signal $y \in Y$. Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal $x$ requires more than just predicting $y$; it also requires specifying which features of $X$ play a role in the prediction. We formalize this problem as that of finding a short code for $X$ that preserves the maximum information about $Y$. That is, we squeeze the information that $X$ provides about $Y$ through a "bottleneck" formed by a limited set of codewords $\tilde{X}$. This constrained optimization problem can be seen as a generalization of rate-distortion theory in which the distortion measure $d(x, \tilde{x})$ emerges from the joint statistics of $X$ and $Y$. This approach yields an exact set of self-consistent equations for the coding rules $X \to \tilde{X}$ and $\tilde{X} \to Y$. Solutions to these equations can be found by a convergent re-estimation method that generalizes the Blahut-Arimoto algorithm. Our variational principle provides a surprisingly rich framework for discussing a variety of problems in signal processing and learning, as will be described in detail elsewhere.
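    For discrete X and Y the self-consistent equations can be iterated directly, Blahut-Arimoto style. The sketch below implements that re-estimation loop; the random initialization, smoothing constant, and fixed iteration count are illustrative choices rather than part of the method.

```python
import numpy as np

def ib_self_consistent(p_xy, n_codewords, beta, n_iter=300, seed=0):
    """Iterate the IB self-consistent equations for discrete X, Y:
        p(t|x) ~ p(t) * exp(-beta * KL(p(y|x) || p(y|t)))
    with p(t) and p(y|t) re-estimated from p(t|x) each round.
    p_xy is the joint distribution, shape (|X|, |Y|); returns p(t|x)."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)
    p_y_x = p_xy / p_x[:, None]                       # p(y|x)
    p_t_x = rng.random((len(p_x), n_codewords))       # random soft init
    p_t_x /= p_t_x.sum(axis=1, keepdims=True)
    eps = 1e-12
    for _ in range(n_iter):
        p_t = p_t_x.T @ p_x                           # p(t)
        # decoder: p(y|t) = sum_x p(t|x) p(x) p(y|x) / p(t)
        p_y_t = (p_t_x * p_x[:, None]).T @ p_y_x / p_t[:, None]
        # KL(p(y|x) || p(y|t)) for every pair (x, t)
        log_ratio = (np.log(p_y_x[:, None, :] + eps)
                     - np.log(p_y_t[None, :, :] + eps))
        kl = (p_y_x[:, None, :] * log_ratio).sum(axis=2)
        # encoder update: p(t|x) proportional to p(t) exp(-beta * KL)
        logits = np.log(p_t + eps)[None, :] - beta * kl
        logits -= logits.max(axis=1, keepdims=True)
        p_t_x = np.exp(logits)
        p_t_x /= p_t_x.sum(axis=1, keepdims=True)
    return p_t_x

# Example: compress 8 x-values into 3 codewords at trade-off beta = 5.
rng = np.random.default_rng(1)
p = rng.random((8, 4)); p /= p.sum()
print(ib_self_consistent(p, n_codewords=3, beta=5.0).round(2))
```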