
    Spatial coherence and stability in a disordered organic polariton condensate

    Although only a handful of organic materials have shown polariton condensation, their study is rapidly becoming more accessible. The spontaneous appearance of long-range spatial coherence is often recognized as a defining feature of such condensates. In this work, we study the emergence of spatial coherence in an organic microcavity and demonstrate a number of unique features stemming from the peculiarities of this material set. Despite its disordered nature, we find that correlations extend over the entire spot size, and we measure g^(1)(r,r′) values of nearly unity at short distances and of 50% for points separated by nearly 10 μm. We show that for large spots, strong shot-to-shot fluctuations emerge as varying phase gradients and defects, including the spontaneous formation of vortices. These are consistent with the presence of modulation instabilities. Furthermore, we find that measurements with flat-top spots are significantly influenced by disorder and can, in some cases, lead to the formation of mutually incoherent localized condensates. Comment: Revised version.
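
    For reference, the first-order correlation function g^(1)(r,r′) quoted above is not defined in the abstract; the standard normalized form, written in terms of the emitted field E(r), is:

```latex
% Standard normalized first-order spatial coherence function;
% E(r) is the emitted (condensate) field and <...> a time/ensemble average.
g^{(1)}(\mathbf{r},\mathbf{r}') =
  \frac{\langle E^{*}(\mathbf{r})\, E(\mathbf{r}')\rangle}
       {\sqrt{\langle |E(\mathbf{r})|^{2}\rangle\,\langle |E(\mathbf{r}')|^{2}\rangle}}
```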

    Approximating Nash Equilibria in Tree Polymatrix Games

    We develop a quasi-polynomial time Las Vegas algorithm for approximating Nash equilibria in polymatrix games over trees, under a mild renormalizing assumption. Our result, in particular, leads to an expected polynomial-time algorithm for computing approximate Nash equilibria of tree polymatrix games in which the number of actions per player is a fixed constant. Further, for trees with constant degree, the running time of the algorithm matches the best known upper bound for approximating Nash equilibria in bimatrix games (Lipton, Markakis, and Mehta 2003). Notably, this work closely complements the hardness result of Rubinstein (2015), which establishes the inapproximability of Nash equilibria in polymatrix games over constant-degree bipartite graphs with two actions per player.
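
    As background only (this is the definition of a polymatrix game, not the paper's algorithm), a player's payoff in a tree polymatrix game is the sum of bimatrix payoffs against its tree neighbours; a minimal Python sketch with hypothetical names:

```python
# Illustrative only: the payoff structure that defines a (tree) polymatrix game,
# not the approximation algorithm from the paper.
import numpy as np

def expected_payoff(i, strategies, tree_edges, payoff_matrices):
    """Expected payoff of player i under mixed strategies (one per player).

    tree_edges: set of frozensets {i, j} forming a tree.
    payoff_matrices[(i, j)]: matrix whose (a, b) entry is i's payoff when
    i plays action a and neighbour j plays action b.
    """
    x_i = strategies[i]
    total = 0.0
    for edge in tree_edges:
        if i in edge:
            (j,) = edge - {i}
            total += x_i @ payoff_matrices[(i, j)] @ strategies[j]
    return total

# Tiny example: a path 0-1-2 with 2 actions per player and uniform strategies.
edges = {frozenset({0, 1}), frozenset({1, 2})}
M = {(i, j): np.random.rand(2, 2) for e in edges for i in e for j in e - {i}}
strats = {p: np.array([0.5, 0.5]) for p in range(3)}
print(expected_payoff(1, strats, edges, M))
```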

    Settling Some Open Problems on 2-Player Symmetric Nash Equilibria

    Over the years, researchers have studied the complexity of several decision versions of Nash equilibrium in (symmetric) two-player games (bimatrix games). To the best of our knowledge, the last remaining open problem of this sort, stated by Papadimitriou in 2007, is the following: find a non-symmetric Nash equilibrium (NE) in a symmetric game. We show that this problem is NP-complete and that the problem of counting the number of non-symmetric NE in a symmetric game is #P-complete. In 2005, Kannan and Theobald defined the "rank of a bimatrix game" represented by matrices (A, B) to be rank(A+B) and asked whether a NE can be computed in rank 1 games in polynomial time. Observe that the rank 0 case is precisely the zero-sum case, for which a polynomial-time algorithm follows from von Neumann's reduction of such games to linear programming. In 2011, Adsul et al. obtained an algorithm for rank 1 games; however, it does not solve the case of symmetric rank 1 games. We resolve this problem.
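
    To make the rank 0 observation concrete, here is a minimal sketch (not from the paper) of the reduction it cites: computing a maximin strategy of a zero-sum game, i.e. a bimatrix game (A, -A), by linear programming.

```python
# Zero-sum (rank 0) case: the row player's maximin strategy via an LP.
import numpy as np
from scipy.optimize import linprog

def zero_sum_row_strategy(A):
    """Maximin mixed strategy and value for the row player of the game (A, -A)."""
    m, n = A.shape
    # Variables: x_1..x_m (row mixed strategy) and v (game value). Minimize -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # For every column j: v - sum_i A[i, j] * x_i <= 0, i.e. v <= (A^T x)_j.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one; v is unbounded.
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

x, v = zero_sum_row_strategy(np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]]))
print(x, v)  # rock-paper-scissors: uniform strategy, value 0
```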

    How Good are Low-Rank Approximations in Gaussian Process Regression?

    We provide guarantees for approximate Gaussian Process (GP) regression resulting from two common low-rank kernel approximations: one based on random Fourier features, and one based on truncating the kernel's Mercer expansion. In particular, we bound the Kullback–Leibler divergence between an exact GP and one resulting from either of the aforementioned low-rank approximations to its kernel, as well as between their corresponding predictive densities, and we also bound the error between the predictive mean vectors and between the predictive covariance matrices computed using the exact versus the approximate GP. We provide experiments on both simulated data and standard benchmarks to evaluate the effectiveness of our theoretical bounds.
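
    As a concrete illustration of the first approximation mentioned (a standard random-Fourier-features construction, not the authors' code), the following sketch builds features whose inner products approximate an RBF kernel:

```python
# Random Fourier features for k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)).
import numpy as np

def rff_features(X, num_features, lengthscale, rng):
    """Map X (n, d) to features Z (n, D) with Z @ Z.T ≈ the RBF kernel matrix."""
    n, d = X.shape
    W = rng.normal(scale=1.0 / lengthscale, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Z = rff_features(X, num_features=2000, lengthscale=1.0, rng=rng)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dists / 2.0)
print(np.max(np.abs(Z @ Z.T - K_exact)))  # small for large num_features
```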

    Scalable Gaussian Processes, with Guarantees: Kernel Approximations and Deep Feature Extraction

    We provide approximation guarantees for a linear-time inferential framework for Gaussian processes, using two low-rank kernel approximations based on random Fourier features and truncation of Mercer expansions. In particular, we bound the Kullback–Leibler divergence between the idealized Gaussian process and the one resulting from a low-rank approximation to its kernel. Additionally, we present strong evidence that these two approximations, enhanced by an initial automatic feature extraction through deep neural networks, outperform a broad range of state-of-the-art methods in terms of time efficiency, negative log-predictive density, and root mean squared error.
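
    A sketch of why such feature-map approximations give linear-time (in the number of training points) inference, using the standard weight-space view of GP regression; the function names are illustrative, not from the paper:

```python
# With an explicit D-dimensional feature map, GP-style prediction reduces to
# Bayesian linear regression on the features: O(n * D^2) instead of O(n^3).
import numpy as np

def approx_gp_predict(Z_train, y_train, Z_test, noise_var):
    """Predictive mean/variance for y ~ N(Z w, noise_var I) with prior w ~ N(0, I)."""
    D = Z_train.shape[1]
    A = Z_train.T @ Z_train + noise_var * np.eye(D)   # (D, D) posterior precision (scaled)
    w_mean = np.linalg.solve(A, Z_train.T @ y_train)  # posterior mean of the weights
    mean = Z_test @ w_mean
    # Posterior covariance of w is noise_var * A^{-1}; add observation noise.
    var = noise_var * np.einsum("ij,ij->i", Z_test, np.linalg.solve(A, Z_test.T).T)
    return mean, var + noise_var

# Usage with any feature map, e.g. the rff_features sketch above:
# Z_tr = rff_features(X_tr, 500, 1.0, rng); Z_te = rff_features(X_te, 500, 1.0, rng)
# mu, s2 = approx_gp_predict(Z_tr, y_tr, Z_te, noise_var=0.1)
```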

    Learning k-Modal Distributions via Testing

    A k-modal probability distribution over the discrete domain {1,...,n} is one whose histogram has at most k "peaks" and "valleys." Such distributions are natural generalizations of monotone (k = 0) and unimodal (k = 1) probability distributions, which have been intensively studied in probability theory and statistics. In this paper we consider the problem of learning (i.e., performing density estimation of) an unknown k-modal distribution with respect to the L_1 distance. The learning algorithm is given access to independent samples drawn from an unknown k-modal distribution p, and it must output a hypothesis distribution p̂ such that with high probability the total variation distance between p and p̂ is at most ε. Our main goal is to obtain computationally efficient algorithms for this problem that use (close to) an information-theoretically optimal number of samples. We give an efficient algorithm for this problem that runs in time poly(k, log n, 1/ε). For k ≤ Õ(log n), the number of samples used by our algorithm is very close (within an Õ(log(1/ε)) factor) to being information-theoretically optimal. Prior to this work computationally efficient algorithms were known only for the cases k = 0, 1 [Bir87b, Bir97]. A novel feature of our approach is that our learning algorithm crucially uses a new algorithm for property testing of probability distributions as a key subroutine. The learning algorithm uses the property tester to efficiently decompose the k-modal distribution into k (near-)monotone distributions, which are easier to learn. Comment: 28 pages, full version of SODA'12 paper, to appear in Theory of Computing.
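
    Purely to illustrate the definition above (this is not the learning algorithm), the following sketch counts the peaks and valleys of an explicitly given histogram:

```python
# A histogram is k-modal if it has at most k "peaks" and "valleys", i.e. at most
# k changes of direction in its successive differences.
def num_direction_changes(hist):
    """Count sign changes in the successive differences of hist.

    Monotone histograms give 0 (k = 0), unimodal ones at most 1 (k = 1).
    """
    changes, last_sign = 0, 0
    for a, b in zip(hist, hist[1:]):
        diff = b - a
        if diff == 0:
            continue
        sign = 1 if diff > 0 else -1
        if last_sign != 0 and sign != last_sign:
            changes += 1
        last_sign = sign
    return changes

print(num_direction_changes([1, 2, 3, 2, 1]))        # 1 -> unimodal
print(num_direction_changes([1, 3, 1, 4, 1, 5, 1]))  # 5 -> 5-modal
```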

    Learning k-Modal Distributions via Testing

    A k-modal probability distribution over the domain {1,..., n} is one whose histogram has at most k "peaks" and "valleys." Such distributions are natural generalizations of monotone (k = 0) and unimodal (k = 1) probability distributions, which have been intensively studied in probability theory and statistics. In this paper we consider the problem of learning an unknown k-modal distribution. The learning algorithm is given access to independent samples drawn from the k-modal distribution p, and must output a hypothesis distribution p̂ such that with high probability the total variation distance between p and p̂ is at most ε. We give an efficient algorithm for this problem that runs in time poly(k, log(n), 1/ε). For k ≤ Õ(√log n), the number of samples used by our algorithm is very close (within an Õ(log(1/ε)) factor) to being information-theoretically optimal. Prior to this work computationally efficient algorithms were known only for the cases k = 0, 1 [Bir87b, Bir97]. A novel feature of our approach is that our learning algorithm crucially uses a new property testing algorithm as a key subroutine. The learning algorithm uses the property tester to efficiently decompose the k-modal distribution into k (near)-monotone distributions, which are easier to learn.
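
    The decomposition mentioned at the end can be pictured as follows; this sketch splits an explicitly known k-modal histogram at its turning points into monotone runs, whereas the paper's algorithm must achieve this from samples via property testing:

```python
# Splitting a known histogram into maximal monotone runs (adjacent runs share
# their turning point). Illustration only; not the sample-based algorithm.
def monotone_pieces(hist):
    """Return maximal non-decreasing/non-increasing runs of hist."""
    pieces, start, last_sign = [], 0, 0
    for i in range(1, len(hist)):
        diff = hist[i] - hist[i - 1]
        sign = 0 if diff == 0 else (1 if diff > 0 else -1)
        if sign != 0 and last_sign != 0 and sign != last_sign:
            pieces.append(hist[start:i])   # close the current monotone run
            start = i - 1                  # new run starts at the turning point
            last_sign = sign
        elif sign != 0:
            last_sign = sign
    pieces.append(hist[start:])
    return pieces

print(monotone_pieces([1, 2, 3, 2, 1, 2, 4]))  # [[1, 2, 3], [3, 2, 1], [1, 2, 4]]
```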

    Braess's Paradox in Wireless Networks: The Danger of Improved Technology

    When comparing new wireless technologies, it is common to consider the effect that they have on the capacity of the network (defined as the maximum number of simultaneously satisfiable links). For example, it has been shown that giving receivers the ability to do interference cancellation, or allowing transmitters to use power control, never decreases the capacity and can in certain cases increase it by Ω(log(Δ · P_max)), where Δ is the ratio of the longest link length to the smallest transmitter-receiver distance and P_max is the maximum transmission power. But there is no reason to expect the optimal capacity to be realized in practice, particularly since maximizing the capacity is known to be NP-hard. In reality, we would expect links to behave as self-interested agents, and thus when introducing a new technology it makes more sense to compare the values reached at game-theoretic equilibria than the optimum values. In this paper we initiate this line of work by comparing various notions of equilibria (particularly Nash equilibria and no-regret behavior) when using a supposedly "better" technology. We show a version of Braess's Paradox for all of them: in certain networks, upgrading technology can actually make the equilibria worse, despite an increase in the capacity. We construct instances where this decrease is a constant factor for power control, interference cancellation, and improvements in the SINR threshold (β), and is Ω(log Δ) when power control is combined with interference cancellation. However, we show that these examples are basically tight: the decrease is at most O(1) for power control, interference cancellation, and improved β, and is at most O(log Δ) when power control is combined with interference cancellation.
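
    For background, here is a sketch of the feasibility notion behind "capacity" above, in the standard path-loss SINR model (the parameter values and function names are illustrative, not taken from the paper):

```python
# A set of links is simultaneously satisfiable if every receiver's SINR clears
# the threshold beta under the chosen transmit powers.
import numpy as np

def all_links_feasible(tx, rx, power, beta, alpha=3.0, noise=1e-9):
    """tx, rx: (m, 2) sender/receiver positions; power: (m,) transmit powers."""
    m = len(power)
    gain = lambda a, b: np.linalg.norm(a - b) ** (-alpha)   # path-loss gain
    for i in range(m):
        signal = power[i] * gain(tx[i], rx[i])
        interference = sum(power[j] * gain(tx[j], rx[i]) for j in range(m) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

tx = np.array([[0.0, 0.0], [10.0, 0.0]])
rx = np.array([[1.0, 0.0], [11.0, 0.0]])
print(all_links_feasible(tx, rx, power=np.array([1.0, 1.0]), beta=2.0))
```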

    Testing k-Modal Distributions: Optimal Algorithms via Reductions

    We give highly efficient algorithms, and almost matching lower bounds, for a range of basic statistical problems that involve testing and estimating the L_1 (total variation) distance between two k-modal distributions p and q over the discrete domain {1, …, n}. More precisely, we consider the following four problems: given sample access to an unknown k-modal distribution p, Testing identity to a known or unknown distribution: 1. Determine whether p = q (for an explicitly given k-modal distribution q) versus p is ε-far from q; 2. Determine whether p = q (where q is available via sample access) versus p is ε-far from q; Estimating L_1 distance ("tolerant testing") against a known or unknown distribution: 3. Approximate d_TV(p, q) to within additive ε where q is an explicitly given k-modal distribution; 4. Approximate d_TV(p, q) to within additive ε where q is available via sample access. For each of these four problems we give sub-logarithmic sample algorithms, and show that our algorithms have optimal sample complexity up to additive poly(k) and multiplicative polylog(log n) + polylog(k) factors. Our algorithms significantly improve the previous results of [BKR04], which were for testing identity of distributions (items (1) and (2) above) in the special cases k = 0 (monotone distributions) and k = 1 (unimodal distributions) and required O((log n)^3) samples. As our main conceptual contribution, we introduce a new reduction-based approach for distribution-testing problems that lets us obtain all the above results in a unified way. Roughly speaking, this approach enables us to transform various distribution testing problems for k-modal distributions over {1, …, n} to the corresponding distribution testing problems for unrestricted distributions over a much smaller domain {1, …, ℓ} where ℓ = O(k log n).
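
    For reference, the L_1 (total variation) distance that the four problems above test or estimate; this is just the textbook definition, not the testing algorithm:

```python
# d_TV(p, q) = (1/2) * sum_i |p_i - q_i| for distributions p, q on {1, ..., n}.
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

p = np.array([0.5, 0.3, 0.2, 0.0])
q = np.array([0.25, 0.25, 0.25, 0.25])
print(total_variation(p, q))  # 0.5 * (0.25 + 0.05 + 0.05 + 0.25) = 0.3
```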