
    Gravitational Wave Oscillations in Bigravity

    We derive consistent equations for gravitational wave oscillations in bigravity. In this framework a second dynamical tensor field is introduced in addition to General Relativity and coupled such that one massless and one massive linear combination arise. Only one of the two tensors is the physical metric coupling to matter, and thus the basis in which gravitational waves propagate differs from the basis in which the wave is produced and detected. Therefore, one should expect, in analogy to neutrino oscillations, to observe an oscillatory behavior. We show how this behavior arises explicitly, discuss phenomenological implications, and present new limits on the graviton parameter space in bigravity. Comment: 6 pages, 3 figures, journal version
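The neutrino analogy invoked in the abstract can be made concrete with the standard two-state mixing formula (a sketch only: this is the textbook two-flavor oscillation probability with the squared-mass splitting set by the single massive graviton, since the other eigenstate is massless; the bigravity-specific coefficients are derived in the paper itself and are not reproduced here). Here θ is the graviton mixing angle, m_g the mass of the massive eigenstate, E the wave energy, and L the propagation distance:

```latex
% Two-state oscillation probability; for one massless and one massive
% eigenstate the splitting \Delta m^2 reduces to m_g^2:
P_{\mathrm{osc}}(L) = \sin^2(2\theta)\,\sin^2\!\left(\frac{m_g^2 L}{4E}\right)
```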

    Decoherence of Gravitational Wave Oscillations in Bigravity

    Following up on our recent study, we consider the regime of graviton masses and gravitational wave propagation distances at which decoherence of the wave packets plays a major role for phenomenology. This regime is of particular interest, as it can lead to very striking echo events in the gravitational waves coming from coalescence events. The power of the experimental search in this case lies in the fact that it becomes sensitive to a large range of graviton masses, while not relying on a specific production mechanism. We are thus able to place new relevant limits on the parameter space of the graviton mixing angle. Comment: 7 pages, 2 figures; v2: extended discussion on the importance of the Vainshtein mechanism, content matches published version

    The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime

    We propose a novel technique for analyzing adaptive sampling called the Simulator. Our approach differs from existing methods by considering not how much information could be gathered by any fixed sampling strategy, but how difficult it is to distinguish a good sampling strategy from a bad one given the limited amount of data collected up to any given time. This change of perspective allows us to match the strength of both Fano and change-of-measure techniques, without succumbing to the limitations of either method. For concreteness, we apply our techniques to a structured multi-armed bandit problem in the fixed-confidence pure exploration setting, where we show that the constraints on the means imply a substantial gap between the moderate-confidence sample complexity and the asymptotic sample complexity as δ → 0 found in the literature. We also prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log factors. Moreover, our lower bounds zero in on the number of times each individual arm needs to be pulled, uncovering new phenomena which are drowned out in the aggregate sample complexity. Our new analysis inspires a simple and near-optimal algorithm for best-arm and top-k identification, the first practical algorithm of its kind for the latter problem which removes extraneous log factors, and outperforms the state of the art in experiments.

    Bayesian Dark Knowledge

    We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or where we need accurate posterior predictive densities, e.g., for applications involving bandits or active learning. One simple approach to this is to use online Monte Carlo methods, such as SGLD (stochastic gradient Langevin dynamics). Unfortunately, such a method needs to store many copies of the parameters (which wastes memory), and needs to make predictions using many versions of the model (which wastes time). We describe a method for "distilling" a Monte Carlo approximation to the posterior predictive density into a more compact form, namely a single deep neural network. We compare to two very recent approaches to Bayesian neural networks, namely an approach based on expectation propagation [Hernandez-Lobato and Adams, 2015] and an approach based on variational Bayes [Blundell et al., 2015]. Our method performs better than both of these, is much simpler to implement, and uses less computation at test time. Comment: final version submitted to NIPS 201
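The SGLD update the abstract refers to is compact enough to sketch directly. This is a minimal illustration of stochastic gradient Langevin dynamics on a toy target, not the paper's distillation method; the function name `sgld_step` and the standard-normal toy posterior are illustrative assumptions:

```python
import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    """One SGLD update: a half-step of (stochastic) gradient ascent on the
    log posterior, plus Gaussian noise whose variance equals the step size."""
    noise = rng.normal(0.0, np.sqrt(step_size), size=np.shape(theta))
    return theta + 0.5 * step_size * grad_log_post(theta) + noise

# Toy usage: sample from a standard-normal posterior, whose log-density
# gradient is simply -theta.
rng = np.random.default_rng(0)
theta = np.array(5.0)  # deliberately poor initialization
samples = []
for t in range(50_000):
    theta = sgld_step(theta, lambda th: -th, 0.01, rng)
    if t > 5_000:  # discard burn-in
        samples.append(float(theta))
```

Averaging model predictions over such a chain of `samples` is exactly the expensive posterior predictive that the paper then distills into a single compact network.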

    Getting Started with Digital Preservation: Initial Steps


    Waiting to Vote : Racial Disparities in Election Day Experiences

    For this report, we analyzed data from two nationwide surveys regarding the 2018 election: the Cooperative Congressional Election Study, a 60,000-person survey on Election Day experiences, and the U.S. Election Assistance Commission's Election Administration and Voting Survey, which asks administrators detailed questions about how they conduct elections. We also interviewed nearly three dozen state and local election administrators. Further, we examined the electoral statutes on the books in every state in the nation to understand the sources of disparate wait times in 2018 and to develop policy recommendations for lawmakers and election officials ahead of 2020. Some previous research has investigated the relationship between wait times and electoral resources, specifically polling places, voting machines, and poll workers. But no prior study has examined the relationship on a nationwide scale.