
    Stop-and-Frisk: A First Look at Six Months of Data on Stop-and-Frisk Practices in Newark

    This study constitutes the first public analysis of stop-and-frisk practices in Newark, comparing Newark to its close neighbor to the east, New York City. While six months of stop-and-frisk data are insufficient to draw definitive conclusions about the Newark Police Department's stop-and-frisk practices, the ACLU-NJ believes that the initial concerns raised by these data are strong enough to warrant corrective action now. The study has three primary findings: 1) the volume of stop-and-frisks is high; 2) Black Newarkers bear a disproportionate brunt of stop-and-frisks; 3) the majority of people stopped are innocent. The study concludes with a series of recommendations for greater compliance with the Newark Police Department's Transparency Policy and for ensuring that stop-and-frisk abuses do not take place. An appendix provides additional data on stop-and-frisk activities in Newark, broken down by precinct, age, and sex.

    Exact and Approximate Determinization of Discounted-Sum Automata

    A discounted-sum automaton (NDA) is a nondeterministic finite automaton with edge weights, valuing a run by the discounted sum of the visited edge weights. More precisely, the weight in the i-th position of the run is divided by λ^i, where the discount factor λ is a fixed rational number greater than 1. The value of a word is the minimal value of the automaton's runs on it. Discounted summation is a common and useful measuring scheme, especially for infinite sequences, reflecting the assumption that earlier weights are more important than later weights. Unfortunately, determinization of NDAs, which is often essential in formal verification, is in general not possible. We provide positive news, showing that every NDA with an integral discount factor is determinizable. We complete the picture by proving that the integers characterize exactly the discount factors that guarantee determinizability: for every nonintegral rational discount factor λ, there is a nondeterminizable λ-NDA. We also prove that the class of NDAs with integral discount factors enjoys closure under the algebraic operations min, max, addition, and subtraction, which is not the case for general NDAs nor for deterministic NDAs. For general NDAs, we look into approximate determinization, which is always possible because the influence of a word's suffix decays. We show that the naive approach, of unfolding the automaton's computations up to a sufficient level, is doubly exponential in the discount factor. We provide an alternative construction for approximate determinization, which is singly exponential in the discount factor, in the precision, and in the number of states. We also prove matching lower bounds, showing that the exponential dependency on each of these three parameters cannot be avoided. All our results hold equally for automata over finite words and for automata over infinite words.
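    To make the valuation scheme concrete, the following is a minimal Python sketch of how an NDA values a finite word: each run accumulates its edge weights divided by λ^i, and the word's value is the minimum over all runs. The automaton, its states, and its weights below are invented for illustration and are not taken from the paper; only the valuation (with positions indexed from 0) follows the definition above.

```python
# Illustrative sketch, not the paper's construction: discounted-sum
# valuation of a finite word by a nondeterministic automaton.

LAMBDA = 2  # an integral discount factor, the determinizable case

# Nondeterministic transitions: (state, letter) -> [(next_state, weight), ...]
transitions = {
    ("q0", "a"): [("q0", 1), ("q1", 0)],
    ("q0", "b"): [("q0", 2)],
    ("q1", "a"): [("q1", 3)],
    ("q1", "b"): [("q0", 0)],
}

def word_value(word, initial="q0"):
    """Minimal discounted sum over all runs of the automaton on `word`."""
    best = float("inf")  # the value is infinite if no run exists

    def explore(state, pos, acc):
        nonlocal best
        if pos == len(word):
            best = min(best, acc)
            return
        for nxt, weight in transitions.get((state, word[pos]), []):
            # the weight in position pos is divided by LAMBDA**pos
            explore(nxt, pos + 1, acc + weight / LAMBDA ** pos)

    explore(initial, 0, 0.0)
    return best

print(word_value("aab"))  # 1.0: the minimum over all runs of sum_i w_i / LAMBDA**i
```

    Exact determinization, as the abstract notes, is possible precisely because LAMBDA here is an integer; for approximate determinization, the decay of LAMBDA**-i is what bounds how much a long suffix can change the value.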

    The Effect of Benefits Level on Take-up Rates: Evidence from a Natural Experiment

    This paper exploits a quasi-natural experiment to study the effect of the level of social benefits on take-up rates. We find that households that are eligible for double benefits (twins) have a much higher take-up rate, up to double that of a control group of households. Our estimated effect of the benefits level is much larger than standard cross-section estimates. This finding is less exposed to the selection bias that might plague much of the previous research on the link between benefits level and take-up, and it provides strong empirical support for the level of benefits as a key factor in determining take-up rates.
    Keywords: take-up, social benefits

    Low Take-up Rates: The Role of Information

    This paper exploits a quasi-natural experiment to study the role of information in determining take-up patterns of social benefits in a non-stigma environment. We find that the take-up rate of households that have an incentive to search for information over a longer period of time is between 8 and 13 percentage points higher than that of a control group of households. This result is robust to the inclusion of various household characteristics. Our finding provides strong empirical support for information as an important explanation for low take-up rates.
    Keywords: take-up, social benefits, information cost

    Finding Skewed Subcubes Under a Distribution

    Say that we are given samples from a distribution 𝒟 over an n-dimensional space. We expect or desire 𝒟 to behave like a product distribution (or a k-wise independent distribution over its marginals for small k). We propose the problem of enumerating/list-decoding all large subcubes where the distribution 𝒟 deviates markedly from what we expect; we refer to such subcubes as skewed subcubes. Skewed subcubes are certificates of dependencies between small subsets of variables in 𝒟. We motivate this problem by showing that it arises naturally in the context of algorithmic fairness and anomaly detection. In this work we focus on the special but important case where the space is the Boolean hypercube and the expected marginals are uniform. We show that the obvious definition of skewed subcubes can lead to intractable list sizes, and we propose a better definition of a minimal skewed subcube: a subcube whose skew cannot be attributed to any larger subcube that contains it. Our main technical contribution is a list-size bound for this definition and an algorithm to efficiently find all such subcubes. Both the bound and the algorithm rely on Fourier-analytic techniques, especially the powerful hypercontractive inequality. On the lower-bound side, we show that finding skewed subcubes is as hard as the sparse noisy parity problem, and hence our algorithms cannot be improved on substantially without a breakthrough on this problem, which is believed to be intractable. Motivated by this, we study alternate models allowing query access to 𝒟, where finding skewed subcubes might be easier.
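    The following is a minimal Python sketch of what the "skew" of a subcube measures in the Boolean-hypercube, uniform-marginals setting described above; it is my own illustration, not the paper's Fourier-analytic algorithm. A subcube that fixes k coordinates should receive 2^-k of the mass under a product of uniform bits, and its skew is the deviation of its empirical mass from that baseline. The planted dependency below is an invented example.

```python
import random

def subcube_mass(samples, fixed):
    """fixed maps coordinate -> fixed bit, defining a subcube.
    Returns (empirical mass, mass expected under uniform marginals)."""
    hits = sum(all(x[i] == b for i, b in fixed.items()) for x in samples)
    return hits / len(samples), 2 ** -len(fixed)

# Plant a dependency between two variables: coordinate 1 copies coordinate 0,
# so the subcube {x0 = 1, x1 = 1} is a certificate of that dependency.
def biased_sample(n=8):
    x = [random.randint(0, 1) for _ in range(n)]
    x[1] = x[0]
    return x

samples = [biased_sample() for _ in range(20000)]
empirical, expected = subcube_mass(samples, {0: 1, 1: 1})
print(empirical, expected)  # roughly 0.5 vs 0.25: a markedly skewed subcube
```

    Note that the subcube {x0 = 1, x1 = 1} here is also minimal in the paper's sense: its skew is not explained by any larger subcube containing it, since each single coordinate alone has uniform marginals.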