
    How can Francis Bacon help forensic science? The four idols of human biases

    Much debate has focused on whether forensic science is indeed a science. This paper is not aimed at answering, or even contributing to, that question. Rather, I try to find ways to improve forensic science by identifying potential vulnerabilities. To this end I use Francis Bacon's doctrine of idols, which distinguishes between different types of human biases that may prevent scientific and objective inquiry. Bacon's doctrine identifies four sources of such biases: Idola Tribus (of the 'tribe'), Idola Specus (of the 'den' or 'cave'), Idola Fori (of the 'market'), and Idola Theatri (of the 'theatre'). While his 400-year-old doctrine does not, of course, perfectly match our current world view, it still provides a productive framework for examining and cataloguing some of the potential weaknesses and limitations in our current approach to forensic science.

    Bergman interpolation on finite Riemann surfaces. Part I: Asymptotically Flat Case

    We study the Bergman space interpolation problem for open Riemann surfaces obtained from a compact Riemann surface by removing a finite number of points. We equip such a surface with what we call an asymptotically flat conformal metric, i.e., a complete metric with zero curvature outside a compact subset. We then establish necessary and sufficient conditions for interpolation in weighted Bergman spaces over asymptotically flat Riemann surfaces.
    Comment: The main result has been corrected: sequences of density < 1 are still interpolating, but the density of an interpolation sequence is only shown to be at most 1. The corrected result is sharp, by work of Borichev-Lyubarskii. Also added a motivating section on Shapiro-Shields interpolation. Otherwise typos and minor errors corrected. To appear in Journal d'Analyse Mathématique
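    In the upper-density language usual for such interpolation problems (the notation $D^+(\Gamma)$ is assumed here, not taken from the abstract), the corrected result described in the comment can be summarized as:

```latex
% D^+(\Gamma): upper asymptotic density of the sequence \Gamma
% (notation assumed; sharpness is by the cited work of Borichev--Lyubarskii)
D^+(\Gamma) < 1 \;\Longrightarrow\; \Gamma \text{ is interpolating},
\qquad
\Gamma \text{ is interpolating} \;\Longrightarrow\; D^+(\Gamma) \le 1.
```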

    An MCMC Approach to Universal Lossy Compression of Analog Sources

    Motivated by the Markov chain Monte Carlo (MCMC) approach to the compression of discrete sources developed by Jalali and Weissman, we propose a lossy compression algorithm for analog sources that relies on a finite reproduction alphabet, which grows with the input length. The algorithm achieves, in an appropriate asymptotic sense, the optimum Shannon theoretic tradeoff between rate and distortion, universally for stationary ergodic continuous amplitude sources. We further propose an MCMC-based algorithm that resorts to a reduced reproduction alphabet when such reduction does not prevent achieving the Shannon limit. The latter algorithm is advantageous due to its reduced complexity and improved rates of convergence when employed on sources with a finite and small optimum reproduction alphabet.
    Comment: 21 pages, submitted for publication
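    The MCMC idea can be sketched in miniature. The toy below is my own construction, not the paper's algorithm: it anneals a reproduction sequence over a small fixed alphabet (rather than one that grows with the input length), trading a zero-order empirical-entropy codelength proxy against squared-error distortion, loosely in the spirit of the Jalali-Weissman sampler for discrete sources.

```python
# Toy sketch (assumption-laden, NOT the paper's algorithm): simulated
# annealing of a reproduction sequence to minimize
#   empirical zero-order entropy (codelength proxy) + slope * distortion.
import math
import random
from collections import Counter

def empirical_entropy(seq):
    """Zero-order empirical entropy in bits per symbol."""
    n = len(seq)
    counts = Counter(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def anneal_compress(x, alphabet, slope=1.0, sweeps=50, seed=0):
    """Return a reproduction sequence over `alphabet` and its final cost."""
    rng = random.Random(seed)
    n = len(x)
    # Initialize with per-sample nearest-neighbour quantization.
    y = [min(alphabet, key=lambda a: (a - xi) ** 2) for xi in x]

    def cost(seq):
        dist = sum((a - b) ** 2 for a, b in zip(seq, x)) / n
        return empirical_entropy(seq) + slope * dist

    c = cost(y)
    for t in range(1, sweeps + 1):
        beta = t  # simple inverse-temperature schedule
        for i in range(n):
            cand = y[:]
            cand[i] = rng.choice(alphabet)
            cc = cost(cand)
            # Metropolis rule: always accept improvements, sometimes worsenings.
            if cc <= c or rng.random() < math.exp(-beta * (cc - c)):
                y, c = cand, cc
    return y, c
```

    The per-site resampling and cooling schedule mirror the generic Gibbs/Metropolis structure; everything else (the cost functional, the fixed binary alphabet) is a placeholder for the paper's more careful constructions.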

    A Universal Parallel Two-Pass MDL Context Tree Compression Algorithm

    Computing problems that handle large amounts of data necessitate the use of lossless data compression for efficient storage and transmission. We present a novel lossless universal data compression algorithm that uses parallel computational units to increase the throughput. The length-N input sequence is partitioned into B blocks. Processing each block independently of the others can accelerate the computation by a factor of B, but degrades the compression quality. Instead, our approach is to first estimate the minimum description length (MDL) context tree source underlying the entire input, and then encode each of the B blocks in parallel based on the MDL source. With this two-pass approach, the compression loss incurred by using more parallel units is insignificant. Our algorithm is work-efficient, i.e., its computational complexity is O(N/B). Its redundancy is approximately B log(N/B) bits above Rissanen's lower bound on universal compression performance, with respect to any context tree source whose maximal depth is at most log(N/B). We improve the compression by using different quantizers for states of the context tree, based on the number of symbols corresponding to those states. Numerical results from a prototype implementation suggest that our algorithm offers a better trade-off between compression and throughput than competing universal data compression algorithms.
    Comment: Accepted to Journal of Selected Topics in Signal Processing special issue on Signal Processing for Big Data (expected publication date June 2015). 10 pages double column, 6 figures, and 2 tables. arXiv admin note: substantial text overlap with arXiv:1405.6322. Version: Mar 2015: Corrected a typo
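    The two-pass structure can be illustrated with a deliberately simplified stand-in: a zero-order frequency model plays the role of the MDL context tree, and the "encoder" just accumulates ideal codelengths (-log2 probabilities) rather than emitting bits. Pass 1 fits one model on the whole input; pass 2 scores the B blocks in parallel under that shared model.

```python
# Toy sketch of the two-pass idea (assumptions: a zero-order model replaces
# the MDL context tree; ideal codelength replaces an actual entropy coder).
import math
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def estimate_model(data):
    """Pass 1: fit one model on the entire input (stand-in for MDL search)."""
    n = len(data)
    return {s: c / n for s, c in Counter(data).items()}

def block_codelength(block, model):
    """Pass 2 worker: ideal codelength of one block under the shared model."""
    return sum(-math.log2(model[s]) for s in block)

def two_pass_codelength(data, B):
    """Partition into B blocks and score them in parallel."""
    model = estimate_model(data)
    size = -(-len(data) // B)  # ceiling division
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=B) as pool:
        return sum(pool.map(lambda b: block_codelength(b, model), blocks))
```

    Because every block is coded with the same globally estimated model, the total codelength here does not depend on B at all, which is the intuition behind the abstract's claim that the compression loss from adding parallel units is insignificant; coding each block with its own locally estimated model would not have this property.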