
    The Principal Principle Implies the Principle of Indifference

    We argue that David Lewis’s principal principle implies a version of the principle of indifference. The same is true for similar principles that need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism. 1 The Argument 2 Some Objections
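    The principle at issue can be made concrete with a few lines of arithmetic. In Lewis's formulation, credence in A conditional on the chance of A being x should equal x; by the law of total probability, the agent's unconditional credence in A is then the credence-weighted average of the possible chances. The chance values and hypothesis weights below are purely illustrative, not from the paper:

    ```python
    # Principal Principle: cr(A | ch(A) = x) = x, for admissible evidence.
    # With a partition of chance hypotheses, total probability gives
    #   cr(A) = sum_i x_i * cr(ch(A) = x_i),
    # i.e. credence in A equals the expected chance of A.
    chance_values = [0.3, 0.7]     # possible objective chances of A (illustrative)
    credence_in_hyp = [0.5, 0.5]   # agent's credence in each chance hypothesis
    cr_A = sum(x * p for x, p in zip(chance_values, credence_in_hyp))
    # cr_A is the credence-weighted average chance, here 0.5
    ```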

    Sacred trust: the voluntary removal and reburial of human remains from a historic cemetery in Louisiana

    Members of the Randolph family, a prominent plantation family that has lived in Louisiana since the late eighteenth century, contacted a team of anthropologists from Louisiana State University to help recover thirteen individuals from St. Mary’s Cemetery in Bayou Goula, Louisiana. The cemetery had been abandoned since 1970, its graves overgrown with weeds and desecrated by vandals. Of the thirteen individuals recovered, three sets of remains did not have associated grave markers. These three were taken back to the lab and analyzed using standard forensic procedures. Documentary research on the history of the cemetery, the once-associated church, and the Randolph family provided important context for excavation. This thesis presents the project in its entirety with the hope that it will provide a helpful blueprint for both anthropologists and family members who might find themselves involved in the rescue of ancestral remains from historic cemeteries.

    Risk Measures and Upper Probabilities: Coherence and Stratification

    Machine learning typically presupposes classical probability theory, which implies that aggregation is built upon expectation. There are now multiple reasons to look at richer alternatives to classical probability theory as a mathematical foundation for machine learning. We systematically examine a powerful and rich class of alternative aggregation functionals, known variously as spectral risk measures, Choquet integrals, or Lorentz norms. We present a range of characterization results and demonstrate what makes this spectral family so special. By exploiting results from the theory of rearrangement-invariant Banach spaces, we arrive at a natural stratification of all coherent risk measures in terms of the upper probabilities that they induce. We empirically demonstrate how this new approach to uncertainty helps tackle practical machine learning problems.
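    The spectral aggregation functionals the abstract describes admit a very short sketch: sort the losses and take a weighted average, with nonnegative, nondecreasing weights (the "spectrum") that sum to one. The flat spectrum recovers the expectation; concentrating weight on the worst outcomes recovers CVaR. A minimal NumPy illustration with invented loss values:

    ```python
    import numpy as np

    def spectral_risk(losses, spectrum):
        """Spectral risk measure: a weighted average of the sorted losses,
        with nonnegative, nondecreasing weights (the 'spectrum') summing
        to one -- exactly the conditions for coherence."""
        losses = np.sort(np.asarray(losses, dtype=float))   # ascending
        spectrum = np.asarray(spectrum, dtype=float)
        assert np.all(spectrum >= 0) and np.all(np.diff(spectrum) >= 0)
        assert np.isclose(spectrum.sum(), 1.0)
        return float(np.dot(losses, spectrum))

    def cvar_spectrum(n, alpha):
        """Spectrum recovering CVaR_alpha on n equiprobable outcomes:
        uniform weight on the worst (1 - alpha) fraction, zero elsewhere."""
        k = int(np.ceil(n * (1 - alpha)))
        phi = np.zeros(n)
        phi[-k:] = 1.0 / k
        return phi

    losses = [1.0, 2.0, 3.0, 10.0]                       # illustrative losses
    mean_risk = spectral_risk(losses, np.full(4, 0.25))  # flat spectrum = expectation
    cvar_half = spectral_risk(losses, cvar_spectrum(4, 0.5))  # mean of worst half
    ```

    Dropping the monotonicity constraint on the spectrum is what breaks coherence, which is why the family stratifies the coherent risk measures.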

    Tailoring to the Tails: Risk Measures for Fine-Grained Tail Sensitivity

    Expected risk minimization (ERM) is at the core of many machine learning systems. This means that the risk inherent in a loss distribution is summarized using a single number: its average. In this paper, we propose a general approach to construct risk measures which exhibit a desired tail sensitivity and may replace the expectation operator in ERM. Our method relies on the specification of a reference distribution with a desired tail behaviour, which is in one-to-one correspondence with a coherent upper probability. Any risk measure that is compatible with this upper probability displays a tail sensitivity finely tuned to the reference distribution. As a concrete example, we focus on divergence risk measures based on f-divergence ambiguity sets, a widespread tool used to foster distributional robustness of machine learning systems. For instance, we show how ambiguity sets based on the Kullback-Leibler divergence are intricately tied to the class of subexponential random variables. We elaborate the connection between divergence risk measures and rearrangement-invariant Banach norms.
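    One way to see the link between KL ambiguity sets and subexponential tails is the dual form of the KL divergence risk: the worst-case expected loss over {Q : KL(Q||P) <= r} equals inf over t > 0 of t * log E_P[exp(X/t)] + t*r, and the log-moment-generating function appearing here is finite exactly for light (subexponential-type) tails. The sketch below minimises the dual by a crude grid search; the grid and the toy loss values are choices made for illustration, not the paper's method:

    ```python
    import numpy as np

    def kl_divergence_risk(losses, probs, radius, t_grid=None):
        """Worst-case expected loss over the KL ambiguity set
        {Q : KL(Q || P) <= radius}, computed via the dual representation
            sup_Q E_Q[X] = inf_{t>0} t * log E_P[exp(X / t)] + t * radius.
        The infimum is approximated by a grid search over t."""
        losses = np.asarray(losses, dtype=float)
        probs = np.asarray(probs, dtype=float)
        if t_grid is None:
            t_grid = np.logspace(-1, 3, 2000)
        vals = [t * np.log(np.dot(probs, np.exp(losses / t))) + t * radius
                for t in t_grid]
        return float(min(vals))

    losses = np.array([0.0, 1.0, 2.0, 8.0])        # illustrative loss support
    probs = np.full(4, 0.25)                       # reference distribution P
    r0 = kl_divergence_risk(losses, probs, 0.0)    # ~ plain expectation (2.75)
    r1 = kl_divergence_risk(losses, probs, 0.5)    # larger radius weights the tail
    ```

    At radius zero the risk collapses to the ordinary expectation, recovering ERM; growing the radius interpolates toward the worst-case loss, which is the tail-sensitivity dial the abstract refers to.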

    The Principal Principle and subjective Bayesianism

    This paper poses a problem for Lewis’s Principal Principle in a subjective Bayesian framework: we show that, where chances inform degrees of belief, subjective Bayesianism fails to validate normal informal standards of what is reasonable. This problem points to a tension between the Principal Principle and the claim that conditional degrees of belief are conditional probabilities. One version of objective Bayesianism, however, has a straightforward resolution to this problem, because it avoids this latter claim. The problem thus offers some support to this version of objective Bayesianism. We show in Section 1 that standard subjective Bayesianism has a problem in accommodating David Lewis’s Principal Principle. In Section 2, we see that the problem does not beset a recent version of objective Bayesianism. In Section 3, we consider three possible ways in which a subjectivist might try to avoid the problem, but we argue that none of these suggestions succeeds. We conclude that the problem favours objective Bayesianism over subjective Bayesianism (Section 4). In Section 4 we also compare our results to some other lines of recent work.

    Dalek -- a deep-learning emulator for TARDIS

    Supernova spectral time series contain a wealth of information about the progenitor and explosion process of these energetic events. The modeling of these data requires the exploration of very high dimensional posterior probabilities with expensive radiative transfer codes. Even modest parametrizations of supernovae contain more than ten parameters, and a detailed exploration demands at least several million function evaluations. Physically realistic models require at least tens of CPU minutes per evaluation, putting a detailed reconstruction of the explosion out of reach of traditional methodology. The advent of widely available libraries for the training of neural networks, combined with their ability to approximate almost arbitrary functions with high precision, allows for a new approach to this problem. Instead of evaluating the radiative transfer model itself, one can build a neural network proxy trained on the simulations but evaluating orders of magnitude faster. Such a framework is called an emulator or surrogate model. In this work, we present an emulator for the TARDIS supernova radiative transfer code applied to Type Ia supernova spectra. We show that we can train an emulator for this problem given a modest training set of a hundred thousand spectra (easily calculable on modern supercomputers). The results show an accuracy at the percent level (dominated by the Monte Carlo nature of TARDIS rather than by the emulator) with a speedup of several orders of magnitude. This method has a much broader set of applications and is not limited to the presented problem. Comment: 6 pages; 5 figures; submitted to AAS Journals. Constructive criticism invited.
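    The emulator idea above, replacing an expensive simulator with a fast model fit to its outputs, can be sketched in a few lines. The "simulator" below is a cheap analytic stand-in (TARDIS itself takes CPU minutes per call), and the emulator is a per-bin polynomial fit rather than the paper's neural network; all names and numbers are illustrative:

    ```python
    import numpy as np

    # Toy stand-in for an expensive simulator: a cheap analytic function
    # of a single "explosion parameter" theta producing a 3-bin "spectrum".
    def expensive_simulator(theta):
        wave = np.array([0.3, 0.5, 0.7])             # wavelength bins
        return np.exp(-((wave - theta) ** 2) / 0.08)

    # Build a training set by running the simulator over a parameter grid.
    thetas = np.linspace(0.2, 0.8, 200)
    spectra = np.array([expensive_simulator(t) for t in thetas])

    # Emulator: one polynomial per output bin, fit on parameters rescaled
    # to [-1, 1] for numerical stability. (The paper uses a neural network;
    # a polynomial keeps this sketch dependency-free.)
    x = (thetas - 0.5) / 0.3
    coeffs = [np.polyfit(x, spectra[:, j], deg=10) for j in range(3)]

    def emulate(theta):
        xt = (theta - 0.5) / 0.3
        return np.array([np.polyval(c, xt) for c in coeffs])

    # The emulator reproduces a held-out simulator output closely,
    # at a tiny fraction of the (simulated) cost.
    theta_test = 0.437
    err = np.max(np.abs(emulate(theta_test) - expensive_simulator(theta_test)))
    ```

    The workflow is the same as in the paper: run the simulator offline to build a training set, fit the surrogate once, then evaluate only the surrogate inside the posterior-exploration loop.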

    Exploring PubMed as a reliable resource for scholarly communications services

    Objective: PubMed’s provision of MEDLINE and other National Library of Medicine (NLM) resources has made it one of the most widely accessible biomedical resources globally. The growth of PubMed Central (PMC) and public access mandates have affected PubMed’s composition. The authors tested recent claims that content in PMC is of low quality and affects PubMed’s reliability, while exploring PubMed’s role in the current scholarly communications landscape. Methods: The percentage of MEDLINE-indexed records was assessed in PubMed and various subsets of records from PMC. Data were retrieved via the National Center for Biotechnology Information (NCBI) interface, and follow-up interviews with a PMC external reviewer and staff at NLM were conducted. Results: Almost all PubMed content (91%) is indexed in MEDLINE; however, since the launch of PMC, the percentage of PubMed records indexed in MEDLINE has slowly decreased. This trend is the result of an increase in PMC content from journals that are not indexed in MEDLINE, not of author manuscripts submitted to PMC in compliance with public access policies. Author manuscripts in PMC continue to be published in MEDLINE-indexed journals at a high rate (85%). The interviewees clarified the difference between the sources, with MEDLINE serving as a highly selective index of journals in the biomedical literature and PMC serving as an open archive of quality biomedical and life sciences literature and a repository of funded research. Conclusion: The differing scopes of PMC and MEDLINE will likely continue to affect their overlap; however, quality control exists in the maintenance and facilitation of both resources, and funding from major grantors is a major component of quality assurance in PMC. This article has been approved for the Medical Library Association’s Independent Reading Program.