64 research outputs found

    Some comments on C. S. Wallace's random number generators

    We outline some of Chris Wallace's contributions to pseudo-random number generation. In particular, we consider his idea for generating normally distributed variates without relying on a source of uniform random numbers, and compare it with more conventional methods for generating normal random numbers. Implementations of Wallace's idea can be very fast (approximately as fast as good uniform generators). We discuss the statistical quality of the output, and mention how certain pitfalls can be avoided. Comment: 13 pages. For further information, see http://wwwmaths.anu.edu.au/~brent/pub/pub213.htm
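
    For contrast with Wallace's uniform-free scheme, here is a minimal sketch of one conventional, uniform-based normal generator (Marsaglia's polar variant of the Box-Muller transform). This is a standard textbook method, not Wallace's own algorithm, and the function names are illustrative:

    import math
    import random

    def polar_normal_pair(uniform=random.random):
        """Marsaglia's polar method: turn two uniform variates into two
        independent standard normal variates (a conventional approach)."""
        while True:
            u = 2.0 * uniform() - 1.0
            v = 2.0 * uniform() - 1.0
            s = u * u + v * v
            if 0.0 < s < 1.0:  # reject points outside the unit disc
                factor = math.sqrt(-2.0 * math.log(s) / s)
                return u * factor, v * factor

    # Usage: draw three pairs of standard normal variates.
    print([polar_normal_pair() for _ in range(3)])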

    An Algorithmic Interpretation of Quantum Probability

    The Everett (or relative-state, or many-worlds) interpretation of quantum mechanics has come under fire for inadequately dealing with the Born rule (the formula for calculating quantum probabilities). Numerous attempts have been made to derive this rule from the perspective of observers within the quantum wavefunction. These are not really analytic proofs, but are rather attempts to derive the Born rule as a synthetic a priori necessity, given the nature of human observers (a fact not fully appreciated even by all of those who have attempted such proofs). I show why existing attempts are unsuccessful or only partly successful, and postulate that Solomonoff's algorithmic approach to the interpretation of probability theory could clarify the problems with these approaches. The Sleeping Beauty probability puzzle is used as a springboard from which to deduce an objectivist, yet synthetic a priori framework for quantum probabilities that properly frames the role of self-location and self-selection (anthropic) principles in probability theory. I call this framework "algorithmic synthetic unity" (or ASU). I offer no new formal proof of the Born rule, largely because I feel that existing proofs (particularly that of Gleason) are already adequate, and as close to being a formal proof as one should expect or want. Gleason's one unjustified assumption--known as noncontextuality--is, I will argue, completely benign when considered within the algorithmic framework that I propose. I will also argue that, to the extent the Born rule can be derived within ASU, there is no reason to suppose that we could not derive all the other fundamental postulates of quantum theory as well. There is nothing special here about the Born rule, and I suggest that a completely successful Born rule proof might only be possible once all the other postulates become part of the derivation. As a start towards this end, I show how we can already derive the essential content of the fundamental postulates of quantum mechanics, at least in outline, and especially if we allow some educated and well-motivated guesswork along the way. The result is some steps towards a coherent and consistent algorithmic interpretation of quantum mechanics.
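
    For reference, the Born rule and Gleason's theorem mentioned above can be stated in their standard textbook forms (these are standard results, not formulas taken from the paper itself):

    % Born rule: the probability of obtaining outcome a_i when measuring
    % a normalized state |psi> in the orthonormal basis {|a_i>}
    P(a_i) = |\langle a_i \mid \psi \rangle|^{2}

    % Gleason's theorem: for Hilbert spaces of dimension >= 3, any
    % noncontextual probability measure mu on projections P arises from
    % some density operator rho via
    \mu(P) = \operatorname{Tr}(\rho P)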

    Acta Scientiarum Mathematicarum : Tomus 56. Fasc. 1-2.


    USA Power v. Pacificorp : Brief of Appellee

    APPEAL FROM THE THIRD DISTRICT COURT, SALT LAKE COUNTY, HON. TYRONE E. MEDLEY

    The Ledger and Times, June 30, 1948


    Eastern Progress - 12 Dec 1974


    Disaster and the Restructuring of Organization


    Red River Prospector, 07-06-1905


    After postmodernism: ethical paradigms in contemporary American fiction


    Stylistic structures: a computational approach to text classification

    The problem of authorship attribution has received attention both in the academic world (e.g. did Shakespeare or Marlowe write Edward III?) and outside (e.g. is this confession really the words of the accused or was it made up by someone else?). Previous studies by statisticians and literary scholars have sought "verbal habits" that characterize particular authors consistently. By and large, this has meant looking for distinctive rates of usage of specific marker words -- as in the classic study by Mosteller and Wallace of the Federalist Papers. The present study is based on the premiss that authorship attribution is just one type of text classification and that advances in this area can be made by applying and adapting techniques from the field of machine learning. Five different trainable text-classification systems are described, which differ from current stylometric practice in a number of ways, in particular by using a wider variety of marker patterns than customary and by seeking such markers automatically, without being told what to look for. A comparison of the strengths and weaknesses of these systems, when tested on a representative range of text-classification problems, confirms the importance of paying more attention than usual to alternative methods of representing distinctive differences between types of text. The thesis concludes with suggestions on how to make further progress towards the goal of a fully automatic, trainable text-classification system.
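
    As a rough illustration of the marker-word style of stylometry that the thesis builds on (in the spirit of Mosteller and Wallace, not the thesis's own five trainable systems), here is a minimal nearest-centroid sketch in Python; the marker list and sample texts are placeholders:

    import math
    from collections import Counter

    # Illustrative marker words only; Mosteller and Wallace chose common
    # function words because authors use them at fairly stable rates.
    MARKERS = ["upon", "whilst", "while", "enough", "also", "of", "by", "to"]

    def marker_rates(text):
        """Rate of each marker word per 1000 tokens of the text."""
        tokens = text.lower().split()
        counts = Counter(tokens)
        n = max(len(tokens), 1)
        return [1000.0 * counts[w] / n for w in MARKERS]

    def centroid(vectors):
        """Component-wise mean of a list of equal-length vectors."""
        return [sum(col) / len(col) for col in zip(*vectors)]

    def attribute(text, centroids):
        """Assign a text to the candidate whose training centroid is nearest."""
        x = marker_rates(text)
        return min(centroids, key=lambda author: math.dist(x, centroids[author]))

    # Usage sketch: texts_by_author maps each candidate to known writings.
    texts_by_author = {
        "Hamilton": ["to the people of the state of new york ..."],
        "Madison": ["the federal constitution forms a happy combination ..."],
    }
    centroids = {a: centroid([marker_rates(t) for t in ts])
                 for a, ts in texts_by_author.items()}
    print(attribute("upon the whole it appears ...", centroids))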