Solomonoff Induction Violates Nicod's Criterion
Nicod's criterion states that observing a black raven is evidence for the
hypothesis H that all ravens are black. We show that Solomonoff induction does
not satisfy Nicod's criterion: there are time steps in which observing black
ravens decreases the belief in H. Moreover, while observing any computable
infinite string compatible with H, the belief in H decreases infinitely often
when using the unnormalized Solomonoff prior, but only finitely often when
using the normalized Solomonoff prior. We argue that the fault is not with
Solomonoff induction; instead we should reject Nicod's criterion. Comment: ALT 201
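For orientation, the objects involved can be written out explicitly; the notation below is generic and not taken from the paper. With U a universal prefix machine, the unnormalized Solomonoff prior of a finite observation string x_{1:t} and the induced belief in a hypothesis H (a set of infinite sequences, here "all ravens are black") are

\[
M(x_{1:t}) \;=\; \sum_{p \,:\, U(p) = x_{1:t}\ast} 2^{-|p|},
\qquad
M(H \mid x_{1:t}) \;=\; \frac{M(H \cap \Gamma_{x_{1:t}})}{M(\Gamma_{x_{1:t}})},
\]

where \Gamma_{x_{1:t}} denotes the set of infinite sequences extending x_{1:t}. Nicod's criterion then demands M(H \mid x_{1:t} b) \ge M(H \mid x_{1:t}) whenever the symbol b encodes the observation of a black raven; the result stated above is that this inequality fails at some time steps.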
Beyond "significance": Principles and practice of the analysis of credibility
The inferential inadequacies of statistical significance testing are now widely recognized. There is, however, no consensus on how to move research into a "post p < 0.05" era. We present a potential route forward via the Analysis of Credibility (AnCred), a novel methodology that allows researchers to go beyond the simplistic dichotomy of significance testing and extract more insight from new findings. Using standard summary statistics, AnCred assesses the credibility of significant and non-significant findings on the basis of their evidential weight, and in the context of existing knowledge. The outcome is expressed in quantitative terms of direct relevance to the substantive research question, providing greater protection against misinterpretation. Worked examples are given to illustrate how AnCred extracts additional insight from the outcome of typical research study designs. Its ability to cast light on the use of p-values, the interpretation of non-significant findings and the so-called "replication crisis" is also discussed
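As a purely illustrative aside (the AnCred formulas themselves are developed in the paper and are not reproduced here), methods built on standard summary statistics typically start by recovering an effect estimate and its standard error from a reported 95% confidence interval. A minimal sketch of that preliminary step, assuming a ratio-scale effect such as an odds ratio and a hypothetical helper name:

    import math

    def summary_from_ci(lower, upper, z=1.96):
        """Recover the log-scale point estimate, standard error and z-statistic
        from a reported 95% confidence interval (lower, upper) on a ratio scale.
        Illustrative helper only; not part of the AnCred methodology as published."""
        log_l, log_u = math.log(lower), math.log(upper)
        estimate = 0.5 * (log_l + log_u)        # midpoint of the CI on the log scale
        se = (log_u - log_l) / (2.0 * z)        # half-width divided by the critical value
        return estimate, se, estimate / se

    # Example: a reported odds ratio with 95% CI (1.05, 2.50)
    print(summary_from_ci(1.05, 2.50))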
Fast and reliable pricing of American options with local volatility
We present globally convergent multigrid methods for the nonsymmetric obstacle problems arising from the discretization of Black–Scholes models of American options with local volatilities and discrete data. No tuning or regularization parameters are required. Our approach relies on symmetrization by transformation and data recovery by superconvergence
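For context, the discrete problem such solvers target is a linear complementarity problem: find x >= psi (the payoff) with A x >= b and (x - psi)^T (A x - b) = 0. The sketch below is a standard single-grid projected Gauss–Seidel sweep for this problem, under assumed names A, b, psi; it is a common smoother and baseline, not the multigrid method of the paper.

    import numpy as np

    def projected_gauss_seidel(A, b, psi, x, sweeps=1):
        """Projected Gauss-Seidel sweeps for the LCP
            x >= psi,  A x >= b,  (x - psi)^T (A x - b) = 0,
        as it arises from discretized American option pricing.
        A: (n, n) matrix, b: right-hand side, psi: obstacle (payoff), x: iterate."""
        n = len(b)
        for _ in range(sweeps):
            for i in range(n):
                # Gauss-Seidel value ignoring the obstacle ...
                gs = (b[i] - A[i, :] @ x + A[i, i] * x[i]) / A[i, i]
                # ... then project back onto the admissible set x >= psi
                x[i] = max(psi[i], gs)
        return x

On fine grids, plain sweeps of this kind converge slowly, which is the difficulty a globally convergent multigrid treatment addresses.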
Cluster and virial expansions for the multi-species Tonks gas
We consider a mixture of non-overlapping rods of different lengths ℓ_k moving in R or Z. Our main results are necessary and sufficient convergence criteria for the expansion of the pressure in terms of the activities z_k and the densities ρ_k. This provides an explicit example against which to test known cluster expansion criteria, and illustrates that for non-negative interactions, the virial expansion can converge in a domain much larger than the activity expansion. In addition, we give explicit formulas that generalize the well-known relation between non-overlapping rods and labelled rooted trees. We also prove that for certain choices of the activities, the system can undergo a condensation transition akin to that of the zero-range process. The key tool is a fixed point equation for the pressure
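For orientation, the two series compared here have the generic forms below (multi-index notation; the coefficients are the usual cluster and virial coefficients, not the explicit formulas derived in the paper):

\[
\beta p \;=\; \sum_{\mathbf{n} \neq 0} b_{\mathbf{n}} \prod_k z_k^{n_k}
\quad \text{(activity expansion)},
\qquad
\beta p \;=\; \sum_{\mathbf{n} \neq 0} c_{\mathbf{n}} \prod_k \rho_k^{n_k}
\quad \text{(virial expansion)},
\]

where z_k and \rho_k are the activity and density of rods of length \ell_k; the convergence criteria concern the domains of activities and densities on which these series converge absolutely.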
Critical review of the United Kingdom's "gold standard" survey of public attitudes to science
Since 2000, the UK government has funded surveys aimed at understanding the UK public's attitudes toward science, scientists, and science policy. Known as the Public Attitudes to Science series, these surveys and their predecessors have long been used in UK science communication policy, practice, and scholarship as a source of authoritative knowledge about science-related attitudes and behaviors. Given their importance and the significant public funding investment they represent, detailed academic scrutiny of the studies is needed. In this essay, we critically review the most recently published Public Attitudes to Science survey (2014), assessing the robustness of its methods and claims. The review casts doubt on the quality of key elements of the Public Attitudes to Science 2014 survey data and analysis while highlighting the importance of robust quantitative social research methodology. Our analysis comparing the main sample and booster sample for young people demonstrates that quota sampling cannot be assumed equivalent to probability-based sampling techniques
Measuring the intelligence of an idealized mechanical knowing agent
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on)
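In symbols (notation ours, not necessarily the paper's), the stated definition reads

\[
\mathrm{Int}(A) \;=\; \sup \bigl\{\, |e| \;:\; A \text{ knows that } e \text{ is a code of a computable ordinal} \,\bigr\},
\]

where |e| denotes the computable ordinal coded by e.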
The Temporal Singularity: time-accelerated simulated civilizations and their implications
Provided significant future progress in artificial intelligence and
computing, it may ultimately be possible to create multiple Artificial General
Intelligences (AGIs), and possibly entire societies living within simulated
environments. In that case, it should be possible to improve the problem
solving capabilities of the system by increasing the speed of the simulation.
If a minimal simulation with sufficient capabilities is created, it might
manage to increase its own speed by accelerating progress in science and
technology, in a way similar to the Technological Singularity. This may
ultimately lead to large simulated civilizations unfolding at extreme temporal
speedups, achieving what from the outside would look like a Temporal
Singularity. Here we discuss the feasibility of the minimal simulation and the
potential advantages, dangers, and connection to the Fermi paradox of the
Temporal Singularity. The medium-term importance of the topic derives from the
amount of computational power required to start the process, which could be
available within the next decades, making the Temporal Singularity
theoretically possible before the end of the century. Comment: To appear in the conference proceedings of the AGI-18 conference
(published in Springer's Lecture Notes in AI series)
- …