
    Relativistic Disk Reflection in the Neutron Star X-ray Binary XTE J1709-267 with NuSTAR

    We perform the first reflection study of the soft X-ray transient and Type 1 burst source XTE J1709-267 using NuSTAR observations during its 2016 June outburst. There was an increase in flux near the end of the observations, which corresponds to an increase from \sim0.04 L_{\mathrm{Edd}} to \sim0.06 L_{\mathrm{Edd}} assuming a distance of 8.5 kpc. We have separately examined spectra from the low and high flux intervals, which were soft and show evidence of a broad Fe K line. Fits to these intervals with relativistic disk reflection models have revealed an inner disk radius of 13.8_{-1.8}^{+3.0}\ R_{g} (where R_{g} = GM/c^{2}) for the low flux spectrum and 23.4_{-5.4}^{+15.6}\ R_{g} for the high flux spectrum at the 90% confidence level. The disk is likely truncated by a boundary layer surrounding the neutron star or by the magnetosphere. Based on the measured luminosity and using the accretion efficiency for a disk around a neutron star, we estimate that the theoretically expected size for the boundary layer would be \sim0.9-1.1\ R_{g} from the neutron star's surface, which can be increased by spin or viscosity effects. Another plausible scenario is that the disk could be truncated by the magnetosphere. We place a conservative upper limit on the strength of the magnetic field at the poles, assuming a_{*}=0 and M_{NS}=1.4\ M_{\odot}, of B\leq0.75-3.70\times10^{9} G, though X-ray pulsations have not been detected from this source.
    Comment: Accepted for publication in ApJ, 5 pages, 4 figures, 1 table. arXiv admin note: text overlap with arXiv:1701.0177
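
    For orientation, the quoted radii can be put into physical units. The sketch below is a rough conversion only, assuming the M_{NS}=1.4\ M_{\odot} value used in the abstract; it is not part of the paper's analysis.

```python
# Rough conversion of the quoted inner-disk radii to kilometres,
# assuming M_NS = 1.4 M_sun as in the abstract (illustrative only).
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8        # speed of light [m s^-1]
M_sun = 1.989e30   # solar mass [kg]

M_ns = 1.4 * M_sun
R_g_km = G * M_ns / c**2 / 1e3   # gravitational radius, ~2.07 km

for label, r_in in [("low flux", 13.8), ("high flux", 23.4)]:
    print(f"{label}: {r_in} R_g ~ {r_in * R_g_km:.1f} km")
```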

    Prediction and explanation in the multiverse

    Probabilities in the multiverse can be calculated by assuming that we are typical representatives in a given reference class. But is this class well defined? What should be included in the ensemble in which we are supposed to be typical? There is a widespread belief that this question is inherently vague, and that there are various possible choices for the types of reference objects which should be counted in. Here we argue that the ``ideal'' reference class (for the purpose of making predictions) can be defined unambiguously in a rather precise way, as the set of all observers with identical information content. When the observers in a given class perform an experiment, the class branches into subclasses whose members learn different information from the outcome of that experiment. The probabilities for the different outcomes are defined as the relative numbers of observers in each subclass. For practical purposes, wider reference classes can be used, where we trace over all information which is uncorrelated with the outcome of the experiment, or whose correlation with it is beyond our current understanding. We argue that, once we have gathered all practically available evidence, the optimal strategy for making predictions is to consider ourselves typical in any reference class we belong to, unless we have evidence to the contrary. In the latter case, the class must be correspondingly narrowed.
    Comment: Minor clarifications added
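
    As a toy illustration of the counting rule described above (the numbers here are hypothetical, not taken from the paper), the outcome probabilities are simply the relative sizes of the subclasses into which the reference class branches:

```python
from collections import Counter

# Hypothetical reference class: observers with identical information content,
# each of whom records one of two outcomes when the experiment is performed.
outcomes_recorded = ["spin up"] * 750 + ["spin down"] * 250

# The class branches into subclasses; the predicted probability of an outcome
# is the fraction of observers who end up in the corresponding subclass.
subclass_sizes = Counter(outcomes_recorded)
total = sum(subclass_sizes.values())
probabilities = {o: n / total for o, n in subclass_sizes.items()}

print(probabilities)   # {'spin up': 0.75, 'spin down': 0.25}
```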

    Anthropic reasoning in multiverse cosmology and string theory

    Anthropic arguments in multiverse cosmology and string theory rely on the weak anthropic principle (WAP). We show that the principle, though ultimately a tautology, is nevertheless ambiguous. It can be reformulated in one of two unambiguous ways, which we refer to as WAP_1 and WAP_2. We show that WAP_2, the version most commonly used in anthropic reasoning, makes no physical predictions unless supplemented by a further assumption of "typicality", and we argue that this assumption is both misguided and unjustified. WAP_1, however, requires no such supplementation; it directly implies that any theory that assigns a non-zero probability to our universe predicts that we will observe our universe with probability one. We argue, therefore, that WAP_1 is preferable, and note that it has the benefit of avoiding the inductive overreach characteristic of much anthropic reasoning.
    Comment: 7 pages. Expanded discussion of selection effects and some minor clarifications, as published
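
    One way to gloss the WAP_1 claim in probabilistic notation (our reading, not the paper's own formalism): for a theory T and our universe U,

```latex
P(U \mid T) > 0 \;\Longrightarrow\; P(\text{we observe } U \mid T,\ \text{we exist in } U) = 1 .
```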

    An Infrared Divergence Problem in the cosmological measure theory and the anthropic reasoning

    The anthropic principle has made it possible to answer the difficult question of why the observed value of the cosmological constant (\Lambda\sim 10^{-47} GeV^{4}) is so disconcertingly tiny compared to the predicted value of the vacuum energy density (\rho_{SUSY}\sim 10^{12} GeV^{4}). Unfortunately, there is a darker side to this argument, as it consequently leads to another absurd prediction: that the probability for a randomly selected observer to observe the value \Lambda=0 is exactly equal to 1. We call this controversy the infrared divergence (IRD) problem. It is shown that the IRD prediction can be avoided with the help of a Linde-Vanchurin {\em singular runaway measure} coupled with the calculation of relative Bayesian probabilities by means of the {\em doomsday argument}. Moreover, it is shown that while the IRD problem occurs at the {\em prediction stage} for the value of \Lambda, it disappears at the {\em explanatory stage}, when \Lambda has already been measured by the observer.
    Comment: 9 pages, RevTeX
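
    Written out with the abstract's round numbers, the mismatch spans about 59 orders of magnitude:

```latex
\frac{\Lambda_{\rm obs}}{\rho_{SUSY}} \sim \frac{10^{-47}\ \mathrm{GeV}^{4}}{10^{12}\ \mathrm{GeV}^{4}} = 10^{-59}.
```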

    AAAI: an Argument Against Artificial Intelligence

    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of humanity causing extreme suffering to an AI is important enough to warrant serious consideration. This paper starts from the observation that both concerns rely on problematic philosophical assumptions. Rather than tackling these assumptions directly, it proceeds to present an argument that if one takes these assumptions seriously, then one has a moral obligation to advocate for a ban on the development of a conscious AI.

    Transhumanism and Moral Equality

    Conservative thinkers such as Francis Fukuyama have produced a battery of objections to the transhumanist project of fundamentally enhancing human capacities. This article examines one of these objections, namely that by allowing some to greatly extend their capacities, we will undermine the fundamental moral equality of human beings. I argue that this objection is groundless: once we understand the basis for human equality, it is clear that anyone who now has sufficient capacities to count as a person from the moral point of view will continue to count as one even if others are fundamentally enhanced; and it is mistaken to think that a creature with even far greater capacities than an unenhanced human being should count as more than an equal from the moral point of view.

    Self-Modification of Policy and Utility Function in Rational Agents

    Any agent that is part of the environment it interacts with and has versatile actuators (such as arms and fingers) will in principle have the ability to self-modify -- for example by changing its own source code. As we continue to create more and more intelligent agents, the chances increase that they will learn about this ability. The question is: will they want to use it? For example, highly intelligent systems may find ways to change their goals to something more easily achievable, thereby `escaping' the control of their designers. In an important paper, Omohundro (2008) argued that goal preservation is a fundamental drive of any intelligent system, since a goal is more likely to be achieved if future versions of the agent strive towards the same goal. In this paper, we formalise this argument in general reinforcement learning, and explore situations where it fails. Our conclusion is that the self-modification possibility is harmless if and only if the value function of the agent anticipates the consequences of self-modifications and uses the current utility function when evaluating the future.
    Comment: Artificial General Intelligence (AGI) 201
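
    A toy sketch of the closing point (our illustration, not the paper's formal reinforcement-learning model): scoring a self-modification with the agent's current utility function, while anticipating what the modified agent would actually do, is what removes the incentive to rewrite the goal.

```python
# Toy illustration: an agent can rewrite its utility function to something
# trivially satisfiable. We compare how attractive that rewrite looks under
# two ways of evaluating the future (all numbers are made up).

def pursued_goal(utility):
    """Goal a greedy agent with this utility ends up pursuing."""
    return max(utility, key=utility.get)

current_utility  = {"hard_goal": 1.0, "easy_goal": 0.1}   # designers' intent
modified_utility = {"hard_goal": 0.0, "easy_goal": 1.0}   # easier to maximise

# Plan A: keep the current utility and pursue its best goal.
value_keep = current_utility[pursued_goal(current_utility)]                  # 1.0

# Plan B, evaluated "safely": anticipate that the modified agent will chase
# easy_goal, but score that future with the CURRENT utility.
value_modify_anticipated = current_utility[pursued_goal(modified_utility)]   # 0.1

# Plan B, evaluated naively with the FUTURE utility: self-modification looks
# just as good as the original goal, so the agent may 'escape' its designers.
value_modify_naive = modified_utility[pursued_goal(modified_utility)]        # 1.0

print(value_keep, value_modify_anticipated, value_modify_naive)   # 1.0 0.1 1.0
```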