
    Relativistic Disk Reflection in the Neutron Star X-ray Binary XTE J1709-267 with NuSTAR

    We perform the first reflection study of the soft X-ray transient and Type 1 burst source XTE J1709-267 using NuSTAR observations during its 2016 June outburst. There was an increase in flux near the end of the observations, which corresponds to an increase from $\sim0.04\,L_{\mathrm{Edd}}$ to $\sim0.06\,L_{\mathrm{Edd}}$ assuming a distance of 8.5 kpc. We have separately examined spectra from the low and high flux intervals, which were soft and show evidence of a broad Fe K line. Fits to these intervals with relativistic disk reflection models have revealed an inner disk radius of $13.8_{-1.8}^{+3.0}\ R_{g}$ (where $R_{g} = GM/c^{2}$) for the low flux spectrum and $23.4_{-5.4}^{+15.6}\ R_{g}$ for the high flux spectrum at the 90% confidence level. The disk is likely truncated by a boundary layer surrounding the neutron star or the magnetosphere. Based on the measured luminosity and using the accretion efficiency for a disk around a neutron star, we estimate that the theoretically expected size for the boundary layer would be $\sim0.9-1.1\ R_{g}$ from the neutron star's surface, which can be increased by spin or viscosity effects. Another plausible scenario is that the disk could be truncated by the magnetosphere. We place a conservative upper limit on the strength of the magnetic field at the poles, assuming $a_{*}=0$ and $M_{NS}=1.4\ M_{\odot}$, of $B\leq0.75-3.70\times10^{9}$ G, though X-ray pulsations have not been detected from this source. Comment: Accepted for publication in ApJ, 5 pages, 4 figures, 1 table. arXiv admin note: text overlap with arXiv:1701.0177
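
    A schematic of the magnetospheric-truncation argument behind such a field estimate (a standard textbook-style sketch with a geometry factor $\xi \sim 0.5-1$; the paper's exact prescription and coefficients may differ): equating the measured inner disk radius with an Alfvén-type magnetospheric radius,

    $$ R_{\rm in} \simeq R_{M} = \xi \left( \frac{\mu^{4}}{2 G M_{NS} \dot{M}^{2}} \right)^{1/7}, \qquad \mu = B R_{NS}^{3}, $$

    and solving for the dipole field gives $B \lesssim \xi^{-7/4}\, R_{\rm in}^{7/4}\, (2 G M_{NS} \dot{M}^{2})^{1/4}\, R_{NS}^{-3}$, with the accretion rate $\dot{M}$ inferred from the measured luminosity and an assumed accretion efficiency.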

    Prediction and explanation in the multiverse

    Probabilities in the multiverse can be calculated by assuming that we are typical representatives in a given reference class. But is this class well defined? What should be included in the ensemble in which we are supposed to be typical? There is a widespread belief that this question is inherently vague, and that there are various possible choices for the types of reference objects which should be counted in. Here we argue that the "ideal" reference class (for the purpose of making predictions) can be defined unambiguously in a rather precise way, as the set of all observers with identical information content. When the observers in a given class perform an experiment, the class branches into subclasses whose members learn different information from the outcome of that experiment. The probabilities for the different outcomes are defined as the relative numbers of observers in each subclass. For practical purposes, wider reference classes can be used, where we trace over all information which is uncorrelated with the outcome of the experiment, or whose correlation with it is beyond our current understanding. We argue that, once we have gathered all practically available evidence, the optimal strategy for making predictions is to consider ourselves typical in any reference class we belong to, unless we have evidence to the contrary. In the latter case, the class must be correspondingly narrowed. Comment: Minor clarifications added
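
    In symbols, the prescription described above (a minimal restatement of the definition given in the abstract, not quoted from the paper): if a reference class of $N$ observers with identical information content branches, upon performing an experiment, into subclasses of sizes $N_1, \dots, N_k$ labelled by the possible outcomes, then the predicted probability of outcome $i$ is

    $$ P(i) = \frac{N_i}{N} = \frac{N_i}{\sum_{j=1}^{k} N_j}. $$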

    Anthropic reasoning in multiverse cosmology and string theory

    Anthropic arguments in multiverse cosmology and string theory rely on the weak anthropic principle (WAP). We show that the principle, though ultimately a tautology, is nevertheless ambiguous. It can be reformulated in one of two unambiguous ways, which we refer to as WAP_1 and WAP_2. We show that WAP_2, the version most commonly used in anthropic reasoning, makes no physical predictions unless supplemented by a further assumption of "typicality", and we argue that this assumption is both misguided and unjustified. WAP_1, however, requires no such supplementation; it directly implies that any theory that assigns a non-zero probability to our universe predicts that we will observe our universe with probability one. We argue, therefore, that WAP_1 is preferable, and note that it has the benefit of avoiding the inductive overreach characteristic of much anthropic reasoning. Comment: 7 pages. Expanded discussion of selection effects and some minor clarifications, as published
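
    Stated schematically (my paraphrase of the claim above, not the paper's notation): if a theory $T$ assigns $P_T(U_{\rm ours}) > 0$ to our universe, then WAP_1 yields the conditional prediction

    $$ P_T(\text{we observe } U_{\rm ours} \mid \text{we exist as observers in } U_{\rm ours}) = 1, $$

    with no further appeal to typicality within a wider reference class.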

    An Infrared Divergence Problem in the cosmological measure theory and the anthropic reasoning

    The anthropic principle has made it possible to answer the difficult question of why the observable value of the cosmological constant ($\Lambda\sim 10^{-47}$ GeV$^{4}$) is so disconcertingly tiny compared to the predicted value of the vacuum energy density $\rho_{SUSY}\sim 10^{12}$ GeV$^{4}$. Unfortunately, there is a darker side to this argument, as it consequently leads to another absurd prediction: that the probability for a randomly selected observer to observe the value $\Lambda=0$ is exactly 1. We'll call this controversy the infrared divergence (IRD) problem. It is shown that the IRD prediction can be avoided with the help of a Linde-Vanchurin singular runaway measure coupled with the calculation of relative Bayesian probabilities by means of the doomsday argument. Moreover, it is shown that while the IRD problem occurs at the prediction stage for the value of $\Lambda$, it disappears at the explanatory stage, when $\Lambda$ has already been measured by the observer. Comment: 9 pages, RevTeX
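
    One schematic way to express the claimed divergence (a paraphrase of the problem as stated above, not the paper's own derivation): if the anthropic measure weights each value of the cosmological constant by the number of observers it produces,

    $$ P(\Lambda) \propto P_{\rm prior}(\Lambda)\, N_{\rm obs}(\Lambda), $$

    and the weight $N_{\rm obs}(\Lambda)$ diverges as $\Lambda \to 0^{+}$, then after normalization essentially all of the probability concentrates at $\Lambda = 0$, which is the absurd prediction the IRD problem refers to.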

    Sequential Extensions of Causal and Evidential Decision Theory

    Moving beyond the dualistic view in AI, where agent and environment are separated, incurs new challenges for decision making, as the calculation of expected utility is no longer straightforward. The non-dualistic decision theory literature is split between causal decision theory and evidential decision theory. We extend these decision algorithms to the sequential setting, where the agent alternates between taking actions and observing their consequences. We find that evidential decision theory has two natural extensions while causal decision theory only has one. Comment: ADT 201
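
    As a toy illustration of how the two expected-utility calculations can disagree (my own minimal Python sketch of a one-step Newcomb-style problem; the payoffs, the predictor accuracy, and all names are illustrative assumptions, not taken from the paper):

    # Contrast evidential and causal expected utility on a toy Newcomb-like problem.
    # A "predictor" variable is correlated with the agent's action; utilities are
    # hypothetical numbers chosen only for illustration.

    prior_p = {"predicts_one_box": 0.5, "predicts_two_box": 0.5}  # prior over the prediction
    accuracy = 0.99  # assumed probability that the prediction matches the action

    def utility(action, prediction):
        """Payoffs of the classic two-box setup (illustrative values)."""
        box_b = 1_000_000 if prediction == "predicts_one_box" else 0
        return box_b + (1_000 if action == "two_box" else 0)

    def evidential_eu(action):
        """EDT: condition the prediction on the action actually taken."""
        probs = {
            "predicts_one_box": accuracy if action == "one_box" else 1 - accuracy,
            "predicts_two_box": accuracy if action == "two_box" else 1 - accuracy,
        }
        return sum(probs[p] * utility(action, p) for p in probs)

    def causal_eu(action):
        """CDT: the action is an intervention, so it cannot change the prediction."""
        return sum(prior_p[p] * utility(action, p) for p in prior_p)

    for a in ("one_box", "two_box"):
        print(a, round(evidential_eu(a)), round(causal_eu(a)))
    # EDT favours one_box (990,000 vs 11,000); CDT favours two_box (501,000 vs 500,000).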

    Self-Modification of Policy and Utility Function in Rational Agents

    Any agent that is part of the environment it interacts with and has versatile actuators (such as arms and fingers) will in principle have the ability to self-modify -- for example, by changing its own source code. As we continue to create more and more intelligent agents, the chances increase that they will learn about this ability. The question is: will they want to use it? For example, highly intelligent systems may find ways to change their goals to something more easily achievable, thereby 'escaping' the control of their designers. In an important paper, Omohundro (2008) argued that goal preservation is a fundamental drive of any intelligent system, since a goal is more likely to be achieved if future versions of the agent strive towards the same goal. In this paper, we formalise this argument in general reinforcement learning, and explore situations where it fails. Our conclusion is that the possibility of self-modification is harmless if and only if the value function of the agent anticipates the consequences of self-modifications and uses the current utility function when evaluating the future. Comment: Artificial General Intelligence (AGI) 201
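
    A minimal numerical sketch of this conclusion (my own illustration; the outcome probabilities, the "easy" replacement goal, and all names are assumptions, not the paper's formal model): an agent that evaluates a prospective self-modification with its current utility function sees no gain in changing its goal, whereas evaluating with the post-modification utility makes "escaping" look attractive.

    # Two candidate utility functions over hypothetical outcomes.
    def u_current(outcome):  # the designers' goal
        return {"hard_goal_done": 1.0, "easy_goal_done": 0.0}[outcome]

    def u_easy(outcome):  # a trivially satisfiable replacement goal
        return {"hard_goal_done": 0.0, "easy_goal_done": 1.0}[outcome]

    # Assumed probability of each outcome if the agent keeps or modifies its utility.
    outcome_if = {
        "keep":   {"hard_goal_done": 0.6, "easy_goal_done": 0.4},
        "modify": {"hard_goal_done": 0.0, "easy_goal_done": 1.0},
    }

    def value(choice, evaluate_with):
        """Expected utility of a self-modification decision under a given utility."""
        return sum(p * evaluate_with(o) for o, p in outcome_if[choice].items())

    # Future evaluated with the CURRENT utility: keeping the goal wins (0.6 vs 0.0).
    print(value("keep", u_current), value("modify", u_current))
    # Future evaluated with the POST-MODIFICATION utility: changing goals looks better (0.6 vs 1.0).
    print(value("keep", u_current), value("modify", u_easy))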

    Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’

    While it is often said that in order to qualify as a true science robotics should aspire to reproducible and measurable results that allow benchmarking, I argue that a focus on benchmarking will be a hindrance to progress. Several academic disciplines have been led into pursuing only reproducible and measurable ‘scientific’ results; robotics should be careful not to fall into that trap. Results that can be benchmarked must be specific and context-dependent, but robotics targets whole complex systems independently of a specific context, so working towards progress on the technical measure risks missing that target. It would constitute aiming for the measure rather than the target: what I call ‘measure-target confusion’. The role of benchmarking in robotics shows that the more general problem of measuring progress towards more intelligent machines will not be solved by technical benchmarks; we need a balanced approach combining technical benchmarks, real-life testing, and qualitative judgment.

    The quantum cryptographic switch

    We illustrate, using a quantum system, the principle of a cryptographic switch, in which a third party (Charlie) can control to a continuously varying degree the amount of information the receiver (Bob) receives, after the sender (Alice) has sent her information. Suppose Charlie transmits a Bell state to Alice and Bob. Alice uses dense coding to transmit two bits to Bob. Only if the 2-bit information corresponding to the choice of Bell state is made available by Charlie to Bob can the latter recover Alice's information. By varying the information he gives, Charlie can continuously vary the information recovered by Bob. The performance of the protocol when subjected to the squeezed generalized amplitude damping channel is considered. We also present a number of practical situations where a cryptographic switch would be of use. Comment: 7 pages, 4 figures
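
    A small simulation of the switch idea (my own sketch using the textbook dense-coding convention; it omits the noise-channel analysis, and the bit values and variable names are illustrative): Charlie's two classical bits select which Bell state is shared, Alice encodes her two bits with a Pauli operation on her half, and Bob's Bell-basis measurement is useful only once Charlie reveals his bits.

    import numpy as np
    from itertools import product

    # Single-qubit Pauli operators.
    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.diag([1, -1])
    PAULIS = {(0, 0): I2, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}  # (phase bit, parity bit)

    def bell_state(b1, b2):
        """|beta_{b1 b2}> = (|0,b2> + (-1)**b1 |1,1-b2>) / sqrt(2)."""
        psi = np.zeros(4)
        psi[0 if b2 == 0 else 1] = 1
        psi[3 if b2 == 0 else 2] = (-1) ** b1
        return psi / np.sqrt(2)

    BELL_BASIS = {bits: bell_state(*bits) for bits in product((0, 1), repeat=2)}

    def bell_measure(psi):
        """Bell-basis measurement; deterministic here because the states are exact."""
        for bits, beta in BELL_BASIS.items():
            if abs(np.vdot(beta, psi)) > 0.999:
                return bits
        raise ValueError("not a Bell state")

    charlie_bits = (1, 0)  # Charlie's secret choice of shared Bell state
    alice_bits = (0, 1)    # the 2-bit message Alice wants Bob to receive

    shared = BELL_BASIS[charlie_bits]                   # Charlie -> Alice & Bob
    encoded = np.kron(PAULIS[alice_bits], I2) @ shared  # Alice acts on her qubit only
    bob_outcome = bell_measure(encoded)                 # Bob's Bell measurement

    # Without Charlie's bits, bob_outcome is uniformly distributed over the four Bell
    # states as Charlie's choice varies, so on its own it reveals nothing about Alice's bits.
    decoded = (bob_outcome[0] ^ charlie_bits[0], bob_outcome[1] ^ charlie_bits[1])
    print(bob_outcome, decoded == alice_bits)           # -> (1, 1) True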

    The Temporal Singularity: time-accelerated simulated civilizations and their implications

    Provided significant future progress in artificial intelligence and computing, it may ultimately be possible to create multiple Artificial General Intelligences (AGIs), and possibly entire societies living within simulated environments. In that case, it should be possible to improve the problem-solving capabilities of the system by increasing the speed of the simulation. If a minimal simulation with sufficient capabilities is created, it might manage to increase its own speed by accelerating progress in science and technology, in a way similar to the Technological Singularity. This may ultimately lead to large simulated civilizations unfolding at extreme temporal speedups, achieving what from the outside would look like a Temporal Singularity. Here we discuss the feasibility of the minimal simulation, the potential advantages and dangers of the Temporal Singularity, and its connection to the Fermi paradox. The medium-term importance of the topic derives from the amount of computational power required to start the process, which could become available within the next decades, making the Temporal Singularity theoretically possible before the end of the century. Comment: To appear in the conference proceedings of the AGI-18 conference (published in Springer's Lecture Notes in AI series)