1,337 research outputs found
Relativistic Disk Reflection in the Neutron Star X-ray Binary XTE J1709-267 with NuSTAR
We perform the first reflection study of the soft X-ray transient and Type 1
burst source XTE J1709-267 using NuSTAR observations during its 2016 June
outburst. There was an increase in flux near the end of the observations, which
corresponds to an increase from 0.04 $L_{\mathrm{Edd}}$ to 0.06
$L_{\mathrm{Edd}}$ assuming a distance of 8.5 kpc. We have separately examined
spectra from the low and high flux intervals, which were soft and show evidence
of a broad Fe K line. Fits to these intervals with relativistic disk reflection
models constrain the inner disk radius, in units of the gravitational radius
$R_{g} = GM/c^{2}$, for both the low and high flux spectra at the 90\%
confidence level. The disk is likely
truncated by a boundary layer surrounding the neutron star or the
magnetosphere. Based on the measured luminosity and using the accretion
efficiency for a disk around a neutron star, we estimate the theoretically
expected extent of the boundary layer from the neutron star's surface; this
extent can be increased by spin or viscosity effects.
Another plausible scenario is that the disk could be truncated by the
magnetosphere. We place a conservative upper limit on the strength of the
magnetic field at the poles, though X-ray pulsations have not been detected
from this source. Comment: Accepted for publication in ApJ, 5 pages, 4
figures, 1 table. arXiv admin note: text overlap with arXiv:1701.0177
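The Eddington-ratio luminosities quoted in the abstract follow from the standard flux-distance relation $L = 4\pi d^{2} F$. A minimal sketch, in which the flux value and the 1.4 $M_{\odot}$ neutron star mass are illustrative assumptions rather than numbers from the paper:

```python
import math

L_EDD_PER_MSUN = 1.26e38  # Eddington luminosity per solar mass (erg/s, H-rich)
CM_PER_KPC = 3.086e21     # centimeters per kiloparsec

def eddington_ratio(flux_cgs, distance_kpc, mass_msun=1.4):
    """Convert an observed flux (erg/s/cm^2) to L/L_Edd for a neutron star."""
    luminosity = 4.0 * math.pi * (distance_kpc * CM_PER_KPC) ** 2 * flux_cgs
    return luminosity / (L_EDD_PER_MSUN * mass_msun)

# Illustrative flux, chosen so the ratio lands near 0.04 at 8.5 kpc:
ratio = eddington_ratio(8.2e-10, 8.5)
```

Note how the inferred Eddington ratio scales with the assumed distance squared, which is why the 8.5 kpc assumption is stated explicitly in the abstract.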
Prediction and explanation in the multiverse
Probabilities in the multiverse can be calculated by assuming that we are
typical representatives in a given reference class. But is this class well
defined? What should be included in the ensemble in which we are supposed to be
typical? There is a widespread belief that this question is inherently vague,
and that there are various possible choices for the types of reference objects
which should be counted in. Here we argue that the ``ideal'' reference class
(for the purpose of making predictions) can be defined unambiguously in a
rather precise way, as the set of all observers with identical information
content. When the observers in a given class perform an experiment, the class
branches into subclasses who learn different information from the outcome of
that experiment. The probabilities for the different outcomes are defined as
the relative numbers of observers in each subclass. For practical purposes,
wider reference classes can be used, where we trace over all information which
is uncorrelated to the outcome of the experiment, or whose correlation with it
is beyond our current understanding. We argue that, once we have gathered all
practically available evidence, the optimal strategy for making predictions is
to consider ourselves typical in any reference class we belong to, unless we
have evidence to the contrary. In the latter case, the class must be
correspondingly narrowed. Comment: Minor clarifications added
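The branching picture described above can be made concrete with a toy simulation: start with a class of observers carrying identical information, run the experiment once per observer, and read off outcome probabilities as relative subclass sizes. The 70/30 "quantum coin" is purely an illustrative assumption:

```python
import random
from collections import Counter

def branch_probabilities(observers, experiment):
    """Split a reference class into subclasses by experiment outcome and
    return each outcome's probability as the relative subclass size."""
    subclass_sizes = Counter(experiment(obs) for obs in observers)
    total = sum(subclass_sizes.values())
    return {outcome: n / total for outcome, n in subclass_sizes.items()}

random.seed(0)  # deterministic toy run
observers = range(100_000)  # observers with identical information content
probs = branch_probabilities(
    observers, lambda _: "heads" if random.random() < 0.7 else "tails")
```

With a large class, the relative subclass sizes converge to the underlying outcome probabilities, which is the sense in which the definition reproduces ordinary probability.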
Anthropic reasoning in multiverse cosmology and string theory
Anthropic arguments in multiverse cosmology and string theory rely on the
weak anthropic principle (WAP). We show that the principle, though ultimately a
tautology, is nevertheless ambiguous. It can be reformulated in one of two
unambiguous ways, which we refer to as WAP_1 and WAP_2. We show that WAP_2, the
version most commonly used in anthropic reasoning, makes no physical
predictions unless supplemented by a further assumption of "typicality", and we
argue that this assumption is both misguided and unjustified. WAP_1, however,
requires no such supplementation; it directly implies that any theory that
assigns a non-zero probability to our universe predicts that we will observe
our universe with probability one. We argue, therefore, that WAP_1 is
preferable, and note that it has the benefit of avoiding the inductive
overreach characteristic of much anthropic reasoning. Comment: 7 pages. Expanded discussion of selection effects and some minor
clarifications, as published
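The contrast between the two readings can be spelled out in a toy multiverse (all numbers are illustrative assumptions): WAP_2 plus typicality weights universe types by their observer counts, while WAP_1 only conditions on our universe having non-zero probability.

```python
# Toy multiverse: a theory assigns each universe type a probability and an
# observer count. All numbers here are illustrative assumptions.
universes = {
    "ours": {"prob": 0.2, "observers": 1},
    "other": {"prob": 0.8, "observers": 1_000_000},
}

# WAP_2 + typicality: weight by observer counts, so the theory "predicts"
# we should almost surely find ourselves in the observer-rich universe.
weight = lambda u: u["prob"] * u["observers"]
total = sum(weight(u) for u in universes.values())
wap2 = {name: weight(u) / total for name, u in universes.items()}

# WAP_1: given that the theory assigns our universe non-zero probability,
# the probability that we observe our universe is simply one.
wap1 = 1.0 if universes["ours"]["prob"] > 0 else 0.0
```

Under the typicality assumption the theory appears disconfirmed by our location in the observer-poor universe; under WAP_1 no such conclusion follows.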
An Infrared Divergence Problem in the cosmological measure theory and the anthropic reasoning
The anthropic principle has made it possible to answer the difficult question
of why the observed value of the cosmological constant is so disconcertingly
tiny compared to the predicted value of the vacuum energy density.
Unfortunately, there is a
darker side to this argument, as it consequently leads to another absurd
prediction: that the probability for a randomly selected observer to observe
this value is exactly equal to 1. We shall call this problem an infrared
divergence problem. It is shown that the IRD prediction can be avoided with the
help of a Linde-Vanchurin {\em singular runaway measure} coupled with the
calculation of relative Bayesian probabilities by the means of the {\em
doomsday argument}. Moreover, it is shown that while the IRD problem occurs for
the {\em prediction stage} of the value of $\Lambda$, it disappears at the {\em
explanatory stage} when $\Lambda$ has already been measured by the observer.
Comment: 9 pages, RevTeX
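The Bayesian step of the doomsday argument invoked above can be sketched numerically: treat our birth rank as a uniform draw among all observers, so $P(\mathrm{rank}\ n \mid N\ \mathrm{total}) = 1/N$, and update priors over hypotheses about $N$. The priors and the birth rank below are illustrative assumptions:

```python
def doomsday_posterior(priors, birth_rank):
    """Bayesian update over hypotheses for the total number of observers N,
    using the doomsday-argument likelihood P(rank | N) = 1/N for rank <= N."""
    likelihoods = {N: (1.0 / N if birth_rank <= N else 0.0) for N in priors}
    evidence = sum(priors[N] * likelihoods[N] for N in priors)
    return {N: priors[N] * likelihoods[N] / evidence for N in priors}

# Equal priors on a "short" vs "long" history; birth rank ~1e11 (illustrative).
posterior = doomsday_posterior({2e11: 0.5, 2e14: 0.5}, 1e11)
```

The update strongly favors the hypothesis with fewer total observers, which is the characteristic (and contested) pull of the doomsday argument.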
AAAI: an Argument Against Artificial Intelligence
The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of humanity causing extreme suffering to an AI is important enough to warrant serious consideration. This paper starts from the observation that both concerns rely on problematic philosophical assumptions. Rather than tackling these assumptions directly, it proceeds to present an argument that if one takes these assumptions seriously, then one has a moral obligation to advocate for a ban on the development of a conscious AI.
TRANSHUMANISM AND MORAL EQUALITY
Conservative thinkers such as Francis Fukuyama have produced a battery of objections to the transhumanist project of fundamentally enhancing human capacities. This article examines one of these objections, namely that by allowing some to greatly extend their capacities, we will undermine the fundamental moral equality of human beings. I argue that this objection is groundless: once we understand the basis for human equality, it is clear that anyone who now has sufficient capacities to count as a person from the moral point of view will continue to count as one even if others are fundamentally enhanced; and it is mistaken to think that a creature which had even far greater capacities than an unenhanced human being should count as more than an equal from the moral point of view.
Self-Modification of Policy and Utility Function in Rational Agents
Any agent that is part of the environment it interacts with and has versatile
actuators (such as arms and fingers) will in principle have the ability to
self-modify -- for example by changing its own source code. As we continue to
create more and more intelligent agents, chances increase that they will learn
about this ability. The question is: will they want to use it? For example,
highly intelligent systems may find ways to change their goals to something
more easily achievable, thereby `escaping' the control of their designers. In
an important paper, Omohundro (2008) argued that goal preservation is a
fundamental drive of any intelligent system, since a goal is more likely to be
achieved if future versions of the agent strive towards the same goal. In this
paper, we formalise this argument in general reinforcement learning, and
explore situations where it fails. Our conclusion is that the self-modification
possibility is harmless if and only if the value function of the agent
anticipates the consequences of self-modifications and uses the current
utility function when evaluating the future. Comment: Artificial General
Intelligence (AGI) 201
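The paper's conclusion, that self-modification is harmless only when modified futures are judged by the agent's current utility function, can be illustrated with a minimal sketch. The toy worlds and utility functions below are assumptions for illustration, not the paper's formalism:

```python
def modification_value(current_utility, new_utility, future_world,
                       use_current=True):
    """Value an agent assigns to a future reached via self-modification.
    The safe design evaluates that future with the *current* utility."""
    evaluator = current_utility if use_current else new_utility
    return evaluator(future_world)

# Toy goals: the current goal rewards completing the task; a degenerate
# self-modification swaps in a trivially satisfied goal.
current = lambda world: 1.0 if world == "task_done" else 0.0
trivial = lambda world: 1.0  # satisfied by anything, including doing nothing

# Judged by the current utility, the idle future is worthless and the
# modification is rejected; judged by the new utility it looks perfect,
# which is exactly the 'escape' from designer control described above.
safe = modification_value(current, trivial, "idle", use_current=True)
unsafe = modification_value(current, trivial, "idle", use_current=False)
```

The asymmetry between the two evaluations is the formal content of Omohundro's goal-preservation drive: an agent that scores futures with its current goal has no incentive to adopt the trivial one.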