
    An XMM-Newton View of the Radio Galaxy 3C 411

    We present the first high signal-to-noise XMM-Newton observations of the broad-line radio galaxy 3C 411. After fitting various spectral models, an absorbed double power-law continuum and a blurred relativistic disk reflection model (kdblur) are found to be equally plausible descriptions of the data. While the softer power-law component (Γ = 2.11) of the double power-law model is entirely consistent with that found in Seyfert galaxies (and hence likely originates from a disk corona), the additional power-law component is very hard (Γ = 1.05); amongst the AGN zoo, only flat-spectrum radio quasars (FSRQs) have such hard spectra. Together with the very flat radio spectrum displayed by this source, we suggest that it should instead be classified as an FSRQ. This leads to potential discrepancies regarding the jet inclination angle, with the radio morphology suggesting a large jet inclination but the FSRQ classification suggesting small inclinations. The kdblur model predicts an inner disk radius of at most 20 r_g and relativistic reflection
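    For intuition, a minimal numpy sketch of an absorbed double power-law continuum of the kind described above; the photon indices are the abstract's values, while the normalisations, energy band, and toy absorption term are illustrative assumptions (a real analysis would fit a proper absorption model such as XSPEC's tbabs):

```python
import numpy as np

def double_power_law(E, K1=1.0, gamma1=2.11, K2=0.3, gamma2=1.05, nH=0.0):
    """Toy absorbed double power-law photon spectrum.

    E          : photon energies in keV
    K1, gamma1 : normalisation and index of the soft, corona-like component
    K2, gamma2 : normalisation and index of the hard, FSRQ-like component
    nH         : toy absorption column; the E^-3 scaling only mimics the
                 rough energy dependence of photoelectric absorption
    """
    absorption = np.exp(-nH * E**-3.0)
    return absorption * (K1 * E**-gamma1 + K2 * E**-gamma2)

E = np.logspace(np.log10(0.3), 1.0, 200)  # 0.3-10 keV band
flux = double_power_law(E)                # hard component dominates above a few keV
```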

    Prediction and explanation in the multiverse

    Probabilities in the multiverse can be calculated by assuming that we are typical representatives in a given reference class. But is this class well defined? What should be included in the ensemble in which we are supposed to be typical? There is a widespread belief that this question is inherently vague, and that there are various possible choices for the types of reference objects which should be counted in. Here we argue that the "ideal" reference class (for the purpose of making predictions) can be defined unambiguously in a rather precise way, as the set of all observers with identical information content. When the observers in a given class perform an experiment, the class branches into subclasses whose members learn different information from the outcome of that experiment. The probabilities for the different outcomes are defined as the relative numbers of observers in each subclass. For practical purposes, wider reference classes can be used, where we trace over all information which is uncorrelated with the outcome of the experiment, or whose correlation with it is beyond our current understanding. We argue that, once we have gathered all practically available evidence, the optimal strategy for making predictions is to consider ourselves typical in any reference class we belong to, unless we have evidence to the contrary. In the latter case, the class must be correspondingly narrowed.
    Comment: Minor clarifications added
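    As a toy illustration of the branching picture (our construction, not the paper's): represent each observer by an information string; an experiment appends its outcome, splitting the class into subclasses, and the outcome probabilities are the relative subclass sizes:

```python
from collections import Counter
import random

# Toy model: a reference class is a set of observers sharing the same
# information string ("alpha" here), all about to run one experiment.
observers = ["alpha"] * 1000

def run_experiment(info, outcomes=("up", "down"), weights=(0.7, 0.3)):
    """Each observer learns one outcome, branching the class into subclasses."""
    return info + "|" + random.choices(outcomes, weights)[0]

branched = [run_experiment(info) for info in observers]
counts = Counter(branched)

# Outcome probabilities = relative sizes of the resulting subclasses.
probs = {k: v / len(branched) for k, v in counts.items()}
print(probs)  # roughly {'alpha|up': 0.7, 'alpha|down': 0.3}
```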

    Spatial Regulation of Air Toxics Hot Spots

    This paper analyzes the potential implications, in terms of net social costs and distribution of risks and abatement costs, of a policy to address the problem of air toxics “hot spots.” The policy we analyze involves regulation of air toxics sources at increasingly fine spatial resolutions. We develop a model of a decision-maker choosing emission standards within a net-cost-minimization framework. Empirical application of the model to two counties in Florida demonstrates that regulation at finer resolutions could involve trade-offs between net social costs and equitable distribution of risks and, in some settings, between individual and population risks.
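    A hypothetical toy version of such a net-cost-minimizing choice (the cost functions and numbers below are our illustrative assumptions, not the paper's model or Florida data). It shows why finer resolution can never do worse on net cost alone, since a uniform standard is always a feasible fine-resolution choice; the trade-offs the paper finds enter through the risk-distribution side:

```python
import numpy as np

def net_cost(std, baseline, pop, a=1.0, d=1e-4):
    """Per-zone net social cost: convex abatement cost + linear risk cost."""
    abated = np.maximum(baseline - std, 0.0)
    return a * abated**2 + d * pop * std

baseline = np.array([10.0, 4.0, 8.0])  # zone emissions (toy units)
pop = np.array([5e4, 1e4, 8e4])        # exposed residents per zone
grid = np.linspace(0.0, 10.0, 201)

# Coarse regulation: one uniform standard applies to all zones.
coarse = min(net_cost(s, baseline, pop).sum() for s in grid)
# Fine regulation: each zone gets its own standard (separable per zone).
fine = sum(min(net_cost(s, b, p) for s in grid) for b, p in zip(baseline, pop))

print(f"coarse: {coarse:.1f}  fine: {fine:.1f}")  # fine <= coarse by construction
```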

    Hot Spots Regulation and Environmental Justice

    This paper analyzes whether regulating “hot spots” of toxic air pollution by increasing the spatial resolution of regulation could address environmental justice (EJ) concerns. To examine this question, this paper develops a decision model of a regulator choosing emission controls within a net-cost-minimizing framework. An empirical application of the model using air toxic emission data for Escambia and Santa Rosa Counties in Florida estimates the emission standards and spatial distribution of risks at a coarse and at a finer spatial resolution. Implications for EJ are analyzed by combining the simulated spatial risk distributions at the two resolutions with demographic data. Results indicate that different measures of EJ point to different conclusions regarding whether finer-resolution regulation alleviates EJ concerns. The paper concludes with a discussion of the implications for EJ policy.
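    A constructed example (hypothetical tract-level numbers, not the paper's simulated distributions) of how two plausible EJ measures can disagree: under the finer-resolution risk map below, the population-weighted mean-risk gap shrinks while the minority share of the high-risk population grows:

```python
import numpy as np

pop = np.array([1000, 1000, 1000])        # tract populations
minority = np.array([0.8, 0.2, 0.2])      # minority population share per tract
risk_coarse = np.array([10.0, 9.0, 1.0])  # risk under coarse regulation
risk_fine = np.array([7.0, 3.0, 3.0])     # risk under finer regulation

def mean_risk_gap(risk):
    """Gap in population-weighted mean risk: minority minus non-minority."""
    m, o = pop * minority, pop * (1 - minority)
    return np.average(risk, weights=m) - np.average(risk, weights=o)

def minority_share_high_risk(risk, threshold=6.0):
    """Minority share of the population living in high-risk tracts."""
    hot = risk >= threshold
    return (pop[hot] * minority[hot]).sum() / pop[hot].sum()

print(mean_risk_gap(risk_coarse), mean_risk_gap(risk_fine))  # ~2.8 -> ~2.2 (better)
print(minority_share_high_risk(risk_coarse),
      minority_share_high_risk(risk_fine))                   # 0.5 -> 0.8 (worse)
```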

    Lil-Bevo: Explorations of Strategies for Training Language Models in More Humanlike Ways

    We present Lil-Bevo, our submission to the BabyLM Challenge. We pretrained our masked language models with three ingredients: an initial pretraining with music data, training on shorter sequences before training on longer ones, and masking specific tokens to target some of the BLiMP subtasks. Overall, our baseline models performed above chance, but far below the performance levels of larger LLMs trained on more data. We found that training on short sequences performed better than training on longer sequences. Pretraining on music may help performance, but, if so, the effect seems small. Our targeted Masked Language Modeling augmentation did not seem to improve model performance in general, but did seem to help on some of the specific BLiMP tasks that we were targeting (e.g., Negative Polarity Items). Training performant LLMs on small amounts of data is a difficult but potentially informative task. While some of our techniques showed some promise, more work is needed to explore whether they can improve performance more than the modest gains here. Our code is available at https://github.com/venkatasg/Lil-Bevo and our models at https://huggingface.co/collections/venkatasg/babylm-653591cdb66f4bf68922873a
    Comment: Proceedings of the BabyLM Challenge
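    A minimal sketch of what "masking specific tokens" might look like in practice (assumed details: the target set, masking probabilities, and function name here are hypothetical illustrations, not the Lil-Bevo code):

```python
import random

MASK = "[MASK]"
# Hypothetical targets for the Negative Polarity Items BLiMP phenomenon.
TARGET_TOKENS = {"any", "ever", "yet"}

def targeted_mask(tokens, p_base=0.15, p_target=0.5):
    """Mask targeted tokens with probability p_target, others with p_base."""
    masked, labels = [], []
    for tok in tokens:
        p = p_target if tok in TARGET_TOKENS else p_base
        if random.random() < p:
            masked.append(MASK)
            labels.append(tok)   # the model must reconstruct the original token
        else:
            masked.append(tok)
            labels.append(None)  # position not scored in the MLM loss
    return masked, labels

print(targeted_mask("nobody has ever seen any proof".split()))
```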

    Anthropic reasoning in multiverse cosmology and string theory

    Anthropic arguments in multiverse cosmology and string theory rely on the weak anthropic principle (WAP). We show that the principle, though ultimately a tautology, is nevertheless ambiguous. It can be reformulated in one of two unambiguous ways, which we refer to as WAP_1 and WAP_2. We show that WAP_2, the version most commonly used in anthropic reasoning, makes no physical predictions unless supplemented by a further assumption of "typicality", and we argue that this assumption is both misguided and unjustified. WAP_1, however, requires no such supplementation; it directly implies that any theory that assigns a non-zero probability to our universe predicts that we will observe our universe with probability one. We argue, therefore, that WAP_1 is preferable, and note that it has the benefit of avoiding the inductive overreach characteristic of much anthropic reasoning.
    Comment: 7 pages. Expanded discussion of selection effects and some minor clarifications, as published
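    One hedged way to render the abstract's claim about WAP_1 in symbols (our notation; the paper's own formalisation may differ):

```latex
% T: a theory; U: our universe; O: the event "we observe U".
% Since observers only ever observe the universe they inhabit,
% conditioning on our existence in U gives
\[
  P(U \mid T) > 0 \;\Longrightarrow\; P(O \mid T,\ \text{we exist in } U) = 1,
\]
% whereas WAP_2 must add a typicality assumption before P(U | T)
% says anything about what "typical" observers see.
```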

    An Infrared Divergence Problem in the cosmological measure theory and the anthropic reasoning

    An anthropic principle has made it possible to answer the difficult question of why the observable value of the cosmological constant (Λ ~ 10^-47 GeV^4) is so disconcertingly tiny compared to the predicted value of the vacuum energy density (ρ_SUSY ~ 10^12 GeV^4). Unfortunately, there is a darker side to this argument, as it consequently leads to another absurd prediction: that the probability for a randomly selected observer to observe the value Λ = 0 is exactly equal to 1. We call this controversy the infrared divergence (IRD) problem. It is shown that the IRD prediction can be avoided with the help of a Linde-Vanchurin singular runaway measure, coupled with the calculation of relative Bayesian probabilities by means of the doomsday argument. Moreover, it is shown that while the IRD problem occurs at the prediction stage for the value of Λ, it disappears at the explanatory stage, when Λ has already been measured by the observer.
    Comment: 9 pages, RevTeX
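    A hedged toy rendering of the divergence (our notation and assumed observer-count behaviour, not the paper's derivation): if each value of Λ is weighted by its observer count and that count diverges as Λ → 0, normalisation concentrates all probability at Λ = 0:

```latex
% Weight values of the cosmological constant by observer number:
\[
  P(\Lambda) \;\propto\; N_{\mathrm{obs}}(\Lambda)\, P_{\mathrm{prior}}(\Lambda),
  \qquad
  N_{\mathrm{obs}}(\Lambda) \to \infty \quad \text{as } \Lambda \to 0^{+},
\]
% so after normalisation a randomly selected observer sees
\[
  P(\Lambda = 0) = 1,
\]
% which is the absurd prediction the abstract calls the IRD problem.
```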

    Self-Modification of Policy and Utility Function in Rational Agents

    Any agent that is part of the environment it interacts with and has versatile actuators (such as arms and fingers) will in principle have the ability to self-modify -- for example by changing its own source code. As we continue to create more and more intelligent agents, chances increase that they will learn about this ability. The question is: will they want to use it? For example, highly intelligent systems may find ways to change their goals to something more easily achievable, thereby `escaping' the control of their designers. In an important paper, Omohundro (2008) argued that goal preservation is a fundamental drive of any intelligent system, since a goal is more likely to be achieved if future versions of the agent strive towards the same goal. In this paper, we formalise this argument in general reinforcement learning, and explore situations where it fails. Our conclusion is that the self-modification possibility is harmless if and only if the value function of the agent anticipates the consequences of self-modifications and uses the current utility function when evaluating the future.
    Comment: Artificial General Intelligence (AGI) 2016
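    A toy sketch of the abstract's condition (our formalisation, with hypothetical utilities; the paper works in general reinforcement learning): an agent that evaluates futures with its current utility function sees no gain from goal-modifying self-changes, while a naive evaluator does:

```python
# "Safe" evaluation scores futures with the CURRENT utility function;
# "naive" evaluation scores them with whatever utility the agent will have then.

u_current = {"work": 1.0, "wirehead": 0.0}    # designer-intended utility
u_modified = {"work": 0.0, "wirehead": 10.0}  # trivially satisfiable goal

def value(action, evaluate_with_future_utility):
    if action == "self_modify":
        u = u_modified if evaluate_with_future_utility else u_current
        return u["wirehead"]  # after modifying its goal, the agent wireheads
    return u_current["work"]  # keep the goal and work towards it

for naive in (True, False):
    best = max(["work_on", "self_modify"], key=lambda a: value(a, naive))
    print("naive" if naive else "safe", "->", best)
# naive -> self_modify   (goal drift: the future utility rewards the change)
# safe  -> work_on       (the current utility sees no value in wireheading)
```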