
    Black Hole Masses of High-Redshift Quasars

    Black-hole masses of distant quasars cannot be measured directly, but they can be estimated to within a factor of 3 to 5 using scaling relationships involving the quasar luminosity and broad-line width. Why such relationships are reasonable is summarized. The results of applying scaling relationships to data for quasars over a range of redshifts (z <= 6.3) are presented. Luminous quasars typically have masses of order one billion solar masses even at the highest redshifts. The fact that such massive black holes appear as early as z ~ 6 indicates that black holes form very early or build up mass very fast. Comment: 4 pages with 2 figures, to appear in proceedings of Multiwavelength AGN Surveys, eds. R. Mujica and R. Maiolino (Singapore: World Scientific), 200
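    The scaling relationships referred to above combine the virial theorem with an empirical broad-line-region radius-luminosity relation, so the mass estimate reduces to a simple function of continuum luminosity and line width, roughly M_BH ~ L^0.5 (FWHM)^2. The Python sketch below illustrates this functional form; the zero point and slopes are representative values assumed for illustration, not the specific calibration adopted in this work.

```python
import math

def virial_bh_mass(fwhm_kms, lamL5100_erg_s,
                   zero_point=6.91, slope_v=2.0, slope_l=0.5):
    """Single-epoch virial black-hole mass estimate (illustrative sketch).

    Implements the generic scaling log M_BH = zero_point
    + slope_v * log10(FWHM / 1000 km/s) + slope_l * log10(lambda*L_5100 / 1e44 erg/s).
    The default constants are assumed, representative values, not the
    calibration used in the paper.  Returns log10(M_BH / M_sun).
    """
    return (zero_point
            + slope_v * math.log10(fwhm_kms / 1000.0)
            + slope_l * math.log10(lamL5100_erg_s / 1e44))

# A luminous quasar with FWHM = 5000 km/s and lambda*L_5100 = 1e46 erg/s
# gives log M_BH ~ 9.3, i.e. roughly 2 billion solar masses, consistent
# with the billion-solar-mass scale quoted in the abstract.
print(virial_bh_mass(5000.0, 1e46))
```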

    Black-Hole Mass Measurements

    The applicability and apparent uncertainties of the techniques currently available for measuring or estimating black-hole masses in AGNs are briefly summarized. Comment: 6 pages. Invited review at the 'AGN Physics with the Sloan Digital Sky Survey' conference (July 2003), eds. G. Richards and P. Hall (ASP Conf. Series, 2004)

    Optimizing experimental parameters for tracking of diffusing particles

    We describe how a single-particle tracking experiment should be designed in order for its recorded trajectories to contain the most information about a tracked particle's diffusion coefficient. The precision of estimators for the diffusion coefficient is affected by motion blur, limited photon statistics, and the length of recorded time-series. We demonstrate for a particle undergoing free diffusion that precision is negligibly affected by motion blur in typical experiments, while optimizing photon counts and the number of recorded frames is the key to precision. Building on these results, we describe for a wide range of experimental scenarios how to choose experimental parameters in order to optimize the precision. Generally, one should choose quantity over quality: experiments should be designed to maximize the number of frames recorded in a time-series, even if this means lower information content in individual frames.
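    As a toy illustration of the "quantity over quality" conclusion, the sketch below simulates free one-dimensional diffusion and estimates the diffusion coefficient with a plain mean-squared-displacement estimator; the scatter of the estimate across repeated runs shrinks roughly as one over the square root of the number of recorded frames. The simulation setup and this simple estimator are assumptions made for illustration, not the estimators analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_track(D, dt, n_frames, sigma_loc=0.0):
    """Free 1D diffusion sampled at n_frames time points, with optional
    Gaussian localization noise of width sigma_loc (illustrative model,
    ignoring motion blur)."""
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), n_frames - 1)
    positions = np.cumsum(np.concatenate([[0.0], steps]))
    return positions + rng.normal(0.0, sigma_loc, n_frames)

def estimate_D(track, dt):
    """Mean-squared-displacement estimator: D_hat = <dx^2> / (2 dt)."""
    dx = np.diff(track)
    return np.mean(dx**2) / (2.0 * dt)

# The spread of D_hat over repeated experiments shrinks roughly as
# 1/sqrt(n_frames): more frames beat better individual frames.
D_true, dt = 1.0, 0.05
for n_frames in (10, 100, 1000):
    estimates = [estimate_D(simulate_track(D_true, dt, n_frames), dt)
                 for _ in range(500)]
    print(n_frames, round(float(np.std(estimates)), 3))
```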

    Discard Behavior, Highgrading and Regulation: The Case of the Greenland Shrimp Fishery

    A formal economic analysis of the discarding problem is presented, focusing on the individual fisherman and the effect of different regulations on the fisherman's incentives to discard. It is shown that in an unregulated fishery, either multispecies or single-species/multisize, where the only constraints are the hold capacity and the length of the season, the fisherman may have a rational incentive to discard (highgrade) if the marginal trip profit of an extra fishing day is greater than the average trip profit. Regulation by TAC does not change the incentives to discard. However, under INTQs and ITQs the incentives to discard increase. The incentives to discard are smaller under ITQs than under INTQs if the unit quota price is smaller than the shadow price of the quota. The model is applied to the Greenland shrimp fishery, where it reproduces the discard behavior reported in that fishery. Finally, different regulations of discarding are applied and discussed within the model. The analysis suggests that regulating fishing days could be a promising alternative to commonly suggested measures such as taxes/subsidies and landing obligations. Subject: Resource/Energy Economics and Policy
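    The discard condition stated in the abstract lends itself to a one-line check. The snippet below is a stylized illustration with hypothetical numbers, not the formal model developed in the paper.

```python
def discard_incentive(marginal_trip_profit, average_trip_profit):
    """Stylized version of the decision rule quoted in the abstract:
    with a binding hold capacity and a limited season, discarding
    (highgrading) pays off when the marginal trip profit of an extra
    fishing day exceeds the average trip profit."""
    return marginal_trip_profit > average_trip_profit

# Hypothetical numbers: an extra fishing day would add 1200 in trip profit,
# while the trip so far averages 900 per day, so hold space is worth more
# when filled with higher-value catch and low-value fish are discarded.
print(discard_incentive(1200.0, 900.0))  # True
```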

    The G20 has served its purpose and should be replaced with a Global Economic Council on a firmer constitutional foundation

    Robert Wade and Jakob Vestergaard argue that, by permanently excluding 172 countries, the G20 deprives the large majority of nations of a voice on matters that may crucially affect them. They believe it should be replaced by a Global Economic Council (GEC) based on a delegated voting system, and here they provide details of what this might look like.

    Systematic Uncertainties in Black Hole Masses Determined from Single Epoch Spectra

    We explore the nature of systematic errors that can arise in measurements of black hole masses from single-epoch spectra of active galactic nuclei (AGNs) by utilizing the many epochs available for NGC 5548 and PG 1229+204 in reverberation mapping databases. In particular, we examine systematics due to AGN variability, contamination by constant spectral components (i.e., narrow lines and host galaxy flux), data quality (i.e., signal-to-noise ratio, S/N), and blending of spectral features by comparing the precision and accuracy of single-epoch mass measurements to those of recent reverberation mapping studies. We calculate masses by characterizing the broad Hbeta emission line by both the full width at half maximum and the line dispersion, and we demonstrate the importance of removing narrow emission-line components and host starlight. We find that the reliability of line width measurements decreases rapidly for S/N lower than ~10 to 20 (per pixel) and that fitting the line profiles instead of measuring them directly from the data does not mitigate this problem but can, in fact, introduce systematic errors. We also conclude that a full spectral decomposition to deblend the AGN and galaxy spectral features is unnecessary except to judge the contribution of the host galaxy to the luminosity and to deblend any emission lines that may inhibit accurate line width measurements. Finally, we present an error budget that summarizes the minimum observable uncertainties as well as the amount of additional scatter and/or systematic offset that can be expected from the individual sources of error investigated. In particular, we find that the minimum observable uncertainty in single-epoch mass estimates due to variability is small for high-quality (S/N of ~20 per pixel or higher) spectra. Comment: 60 pages, 20 figures, accepted for publication in Ap
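    The two line-width measures used above, the full width at half maximum and the line dispersion, are straightforward to compute once the narrow-line and host-galaxy contributions have been removed. The sketch below is a minimal illustration of both measures for a continuum-subtracted profile; the function and its simple half-maximum-crossing definition of the FWHM are assumptions for illustration, not the paper's measurement procedure.

```python
import numpy as np

def line_width_measures(wave, flux):
    """FWHM and line dispersion (flux-weighted second moment) of an
    emission-line profile, assuming the continuum, narrow lines, and
    host-galaxy light have already been subtracted."""
    wave = np.asarray(wave, dtype=float)
    flux = np.asarray(flux, dtype=float)

    # line dispersion: sigma_line = sqrt(<lambda^2> - <lambda>^2), flux-weighted
    centroid = np.sum(wave * flux) / np.sum(flux)
    sigma_line = np.sqrt(np.sum((wave - centroid) ** 2 * flux) / np.sum(flux))

    # FWHM: span between the outermost points at or above half the peak flux
    above = np.where(flux >= 0.5 * flux.max())[0]
    fwhm = wave[above[-1]] - wave[above[0]]
    return fwhm, sigma_line
```

    As the abstract notes, both measures become unreliable for data with S/N below roughly 10 to 20 per pixel, and profile fitting does not by itself cure that.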

    Temporal Gillespie algorithm: Fast simulation of contagion processes on time-varying networks

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. Here we present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, is stochastically exact, and is up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models, as well as a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is typically 10 to 100 times faster than rejection sampling. Comment: Minor changes and updates to reference
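    As a concrete illustration of the algorithm's structure, here is a minimal Python sketch of a temporal Gillespie simulation of the Susceptible-Infected-Susceptible model on a snapshot-based temporal network. It is a sketch written for this summary, not the authors' pseudocode or C++ implementation; the function name, data layout, and parameterization are assumptions.

```python
import random

def temporal_gillespie_sis(snapshots, dt, beta, mu, infected, seed=1):
    """Sketch of a temporal Gillespie simulation of SIS spreading.

    snapshots -- list of contact lists, one list of (i, j) edges per time step
    dt        -- duration of each snapshot
    beta, mu  -- per-contact infection rate and recovery rate (constant)
    infected  -- initial set of infected nodes
    Returns the number of infected nodes after each snapshot.
    """
    rng = random.Random(seed)
    infected = set(infected)
    tau = rng.expovariate(1.0)              # normalized waiting time ~ Exp(1)
    prevalence = []
    for edges in snapshots:
        remaining = dt
        while True:
            # enumerate the possible transitions and their rates in the current state
            events = [("recover", i, mu) for i in infected]
            for i, j in edges:
                if i in infected and j not in infected:
                    events.append(("infect", j, beta))
                elif j in infected and i not in infected:
                    events.append(("infect", i, beta))
            total = sum(rate for _, _, rate in events)
            if total * remaining < tau:
                tau -= total * remaining    # waiting time not reached; carry it over
                break
            remaining -= tau / total        # an event fires within this snapshot
            x = rng.random() * total        # pick an event proportionally to its rate
            for kind, node, rate in events:
                x -= rate
                if x <= 0.0:
                    if kind == "recover":
                        infected.discard(node)
                    else:
                        infected.add(node)
                    break
            tau = rng.expovariate(1.0)      # draw the next normalized waiting time
        prevalence.append(len(infected))
    return prevalence

# Example: three snapshots of a small network, node 0 initially infected.
snaps = [[(0, 1), (2, 3)], [(1, 2)], [(0, 3), (1, 3)]]
print(temporal_gillespie_sis(snaps, dt=1.0, beta=0.5, mu=0.1, infected={0}))
```

    The key point is that the unit-rate waiting time tau is consumed across snapshots in proportion to the instantaneous total rate, which is what removes the need for rejection sampling.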

    An Empirical Ultraviolet Iron Spectrum Template Applicable to Active Galaxies

    Iron emission is often a severe contaminant in the optical-ultraviolet spectra of active galaxies, and its presence complicates emission-line studies. A viable solution, already applied successfully at optical wavelengths, is to use an empirical iron emission template. We have generated FeII and FeIII templates for ultraviolet active galaxy spectra based on archival HST spectra of I Zw 1 covering 1100-3100 A. Their application allows fitting and subtraction of the iron emission in active galaxy spectra. This work has shown that, in particular, CIII] lambda 1909 can be heavily contaminated by other line emission, including iron transitions. Details of the data processing, generation, and use of the templates are given by Vestergaard & Wilkes (2001). Comment: 4 pages, including 1 figure, to appear in "Spectroscopic Challenges of Photoionized Plasmas", ASP Conf. Series, eds. Gary Ferland and Daniel Wolf Savi
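    In practice, applying such a template amounts to fitting a scaled (and usually broadened) copy of it, together with a continuum model, to the observed spectrum and subtracting the best-fit iron component. The sketch below shows the idea with a plain linear least-squares fit; the simple linear continuum and the omission of template broadening and velocity shifting are simplifications made for illustration, not the procedure described by Vestergaard & Wilkes (2001).

```python
import numpy as np

def subtract_iron_template(wave, flux, template_flux):
    """Fit and subtract a UV iron emission template (illustrative sketch).

    Models the observed spectrum as a linear continuum plus a scaled iron
    template, all defined on the same wavelength grid, and solves for the
    three coefficients by linear least squares.
    """
    wave = np.asarray(wave, dtype=float)
    flux = np.asarray(flux, dtype=float)
    template_flux = np.asarray(template_flux, dtype=float)

    # design matrix: [constant, linear continuum term, iron template scale]
    A = np.column_stack([np.ones_like(wave), wave - wave.mean(), template_flux])
    coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)

    iron_model = coeffs[2] * template_flux
    return flux - iron_model, iron_model
```

    Subtracting the iron component in this way makes it easier to measure lines such as CIII] lambda 1909, which the abstract notes can be heavily blended with iron emission.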

    Fisheries Management with Multiple Market Failures

    Within fisheries it is well known that several market failures exist. However, fisheries economists tend to analyse these market failures separately, despite the fact that they arise simultaneously. In this paper, several simultaneously arising market failures are analysed together. A resource stock tax and a tax on self-reported harvest are considered as a solution to the problems associated with the stock externality, the measurement of individual catches, and stock uncertainty. Within a fisheries economic model it is shown that it is in the interest of risk-averse fishermen to report part of their catch even without a control policy. In addition, it is shown that this tax structure can secure optimal expected individual catches, and simulations show that the tax payment is very low. Thus, the tax system may be useful in practical fisheries management. Keywords: Price regulation, Quantity regulation, Asymmetric Information, Self-Reporting, Stock Tax and Harvest Tax