Selection bias in dynamically-measured super-massive black hole samples: consequences for pulsar timing arrays
Supermassive black hole -- host galaxy relations are key to the computation
of the expected gravitational wave background (GWB) in the pulsar timing array
(PTA) frequency band. It has recently been pointed out that the standard
relations adopted in GWB computations are biased high. We show that when this
selection bias is taken into account, the expected GWB in the PTA band is a
factor of about three smaller than previously estimated. Compared to other
scaling relations recently published in the literature, the median amplitude of
the signal at a frequency of 1/yr is correspondingly lower. Although this
solves any potential tension between
theoretical predictions and recent PTA limits without invoking other dynamical
effects (such as stalling, eccentricity or strong coupling with the galactic
environment), it also makes GWB detection more challenging.
Comment: 6 pages, 4 figures; submitted to MNRAS Letters
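For context, the GWB from circular, GW-driven binaries is commonly modelled as a power-law characteristic strain, h_c(f) = A (f / yr^-1)^(-2/3), so a selection-bias correction that lowers the amplitude A by a factor of about three lowers the whole spectrum uniformly. A minimal sketch (the amplitude values below are placeholders, not the paper's measurements):

```python
import numpy as np

def characteristic_strain(f_yr, amplitude):
    """Power-law GWB spectrum h_c(f) = A * (f / 1 yr^-1)^(-2/3)."""
    return amplitude * f_yr ** (-2.0 / 3.0)

# Placeholder amplitudes (not the paper's values): 'biased' scaling relations
# versus a bias-corrected amplitude a factor of ~3 lower.
A_biased = 1.0e-15
A_corrected = A_biased / 3.0

f = np.logspace(-1, 1, 5)  # frequencies in units of 1/yr
ratio = characteristic_strain(f, A_biased) / characteristic_strain(f, A_corrected)
# The ratio is frequency independent and equals the amplitude ratio of 3.
```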
Allocation of risk capital in a cost cooperative game induced by a modified expected shortfall
The standard theory of coherent risk measures fails to consider individual institutions as part of a system which might itself experience instability and spread new sources of risk to the market participants. This paper fills this gap and proposes a cooperative market game where agents and institutions play the same role. We take into account a multiple-institutions framework where some institutions jointly experience distress, and evaluate their individual and collective impact on the remaining institutions in the market. To carry out the analysis, we define a new risk measure (SCoES), a generalization of the Expected Shortfall, and we characterize the riskiness profile as the outcome of a cost cooperative game played by institutions in distress. Each institution’s marginal contribution to the spread of riskiness towards the safe institutions is then evaluated by calculating suitable solution concepts of the game, such as the Banzhaf–Coleman and the Shapley–Shubik values.
This is an Accepted Manuscript of an article published by Taylor & Francis in Journal of the Operational Research Society on 16/12/2019, available online: http://www.tandfonline.com/10.1080/01605682.2019.168695
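The allocation via solution concepts of a cost game can be sketched with the generic Shapley value (the average marginal cost a player adds over all orders in which the grand coalition can form). The three-institution cost function below is a hypothetical stand-in for the SCoES-induced game, not taken from the paper:

```python
import math
from itertools import permutations

def shapley_values(players, cost):
    """Shapley value of a cost game: each player's marginal cost contribution,
    averaged over all orders in which the grand coalition can form."""
    values = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            values[p] += cost(coalition | {p}) - cost(coalition)
            coalition = coalition | {p}
    n_orders = math.factorial(len(players))
    return {p: v / n_orders for p, v in values.items()}

# Hypothetical (subadditive) cost function standing in for the real game.
costs = {frozenset(): 0.0,
         frozenset({"A"}): 6.0, frozenset({"B"}): 6.0, frozenset({"C"}): 9.0,
         frozenset({"A", "B"}): 10.0, frozenset({"A", "C"}): 12.0,
         frozenset({"B", "C"}): 12.0, frozenset({"A", "B", "C"}): 15.0}
phi = shapley_values(["A", "B", "C"], lambda s: costs[s])
# Efficiency: the Shapley values sum to the cost of the grand coalition (15.0).
```

By efficiency, the allocations always add up to the grand-coalition cost, which is what makes the value usable as a capital-allocation rule.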
The Efficiency of Question-Asking Strategies in a Real-World Visual Search Task
In recent years, a multitude of datasets of human–human conversations has been released for the main purpose of training conversational agents based on data-hungry artificial neural networks. In this paper, we argue that datasets of this sort represent a useful and underexplored source to validate, complement, and enhance cognitive studies on human behavior and language use. We present a method that leverages the recent development of powerful computational models to obtain the fine-grained annotation required to apply metrics and techniques from Cognitive Science to large datasets. Previous work in Cognitive Science has investigated the question-asking strategies of human participants by employing different variants of the so-called 20-question-game setting and proposing several evaluation methods. In our work, we focus on GuessWhat, a task proposed within the Computer Vision and Natural Language Processing communities that is similar in structure to the 20-question-game setting. Crucially, the GuessWhat dataset contains tens of thousands of dialogues based on real-world images, making it a suitable setting to investigate the question-asking strategies of human players on a large scale and in a natural setting. Our results demonstrate the effectiveness of computational tools to automatically code how the hypothesis space changes throughout the dialogue in complex visual scenes. On the one hand, we confirm findings from previous work on smaller and more controlled settings. On the other hand, our analyses allow us to highlight the presence of “uninformative” questions (in terms of Expected Information Gain) at specific rounds of the dialogue. We hypothesize that these questions fulfill pragmatic constraints that are exploited by human players to solve visual tasks in complex scenes successfully. 
Our work illustrates a method that brings together efforts and findings from different disciplines to gain a better understanding of human question-asking strategies on large-scale datasets, while at the same time posing new questions about the development of conversational systems.
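Expected Information Gain, the metric mentioned above, can be sketched for a deterministic yes/no question under a uniform prior over candidates: it reduces to the binary entropy of the split the question induces on the hypothesis space. The candidate objects and questions below are illustrative, not drawn from GuessWhat:

```python
import math

def expected_information_gain(candidates, question):
    """EIG of a deterministic yes/no question under a uniform prior:
    the binary entropy of the split it induces on the candidate set."""
    yes = sum(1 for c in candidates if question(c))
    p = yes / len(candidates)
    if p in (0.0, 1.0):
        return 0.0  # uninformative: every candidate gives the same answer
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Illustrative candidates standing in for objects in a visual scene.
objects = [{"cat": "person"}, {"cat": "person"}, {"cat": "car"}, {"cat": "dog"}]
eig_person = expected_information_gain(objects, lambda o: o["cat"] == "person")
eig_always = expected_information_gain(objects, lambda o: True)
# An even split yields 1 bit; a question with a constant answer yields 0 bits,
# which is the sense in which a question is "uninformative".
```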
Post-correlation radio frequency interference classification methods
We describe and compare several post-correlation radio frequency interference
classification methods. As data sizes of observations grow with new and
improved telescopes, the need for completely automated, robust methods for
radio frequency interference mitigation is pressing. We investigate several
classification methods and find that, for the data sets we used, the most
accurate among them is the SumThreshold method. This is a new method formed
from a combination of existing techniques, including a new way of thresholding.
This iterative method estimates the astronomical signal by carrying out a
surface fit in the time-frequency plane. With a theoretical accuracy of 95%
recognition and a false-positive probability of approximately 0.1% in simple
simulated cases, the method is in practice as good as the human eye in finding
RFI. In addition, it is fast, robust, does not need a data model before it can
be executed, and works in almost all configurations with its default parameters.
The method has been compared using simulated data with several other mitigation
techniques, including one based upon the singular value decomposition of the
time-frequency matrix, and has shown better results than the rest.
Comment: 14 pages, 12 figures (11 in colour). The software that was used in the article can be downloaded from http://www.astro.rug.nl/rfi-software
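The combination-of-sums idea behind SumThreshold can be illustrated with a simplified one-dimensional sketch. This is not the published implementation, which operates on the two-dimensional time-frequency plane and iterates with a surface fit to the astronomical signal; the parameters `chi1` and `rho` below are illustrative:

```python
import numpy as np

def sum_threshold_1d(data, chi1=3.0, rho=1.5, max_window=8):
    """Simplified 1-D sketch of SumThreshold: flag any run of M consecutive
    samples whose sum exceeds M * chi1 / rho**log2(M), for M = 1, 2, 4, ...
    Previously flagged samples are replaced by the current threshold so that
    strong RFI does not hide weaker interference next to it."""
    flags = np.zeros(len(data), dtype=bool)
    m = 1
    while m <= max_window:
        threshold = chi1 / rho ** np.log2(m)
        values = np.where(flags, threshold, data)
        for start in range(len(data) - m + 1):
            window = slice(start, start + m)
            if values[window].sum() > threshold * m:
                flags[window] = True
        m *= 2
    return flags

# A burst whose samples all sit below the single-sample threshold (1.5 < 3.0)
# is still caught once the window is long enough to accumulate its sum.
signal = np.zeros(64)
signal[20:28] = 1.5
flags = sum_threshold_1d(signal)
```

The key design point is that the per-sample threshold decreases as the window grows, so extended, faint interference is flagged by its sum while isolated noise spikes are judged against the stricter single-sample cut.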
PC1643+4631A,B: The Lyman-Alpha Forest at the Edge of Coherence
We present the first measurement and detection of coherence in the intergalactic
medium (IGM) at high redshift (z~3.8) and on large physical scales
(~2.5 h^-1 Mpc). The measurement is based on new observations from Keck LRIS
of the high-redshift quasar pair PC 1643+4631A,B and their coincident
Ly-alpha absorbers. This experiment extends multiple
sightline quasar absorber studies to higher redshift, higher opacity, larger
transverse separation, and into a regime where coherence across the IGM becomes
weak and difficult to detect. We fit 222 discrete Ly-alpha absorbers to
sightline A and 211 to sightline B. Relative to a Monte Carlo pairing test
(using symmetric, nearest neighbor matching) the data exhibit a 4sigma excess
of pairs at low velocity splitting (<150 km/s), thus detecting coherence on
transverse scales of ~2.5 h^-1 Mpc. We use spectra extracted from an SPH
simulation to analyze symmetric pair matching and transmission distributions as
a function of redshift, and to compute zero-lag cross-correlations to compare
with the quasar pair data. The simulations agree with the data, showing the same
strength (~4sigma) at similarly low velocity splitting above random chance
pairings. In cross-correlation tests, the simulations agree when the mean flux
(as a function of redshift) is assumed to follow the prescription given by
Kirkman et al. (2005). While the detection of flux correlation (measured
through coincident absorbers and cross-correlation amplitude) is only
marginally significant, the agreement between data and simulations is
encouraging for future work, in which better-quality data will provide deeper
insight into the overarching structure of the IGM as modelled by SPH
simulations.
Comment: 15 pages, 11 figures; accepted for publication in the Astronomical
Journal
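The Monte Carlo pairing test described above can be sketched in Python. Everything below is illustrative: the absorber velocity lists are synthetic and the randomisation simply scatters one sightline uniformly, which is a stand-in for the paper's procedure; only the 150 km/s splitting cut is taken from the text:

```python
import numpy as np

def symmetric_pairs(va, vb, max_dv=150.0):
    """Count symmetric nearest-neighbour pairs: absorbers a and b match iff
    each is the other's nearest neighbour and their splitting is < max_dv."""
    nearest_in_b = np.abs(vb[None, :] - va[:, None]).argmin(axis=1)
    nearest_in_a = np.abs(va[None, :] - vb[:, None]).argmin(axis=1)
    return sum(1 for i, j in enumerate(nearest_in_b)
               if nearest_in_a[j] == i and abs(va[i] - vb[j]) < max_dv)

def pairing_excess(va, vb, rng, n_trials=1000):
    """Excess of observed pairs over randomised sightlines, in units of the
    Monte Carlo standard deviation (a rough significance estimate)."""
    observed = symmetric_pairs(va, vb)
    lo, hi = min(va.min(), vb.min()), max(va.max(), vb.max())
    random_counts = [symmetric_pairs(va, rng.uniform(lo, hi, len(vb)))
                     for _ in range(n_trials)]
    return observed, (observed - np.mean(random_counts)) / np.std(random_counts)

rng = np.random.default_rng(1)
va = np.sort(rng.uniform(0.0, 3.0e5, 60))  # absorber velocities, sightline A (km/s)
vb = va + rng.normal(0.0, 50.0, 60)        # correlated absorbers, sightline B
observed, excess = pairing_excess(va, vb, rng)
# Correlated sightlines show far more low-splitting pairs than chance pairings.
```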
A Search for the Most Massive Galaxies. II. Structure, Environment and Formation
We study a sample of 43 early-type galaxies, selected from the SDSS because
they appeared to have velocity dispersion > 350 km/s. High-resolution
photometry in the SDSS i passband using HRC-ACS on board the HST shows that
just less than half of the sample is made up of superpositions of two or three
galaxies, so the reported velocity dispersion is incorrect. The other half of
the sample is made up of single objects with genuinely large velocity
dispersions. None of these objects has sigma larger than 426 +- 30 km/s. These
objects define rather different relations than the bulk of the early-type
galaxy population: for their luminosities, they are the smallest, most massive
and densest galaxies in the Universe. Although the slopes of the scaling
relations they define are rather different from those of the bulk of the
population, they lie approximately parallel to those of the bulk "at fixed
sigma". These objects appear to be of two distinct types: the less luminous
(M_r>-23) objects are rather flattened and extremely dense for their
luminosities -- their properties suggest some amount of rotational support and
merger histories with abnormally large amounts of gaseous dissipation. The more
luminous objects (M_r<-23) tend to be round and to lie in or at the centers of
clusters. Their properties are consistent with the hypothesis that they are
BCGs. Models in which BCGs form from predominantly radial mergers having little
angular momentum predict that they should be prolate. If viewed along the major
axis, such objects would appear to have abnormally large sigma for their sizes,
and to be abnormally round for their luminosities. This is true of the objects
in our sample once we account for the fact that the most luminous galaxies
(M_r<-23.5), and BCGs, become slightly less round with increasing luminosity.
Comment: 21 pages, 19 figures; accepted for publication in MNRAS