Is there a "too big to fail" problem in the field?
We use the Arecibo Legacy Fast ALFA (ALFALFA) 21cm survey to measure the
number density of galaxies as a function of their rotational velocity
(as inferred from the width of their 21cm emission line).
Based on the measured velocity function we statistically connect galaxies with
their host halo via abundance matching. In a Lambda cold dark matter
(ΛCDM) cosmology, dwarf galaxies are expected to be hosted by halos
that are significantly more massive than indicated by the measured galactic
velocity; if smaller halos were allowed to host galaxies, then ALFALFA would
measure a much higher galactic number density. We then seek observational
verification of this predicted trend by analyzing the kinematics of a
literature sample of gas-rich dwarf galaxies. We find that the galaxies
with the lowest rotational velocities are kinematically incompatible with
their predicted ΛCDM host halos, in the sense that the hosts are too
massive to be accommodated within the measured
galactic rotation curves. This issue is analogous to the "too big to fail"
problem faced by the bright satellites of the Milky Way, but here it concerns
extreme dwarf galaxies in the field. Consequently, solutions based on
satellite-specific processes are not applicable in this context. Our result
confirms the findings of previous studies based on optical survey data and
addresses a number of observational systematics present in these works.
Furthermore, we point out the assumptions and uncertainties that could strongly
affect our conclusions. We show that the two most important among them,
namely baryonic effects on the abundances of halos and on the rotation
curves of halos, do not seem capable of resolving the reported discrepancy.
Comment: v3 matches the version published in A&A. Main differences with v2 are
in Secs. 3.2 & 4.4 and the addition of Appendix B. 11 figures, 14 pages (+2
appendices).
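Abundance matching, used above to connect galaxies to their host halos, can be illustrated with a minimal sketch: both populations are rank-ordered and paired at equal cumulative number density. The toy catalogues, volume, and zero-scatter monotonicity assumption below are illustrative, not the paper's actual data or pipeline.

```python
import numpy as np

def abundance_match(galaxy_velocities, halo_masses, volume):
    """Rank-order abundance matching: pair the Nth fastest-rotating galaxy
    with the Nth most massive halo, assuming a monotonic velocity-mass
    relation with no scatter."""
    v_sorted = np.sort(np.asarray(galaxy_velocities))[::-1]  # descending
    m_sorted = np.sort(np.asarray(halo_masses))[::-1]        # descending
    n = min(v_sorted.size, m_sorted.size)
    # each matched (velocity, mass) pair shares this cumulative number density
    n_cum = np.arange(1, n + 1) / volume
    return v_sorted[:n], m_sorted[:n], n_cum

# toy catalogues in a hypothetical survey volume (illustration only)
rng = np.random.default_rng(0)
v_gal = rng.lognormal(mean=4.0, sigma=0.5, size=1000)    # km/s
m_halo = rng.lognormal(mean=25.0, sigma=1.0, size=1000)  # arbitrary units
v, m, n_cum = abundance_match(v_gal, m_halo, volume=1.0e6)
```

Under this scheme, allowing lower-mass halos to host galaxies would pair every galaxy with a less massive host and raise the predicted number density at fixed velocity, which is the trend the survey comparison tests.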
Spectroscopic Confusion: Its Impact on Current and Future Extragalactic HI Surveys
We present a comprehensive model to predict the rate of spectroscopic
confusion in HI surveys, and demonstrate good agreement with the observable
confusion in existing surveys. Generically the action of confusion on the HI
mass function was found to be a suppression of the number count of sources
below the `knee', and an enhancement above it. This results in a bias, whereby
the `knee' mass is increased and the faint end slope is steepened. For ALFALFA
and HIPASS we find that the maximum impact this bias can have on the Schechter
fit parameters is similar in magnitude to the published random errors. On the
other hand, the impact of confusion on the HI mass functions of upcoming medium
depth interferometric surveys, will be below the level of the random errors. In
addition, we find that previous estimates of the number of detections for
upcoming surveys with SKA-precursor telescopes may have been too optimistic, as
the framework implemented here results in number counts between 60% and 75% of
those previously predicted, while accurately reproducing the counts of existing
surveys. Finally, we argue that any future single dish, wide area surveys of HI
galaxies would be best suited to focus on deep observations of the local
Universe (z < 0.05), as confusion may prevent them from being competitive with
interferometric surveys at higher redshift, while their lower angular
resolution allows their completeness to be more easily calibrated for nearby
extended sources.
Comment: Accepted to MNRAS, 14 pages, 9 figures, 2 tables.
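The Schechter fit whose parameters are discussed above has a standard functional form; a short sketch follows, with placeholder parameter values that are not the published ALFALFA or HIPASS measurements.

```python
import numpy as np

def schechter(log_m, phi_star, log_m_star, alpha):
    """Schechter HI mass function dn/dlog10(M) in Mpc^-3 dex^-1.
    log_m and log_m_star are log10 masses; the 'knee' sits at log_m_star
    and alpha sets the faint-end slope."""
    x = 10.0 ** (log_m - log_m_star)
    return np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# placeholder parameters for illustration (not published fit values)
log_m = np.linspace(7.0, 11.0, 100)
phi = schechter(log_m, phi_star=5.0e-3, log_m_star=9.9, alpha=-1.3)
```

In these terms, the bias described above, suppressed counts below the knee and enhanced counts above it, drives a fit to a confused sample toward a higher log_m_star and a steeper (more negative) alpha than the true values.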
When is Stacking Confusing?: The Impact of Confusion on Stacking in Deep HI Galaxy Surveys
We present an analytic model to predict the HI mass contributed by confused
sources to a stacked spectrum in a generic HI survey. Based on the ALFALFA
correlation function, this model is in agreement with the estimates of
confusion present in stacked Parkes telescope data, and was used to predict how
confusion will limit stacking in the deepest SKA-precursor HI surveys. Stacking
with LADUMA and DINGO UDEEP data will only be mildly impacted by confusion if
their target synthesised beam size of 10 arcsec can be achieved. Any beam size
significantly above this will result in stacks that contain a mass in confused
sources that is comparable to (or greater than) that which is detectable via
stacking, at all redshifts. CHILES' 5 arcsec resolution is more than adequate
to prevent confusion influencing stacking of its data, throughout its bandpass
range. FAST will be the most impeded by confusion, with HI surveys likely
becoming heavily confused much beyond z = 0.1. The largest uncertainties in our
model are the redshift evolution of the HI density of the Universe and the HI
correlation function. However, we argue that the two idealised cases we adopt
should bracket the true evolution, and the qualitative conclusions are
unchanged regardless of the model choice. The profile shape of the signal due
to confusion (in the absence of any detection) was also modelled, revealing
that it can take the form of a double Gaussian with a narrow and wide
component.
Comment: 11 pages, 6 figures, accepted to MNRAS.
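The double-Gaussian profile shape mentioned at the end of the abstract can be written as the sum of a narrow and a wide zero-centred component; the amplitudes and widths below are arbitrary placeholders, not fitted values from the paper.

```python
import numpy as np

def confusion_profile(v, a_narrow, sigma_narrow, a_wide, sigma_wide):
    """Double Gaussian in velocity offset v (km/s): a narrow plus a wide
    zero-centred component, the shape the confusion signal can take."""
    return (a_narrow * np.exp(-0.5 * (v / sigma_narrow) ** 2)
            + a_wide * np.exp(-0.5 * (v / sigma_wide) ** 2))

v = np.linspace(-500.0, 500.0, 1001)  # km/s
profile = confusion_profile(v, a_narrow=1.0, sigma_narrow=50.0,
                            a_wide=0.3, sigma_wide=200.0)
```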
Speaker change detection using BIC: a comparison on two datasets
Abstract — This paper addresses the problem of unsupervised speaker change detection. We assume that there is no prior knowledge of the number of speakers or their identities. Two methods are tested. The first method uses the Bayesian Information Criterion (BIC), investigates the AudioSpectrumCentroid and AudioWaveformEnvelope features, and implements dynamic thresholding followed by a fusion scheme. The second method operates in real time, using a metric-based approach that employs line spectral pairs (LSP) and the BIC criterion to validate a potential change point. The experiments are carried out on two different datasets. The first set was created by concatenating speakers from the TIMIT database and is referred to as the TIMIT dataset. The second set was created using recordings from the MPEG-7 test set CD1 and broadcast news, and is referred to as the INESC dataset.
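The ΔBIC test used to validate a candidate change point can be sketched as follows. Modelling each segment as a single full-covariance Gaussian is the standard ΔBIC formulation; the penalty weight lam and the regularisation term are assumptions here, not the paper's exact settings.

```python
import numpy as np

def delta_bic(X, t, lam=1.0):
    """Return the BIC difference for splitting feature frames X (N x d)
    at frame t; positive values favour a speaker change at t."""
    N, d = X.shape

    def logdet_cov(Y):
        # regularised sample covariance to avoid singular matrices
        cov = np.cov(Y, rowvar=False) + 1e-6 * np.eye(d)
        return np.linalg.slogdet(cov)[1]

    # model-complexity penalty: d mean terms + d(d+1)/2 covariance terms
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(N)
    return (0.5 * N * logdet_cov(X)
            - 0.5 * t * logdet_cov(X[:t])
            - 0.5 * (N - t) * logdet_cov(X[t:])
            - lam * penalty)
```

A change point is accepted when delta_bic(X, t) > 0; dynamic thresholding, as in the first method, amounts to varying the decision threshold over time rather than using this fixed criterion.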
Has the accession of Greece in the EU influenced the dynamics of the country’s “twin deficits”? An empirical investigation
This paper investigates the existence of possible causal linkages between the internal and external imbalances of the Greek economy over the period 1960-2007, as well as the directions of the detected causal effects. Specifically, it tests empirically the validity and rationale of the “twin deficits” hypothesis, taking into consideration the impact of the accession of Greece to the European Economic Community in 1981, which constitutes a major institutional change. By means of the ARDL cointegration methodology, error-correction modeling and Granger causality, we find evidence in favor of the “twin deficits” hypothesis for the Greek case over the pre-accession period (1960-1980), with causality running from the budget deficit to the trade deficit. However, over the post-accession period (1981-2007) the causal relationship is reversed, indicating changes in the linking mechanism of the two deficits and providing useful inferences for national economic policy.
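The Granger-causality step can be illustrated with a minimal lag-augmented regression F-test, written directly with numpy rather than the ARDL/error-correction machinery the paper actually uses; the lag order and the variable names in the usage note are illustrative assumptions.

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F statistic for H0: lags of x add no predictive power for y
    beyond y's own lags (a minimal Granger-causality test)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = y.size
    rows = n - lags
    # restricted model: constant + own lags of y
    Xr = np.column_stack([np.ones(rows)] +
                         [y[lags - k - 1: n - k - 1] for k in range(lags)])
    # unrestricted model: additionally include lags of x
    Xu = np.column_stack([Xr] +
                         [x[lags - k - 1: n - k - 1] for k in range(lags)])
    yt = y[lags:]

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
        resid = yt - X @ beta
        return resid @ resid

    df = rows - Xu.shape[1]
    return ((rss(Xr) - rss(Xu)) / lags) / (rss(Xu) / df)
```

With hypothetical series, granger_f(trade_deficit, budget_deficit) would test whether the budget deficit Granger-causes the trade deficit, mirroring the pre-accession finding; a large F rejects non-causality.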