Forecasted 21 cm constraints on compensated isocurvature perturbations
A "compensated" isocurvature perturbation consists of an overdensity (or
underdensity) in the cold dark matter which is completely cancelled out by a
corresponding underdensity (or overdensity) in the baryons. Such a
configuration may be generated by a curvaton model of inflation if the cold
dark matter is created before curvaton decay and the baryon number is created
by the curvaton decay (or vice-versa). Compensated isocurvature perturbations,
at the level producible by the curvaton model, have no observable effect on
cosmic microwave background anisotropies or on galaxy surveys. They can be
detected through their effect on the distribution of neutral hydrogen between
redshifts 30 and 300 using 21 cm absorption observations. However, to obtain a
good signal-to-noise ratio, very large observing arrays are needed. We estimate
that a fast Fourier transform telescope would need a total collecting area of
about 20 square kilometers to detect a curvaton-generated compensated
isocurvature perturbation at more than 5 sigma significance.
Comment: 7 pages, v2: minor typos corrected, reflects PRD accepted version.
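A back-of-envelope way to see how a quoted collecting area follows from a significance target is to rescale a fiducial forecast, assuming detection significance grows linearly with total collecting area. This is a simplifying assumption for illustration only (the paper's estimate comes from a full forecast of the 21 cm signal), and the fiducial numbers below are hypothetical:

    # Minimal sketch: scale a fiducial forecast to the area needed for a
    # 5 sigma detection, ASSUMING significance grows linearly with total
    # collecting area (illustrative simplification, not the paper's method).

    def required_area_km2(target_sigma, fiducial_sigma, fiducial_area_km2):
        """Collecting area needed to reach target_sigma, given a fiducial
        forecast of fiducial_sigma at fiducial_area_km2."""
        return fiducial_area_km2 * (target_sigma / fiducial_sigma)

    # Hypothetical fiducial point, chosen only to land on the abstract's
    # ~20 km^2 figure:
    print(required_area_km2(5.0, 1.0, 4.0))   # -> 20.0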
Distinguishing models of reionization using future radio observations of 21-cm 1-point statistics
We explore the impact of reionization topology on 21-cm statistics. Four
reionization models are presented which emulate large ionized bubbles around
over-dense regions (21CMFAST/global-inside-out), small ionized bubbles in
over-dense regions (local-inside-out), large ionized bubbles around under-dense
regions (global-outside-in) and small ionized bubbles around under-dense
regions (local-outside-in). We show that first-generation instruments might
struggle to distinguish global models using the shape of the power spectrum
alone. All instruments considered are capable of breaking this degeneracy with
the variance, which is higher in outside-in models. Global models can also be
distinguished at small scales from a boost in the power spectrum from a
positive correlation between the density and neutral-fraction fields in
outside-in models. Negative skewness is found to be unique to inside-out models
and we find that pre-SKA instruments could detect this feature in maps smoothed
to reduce noise errors. The early, mid and late phases of reionization imprint
signatures in the brightness-temperature moments; we examine their model
dependence and find pre-SKA instruments capable of exploiting these timing
constraints in smoothed maps. The dimensional skewness is introduced and is
shown to have stronger signatures of the early and mid-phase timing if the
inside-out scenario is correct.
Comment: 18 pages, 13 figures, updated to agree with published version.
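The 1-point statistics discussed above reduce to moments of the brightness-temperature field. Below is a minimal Python sketch; the dimensional skewness is taken here as the third central moment normalised by the variance, so that it keeps units of mK, which is one plausible reading of the definition (the paper gives the precise form):

    import numpy as np

    def brightness_moments(dT):
        """One-point statistics of a 21-cm brightness-temperature map dT (mK)."""
        x = dT.ravel() - dT.mean()
        m2 = np.mean(x**2)   # variance
        m3 = np.mean(x**3)   # third central moment
        return {
            "variance": m2,
            "skewness": m3 / m2**1.5,   # dimensionless skewness
            "dim_skewness": m3 / m2,    # 'dimensional' skewness (mK), assumed form
        }

    # Toy usage on a random field standing in for a simulated map:
    rng = np.random.default_rng(0)
    print(brightness_moments(rng.normal(0.0, 5.0, size=(64, 64, 64))))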
The impact of spin temperature fluctuations on the 21-cm moments
This paper considers the impact of Lyman-alpha coupling and X-ray heating on
the 21-cm brightness-temperature one-point statistics (as predicted by
semi-numerical simulations). The X-ray production efficiency is varied over
four orders of magnitude and the hardness of the X-ray spectrum is varied from
that predicted for high-mass X-ray binaries to the softer spectrum expected
from the hot interstellar medium. We find peaks in the redshift evolution of
both the variance and skewness associated with the efficiency of X-ray
production. The amplitude of the variance is also sensitive to the hardness of
the X-ray SED. We find that the relative timing of the coupling and heating
phases can be inferred from the redshift extent of a plateau that connects a
peak in the variance's evolution associated with Lyman-alpha coupling to the
heating peak. Importantly, we find that late X-ray heating would seriously
hamper our ability to constrain reionization with the variance. Late X-ray
heating also qualitatively alters the evolution of the skewness, providing a
clean way to constrain such models. If foregrounds can be removed, we find that
LOFAR, MWA and PAPER could constrain reionization and late X-ray heating models
with the variance. We find that HERA and SKA (phase 1) will be able to
constrain both reionization and heating by measuring the variance using
foreground-avoidance techniques. If foregrounds can be removed they will also
be able to constrain the nature of Lyman-alpha coupling.
Comment: 16 pages, 13 figures, 1 table. Accepted for publication in MNRAS.
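As a rough illustration of reading the coupling-to-heating timing off the variance's redshift evolution, the sketch below locates the two strongest peaks and uses their separation as the plateau extent. The helper and toy curve are hypothetical; the paper extracts this information from semi-numerical simulations:

    import numpy as np
    from scipy.signal import find_peaks

    def peak_timing(z, var):
        """Redshifts of the two strongest peaks in the variance evolution and
        the redshift extent of the plateau connecting them."""
        order = np.argsort(z)          # find_peaks wants an ordered series
        z, var = z[order], var[order]
        peaks, _ = find_peaks(var)
        if len(peaks) < 2:
            return None
        top2 = peaks[np.argsort(var[peaks])[-2:]]   # two strongest peaks
        z_lo, z_hi = np.sort(z[top2])
        return {"peaks_z": z[top2], "plateau_extent": z_hi - z_lo}

    # Toy double-peaked evolution (coupling peak near z=20, heating near z=10):
    z = np.linspace(6.0, 30.0, 300)
    var = np.exp(-(z - 10)**2 / 2) + 0.8 * np.exp(-(z - 20)**2 / 2) + 0.3
    print(peak_timing(z, var))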
Making meaningful comparisons between road and rail – substituting average energy consumption data for rail with empirical analysis
Within the transport sector, modal shift towards more efficient and less polluting modes could be a key policy goal to help meet targets to reduce energy consumption and carbon emissions. However, making comparisons between modes is not necessarily straightforward. Average energy and emissions data are often relied upon, particularly for rail, and these may not be applicable to a given context. Some UK train operating companies have recently fitted electricity meters to their trains, from which energy consumption data have been obtained. This has enabled an understanding of how energy consumption and related emissions are affected by a number of factors, including train and service type. Comparisons are made with existing data for road and rail. It is noted that although more specific data can be useful in informing policy and making some decisions, average data continue to play an important role when considering the overall picture.
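To make the point about averages concrete: specific energy consumption from metered data is just metered energy divided by passenger-kilometres, and it moves sharply with loading and service type. The figures below are made up for illustration and are not taken from the study:

    def kwh_per_passenger_km(kwh_metered, train_km, avg_passengers):
        """Specific energy use (kWh per passenger-km) from metered train data.
        All example inputs are hypothetical."""
        return kwh_metered / (train_km * avg_passengers)

    # A well-loaded intercity service vs. a lightly loaded stopping service:
    print(kwh_per_passenger_km(12_000, 500, 300))   # -> 0.08
    print(kwh_per_passenger_km(4_000, 100, 60))     # -> ~0.67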
A statistical framework for joint eQTL analysis in multiple tissues
Mapping expression Quantitative Trait Loci (eQTLs) represents a powerful and
widely-adopted approach to identifying putative regulatory variants and linking
them to specific genes. Up to now, eQTL studies have been conducted in a
relatively narrow range of tissues or cell types. However, understanding the
biology of organismal phenotypes will involve understanding regulation in
multiple tissues, and ongoing studies are collecting eQTL data in dozens of
cell types. Here we present a statistical framework for powerfully detecting
eQTLs in multiple tissues or cell types (or, more generally, multiple
subgroups). The framework explicitly models the potential for each eQTL to be
active in some tissues and inactive in others. By modeling the sharing of
active eQTLs among tissues this framework increases power to detect eQTLs that
are present in more than one tissue compared with "tissue-by-tissue" analyses
that examine each tissue separately. Conversely, by modeling the inactivity of
eQTLs in some tissues, the framework allows the proportion of eQTLs shared
across different tissues to be formally estimated as parameters of a model,
addressing the difficulties of accounting for incomplete power when comparing
overlaps of eQTLs identified by tissue-by-tissue analyses. Applying our
framework to re-analyze data from transformed B cells, T cells and fibroblasts
we find that it substantially increases power compared with tissue-by-tissue
analysis, identifying 63% more genes with eQTLs (at FDR=0.05). Further, the
results suggest that, in contrast to previous analyses of the same data, the
majority of eQTLs detectable in these data are shared among all three tissues.
Comment: Submitted to PLoS Genetics.
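A minimal sketch of the configuration idea: each eQTL has an activity pattern over the T tissues, and evidence is averaged over patterns. Treating per-tissue Bayes factors as independent given the configuration is a simplifying assumption made here for illustration; the paper's model treats sharing across tissues more carefully:

    import itertools
    import numpy as np

    def averaged_bayes_factor(bf_per_tissue, config_prior):
        """Configuration-averaged Bayes factor for one gene-SNP pair.

        bf_per_tissue: length-T single-tissue Bayes factors.
        config_prior:  dict mapping activity tuples, e.g. (1, 0, 1), to prior
                       weights summing to 1 over non-null configurations.
        Assumes tissues contribute independent evidence given the
        configuration (an illustrative simplification).
        """
        total = 0.0
        for config, weight in config_prior.items():
            bf = np.prod([b for b, on in zip(bf_per_tissue, config) if on])
            total += weight * bf
        return total

    # Uniform prior over the 2^3 - 1 non-null activity patterns of 3 tissues:
    configs = [c for c in itertools.product([0, 1], repeat=3) if any(c)]
    prior = {c: 1.0 / len(configs) for c in configs}
    print(averaged_bayes_factor([8.0, 1.2, 5.0], prior))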
Cooperative Optical Non-linearity in a blockaded Rydberg Ensemble
This thesis describes the observation of a novel optical non-linearity mediated by the dipole-dipole interactions in a cold gas of Rydberg atoms. Electromagnetically induced transparency (EIT) is used to map the strong dipolar interactions onto an optical transition, resulting in a cooperative effect where the optical response of a single atom is modified by the surrounding atoms due to dipole blockade. This optical non-linearity is characterised as a function of probe power and density for both attractive and repulsive interactions, demonstrating a non-linear density dependence associated with cooperativity. For the case of repulsive interactions, excellent agreement is obtained at low densities between experimental data and an interacting three-atom model. The ability to tune the interactions with an external field is also verified.
This cooperative effect can be used to manipulate light at the single photon level, which is relevant for applications in quantum information processing. A theoretical model is developed to show that the non-linearity can be used to obtain a highly correlated single-photon output from a coherent laser field interacting with a single blockade region. Progress towards observing this experimentally is described, including details of the construction of a new apparatus capable of confining atoms to within a blockade radius.
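For orientation, the textbook weak-probe susceptibility of a non-interacting three-level ladder system under EIT is (a standard result, not the thesis's interacting three-atom model):

    \chi(\Delta_p) \;\propto\; \frac{i\,\gamma_{ge}}{\gamma_{ge} - i\Delta_p + \dfrac{|\Omega_c|^2/4}{\gamma_{gr} - i(\Delta_p + \Delta_c)}}

where \Omega_c is the coupling Rabi frequency, \Delta_p and \Delta_c the probe and coupling detunings, and \gamma_{ge}, \gamma_{gr} the coherence decay rates. Dipole-dipole interactions shift the Rydberg level, sending \Delta_p + \Delta_c \to \Delta_p + \Delta_c - V(r)/\hbar for atoms within a blockade radius; this destroys the transparency window locally and is what gives the cooperative non-linearity its density dependence.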
High-resolution mapping of cancer cell networks using co-functional interactions.
Powerful new technologies for perturbing genetic elements have recently expanded the study of genetic interactions in model systems ranging from yeast to human cell lines. However, technical artifacts can confound signal across genetic screens and limit the immense potential of parallel screening approaches. To address this problem, we devised a novel PCA-based method for correcting genome-wide screening data, bolstering the sensitivity and specificity of detection for genetic interactions. Applying this strategy to a set of 436 whole genome CRISPR screens, we report more than 1.5 million pairs of correlated "co-functional" genes that provide finer-scale information about cell compartments, biological pathways, and protein complexes than traditional gene sets. Lastly, we employed a gene community detection approach to implicate core genes for cancer growth and compress signal from functionally related genes in the same community into a single score. This work establishes new algorithms for probing cancer cell networks and motivates the acquisition of additional CRISPR screen data across diverse genotypes and cell types to further resolve complex cellular processes.
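A minimal sketch of the two steps, assuming gene-by-screen effect scores: remove the top principal components as a stand-in for the screen-wide artifact correction (the paper's actual procedure may differ in detail), then call gene pairs with strongly correlated residual profiles co-functional. Function names, the number of components, and the threshold are all hypothetical:

    import numpy as np

    def pca_correct(X, n_components=4):
        """Zero out the top n_components principal modes of a gene x screen
        matrix X, as a stand-in for artifact correction (hypothetical choice
        of n_components)."""
        Xc = X - X.mean(axis=0, keepdims=True)     # remove per-screen offsets
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        s[:n_components] = 0.0                     # drop dominant (artifact) modes
        return U @ np.diag(s) @ Vt

    def cofunctional_pairs(X, threshold=0.4):
        """Gene pairs whose corrected profiles correlate above threshold."""
        R = np.corrcoef(pca_correct(X))            # rows (genes) as variables
        i, j = np.triu_indices_from(R, k=1)
        keep = R[i, j] > threshold
        return list(zip(i[keep], j[keep], R[i, j][keep]))

    # Toy usage on random scores (50 genes x 30 screens):
    rng = np.random.default_rng(1)
    print(len(cofunctional_pairs(rng.normal(size=(50, 30)))))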
Future-proofing the state: managing risks, responding to crises and building resilience
Summary: This book focuses on the challenges facing governments and communities in preparing for and responding to major crises – especially the hard-to-predict yet unavoidable natural disasters ranging from earthquakes and tsunamis to floods and bushfires, as well as pandemics and global economic crises.
Future-proofing the state and our societies involves decision-makers developing capacities to learn from recent 'disaster' experiences in order to be better placed to anticipate and prepare for foreseeable challenges. To undertake such future-proofing means taking long-term (and often recurring) problems seriously, managing risks appropriately, investing in preparedness, prevention and mitigation, reducing future vulnerability, building resilience in communities and institutions, and cultivating astute leadership. In the past we have often heard calls for 'better future-proofing' in the aftermath of disasters, but then neglected the imperatives of the message.
Future-Proofing the State is organised around four key themes: how can we better predict and manage the future; how can we transform the short-term thinking shaped by our political cycles into more effective long-term planning; how can we build learning into our preparations for future policies and management; and how can we successfully build trust and community resilience to meet future challenges more adequately.