Long Term Wind-Driven X-Ray Spectral Variability of NGC 1365 with Swift
We present long-term (months-years) X-ray spectral variability of the Seyfert
1.8 galaxy NGC 1365 as observed by Swift, which provides well-sampled
observations over a much longer timescale (6 years) and a much larger flux
range than is afforded by other observatories. At very low luminosities the
spectrum is very soft, becoming rapidly harder as the luminosity increases and
then, above a particular luminosity, softening again. At a given flux level,
the scatter in hardness ratio is not very large, meaning that the spectral
shape is largely determined by the luminosity. The spectra were therefore
summed in luminosity bins and fitted with a variety of models. The best fitting
model consists of two power laws, one unabsorbed and another, more luminous,
which is absorbed. In this model, we find a range of intrinsic 0.5-10.0 keV
luminosities of approximately 1.1-3.5 erg s^-1, and a very large range of
absorbing columns, of approximately 10^22 - 10^24 cm^-2. Interestingly, we find
that the absorbing column decreases with increasing luminosity, but that this
result is not due to changes in ionisation. We suggest that these observations
might be interpreted in terms of a wind model in which the launch radius varies
as a function of ionising flux and disc temperature and therefore moves out
with increasing accretion rate, i.e. increasing X-ray luminosity. Thus,
depending on the inclination angle of the disc relative to the observer, the
absorbing column may decrease as the accretion rate goes up. The weaker,
unabsorbed, component may be a scattered component from the wind.
Comment: 9 pages, 7 figures, accepted for publication in the Monthly Notices of the Royal Astronomical Society
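The flux-binning procedure the abstract describes can be sketched as follows; the count rates, band split, and bin count here are synthetic placeholders, not Swift XRT data:

```python
import numpy as np

# Synthetic per-observation count rates standing in for Swift XRT
# monitoring data; the band split and bin count are illustrative.
rng = np.random.default_rng(0)
soft = rng.uniform(0.01, 1.0, 200)   # e.g. 0.5-2 keV rate (counts/s)
hard = rng.uniform(0.01, 1.0, 200)   # e.g. 2-10 keV rate (counts/s)

# Hardness ratio: +1 means all counts are hard, -1 all soft.
hr = (hard - soft) / (hard + soft)

# Use the total rate as a luminosity proxy and group observations
# into logarithmic bins, within which spectra can be summed and fit.
total = soft + hard
bins = np.logspace(np.log10(total.min()), np.log10(total.max()), 6)
idx = np.digitize(total, bins)

for i in range(1, len(bins)):
    sel = idx == i
    if sel.any():
        print(f"bin {i}: n = {sel.sum()}, mean HR = {hr[sel].mean():+.2f}")
```

Because the scatter in hardness at fixed flux is small, summing all spectra within one such bin yields a high signal-to-noise spectrum representative of that luminosity.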
Long-Term X-ray Spectral Variability of Seyfert Galaxies with Swift
We present analysis of the long-term X-ray spectral variability of Seyfert
galaxies as observed by Swift, which provides well-sampled observations over a
much larger flux range and a much longer timescale than any other X-ray
observatory. We examine long-term variability of three AGN: NGC 1365 (see
Connolly et al. 2014), Mkn 335 and NGC 5548. At high fluxes, the 0.5-10 keV
spectra soften with increasing flux, as seen previously within the 2-10 keV
band. However, at very low fluxes the sources also become very soft. We have
fitted a number of models to the data and find that both intrinsic luminosity
variability and variable absorption are required to explain the observations.
In some systems, e.g. NGC 1365, the best explanation is a two-component wind
model in which one component represents direct emission absorbed by a disc
wind, with the absorbing column inversely proportional to the intrinsic
luminosity, and the second component represents unabsorbed emission reflected
from the wind. In other AGN the situation is more complex.
Comment: 6 pages, 5 figures, to appear in "Swift: 10 years of discovery", Proceedings of Science
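A toy version of the two-component wind model can make the idea concrete; the photon index, scattered fraction, absorption cross-section, and the N_H ∝ 1/L scaling below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Sketch of the two-component model: an absorbed direct power law plus
# a weaker unabsorbed (scattered) power law. All parameter values are
# placeholders, not fits from the paper.
energies = np.linspace(0.5, 10.0, 50)   # keV
gamma = 1.9                              # assumed photon index

def sigma_abs(e_kev):
    """Toy photoelectric absorption cross section, roughly ~E^-3 (cm^2)."""
    return 2e-22 * e_kev**-3

def model(e_kev, lum, n_h_0=1e24):
    direct = lum * e_kev**-gamma
    scattered = 0.03 * direct            # few-percent scattered fraction
    n_h = n_h_0 / lum                    # column inversely proportional to L
    return direct * np.exp(-n_h * sigma_abs(e_kev)) + scattered

low, high = model(energies, 1.0), model(energies, 5.0)
# At low luminosity the heavily absorbed direct component is wiped out
# at soft energies, leaving only the scattered component there.
print(low[0], high[0])
```

With the column tied inversely to luminosity, the soft band is dominated by scattered light at low flux and by increasingly unabsorbed direct emission at high flux, reproducing the soft-hard-soft hardness behaviour qualitatively.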
Reconstructing Galaxy Spectral Energy Distributions from Broadband Photometry
We present a novel approach to photometric redshifts, one that merges the
advantages of both the template fitting and empirical fitting algorithms,
without any of their disadvantages. This technique derives a set of templates,
describing the spectral energy distributions of galaxies, from a catalog with
both multicolor photometry and spectroscopic redshifts. The algorithm
essentially uses the shapes of the templates as the fitting parameters. From
simulated multicolor data we show that for a small training set of galaxies we
can reconstruct robustly the underlying spectral energy distributions even in
the presence of substantial errors in the photometric observations. We apply
these techniques to the multicolor and spectroscopic observations of the Hubble
Deep Field, building a set of template spectra that reproduce the observed
galaxy colors to better than 10%. Finally we demonstrate that these improved
spectral energy distributions lead to a photometric-redshift relation for the
Hubble Deep Field that is more accurate than standard template-based
approaches.
Comment: 23 pages, 8 figures, LaTeX AASTeX, accepted for publication in A
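The template-fitting half of such a photo-z scheme can be illustrated in a few lines; the single template, filter set, and noise level below are hypothetical, not the paper's derived templates:

```python
import numpy as np

# Illustrative template-fitting photo-z sketch (not the paper's
# algorithm): one hypothetical template (flat continuum plus an
# emission line at rest-frame 5007 A) is redshifted onto five fixed
# broadband filters and chi^2-fit to noisy synthetic photometry.
bands = np.array([3600., 4500., 6060., 7500., 8900.])  # pivot wavelengths, A

def template_flux(z, line_rest=5007.0, width=400.0):
    """Broadband fluxes of the template observed at redshift z."""
    center = line_rest * (1.0 + z)
    return 0.3 + np.exp(-((bands - center) / width) ** 2)

z_true = 0.5
obs = template_flux(z_true) + np.random.default_rng(1).normal(0.0, 0.02, bands.size)

# Brute-force chi^2 minimisation over a redshift grid.
z_grid = np.linspace(0.0, 1.2, 241)
chi2 = np.array([np.sum((obs - template_flux(z)) ** 2) / 0.02**2 for z in z_grid])
z_phot = z_grid[np.argmin(chi2)]
print(f"z_phot = {z_phot:.3f}")
```

The paper's contribution is to make the template shapes themselves free parameters constrained by a spectroscopic training set, rather than fixing them a priori as in this sketch.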
Calculation of High Energy Neutrino-Nucleon Cross Sections and Uncertainties Using the MSTW Parton Distribution Functions and Implications for Future Experiments
We present a new calculation of the cross sections for charged current (CC)
and neutral current (NC) neutrino-nucleon interactions in the neutrino
energy range GeV using the most recent MSTW parton
distribution functions (PDFs), MSTW 2008. We also present the associated
uncertainties propagated from the PDFs, as well as parametrizations of the
cross section central values, their uncertainty bounds, and the inelasticity
distributions for ease of use in Monte Carlo simulations. For the latter we
only provide parametrizations for energies above GeV. Finally, we assess
the feasibility of future neutrino experiments to constrain the cross
section in the ultra-high energy (UHE) regime using a technique that is
independent of the flux spectrum of incident neutrinos. A significant deviation
from the predicted Standard Model cross sections could be an indication of new
physics, such as extra space-time dimensions, and we present expected
constraints on such models as a function of the number of events observed in a
future subterranean neutrino detector.
Comment: 20 pages, 13 figures, 5 tables, published in Phys. Rev. D. This version fixes a typo in Equation 16 of the publication. Also since version v1, the following changes are in v2 and also in the published version: tables with cross-section values, parametrization of the y distribution at low-y improved, the discussions on likelihood and also earth absorption are expanded, added a needed minus sign in Eq. 17 of v
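The kind of parametrization the abstract refers to can be illustrated as a log-polynomial in energy, the usual form for Monte Carlo use; the coefficients below are invented placeholders, not the paper's published fit values:

```python
import numpy as np

# ln(sigma) expanded in powers of ln(E): the generic shape of a
# cross-section parametrization for Monte Carlo codes. The coefficients
# are illustrative placeholders chosen only to give a plausible
# magnitude and slope, NOT the fitted values from the paper.
coeffs = [-79.3, 0.36, -0.005]  # hypothetical c0, c1, c2

def sigma_cc(energy_gev):
    """Toy charged-current nu-N cross section in cm^2."""
    x = np.log(energy_gev)
    return np.exp(sum(c * x**i for i, c in enumerate(coeffs)))

for e in np.logspace(4, 12, 5):   # GeV
    print(f"E = {e:.1e} GeV  sigma ~ {sigma_cc(e):.2e} cm^2")
```

A closed-form expression like this lets a simulation evaluate the cross section and its uncertainty band at arbitrary energies without interpolating tables.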
The Statistical Approach to Quantifying Galaxy Evolution
Studies of the distribution and evolution of galaxies are of fundamental
importance to modern cosmology; these studies, however, are hampered by the
complexity of the competing effects of spectral and density evolution.
Constructing a spectroscopic sample that is able to unambiguously disentangle
these processes is currently prohibitive due to the observational
requirements. This paper extends and applies an alternative approach that
relies on statistical estimates for both distance (z) and spectral type to a
deep multi-band dataset that was obtained for this exact purpose.
These statistical estimates are extracted directly from the photometric data
by capitalizing on the inherent relationships between flux, redshift, and
spectral type. These relationships are encapsulated in the empirical
photometric redshift relation which we extend to z ~ 1.2, with an intrinsic
dispersion of dz = 0.06. We also develop realistic estimates for the
photometric redshift error for individual objects, and introduce the
utilization of the galaxy ensemble as a tool for quantifying both a
cosmological parameter and its measured error. We present deep, multi-band,
optical number counts as a demonstration of the integrity of our sample. Using
the photometric redshift and the corresponding redshift error, we can divide
our data into different redshift intervals and spectral types. As an example
application, we present the number redshift distribution as a function of
spectral type.
Comment: 40 pages (LaTeX), 21 figures, requires aasms4.sty; Accepted by the Astrophysical Journal
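The ensemble idea above, trading single-object precision for statistical power, can be sketched with a toy catalogue; everything below is synthetic except the dz = 0.06 dispersion quoted in the abstract:

```python
import numpy as np

# Toy catalogue: redshifts, photo-z scatter of dz = 0.06 (the value
# quoted in the abstract), and two stand-in spectral types.
rng = np.random.default_rng(2)
n = 5000
z_spec = rng.uniform(0.0, 1.2, n)            # "true" redshifts
z_phot = z_spec + rng.normal(0.0, 0.06, n)   # photo-z with dz = 0.06
sed_type = rng.integers(0, 2, n)             # 0 = early type, 1 = late type

# Divide the sample into redshift intervals per spectral type, as in
# the number-redshift distributions of the paper.
edges = np.linspace(0.0, 1.2, 7)
for t, name in [(0, "early"), (1, "late")]:
    counts, _ = np.histogram(z_phot[sed_type == t], bins=edges)
    print(name, counts)

# Ensemble statistics: the mean redshift of N objects is measured to
# roughly dz / sqrt(N), far better than any single photo-z.
err_ensemble = 0.06 / np.sqrt(n)
print(f"ensemble mean-z error ~ {err_ensemble:.4f}")
```

This is why individually noisy photometric redshifts can still constrain cosmological quantities: the ensemble averages down the per-object error.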
Astronomy in the Cloud: Using MapReduce for Image Coaddition
In the coming decade, astronomical surveys of the sky will generate tens of
terabytes of images and detect hundreds of millions of sources every night. The
study of these sources will involve computation challenges such as anomaly
detection and classification, and moving object tracking. Since such studies
benefit from the highest quality data, methods such as image coaddition
(stacking) will be a critical preprocessing step prior to scientific
investigation. With a requirement that these images be analyzed on a nightly
basis to identify moving sources or transient objects, these data streams
present many computational challenges. Given the quantity of data involved, the
computational load of these problems can only be addressed by distributing the
workload over a large number of nodes. However, the high data throughput
demanded by these applications may present scalability challenges for certain
storage architectures. One scalable data-processing method that has emerged in
recent years is MapReduce, and in this paper we focus on its popular
open-source implementation called Hadoop. In the Hadoop framework, the data is
partitioned among storage attached directly to worker nodes, and the processing
workload is scheduled in parallel on the nodes that contain the required input
data. A further motivation for using Hadoop is that it allows us to exploit
cloud computing resources, e.g., Amazon's EC2. We report on our experience
implementing a scalable image-processing pipeline for the SDSS imaging database
using Hadoop. This multi-terabyte imaging dataset provides a good testbed for
algorithm development since its scope and structure approximate future surveys.
First, we describe MapReduce and how we adapted image coaddition to the
MapReduce framework. Then we describe a number of optimizations to our basic
approach and report experimental results comparing their performance.
Comment: 31 pages, 11 figures, 2 tables
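A minimal in-process sketch shows how coaddition maps onto the MapReduce model; this is a toy stand-in with synthetic images, not the paper's Hadoop pipeline:

```python
from collections import defaultdict
import numpy as np

# Map: each input exposure emits a (tile_id, pixel_array) pair for the
# sky tile it overlaps. Reduce: all arrays keyed to one tile are
# averaged, i.e. stacked into the coadd. In Hadoop these phases run in
# parallel across worker nodes holding the input data locally.

def map_phase(exposures):
    """exposures: iterable of (tile_id, 2-D pixel array) records."""
    for tile_id, pixels in exposures:
        yield tile_id, pixels

def reduce_phase(mapped):
    groups = defaultdict(list)
    for tile_id, pixels in mapped:
        groups[tile_id].append(pixels)
    return {tile: np.mean(stack, axis=0) for tile, stack in groups.items()}

rng = np.random.default_rng(3)
truth = np.ones((4, 4))
exposures = [("tile_0", truth + rng.normal(0, 0.5, (4, 4))) for _ in range(25)]
coadds = reduce_phase(map_phase(exposures))
# Stacking 25 exposures suppresses per-pixel noise from 0.5 to
# roughly 0.5 / sqrt(25) = 0.1.
print(np.abs(coadds["tile_0"] - truth).mean())
```

Keying the shuffle on sky tile is what makes the problem embarrassingly parallel: every tile's coadd is computed independently on whichever node receives its group.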
Quantum Hall Effect and Quantum Point Contact in Bilayer-Patched Epitaxial Graphene
We study an epitaxial graphene monolayer with bilayer inclusions via
magnetotransport measurements and scanning gate microscopy at low temperatures.
We find that bilayer inclusions can be metallic or insulating depending on the
initial and gated carrier density. The metallic bilayers act as equipotential
shorts for edge currents, while closely spaced insulating bilayers guide the
flow of electrons in the monolayer constriction, which was locally gated using
a scanning gate probe.
Comment: 5 pages, 5 figures