The X-ray emission of the gamma Cassiopeiae stars
Long considered the "odd man out" among X-ray emitting Be stars, \gamma
Cas (B0.5e IV) is now recognized as the prototype of a class of stars that emit
hard thermal X-rays. Our classification differs from the historical use of the
term "gamma Cas stars" defined from optical properties alone. The luminosity
output of this class contributes significantly to the hard X-ray production in
massive stars in the Galaxy. The gamma Cas stars have light curves showing
variability on a few broadly-defined timescales and spectra indicative of an
optically thin plasma consisting of one or more hot thermal components. By now
9--13 Galactic \approx B0-1.5e main sequence stars are judged to be members or
candidate members of the \gamma Cas class. Conservative criteria for this
designation are for a \approx B0-1.5e III-V star to have an X-ray luminosity of
10^{32}--10^{33} ergs s^{-1}, a hot thermal spectrum containing the short
wavelength Ly \alpha FeXXV and FeXXVI lines and the fluorescence FeK feature
all in emission. If thermality cannot be demonstrated, for example from either
the presence of these Ly \alpha lines or curvature of the hard continuum; these
are the gamma Cas candidates. We discuss the history of the discovery of the
complicated characteristics of the variability in the optical, UV, and X-ray
domains, leading to suggestions for the physical cause of the production of
hard X-rays. These include scenarios in which matter from the Be star accretes
onto a degenerate secondary star and interactions between magnetic fields on
the Be star and its decretion disk. The greatest aid to the choice of the
causal mechanism is the temporal correlations of X-ray light curves and spectra
with diagnostics in the optical and UV wavebands. We show why the magnetic
star-disk interaction scenario is the most tenable explanation for the creation
of hard X-rays on these stars.

Comment: Review paper for "X-ray Emissions from Hot Stars and their Winds"
compendium to be published by Advances in Space Research in mid-2016. Paper
comprises 66 pages, 15 figures.
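The conservative membership criteria stated in the abstract can be encoded as a toy filter. Everything below is an illustrative sketch: the function name, the string-based spectral-type check, and the boolean Fe-line flag are assumptions for demonstration, not a published catalog schema.

```python
# Toy filter encoding the abstract's conservative gamma Cas criteria:
# an ~B0-1.5e III-V star with L_X of 10^32-10^33 erg/s, with full
# membership requiring a demonstrably thermal spectrum (Fe XXV/XXVI
# Ly-alpha lines plus the fluorescent Fe K feature, all in emission).
# Field names and parsing here are illustrative assumptions.

def gamma_cas_status(lx_erg_s, spectral_type, fe_lines_in_emission):
    """Return 'member', 'candidate', or 'no' for a Be star.

    lx_erg_s             : X-ray luminosity in erg/s
    spectral_type        : e.g. 'B0.5e IV' (toy check: starts with B0 or B1)
    fe_lines_in_emission : True if the Fe line complex is detected
    """
    in_lx_range = 1e32 <= lx_erg_s <= 1e33
    early_be = spectral_type.startswith("B0") or spectral_type.startswith("B1")
    if not (in_lx_range and early_be):
        return "no"
    # Thermality demonstrated by the Fe lines -> member; otherwise,
    # per the abstract, the star is only a candidate.
    return "member" if fe_lines_in_emission else "candidate"

print(gamma_cas_status(4e32, "B0.5e IV", True))
```

A real classification would of course rest on spectral fitting rather than a boolean flag; the sketch only mirrors the logical structure of the criteria.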
Towards Data-Driven Autonomics in Data Centers
Continued reliance on human operators for managing data centers is a major
impediment to their ever reaching extreme scale. Large computer
systems in general, and data centers in particular, will ultimately be managed
using predictive computational and executable models obtained through
data-science tools, and at that point, the intervention of humans will be
limited to setting high-level goals and policies rather than performing
low-level operations. Data-driven autonomics, where management and control are
based on holistic predictive models that are built and updated using generated
data, opens one possible path towards limiting the role of operators in data
centers. In this paper, we present a data-science study of a public Google
dataset collected in a 12K-node cluster with the goal of building and
evaluating a predictive model for node failures. We use BigQuery, the big data
SQL platform from the Google Cloud suite, to process massive amounts of data
and generate a rich feature set characterizing machine state over time. We
describe how an ensemble classifier can be built out of many Random Forest
classifiers each trained on these features, to predict if machines will fail in
a future 24-hour window. Our evaluation reveals that if we limit false positive
rates to 5%, we can achieve true positive rates between 27% and 88% with
precision varying between 50% and 72%. We discuss the practicality of including
our predictive model as the central component of a data-driven autonomic
manager and operating it on-line with live data streams (rather than off-line
on data logs). All of the scripts used for BigQuery and classification analyses
are publicly available from the authors' website.

Comment: 12 pages, 6 figures.
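The classification scheme described above (an ensemble of Random Forests voting on failure in a future window, operated at a capped false positive rate) can be sketched as follows. The synthetic features, labels, ensemble size, and thresholding rule are illustrative stand-ins, not the paper's Google-trace feature set or its exact evaluation protocol.

```python
# Sketch: ensemble of Random Forest classifiers predicting whether a
# machine will fail in the next 24 h, with a score threshold chosen to
# cap the false positive rate near 5% (the operating point discussed
# in the abstract). Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                    # per-machine feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 1.5).astype(int)    # rare "will fail" label

# Train several forests on bootstrap resamples; the ensemble score is
# the mean of their per-forest failure probabilities.
forests = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    rf = RandomForestClassifier(n_estimators=50, random_state=seed)
    rf.fit(X[idx], y[idx])
    forests.append(rf)

def ensemble_score(X_new):
    return np.mean([rf.predict_proba(X_new)[:, 1] for rf in forests], axis=0)

# Pick the threshold as the 95th percentile of scores on non-failing
# machines, i.e. a ~5% false positive rate on this sample.
scores = ensemble_score(X)
threshold = np.quantile(scores[y == 0], 0.95)
predictions = scores >= threshold
```

In a live autonomic manager, the same scoring and thresholding step would run on streaming feature windows rather than a static matrix, which is the on-line setting the authors discuss.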
Stellar Intensity Interferometry: Prospects for sub-milliarcsecond optical imaging
Using kilometric arrays of air Cherenkov telescopes, intensity interferometry
may increase the spatial resolution in optical astronomy by an order of
magnitude, enabling images of rapidly rotating stars with structures in their
circumstellar disks and winds, or mapping out patterns of nonradial pulsations
across stellar surfaces. Intensity interferometry (pioneered by Hanbury Brown
and Twiss) connects telescopes only electronically, and is practically
insensitive to atmospheric turbulence and optical imperfections, permitting
observations over long baselines and through large airmasses, also at short
optical wavelengths. The required large telescopes with very fast detectors are
becoming available as arrays of air Cherenkov telescopes, distributed over a
few square km. Digital signal handling enables very many baselines to be
synthesized, while stars are tracked with electronic time delays, thus
synthesizing an optical interferometer in software. Simulated observations
indicate limiting magnitudes around m(v)=8, reaching resolutions ~30
microarcsec in the violet. The signal-to-noise ratio favors high-temperature
sources and emission-line structures, and is independent of the optical
passband, be it a single spectral line or the broad spectral continuum.
Intensity interferometry provides the modulus (but not phase) of any spatial
frequency component of the source image; for this reason image reconstruction
requires phase retrieval techniques, feasible if sufficient coverage of the
interferometric (u,v)-plane is available. Experiments are in progress; test
telescopes have been erected, and trials in connecting large Cherenkov
telescopes have been carried out. This paper reviews this interferometric
method in view of the new possibilities offered by arrays of air Cherenkov
telescopes, and outlines observational programs that should become realistic
already in the near future.

Comment: New Astronomy Reviews, in press; 101 pages, 11 figures, 185
references.
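The point that intensity interferometry yields only the modulus of the complex visibility can be illustrated for the textbook case of a uniform stellar disk, whose squared visibility is an Airy-type function of baseline. The angular diameter, baseline, and wavelength values below are illustrative choices, not parameters from the paper.

```python
# |V|^2 of a uniform stellar disk versus projected baseline, the
# quantity a Hanbury Brown-Twiss intensity interferometer records
# (the phase of V is lost, motivating phase-retrieval reconstruction).
# Diameter/baseline/wavelength values are illustrative.
import numpy as np
from scipy.special import j1

def squared_visibility(baseline_m, diameter_mas, wavelength_m=400e-9):
    """Squared visibility modulus of a uniform disk."""
    theta = diameter_mas * np.pi / (180 * 3600e3)      # mas -> radians
    x = np.pi * baseline_m * theta / wavelength_m
    # 2*J1(x)/x -> 1 as x -> 0 (fully resolved-out at large x).
    v = np.where(x == 0, 1.0, 2 * j1(x) / np.maximum(x, 1e-30))
    return v ** 2

# A 0.5 mas star observed at 400 nm: |V|^2 drops as the baseline grows,
# which is why km-scale Cherenkov arrays reach ~30 microarcsec scales.
for b in (10.0, 100.0, 1000.0):
    print(b, squared_visibility(b, 0.5))
```

Because only |V| (not its phase) is sampled across the (u,v)-plane, turning such measurements into an image requires the phase-retrieval techniques the abstract mentions.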