Combining Undersampled Dithered Images
Undersampled images, such as those produced by the HST WFPC-2, misrepresent
fine-scale structure intrinsic to the astronomical sources being imaged.
Analyzing such images is difficult on scales close to their resolution limits
and may produce erroneous results. A set of "dithered" images of an
astronomical source generally contains more information about its structure
than any single undersampled image, however, and may permit reconstruction of a
"superimage" with Nyquist sampling. I present a tutorial on a method of image
reconstruction that builds a superimage from a complex linear combination of
the Fourier transforms of a set of undersampled dithered images. This method
works by algebraically eliminating the high order satellites in the periodic
transforms of the aliased images. The reconstructed image is an exact
representation of the data-set with no loss of resolution at the Nyquist scale.
The algorithm is directly derived from the theoretical properties of aliased
images and involves no arbitrary parameters, requiring only that the dithers
are purely translational and constant in pixel-space over the domain of the
object of interest. I show examples of its application to WFC and PC images. I
argue for its use when the best recovery of point sources or morphological
information at the HST diffraction limit is of interest.
Comment: 22 pages, 9 EPS figures, submitted to PAS
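The alias-elimination algebra is easiest to see in one dimension. The sketch below is a toy illustration of the idea, not the paper's code, assuming the simplest case: two exposures undersampled by a factor of two and dithered by exactly half a coarse pixel. Each coarse-grid Fourier transform is a sum of the true spectrum and one aliased satellite, and a complex linear combination of the two transforms eliminates the satellite, recovering the Nyquist-sampled signal exactly:

```python
import numpy as np

M = 64                                # coarse pixels; the fine grid has 2*M samples
t = np.arange(2 * M)

# Band-limited test signal on the fine (Nyquist-sampled) grid
fine = np.cos(2 * np.pi * 5 * t / (2 * M)) + 0.5 * np.sin(2 * np.pi * 23 * t / (2 * M))

# Two dithered, 2x-undersampled exposures, offset by one fine pixel
# (half a coarse pixel) -- purely translational dithers, as the method requires
x0, x1 = fine[0::2], fine[1::2]

X0, X1 = np.fft.fft(x0), np.fft.fft(x1)
k = np.arange(M)
phase = np.exp(-1j * np.pi * k / M)   # shift theorem for the half-pixel dither

# X0(k) = [S(k) + S(k+M)]/2 and X1(k) = exp(i pi k/M)[S(k) - S(k+M)]/2,
# so these combinations algebraically eliminate the aliased satellite
S_low = X0 + phase * X1               # recovers S(k),   k = 0..M-1
S_high = X0 - phase * X1              # recovers S(k+M), k = 0..M-1

recon = np.fft.ifft(np.concatenate([S_low, S_high])).real
```

The reconstruction is exact to machine precision for any input, with no free parameters; with N-fold undersampling, N suitably dithered exposures give an NxN linear system per frequency in place of this 2x2 one.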
The Photometry of Undersampled Point Spread Functions
An undersampled point spread function may interact with the microstructure of
a solid-state detector such that the total flux detected can depend sensitively
on where the PSF center falls within a pixel. Such intra-pixel sensitivity
variations will not be corrected by flat field calibration and may limit the
accuracy of stellar photometry conducted with undersampled images, as are
typical for Hubble Space Telescope observations. The total flux in a stellar
image can vary by up to 0.03 mag in F555W WFC images depending on how it is
sampled, for example. For NIC3, these variations are especially strong, up to
0.39 mag, strongly limiting its use for stellar photometry. Intra-pixel
sensitivity variations can be corrected for, however, by constructing a
well-sampled PSF from a dithered data set. The reconstructed PSF is the
convolution of the optical PSF with the pixel response. It can be evaluated at
any desired fractional pixel location to generate a table of photometric
corrections as a function of relative PSF centroid. A caveat is that the
centroid of an undersampled PSF can also be affected by the pixel response
function, thus sophisticated centroiding methods, such as cross-correlating the
observed PSF with its fully-sampled counterpart, are required to derive the
proper photometric correction.
Comment: 20 pages, 14 postscript figures, submitted to the PAS
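The effect can be demonstrated with a toy model. In the sketch below, the Gaussian PSF, the centrally peaked intra-pixel response, and all numerical values are illustrative assumptions, not the paper's measurements; it shows how the total detected flux, and hence a magnitude-correction table, depends on the sub-pixel position of the PSF centroid:

```python
import numpy as np

oversample = 10           # fine-grid points per detector pixel
npix = 11                 # detector pixels per side
sigma = 0.7 * oversample  # PSF width of 0.7 pixel: undersampled

# Toy intra-pixel response: sensitivity drops toward the pixel edges
u = (np.arange(oversample) + 0.5) / oversample - 0.5
resp1d = 1.0 - 0.8 * (2.0 * u) ** 2
response = np.outer(resp1d, resp1d)

def detected_flux(dx, dy):
    """Total detected counts for a unit-flux PSF at sub-pixel offset (dx, dy)."""
    n = npix * oversample
    y, x = np.indices((n, n))
    cx = (n - 1) / 2 + dx * oversample   # dx = 0 puts the PSF on a pixel center
    cy = (n - 1) / 2 + dy * oversample
    psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    psf /= psf.sum()
    # Weight by the intra-pixel response tiled across the detector
    return (psf * np.tile(response, (npix, npix))).sum()

# Detected flux vs. sub-pixel centroid: the basis of a correction table
offsets = np.linspace(0.0, 0.5, 6)
fluxes = np.array([detected_flux(d, d) for d in offsets])
corrections = -2.5 * np.log10(fluxes / fluxes[0])   # mag, relative to pixel center
```

In practice the well-sampled PSF (the optical PSF convolved with the pixel response) would be reconstructed from dithered data, as the abstract describes, rather than assumed analytically.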
Destiny: A Candidate Architecture for the Joint Dark Energy Mission
Destiny is a simple, direct, low cost mission to determine the properties of
dark energy by obtaining a cosmologically deep Type Ia supernova (SN Ia) Hubble
diagram. Operated at L2, its science instrument is a 1.65m space telescope,
featuring a grism-fed near-infrared (NIR) (0.85-1.7 micron) survey
camera/spectrometer with a 0.12 square degree field of view. During its
two-year primary mission, Destiny will detect, observe, and characterize ~3000
SN Ia events over the redshift interval 0.4<z<1.7 within a 3 square degree
survey area. In conjunction with ongoing ground-based SN Ia surveys for z<0.8,
Destiny mission data will be used to construct a high-precision Hubble diagram
and thereby constrain the dark energy equation of state from a time when it was
strongly matter-dominated to the present when dark energy dominates. The
grism-images simultaneously provide broad-band photometry, redshifts, and SN
classification, as well as time-resolved diagnostic data for investigating
additional SN luminosity diagnostics. Destiny will be used in its third year as
a high resolution, wide-field imager to conduct a multicolor NIR weak lensing
(WL) survey covering 1000 square degrees. The large-scale mass power spectrum
derived from weak lensing distortions of field galaxies as a function of
redshift will provide independent and complementary constraints on the dark
energy equation of state. The combination of SN and WL is much more powerful
than either technique on its own. Used together, these surveys will have more
than an order of magnitude greater sensitivity than will be provided by ongoing
ground-based projects. The dark energy parameters, w_0 and w_a, will be
measured to a precision of 0.05 and 0.2, respectively.
Comment: Contains full color figure
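The w_0 and w_a parameters refer to the standard CPL parametrization w(z) = w_0 + w_a z/(1+z) of the dark energy equation of state. As a toy illustration (with assumed H0 and Omega_m values, not mission specifications), the sketch below shows how these parameters enter a SN Ia Hubble diagram through the distance modulus:

```python
import numpy as np

H0 = 70.0           # Hubble constant, km/s/Mpc (assumed)
C_KMS = 299792.458  # speed of light, km/s
OM = 0.3            # matter density (assumed)

def E(z, w0=-1.0, wa=0.0):
    """Dimensionless Hubble rate H(z)/H0 for a flat w0-wa (CPL) cosmology."""
    rho_de = (1 + z) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * z / (1 + z))
    return np.sqrt(OM * (1 + z) ** 3 + (1 - OM) * rho_de)

def distance_modulus(z, w0=-1.0, wa=0.0, n=2000):
    """mu = 5 log10(d_L / 10 pc), via a trapezoidal comoving-distance integral."""
    zz = np.linspace(0.0, z, n)
    f = 1.0 / E(zz, w0, wa)
    dc = (zz[1] - zz[0]) * (f.sum() - 0.5 * (f[0] + f[-1])) * C_KMS / H0  # Mpc
    dl = (1 + z) * dc                                                     # Mpc
    return 5 * np.log10(dl * 1e6 / 10.0)

# The SN Ia signal: a small magnitude shift between Lambda-CDM (w0=-1, wa=0)
# and an evolving dark-energy model, over Destiny's redshift range
mu_lcdm = distance_modulus(1.0)
mu_cpl = distance_modulus(1.0, w0=-0.9, wa=0.2)
```

The shift between the two models at z ~ 1 is a small fraction of a magnitude, which is why percent-level photometric precision over a large SN sample is required to constrain (w_0, w_a).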
Principal Component Analysis as a Tool for Characterizing Black Hole Images and Variability
We explore the use of principal component analysis (PCA) to characterize
high-fidelity simulations and interferometric observations of the millimeter
emission that originates near the horizons of accreting black holes. We show
mathematically that the Fourier transforms of eigenimages derived from PCA
applied to an ensemble of images in the spatial-domain are identical to the
eigenvectors of PCA applied to the ensemble of the Fourier transforms of the
images, which suggests that this approach may be applied to modeling the sparse
interferometric Fourier-visibilities produced by an array such as the Event
Horizon Telescope (EHT). We also show that the simulations in the spatial
domain themselves can be compactly represented with a PCA-derived basis of
eigenimages allowing for detailed comparisons between variable observations and
time-dependent models, as well as for detection of outliers or rare events
within a time series of images. Furthermore, we demonstrate that the spectrum
of PCA eigenvalues is a diagnostic of the power spectrum of the structure and,
hence, of the underlying physical processes in the simulated and observed
images.
Comment: 16 pages, 17 figures, submitted to Ap
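The commutation of PCA with the Fourier transform follows from the unitarity of the suitably normalized FFT, and is easy to check numerically. The sketch below uses random images rather than black hole simulations; it verifies that the spatial- and Fourier-domain PCAs share singular values and that the leading Fourier eigenvector coincides, up to an overall phase, with the FFT of the leading spatial eigenimage:

```python
import numpy as np

rng = np.random.default_rng(1)
n_img, side = 50, 16

# Toy ensemble of mean-subtracted "snapshots", flattened to vectors
imgs = rng.normal(size=(n_img, side * side))
imgs -= imgs.mean(axis=0)

# PCA in the spatial domain via SVD: rows of Vt are the eigenimages
_, s_spatial, Vt = np.linalg.svd(imgs, full_matrices=False)
eigenimages = Vt.reshape(-1, side, side)

# Unitary 2-D FFT of each snapshot, then PCA in the Fourier domain
F = np.fft.fft2(imgs.reshape(-1, side, side), norm="ortho").reshape(n_img, -1)
_, s_fourier, Wt = np.linalg.svd(F, full_matrices=False)

# The unitary FFT preserves inner products, so both PCAs share singular
# values, and each Fourier eigenvector is the FFT of the corresponding
# spatial eigenimage up to an overall phase
fft_eig = np.fft.fft2(eigenimages, norm="ortho").reshape(-1, side * side)
overlap = np.abs(fft_eig[0] @ np.conj(Wt[0]))   # ~1 for the leading mode
```

This is the property that lets a basis learned from image-domain simulations be applied directly to sparse visibility-domain data.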
Luminosity Function of Faint Globular Clusters in M87
We present the luminosity function to very faint magnitudes for the globular
clusters in M87, based on a 30-orbit Hubble Space Telescope (HST)
WFPC2 imaging program. The very deep images and corresponding improved false
source rejection allow us to probe the mass function further beyond the
turnover than has been done before. We compare our luminosity function to those
that have been observed in the past, and confirm the similarity of the turnover
luminosity between M87 and the Milky Way. We also find with high statistical
significance that the M87 luminosity function is broader than that of the Milky
Way. We discuss how determining the mass function of the cluster system to low
masses can constrain theoretical models of the dynamical evolution of globular
cluster systems. Our mass function is consistent with the dependence of mass
loss on the initial cluster mass given by classical evaporation, and somewhat
inconsistent with newer proposals that have a shallower mass dependence. In
addition, the rate of mass loss is consistent with standard evaporation models,
and not with the much higher rates proposed by some recent studies of very
young cluster systems. We also find that the mass-size relation has very little
slope, indicating that there is almost no increase in the size of a cluster
with increasing mass.
Comment: 22 pages, 5 figures, Accepted for publication in Ap
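The distinction the abstract draws can be illustrated with a toy calculation; all numbers below are assumptions for illustration, not the paper's fits. Classical two-body evaporation removes mass at a rate nearly independent of cluster mass, dM/dt = -mu, which converts an initially monotonic power-law mass function into a peaked one with a turnover near mu times the age:

```python
import numpy as np

rng = np.random.default_rng(2)

# Initial power-law mass function dN/dM ~ M^-2 between 10^4 and 10^7 Msun,
# sampled by inverting the cumulative distribution
m_lo, m_hi = 1e4, 1e7
u = rng.random(200_000)
m0 = 1.0 / (1.0 / m_lo - u * (1.0 / m_lo - 1.0 / m_hi))

mu = 2e4     # evaporation-driven mass-loss rate, Msun/Gyr (assumed)
age = 12.0   # Gyr (assumed)

# Classical evaporation: mass loss independent of cluster mass
m_now = m0 - mu * age
m_now = m_now[m_now > 0]          # clusters that evaporated entirely are gone

# The surviving mass function is peaked near mu*age instead of monotonic,
# producing a turnover like the one observed
hist, edges = np.histogram(np.log10(m_now), bins=40)
i = np.argmax(hist)
peak_mass = 10 ** (0.5 * (edges[i] + edges[i + 1]))
```

A mass-loss law with a shallower dependence on initial mass would shift and reshape this turnover, which is what makes the faint end of the observed luminosity function a discriminating test.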
Brightest Cluster Galaxies at the Present Epoch
We have observed 433 z<=0.08 brightest cluster galaxies (BCGs) in a full-sky
survey of Abell clusters. The BCG Hubble diagram is consistent to within 2% of
an Omega_m=0.3, Lambda=0.7 Hubble relation. The L_m-alpha relation for BCGs,
which uses alpha, the log-slope of the BCG photometric curve of growth, to
predict metric luminosity, L_m, has 0.27 mag residuals. We measure central
stellar velocity dispersions, sigma, of the BCGs, finding the Faber-Jackson
relation to flatten as the metric aperture grows to include an increasing
fraction of the total BCG luminosity. A 3-parameter "metric plane" relation
using alpha and sigma together gives the best prediction of L_m, with 0.21 mag
residuals. The projected spatial offset, r_x, of BCGs from the X-ray-defined
cluster center is a gamma=-2.33 power-law over 1<r_x<10^3 kpc. The median
offset is ~10 kpc, but ~15% of the BCGs have r_x>100 kpc. The absolute
cluster-dispersion normalized BCG peculiar velocity |Delta V_1|/sigma_c follows
an exponential distribution with scale length 0.39+/-0.03. Both L_m and alpha
increase with sigma_c. The alpha parameter is further moderated by both the
spatial and velocity offset from the cluster center, with larger alpha
correlated with the proximity of the BCG to the cluster mean velocity or
potential center. At the same time, position in the cluster has little effect
on L_m. The luminosity difference between the BCG and second-ranked galaxy, M2,
increases as the peculiar velocity of the BCG within the cluster decreases.
Further, when M2 is a close luminosity "rival" of the BCG, the galaxy that is
closest to either the velocity or X-ray center of the cluster is most likely to
have the larger alpha. We conclude that the inner portions of the BCGs are
formed outside the cluster, but interactions in the heart of the galaxy cluster
grow and extend the envelopes of the BCGs.
Comment: Accepted for publication in the Astrophysical Journal
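For readers unfamiliar with the alpha parameter, the sketch below illustrates one way to compute a logarithmic curve-of-growth slope, dlog L / dlog r, for a toy r^{1/4}-law galaxy. The profile, radii, and numerical scheme are illustrative assumptions, not the paper's photometric procedure:

```python
import numpy as np

def growth_curve(r, r_e=15.0, n=4000):
    """Enclosed light L(<r) for a toy r^{1/4}-law profile, by trapezoidal sum."""
    rr = np.linspace(1e-3, r, n)
    sb = np.exp(-7.67 * ((rr / r_e) ** 0.25 - 1.0))   # surface brightness I(r)
    f = 2.0 * np.pi * rr * sb
    return (rr[1] - rr[0]) * (f.sum() - 0.5 * (f[0] + f[-1]))

def alpha(r_m, r_e=15.0, eps=0.02):
    """Log-slope dlog L / dlog r of the curve of growth at the metric radius r_m."""
    lo = growth_curve(r_m * (1.0 - eps), r_e)
    hi = growth_curve(r_m * (1.0 + eps), r_e)
    return np.log10(hi / lo) / np.log10((1.0 + eps) / (1.0 - eps))

a_metric = alpha(15.0)   # slope evaluated at the effective radius of the toy profile
```

A growth curve that is still rising steeply at the metric radius, i.e. an extended envelope, yields a larger alpha, which is why alpha carries information about BCG envelope growth.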
On DESTINY Science Instrument Electrical and Electronics Subsystem Framework
Future space missions are going to require large focal planes with many sensing arrays and hundreds of millions of pixels, all read out at high data rates. This will place unique demands on the electrical and electronics (EE) subsystem design, and it will be critically important to have high technology readiness level (TRL) EE concepts ready to support such missions. One such mission is the Joint Dark Energy Mission (JDEM), charged with making precise measurements of the expansion rate of the universe to reveal vital clues about the nature of dark energy - a hypothetical form of energy that permeates all of space and tends to accelerate the expansion. One of three JDEM concept studies - the Dark Energy Space Telescope (DESTINY) - was conducted in 2008 at NASA's Goddard Space Flight Center (GSFC) in Greenbelt, Maryland. This paper presents the EE subsystem framework that evolved from the DESTINY science instrument study. It describes the main challenges and implementation concepts related to the design of an EE subsystem featuring multiple focal planes populated with dozens of large arrays and millions of pixels. The focal planes are passively cooled to cryogenic temperatures (below 140 K). The sensor mosaic is controlled by a large number of Readout Integrated Circuits and Application Specific Integrated Circuits - the ROICs/ASICs - in near proximity to their sensor focal planes. The ASICs, in turn, are serviced by a set of "warm" EE subsystem boxes performing Field Programmable Gate Array (FPGA) based digital signal processing (DSP) computations of complex algorithms, such as the sampling-up-the-ramp (SUTR) algorithm, over large volumes of fast data streams. The SUTR boxes are supported by the Instrument Control/Command and Data Handling (ICDH) Primary and Backup boxes for lossless data compression, command and low-volume telemetry handling, power conversion, and communications with the spacecraft.
The paper outlines how the JDEM DESTINY concept instrument EE subsystem can be built now, a design which is generally applicable to a wide variety of missions using large focal planes with large mosaics of sensors. (U.S. Government work not protected by U.S. copyright. IEEEAC paper #1429, Version 4, updated October 19, 2009.)
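As an illustration of the per-pixel computation the FPGA/DSP boxes must sustain, the sketch below implements a simple least-squares sampling-up-the-ramp (SUTR) slope fit; the read count, cadence, and noise values are assumptions for illustration, not DESTINY parameters:

```python
import numpy as np

def sutr_slope(ramps, dt=1.0):
    """Least-squares slope (counts/s) along axis 0 of a (nreads, ny, nx) stack."""
    n = ramps.shape[0]
    t = np.arange(n) * dt
    w = t - t.mean()
    # The weights sum to zero, so no mean subtraction of the counts is needed;
    # w and its normalization depend only on the read times and can be precomputed
    return np.tensordot(w, ramps, axes=(0, 0)) / np.sum(w ** 2)

# Simulated ramp: 16 nondestructive reads of a 4x4 pixel patch accumulating
# 2.5 counts/s, with Gaussian read noise (all values assumed for illustration)
rng = np.random.default_rng(3)
reads = 2.5 * np.arange(16)[:, None, None] + rng.normal(0.0, 0.5, (16, 4, 4))
flux = sutr_slope(reads)
```

Because the read times are fixed, the slope reduces to one precomputed weighted sum per pixel per ramp, which is what makes the algorithm tractable in FPGA hardware at high data rates.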
The Image of the M87 Black Hole Reconstructed with PRIMO
We present a new reconstruction of the Event Horizon Telescope (EHT) image of
the M87 black hole from the 2017 data set. We use PRIMO, a novel
dictionary-learning based algorithm that uses high-fidelity simulations of
accreting black holes as a training set. By learning the correlations between
the different regions of the space of interferometric data, this approach
allows us to recover high-fidelity images even in the presence of sparse
coverage and reach the nominal resolution of the EHT array. The black hole
image comprises a thin bright ring with a diameter of 41.5+/-0.6 microarcseconds and a
fractional width that is at least a factor of two smaller than previously
reported. This improvement has important implications for measuring the mass of
the central black hole in M87 based on the EHT images.
Comment: 7 pages, 5 figures
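The idea of fitting sparse visibilities with a basis learned from simulations can be sketched in a few lines. Everything below is a simplified stand-in: PCA on filtered noise instead of PRIMO's dictionary learned from GRMHD simulations, and noiseless least squares instead of its actual fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
side, n_train, n_comp, n_vis = 16, 300, 12, 180

# "Training set": low-pass-filtered noise standing in for simulation snapshots
k = np.fft.fftfreq(side)
lowpass = (np.abs(k)[:, None] < 0.2) & (np.abs(k)[None, :] < 0.2)
train = np.fft.ifft2(np.fft.fft2(rng.normal(size=(n_train, side, side))) * lowpass).real
X = train.reshape(n_train, -1)
X -= X.mean(axis=0)

# Learn a basis of eigenimages from the training set (PCA as a simple
# stand-in for a learned dictionary)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
dictionary = Vt[:n_comp]                            # (n_comp, side*side)

# Ground-truth image drawn from the span of the dictionary
coeff_true = rng.normal(size=n_comp)
truth = (coeff_true @ dictionary).reshape(side, side)

# Sparse "visibilities": the image's Fourier transform at a random subset of cells
flat_idx = rng.choice(side * side, size=n_vis, replace=False)
vis = np.fft.fft2(truth).ravel()[flat_idx]

# Fit the dictionary coefficients directly in the visibility domain
A = np.array([np.fft.fft2(d.reshape(side, side)).ravel()[flat_idx]
              for d in dictionary]).T                # (n_vis, n_comp)
coeff_fit, *_ = np.linalg.lstsq(A, vis, rcond=None)
recon = (coeff_fit.real @ dictionary).reshape(side, side)
err = np.abs(recon - truth).max()
```

The recovery is exact here because the truth lies in the dictionary's span and the data are noiseless; the learned correlations are what let the fit fill in the Fourier cells the array never samples.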