232 research outputs found
Stellar Content from high resolution galactic spectra via Maximum A Posteriori
This paper describes STECMAP (STEllar Content via Maximum A Posteriori), a
flexible, non-parametric inversion method for the interpretation of the
integrated light spectra of galaxies, based on synthetic spectra of single
stellar populations (SSPs). We focus on the recovery of a galaxy's star
formation history and stellar age-metallicity relation. We use the high
resolution SSPs produced by PEGASE-HR to quantify the informational content of
the wavelength range 4000 - 6800 Angstroms.
A detailed investigation of the properties of the corresponding simplified
linear problem is performed using singular value decomposition. It turns out to
be a powerful tool for explaining and predicting the behaviour of the
inversion. We provide means of quantifying the fundamental limitations of the
problem considering the intrinsic properties of the SSPs in the spectral range
of interest, as well as the noise in these models and in the data.
We performed a systematic simulation campaign and found that, when the time
elapsed between two bursts of star formation is larger than 0.8 dex, the
properties of each episode can be constrained with a precision of 0.04 dex in
age and 0.02 dex in metallicity from high quality data (R=10 000,
signal-to-noise ratio SNR=100 per pixel), not taking model errors into account.
The described methods and error estimates will be useful in the design and in
the analysis of extragalactic spectroscopic surveys.
Comment: 31 pages, 23 figures, accepted for publication in MNRAS
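The singular-value-decomposition diagnosis described above can be sketched in a few lines; the random matrix standing in for a PEGASE-HR SSP basis, the SNR of 100, and the truncation rule are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an SSP basis: each column is the spectrum of
# one single stellar population on a common wavelength grid. Cumulative
# sums make the columns strongly correlated, as real SSP spectra are.
n_pix, n_ssp = 2000, 30
basis = np.abs(rng.normal(1.0, 0.1, (n_pix, n_ssp))).cumsum(axis=0)
basis /= basis.mean(axis=0)

# SVD of the linearized inversion: small singular values mark the
# directions in SSP space that the data constrain poorly.
U, s, Vt = np.linalg.svd(basis, full_matrices=False)
condition_number = s[0] / s[-1]

# Truncated-SVD pseudo-inverse: keep only components whose singular
# value exceeds an assumed noise floor (here SNR = 100 per pixel).
snr = 100.0
keep = s > s[0] / snr
x_recover = lambda spectrum: Vt[keep].T @ ((U[:, keep].T @ spectrum) / s[keep])
```

The count of retained singular components is one way to quantify how many independent stellar-population parameters a given spectral range and noise level can actually support.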
Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-forming Regions
We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4'.5 inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and ^(12)CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~ 16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.
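The flux-conserving Lucy-Richardson iteration the paper builds on can be sketched as follows; the uniform initial estimate and the Gaussian test PSF are illustrative choices, and BLAST's production pipeline adds masking and noise handling not shown here.

```python
import numpy as np
from scipy.signal import fftconvolve

def lucy_richardson(image, psf, n_iter=20):
    """Minimal flux-conserving Richardson-Lucy deconvolution (2-D).

    Each iteration multiplies the current estimate by the back-projected
    ratio of data to the re-blurred estimate, which preserves total flux.
    """
    psf = psf / psf.sum()          # unit-sum PSF conserves flux
    psf_mirror = psf[::-1, ::-1]   # correlation kernel
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Stopping after a few iterations favors extended diffuse structure, while running many iterations sharpens point sources toward the diffraction limit, mirroring the behavior described in the abstract.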
No-reference image quality assessment through the von Mises distribution
An innovative way of calculating the von Mises distribution (VMD) of image
entropy is introduced in this paper. The VMD's concentration parameter and a
fitness parameter defined later in the paper are analyzed in the experimental
part to determine their suitability as an image quality assessment measure
under particular distortions such as Gaussian blur or additive Gaussian noise.
To obtain this measure, the local Rényi entropy is calculated in four equally
spaced orientations and used to determine the parameters of the von Mises
distribution of the image entropy. For contextual images, experimental results
after applying this model show that the best-in-focus, noise-free images are
associated with the highest values of the von Mises concentration parameter
and the closest fit of the image data to the von Mises distribution model. Our
von Mises fitness parameter also appears experimentally to be a suitable
no-reference image quality assessment indicator for non-contextual images.
Comment: 29 pages, 11 figures
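One way to sketch the idea above, directional Rényi entropies feeding a von Mises concentration estimate, is shown below. The single-line sampling of the four orientations, the histogram binning, and the closed-form concentration approximation are simplifying assumptions, not the authors' exact windowed construction.

```python
import numpy as np

def renyi_entropy(values, alpha=2.0, bins=64):
    """Rényi entropy of order alpha from a histogram of sample values."""
    p, _ = np.histogram(values, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def vonmises_concentration(image):
    """Estimate a von Mises concentration parameter kappa from Rényi
    entropies at four equally spaced orientations (a simplified sketch).
    """
    h, w = image.shape
    directions = {
        0.0: image[h // 2, :],                 # horizontal line
        np.pi / 4: np.diag(np.fliplr(image)),  # anti-diagonal
        np.pi / 2: image[:, w // 2],           # vertical line
        3 * np.pi / 4: np.diag(image),         # main diagonal
    }
    angles = np.array(list(directions))
    weights = np.array([renyi_entropy(v) for v in directions.values()])
    # Mean resultant length of the entropy-weighted orientations;
    # angles are doubled so theta and theta + pi coincide (axial data).
    C = (weights * np.cos(2 * angles)).sum() / weights.sum()
    S = (weights * np.sin(2 * angles)).sum() / weights.sum()
    R = np.hypot(C, S)
    # Standard approximation inverting R = I1(kappa) / I0(kappa).
    return R * (2.0 - R ** 2) / max(1.0 - R ** 2, 1e-12)
```

A nearly isotropic image yields entropies that cancel around the circle (small kappa), while strongly oriented content concentrates the distribution, which is the behavior the quality measure exploits.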
Application of a damped Locally Optimized Combination of Images method to the spectral characterization of faint companions using an Integral Field Spectrograph
High-contrast imaging instruments are now being equipped with integral field
spectrographs (IFS) to facilitate the detection and characterization of faint
substellar companions. Algorithms currently envisioned to handle IFS data, such
as the Locally Optimized Combination of Images (LOCI) algorithm, rely upon
aggressive point-spread-function (PSF) subtraction, which is ideal for
initially identifying companions but results in significantly biased photometry
and spectroscopy due to unwanted mixing with residual starlight. This
spectro-photometric issue is further complicated by the fact that algorithmic
color response is a function of the companion's spectrum, making it difficult
to calibrate the effects of the reduction without using iterations involving a
series of injected synthetic companions. In this paper, we introduce a new PSF
calibration method, which we call "damped LOCI", that seeks to alleviate these
concerns. By modifying the cost function that determines the weighting
coefficients used to construct PSF reference images, and also forcing those
coefficients to be positive, it is possible to extract companion spectra with a
precision that is set by calibration of the instrument response and
transmission of the atmosphere, and not by post-processing. We demonstrate the
utility of this approach using on-sky data obtained with the Project 1640 IFS
at Palomar. Damped-LOCI does not require any iterations on the underlying
spectral type of the companion, nor does it rely upon priors involving the
chromatic and statistical properties of speckles. It is a general technique
that can readily be applied to other current and planned instruments that
employ IFSs.
Comment: Accepted to the Astrophysical Journal Supplement
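The modified cost function can be illustrated with a small non-negative least-squares sketch. The Tikhonov-style damping block below is one plausible reading of "damped", labeled as an assumption; the authors' exact penalty may differ.

```python
import numpy as np
from scipy.optimize import nnls

def damped_loci_coefficients(target, references, damping=1.0):
    """Non-negative weights combining reference PSF frames to match a
    target frame, in the spirit of damped LOCI: non-negativity (plus an
    assumed ridge-style damping term) tempers the aggressive subtraction
    of ordinary LOCI that biases companion photometry.

    target:     (n_pix,) flattened subtraction zone of the science frame
    references: (n_pix, n_ref) flattened reference frames
    """
    n_ref = references.shape[1]
    # Augment the system with a damping block so that large coefficients
    # are penalized (ridge regularization under a positivity constraint).
    A = np.vstack([references, np.sqrt(damping) * np.eye(n_ref)])
    b = np.concatenate([target, np.zeros(n_ref)])
    coeffs, _ = nnls(A, b)
    return coeffs, target - references @ coeffs
```

Because every coefficient is non-negative, the reference combination cannot conspire to absorb companion flux with cancelling positive and negative weights, which is the intuition behind the reduced spectro-photometric bias.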
The Low End of the Supermassive Black Hole Mass Function: Constraining the Mass of a Nuclear Black Hole in NGC 205 via Stellar Kinematics
Hubble Space Telescope (HST) images and spectra of the nucleated dwarf
elliptical galaxy NGC 205 are combined with 3-integral axisymmetric dynamical
models to constrain the mass (M_BH) of a putative nuclear black hole. This is
only the second attempt, after M33, to use resolved stellar kinematics to
search for a nuclear black hole with mass below 10^6 solar masses. We are
unable to identify a best-fit value of M_BH in NGC 205; however, the data
impose an upper limit of 2.2x10^4 M_sun (1sigma confidence) and an upper limit
of 3.8x10^4 M_sun (3sigma confidence). This upper limit is consistent with the
extrapolation of the M_BH-sigma relation to the M_BH < 10^6 M_sun regime. If we
assume that NGC 205 and M33 both contain nuclear black holes, the upper limits
on M_BH in the two galaxies imply a slope of ~5.5 or greater for the M_BH-sigma
relation. We use our 3-integral models to evaluate the relaxation time (T_r)
and stellar collision time (T_coll) in NGC 205; T_r~10^8 yr or less in the
nucleus and T_coll~10^11 yr. The low value of T_r is consistent with core
collapse having already occurred, but we are unable to draw conclusions from
nuclear morphology about the presence or absence of a massive black hole.
Comment: LaTeX emulateapj, 15 pages, 16 figures. Version accepted for
publication in ApJ, 20 July 2005, v628. Minor changes to discussion
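As a rough illustration of the extrapolation discussed above, one published M_BH-sigma calibration (Tremaine et al. 2002, slope ~4) can be evaluated at a velocity dispersion typical of NGC 205's nucleus; the sigma value used here is an illustrative assumption, not a measurement from the paper.

```python
import numpy as np

def mbh_from_sigma(sigma_kms, alpha=8.13, beta=4.02):
    """Extrapolate an M_BH-sigma calibration to low-mass nuclei:
    log10(M_BH / M_sun) = alpha + beta * log10(sigma / 200 km/s).

    alpha and beta follow the Tremaine et al. (2002) fit, used purely
    for illustration; the paper argues the low-mass slope may be ~5.5
    or steeper if both NGC 205 and M33 host nuclear black holes.
    """
    return 10.0 ** (alpha + beta * np.log10(sigma_kms / 200.0))

# With an assumed nuclear dispersion of ~23 km/s, the shallow-slope
# extrapolation lands at a few 10^4 M_sun, comparable to the paper's
# 3.8e4 M_sun (3sigma) upper limit.
m_ngc205 = mbh_from_sigma(23.0)
```

A steeper slope at fixed normalization would push the predicted mass well below the quoted upper limits, which is why the non-detections in NGC 205 and M33 constrain the low-mass end of the relation.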
Modern optical astronomy: technology and impact of interferometry
The present `state of the art' and the path to future progress in high
spatial resolution imaging interferometry is reviewed. The review begins with a
treatment of the fundamentals of stellar optical interferometry, the origin,
properties, optical effects of turbulence in the Earth's atmosphere, the
passive methods that are applied on a single telescope to overcome atmospheric
image degradation such as speckle interferometry, and various other techniques.
These topics include differential speckle interferometry, speckle spectroscopy
and polarimetry, phase diversity, wavefront shearing interferometry,
phase-closure methods, dark speckle imaging, as well as the limitations imposed
by the detectors on the performance of speckle imaging. A brief account is
given of the technological innovation of adaptive-optics (AO) to compensate
such atmospheric effects on the image in real time. A major advancement
involves the transition from single-aperture to the dilute-aperture
interferometry using multiple telescopes. Therefore, the review deals with
recent developments involving ground-based, and space-based optical arrays.
Emphasis is placed on the problems specific to delay-lines, beam recombination,
polarization, dispersion, fringe-tracking, bootstrapping, coherencing and
cophasing, and recovery of the visibility functions. The role of AO in
enhancing visibilities is also discussed. The applications of interferometry,
such as imaging, astrometry, and nulling are described. The mathematical
intricacies of the various `post-detection' image-processing techniques are
examined critically. The review concludes with a discussion of the
astrophysical importance and the perspectives of interferometry.
Comment: 65 pages LaTeX file including 23 figures. Reviews of Modern Physics,
2002, to appear in April issue
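A concrete anchor for the visibility functions mentioned above is the van Cittert-Zernike prediction for a uniform stellar disk; the 5-milliarcsecond diameter and 2.2-micron wavelength below are illustrative choices, not values from the review.

```python
import numpy as np
from scipy.special import j1

def uniform_disk_visibility(baseline_m, diameter_rad, wavelength_m):
    """Fringe visibility of a uniform stellar disk (van Cittert-Zernike
    theorem): V = |2 J1(x) / x| with x = pi * B * theta / lambda,
    where B is the projected baseline and theta the angular diameter.
    """
    x = np.atleast_1d(np.pi * baseline_m * diameter_rad / wavelength_m)
    x = x.astype(float)
    v = np.where(x == 0, 1.0, 2.0 * j1(x) / np.where(x == 0, 1.0, x))
    return np.abs(v)

# Illustration: a 5 mas disk observed in the K band (2.2 microns).
mas = np.pi / (180 * 3600 * 1000)          # one milliarcsecond in radians
baselines = np.linspace(1, 200, 400)       # metres
vis = uniform_disk_visibility(baselines, 5 * mas, 2.2e-6)
```

The baseline at which the visibility first nulls directly measures the stellar diameter, which is why the review emphasizes accurate recovery of the visibility function across an array's baseline range.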
SARS-CoV-2 Wastewater Genomic Surveillance: Approaches, Challenges, and Opportunities
During the SARS-CoV-2 pandemic, wastewater-based genomic surveillance (WWGS)
emerged as an efficient viral surveillance tool: it accounts for asymptomatic
cases, can identify known and novel mutations, and offers the opportunity to
assign known virus lineages based on the detected mutation profiles. WWGS can
also hint at novel or cryptic lineages, but it is
difficult to clearly identify and define novel lineages from wastewater (WW)
alone. While WWGS has significant advantages in monitoring SARS-CoV-2 viral
spread, technical challenges remain, including poor sequencing coverage and
quality due to viral RNA degradation. As a result, the viral RNAs in wastewater
have low concentrations and are often fragmented, making sequencing difficult.
WWGS analysis requires advanced computational tools that are yet to be
developed and benchmarked. The existing bioinformatics tools used to analyze
wastewater sequencing data are often based on previously developed methods for
quantifying the expression of transcripts or viral diversity. Those methods
were not developed for wastewater sequencing data specifically, and are not
optimized to address unique challenges associated with wastewater. While
specialized tools for analysis of wastewater sequencing data have also been
developed recently, it remains to be seen how they will perform given the
ongoing evolution of SARS-CoV-2 and the decline in testing and patient-based
genomic surveillance. Here, we discuss opportunities and challenges associated
with WWGS, including sample preparation, sequencing technology, and
bioinformatics methods.
Comment: V Munteanu and M Saldana contributed equally to this work. A Smith
and S Mangul jointly supervised this work. For correspondence:
[email protected]
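The lineage-assignment idea behind several wastewater demixing tools can be sketched as a non-negative least-squares problem on a lineage "barcode" matrix of signature mutations; the matrix and frequencies below are a toy example, not real lineage definitions.

```python
import numpy as np
from scipy.optimize import nnls

def lineage_abundances(barcode, freqs):
    """Estimate relative lineage abundances from wastewater allele
    frequencies, in the spirit of demixing approaches such as Freyja:
    solve freqs ~ barcode @ x subject to x >= 0, then renormalize.

    barcode: (n_mutations, n_lineages) 0/1 matrix recording which
             lineage carries which signature mutation (illustrative).
    freqs:   (n_mutations,) observed allele frequencies in the sample.
    """
    x, _ = nnls(barcode, freqs)
    total = x.sum()
    return x / total if total > 0 else x
```

Fragmented, low-coverage wastewater reads translate into noisy or missing rows of the frequency vector, which is one reason methods developed for clinical samples transfer poorly to this setting.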
Polarimeter Blind Deconvolution Using Image Diversity
This research presents an algorithm that improves the ability to view objects using an electro-optical imaging system with at least one polarization sensitive channel in addition to the primary channel. An innovative algorithm for detection and estimation of the defocus aberration present in an image is also developed. Using a known defocus aberration, an iterative polarimeter deconvolution algorithm is developed using a generalized expectation-maximization (GEM) model. The polarimeter deconvolution algorithm is extended to an iterative polarimeter multiframe blind deconvolution (PMFBD) algorithm with an unknown aberration. Using both simulated and laboratory images, the new PMFBD algorithm clearly outperforms an RL-based MFBD algorithm. The convergence rate is significantly faster, with better fidelity of reproduction of the targets. Clearly, leveraging polarization data in electro-optical imaging systems has the potential to significantly improve the ability to resolve objects and, thus, improve Space Situation Awareness.
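The forward model behind defocus estimation can be sketched with Fourier optics: the PSF is the squared modulus of the Fourier transform of the pupil function carrying a Zernike defocus phase. The grid size, aperture fraction, and defocus amplitude below are illustrative, and the paper's GEM machinery is not reproduced.

```python
import numpy as np

def defocus_psf(n=128, aperture_frac=0.25, defocus_waves=0.0):
    """Incoherent PSF of a circular pupil with a defocus aberration,
    via Fourier optics: PSF = |FFT(pupil * exp(i * phi))|^2, where
    phi = 2*pi*W * (2*rho^2 - 1) is the Zernike defocus term with
    amplitude W in waves and rho the normalized pupil radius.
    """
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    rho = np.hypot(x, y) / (aperture_frac * n / 2)
    pupil = (rho <= 1.0).astype(float)
    phase = 2 * np.pi * defocus_waves * (2 * rho ** 2 - 1) * pupil
    field = np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(pupil * np.exp(1j * phase))))
    psf = np.abs(field) ** 2
    return psf / psf.sum()
```

Because defocus lowers the PSF peak (Strehl ratio) in a predictable way, comparing channels with known relative defocus ("image diversity") gives the blind-deconvolution problem enough leverage to separate object from aberration.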