PynPoint: a modular pipeline architecture for processing and analysis of high-contrast imaging data
The direct detection and characterization of planetary and substellar
companions at small angular separations is a rapidly advancing field. Dedicated
high-contrast imaging instruments deliver unprecedented sensitivity, enabling
detailed insights into the atmospheres of young low-mass companions. In
addition, improvements in data reduction and PSF subtraction algorithms are
equally relevant for maximizing the scientific yield, both from new and
archival data sets. We aim to develop a generic and modular data-reduction
pipeline for processing and analysis of high-contrast imaging data obtained
with pupil-stabilized observations. The package should be scalable, robust
to future extensions, and in particular well suited for the 3-5 micron
wavelength range, where typically (tens of) thousands of frames have to be
processed and an accurate subtraction of the thermal background emission is critical.
PynPoint is written in Python 2.7 and applies various image processing
techniques, as well as statistical tools for analyzing the data, building on
open-source Python packages. The current version of PynPoint has evolved from
an earlier version that was developed as a PSF subtraction tool based on PCA.
The architecture of PynPoint has been redesigned with the core functionalities
decoupled from the pipeline modules. Modules have been implemented for
dedicated processing and analysis steps, including background subtraction,
frame registration, PSF subtraction, photometric and astrometric measurements,
and estimation of detection limits. The pipeline package enables end-to-end
data reduction of pupil-stabilized data and supports classical dithering and
coronagraphic data sets. As an example, we processed archival VLT/NACO L' and
M' data of beta Pic b and reassessed the planet's brightness and position with
an MCMC analysis, and we provide a derivation of the photometric error budget.
Comment: 16 pages, 9 figures, accepted for publication in A&A. PynPoint is
available at https://github.com/PynPoint/PynPoin
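The full-frame PCA-based PSF subtraction that PynPoint evolved from can be sketched in a few lines of NumPy. This is a minimal illustration of the general technique, not PynPoint's actual implementation; the function name and array shapes here are assumptions:

```python
import numpy as np

def pca_psf_subtract(frames, n_components):
    """Subtract a low-rank PSF model from a stack of frames.

    frames: array of shape (n_frames, ny, nx), e.g. pupil-stabilized images.
    Returns the residuals after removing the first n_components
    principal components of the stack.
    """
    n, ny, nx = frames.shape
    X = frames.reshape(n, ny * nx)
    # Center the stack: PCA operates on mean-subtracted frames
    Xc = X - X.mean(axis=0)
    # PCA basis via SVD of the centered frame matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]
    # Project each frame onto the basis and subtract the PSF model
    model = Xc @ basis.T @ basis
    return (Xc - model).reshape(n, ny, nx)
```

In a real pipeline the frames would then be derotated and combined so that the quasi-static speckles (captured by the principal components) cancel while a faint companion survives.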
The Blanco Cosmology Survey: Data Acquisition, Processing, Calibration, Quality Diagnostics and Data Release
The Blanco Cosmology Survey (BCS) is a 60-night imaging survey of ~80
deg² of the southern sky located in two fields, centered at (α, δ) = (5 hr, )
and (23 hr, ). The survey was carried out between 2005 and 2008 in four
bands with the Mosaic2 imager on the Blanco 4m
telescope. The primary aim of the BCS survey is to provide the data required to
optically confirm and measure photometric redshifts for Sunyaev-Zel'dovich
effect selected galaxy clusters from the South Pole Telescope and the Atacama
Cosmology Telescope. We process and calibrate the BCS data, carrying out PSF
corrected model-fitting photometry for all detected objects. The median
10σ galaxy (point-source) depths in the four survey bands are
approximately 23.3 (23.9), 23.4 (24.0), 23.0 (23.6) and 21.3 (22.1),
respectively. The astrometric accuracy relative to the USNO-B survey is
at the milli-arcsec scale. We calibrate our absolute photometry using the
stellar locus, and thus our absolute photometric scale derives from 2MASS,
which has percent-level accuracy. The scatter of stars about the stellar
locus indicates a systematics floor in the relative stellar photometric
scatter in the four bands of 1.9%, 2.2%, 2.7% and 2.7%, respectively.
A simple cut in the AstrOmatic star-galaxy classifier spread_model
produces a star sample with good spatial uniformity. We use the resulting
photometric catalogs to calibrate photometric redshifts for the survey and
demonstrate the photometric-redshift scatter and outlier fraction as a
function of redshift. We highlight some selected science results to date
and provide a full description of the released data products.
Comment: 23 pages, 23 figures. Response to referee comments. Paper accepted
for publication. BCS catalogs and images available for download from
http://www.usm.uni-muenchen.de/BC
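The quoted 10σ depths follow from the standard relation between a limiting magnitude and the flux-error level. A minimal sketch, where `zeropoint` and `flux_err` are hypothetical calibration values rather than BCS numbers:

```python
import math

def depth_at_nsigma(zeropoint, flux_err, n_sigma=10.0):
    # Magnitude of a source whose flux equals n_sigma times the flux
    # uncertainty. zeropoint and flux_err are illustrative inputs,
    # not values taken from the BCS pipeline.
    return zeropoint - 2.5 * math.log10(n_sigma * flux_err)
```

For example, a zeropoint of 30.0 mag with a unit flux error gives a 10σ depth of 27.5 mag; brighter (smaller) depths correspond to larger flux errors.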
Image operator learning coupled with CNN classification and its application to staff line removal
Many image transformations can be modeled by image operators that are
characterized by pixel-wise local functions defined on a finite support window.
In image operator learning, these functions are estimated from training data
using machine learning techniques. Input size is usually a critical issue when
using learning algorithms, and it limits the size of practicable windows. We
propose the use of convolutional neural networks (CNNs) to overcome this
limitation. The problem of removing staff lines in music-score images is chosen
to evaluate the effects of window and convolutional mask sizes on the learned
image operator's performance. Results show that the CNN-based solution
outperforms previous ones obtained with conventional learning algorithms or
heuristic algorithms, indicating the potential of CNNs as base classifiers in
image operator learning. The implementations will be made available on the
TRIOSlib project site.
Comment: To appear in ICDAR 201
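The pixel-wise local-function view of image operators described above can be sketched as follows. `extract_windows` and `apply_operator` are hypothetical names used for illustration; in practice the local function `f` would be a trained classifier (e.g. a CNN mapping each window to an output pixel) rather than the hand-written example here:

```python
import numpy as np

def extract_windows(image, w):
    # Pad reflectively so every pixel has a full w x w support window
    pad = w // 2
    padded = np.pad(image, pad, mode="reflect")
    n_rows, n_cols = image.shape
    wins = np.empty((n_rows, n_cols, w, w), dtype=image.dtype)
    for i in range(n_rows):
        for j in range(n_cols):
            wins[i, j] = padded[i:i + w, j:j + w]
    return wins

def apply_operator(image, f, w):
    # Apply a pixel-wise local function f over every w x w window;
    # this is the "image operator" abstraction from the text.
    wins = extract_windows(image, w)
    return np.array([[f(wins[i, j]) for j in range(image.shape[1])]
                     for i in range(image.shape[0])])
```

Training then amounts to estimating `f` from pairs of input windows and desired output pixels; the abstract's point is that a CNN tolerates much larger windows than classical learners.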