Point Spread Functions in Identification of Astronomical Objects from Poisson Noised Image
This article deals with the modeling of astronomical objects, one of the most fundamental topics in astronomical science. The introduction describes the problem and the methods used. The part on point spread function modeling covers the basic models used in astronomical photometry and then introduces more sophisticated models that combine effects such as interference, turbulence, and defocusing. The paper also defines an objective function based on the knowledge that astronomical data contain Poisson-distributed noise. The proposed methods are then applied to real astronomical data.
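The Poisson objective function the abstract refers to can be sketched as follows. This is an illustrative example, not the paper's actual code: the Gaussian PSF model, its parameters, and the function names are all assumptions chosen for the sketch; the only element taken from the abstract is the use of a Poisson negative log-likelihood instead of least squares.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_psf(params, shape):
    # Simple circular Gaussian PSF model (illustrative stand-in):
    # amplitude, center, width, and a flat sky background.
    amp, x0, y0, sigma, bg = params
    y, x = np.mgrid[:shape[0], :shape[1]]
    return bg + amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))

def poisson_nll(params, data):
    # Negative log-likelihood for Poisson-distributed counts:
    #   -log L = sum(model - data * log(model)) + const
    model = gaussian_psf(params, data.shape)
    model = np.clip(model, 1e-12, None)   # keep the log well-defined
    return np.sum(model - data * np.log(model))

# Simulate one Poisson-noised star and fit it by minimising the NLL.
rng = np.random.default_rng(0)
truth = (200.0, 12.3, 9.7, 2.0, 5.0)
data = rng.poisson(gaussian_psf(truth, (25, 25)))
fit = minimize(poisson_nll, x0=(150.0, 12.0, 10.0, 3.0, 3.0),
               args=(data,), method="Nelder-Mead")
```

For bright sources the Poisson NLL behaves much like weighted least squares, but for faint, low-count objects it weights pixels correctly where a Gaussian-noise assumption does not.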
The 1990 progress report and future plans
This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development to fielded NASA applications, particularly those applications that are enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.
Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning
The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems). However, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar and can be contrasted with those systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms, systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques
The ALMA Interferometric Pipeline Heuristics
We describe the calibration and imaging heuristics developed and deployed in
the ALMA interferometric data processing pipeline, as of ALMA Cycle 9. The
pipeline software framework is written in Python, with each data reduction
stage layered on top of tasks and toolkit functions provided by the Common
Astronomy Software Applications package. This framework supports a variety of
tasks for observatory operations, including science data quality assurance,
observing mode commissioning, and user reprocessing. It supports ALMA and VLA
interferometric data along with ALMA and NRO45m single dish data, via different
stages and heuristics. In addition to producing calibration tables, calibrated
measurement sets, and cleaned images, the pipeline creates a WebLog which
serves as the primary interface for verifying the data quality assurance by the
observatory and for examining the contents of the data by the user. Following
the adoption of the pipeline by ALMA Operations in 2014, the heuristics have
been refined through annual development cycles, culminating in a new pipeline
release aligned with the start of each ALMA Cycle of observations. Initial
development focused on basic calibration and flagging heuristics (Cycles 2-3),
followed by imaging heuristics (Cycles 4-5), refinement of the flagging and
imaging heuristics with parallel processing (Cycles 6-7), addition of the
moment difference analysis to improve continuum channel identification (2020
release), addition of a spectral renormalization stage (Cycle 8), and
improvement in low SNR calibration heuristics (Cycle 9). In the two most recent
Cycles, 97% of ALMA datasets were calibrated and imaged with the pipeline,
ensuring long-term automated reproducibility. We conclude with a brief
description of plans for future additions, including self-calibration,
multi-configuration imaging, and calibration and imaging of full polarization
data.
Comment: accepted for publication by Publications of the Astronomical Society
of the Pacific, 65 pages, 20 figures, 10 tables, 2 appendices
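The framework structure described above — reduction stages layered on top of tasks, each stage reporting into a WebLog — can be illustrated with a minimal, purely hypothetical sketch. None of this is the actual ALMA pipeline code; the stage names, `Context` class, and decorator are assumptions made for the example, standing in for stages that in the real pipeline wrap CASA tasks.

```python
# Hypothetical sketch of a stage-based reduction pipeline, not ALMA's code:
# each stage transforms a shared context and appends a WebLog entry.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Context:
    data: dict = field(default_factory=dict)
    weblog: List[str] = field(default_factory=list)

def stage(name):
    # Decorator that records each stage's outcome in the WebLog,
    # the pipeline's primary quality-assurance interface.
    def wrap(fn: Callable[[Context], None]):
        def run(ctx: Context):
            fn(ctx)
            ctx.weblog.append(f"{name}: ok")
        return run
    return wrap

@stage("flagging")
def flag(ctx):
    ctx.data["flags"] = "applied"

@stage("calibration")
def calibrate(ctx):
    ctx.data["caltable"] = "gains.tbl"

@stage("imaging")
def image(ctx):
    ctx.data["image"] = "cube.fits"

ctx = Context()
for s in (flag, calibrate, image):   # fixed stage order, as in a pipeline recipe
    s(ctx)
```

Separating heuristics into ordered, self-reporting stages is what lets different recipes (ALMA vs. VLA, interferometric vs. single dish) reuse the same framework with different stage lists.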
An Application of Multi-band Forced Photometry to One Square Degree of SERVS: Accurate Photometric Redshifts and Implications for Future Science
We apply The Tractor image modeling code to improve upon existing multi-band
photometry for the Spitzer Extragalactic Representative Volume Survey (SERVS).
SERVS consists of post-cryogenic Spitzer observations at 3.6 and 4.5 micron
over five well-studied deep fields spanning 18 square degrees. In concert with
data from ground-based near-infrared (NIR) and optical surveys, SERVS aims to
provide a census of the properties of massive galaxies out to z ~ 5. To
accomplish this, we are using The Tractor to perform "forced photometry." This
technique employs prior measurements of source positions and surface brightness
profiles from a high-resolution fiducial band from the VISTA Deep Extragalactic
Observations (VIDEO) survey to model and fit the fluxes at lower-resolution
bands. We discuss our implementation of The Tractor over a square degree test
region within the XMM-LSS field with deep imaging in 12 NIR/optical bands. Our
new multi-band source catalogs offer a number of advantages over traditional
position-matched catalogs, including 1) consistent source cross-identification
between bands, 2) de-blending of sources that are clearly resolved in the
fiducial band but blended in the lower-resolution SERVS data, 3) a higher
source detection fraction in each band, 4) a larger number of candidate
galaxies in the redshift range 5 < z < 6, and 5) a statistically significant
improvement in the photometric redshift accuracy as evidenced by the
significant decrease in the fraction of outliers compared to spectroscopic
redshifts. Thus, forced photometry using The Tractor offers a means of
improving the accuracy of multi-band extragalactic surveys designed for galaxy
evolution studies. We will extend our application of this technique to the full
SERVS footprint in the future.
Comment: accepted to ApJ, 22 pages, 12 figures
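The key idea behind forced photometry can be shown with a toy sketch. This is not The Tractor itself — the profile model, positions, and noise level are invented for the example — but it illustrates why the technique de-blends sources: with positions and profiles fixed from the high-resolution fiducial band, each low-resolution pixel is a linear function of the per-source fluxes, so fitting the fluxes reduces to linear least squares even for blended sources.

```python
# Illustrative forced-photometry sketch (not The Tractor's actual code).
import numpy as np

def profile(x0, y0, sigma, shape):
    # Unit-flux Gaussian profile at a fixed position: a stand-in for a
    # PSF-convolved surface-brightness model from the fiducial band.
    y, x = np.mgrid[:shape[0], :shape[1]]
    p = np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))
    return p / p.sum()

shape = (32, 32)
# Two blended sources whose positions are known from the high-resolution band.
positions = [(14.0, 15.0), (18.0, 16.0)]
true_flux = np.array([500.0, 300.0])

# One fixed template per source; only the fluxes remain free.
templates = np.stack([profile(x, y, 3.0, shape).ravel() for x, y in positions])
rng = np.random.default_rng(1)
image = true_flux @ templates + rng.normal(0.0, 0.05, templates.shape[1])

# Solve image ~ sum_i flux_i * template_i for the fluxes by least squares.
fluxes, *_ = np.linalg.lstsq(templates.T, image, rcond=None)
```

A position-matched catalog would blend these two sources into one; the forced fit recovers both fluxes because the fixed templates overlap but are not degenerate.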
A Fast Quartet Tree Heuristic for Hierarchical Clustering
The Minimum Quartet Tree Cost problem is to construct an optimal weight tree
from the weighted quartet topologies on n objects, where
optimality means that the summed weight of the embedded quartet topologies is
optimal (so it can be the case that the optimal tree embeds all quartets as
nonoptimal topologies). We present a Monte Carlo heuristic, based on randomized
hill climbing, for approximating the optimal weight tree, given the quartet
topology weights. The method repeatedly transforms a dendrogram, with all
objects involved as leaves, achieving a monotonic approximation to the exact
single globally optimal tree. The problem and the solution heuristic have been
extensively used for general hierarchical clustering of nontree-like
(non-phylogeny) data in various domains and across domains with heterogeneous
data. We also present a greatly improved heuristic, reducing the running time
by a factor of order a thousand to ten thousand. All this is implemented and
available, as part of the CompLearn package. We compare performance and running
time of the original and improved versions with those of UPGMA, BioNJ, and NJ,
as implemented in the SplitsTree package on genomic data for which the latter
are optimized.
Keywords: Data and knowledge visualization, Pattern
matching--Clustering--Algorithms/Similarity measures, Hierarchical clustering,
Global optimization, Quartet tree, Randomized hill-climbing
Comment: LaTeX, 40 pages, 11 figures; this paper has substantial overlap with
arXiv:cs/0606048 in cs.D
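The hill-climbing scheme described above can be sketched in miniature. This is a toy illustration, not the CompLearn implementation: the fixed caterpillar tree shape, random quartet weights, and leaf-swap mutation are simplifying assumptions; the real heuristic uses richer tree mutations. The sketch scores a candidate tree by the summed weight of the quartet topologies it embeds (decided by the four-point condition) and keeps only improving swaps.

```python
# Toy quartet-tree hill climbing (illustrative, not CompLearn's algorithm).
import itertools
import random
from collections import deque

def start_tree(n):
    # Fixed "caterpillar" unrooted binary tree: leaves 0..n-1,
    # internal nodes n..2n-3, stored as an adjacency list.
    internal = list(range(n, 2 * n - 2))
    adj = {v: set() for v in range(2 * n - 2)}
    def link(a, b): adj[a].add(b); adj[b].add(a)
    for a, b in zip(internal, internal[1:]):
        link(a, b)
    link(0, internal[0]); link(1, internal[0])
    link(n - 2, internal[-1]); link(n - 1, internal[-1])
    for leaf, node in zip(range(2, n - 2), internal[1:-1]):
        link(leaf, node)
    return adj

def dists(adj, src):
    # Topological (unit-edge) distances from src via BFS.
    d = {src: 0}; q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1; q.append(v)
    return d

def embedded(d, quartet):
    # Four-point condition: the tree embeds the pairing whose
    # summed leaf-to-leaf distance is smallest.
    a, b, c, e = quartet
    pairings = [((a, b), (c, e)), ((a, c), (b, e)), ((a, e), (b, c))]
    best = min(pairings, key=lambda p: d[p[0][0]][p[0][1]] + d[p[1][0]][p[1][1]])
    return frozenset(frozenset(pair) for pair in best)

def cost(adj, n, weight):
    # Summed weight of all embedded quartet topologies (to be maximised).
    d = {u: dists(adj, u) for u in range(n)}
    return sum(weight[embedded(d, q)] for q in itertools.combinations(range(n), 4))

def swap_leaves(adj, a, b):
    pa, pb = next(iter(adj[a])), next(iter(adj[b]))
    if pa == pb:
        return
    adj[a], adj[b] = {pb}, {pa}
    adj[pa].discard(a); adj[pa].add(b)
    adj[pb].discard(b); adj[pb].add(a)

def hill_climb(n, weight, steps, rng):
    # Randomized hill climbing: keep a random leaf swap only if it
    # raises the embedded-quartet cost, giving a monotonic approximation.
    adj = start_tree(n)
    best = cost(adj, n, weight)
    for _ in range(steps):
        a, b = rng.sample(range(n), 2)
        swap_leaves(adj, a, b)
        c = cost(adj, n, weight)
        if c > best:
            best = c
        else:
            swap_leaves(adj, a, b)   # revert the non-improving move
    return best

rng = random.Random(0)
n = 6
weight = {}
for a, b, c, e in itertools.combinations(range(n), 4):
    for pairing in [((a, b), (c, e)), ((a, c), (b, e)), ((a, e), (b, c))]:
        weight[frozenset(frozenset(p) for p in pairing)] = rng.random()

score = hill_climb(n, weight, 300, rng)
```

Because only improving moves are accepted, the score never decreases; the real heuristic's speedups come from smarter mutations and incremental cost updates rather than recomputing all quartets each step.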
Computational statistics using the Bayesian Inference Engine
This paper introduces the Bayesian Inference Engine (BIE), a general
parallel, optimised software package for parameter inference and model
selection. This package is motivated by the analysis needs of modern
astronomical surveys and the need to organise and reuse expensive derived data.
The BIE is the first platform for computational statistics designed explicitly
to enable Bayesian update and model comparison for astronomical problems.
Bayesian update is based on the representation of high-dimensional posterior
distributions using metric-ball-tree based kernel density estimation. Among its
algorithmic offerings, the BIE emphasises hybrid tempered MCMC schemes that
robustly sample multimodal posterior distributions in high-dimensional
parameter spaces. Moreover, the BIE implements a full persistence or
serialisation system that stores the full byte-level image of the running
inference and previously characterised posterior distributions for later use.
Two new algorithms to compute the marginal likelihood from the posterior
distribution, developed for and implemented in the BIE, enable model comparison
for complex models and data sets. Finally, the BIE was designed to be a
collaborative platform for applying Bayesian methodology to astronomy. It
includes an extensible, object-oriented framework that implements every aspect
of Bayesian inference. By providing a variety of
statistical algorithms for all phases of the inference problem, a scientist may
explore a variety of approaches with a single model and data implementation.
Additional technical details and download details are available from
http://www.astro.umass.edu/bie. The BIE is distributed under the GNU GPL.
Comment: Resubmitted version.
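The tempered MCMC idea the abstract emphasises can be shown with a generic sketch. This is not the BIE's actual implementation — the toy bimodal posterior, temperature ladder, and proposal width are assumptions for illustration — but it shows the mechanism: chains at inverse temperatures beta sample p(x)^beta, and periodic swap moves let the nearly-flat hot chains carry the beta = 1 chain across the barrier between well-separated modes.

```python
# Generic parallel-tempering sketch (illustrative; not the BIE's code).
import math
import random

def log_post(x):
    # Toy bimodal posterior: mixture of two unit-width Gaussians at +/-4.
    return math.log(math.exp(-0.5 * (x - 4.0)**2) +
                    math.exp(-0.5 * (x + 4.0)**2))

def tempered_mcmc(betas, steps, rng):
    xs = [0.0] * len(betas)   # one chain per inverse temperature
    cold = []
    for _ in range(steps):
        for i, beta in enumerate(betas):
            # Metropolis update of chain i against the tempered target p^beta.
            prop = xs[i] + rng.gauss(0.0, 1.5)
            if math.log(rng.random()) < beta * (log_post(prop) - log_post(xs[i])):
                xs[i] = prop
        # Propose swapping states between one pair of neighbouring chains.
        i = rng.randrange(len(betas) - 1)
        accept = (betas[i] - betas[i + 1]) * (log_post(xs[i + 1]) - log_post(xs[i]))
        if math.log(rng.random()) < accept:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        cold.append(xs[0])    # the beta = 1 chain samples the true posterior
    return cold

samples = tempered_mcmc([1.0, 0.5, 0.25, 0.1], 20000, random.Random(2))
```

A single Metropolis chain with this proposal width would stay trapped in one mode; the swap moves are what let the cold chain visit both, which is the robustness property the BIE's hybrid tempered schemes target in high-dimensional multimodal posteriors.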