Type Ia Supernova Light Curve Inference: Hierarchical Bayesian Analysis in the Near Infrared
We present a comprehensive statistical analysis of the properties of Type Ia
SN light curves in the near infrared using recent data from PAIRITEL and the
literature. We construct a hierarchical Bayesian framework, incorporating
several uncertainties including photometric error, peculiar velocities, dust
extinction and intrinsic variations, for coherent statistical inference. SN Ia
light curve inferences are drawn from the global posterior probability of
parameters describing both individual supernovae and the population conditioned
on the entire SN Ia NIR dataset. The logical structure of the hierarchical
model is represented by a directed acyclic graph. Fully Bayesian analysis of
the model and data is enabled by an efficient MCMC algorithm exploiting the
conditional structure using Gibbs sampling. We apply this framework to the
JHK_s SN Ia light curve data. A new light curve model captures the observed
J-band light curve shape variations. The intrinsic variances in peak absolute
magnitudes are: sigma(M_J) = 0.17 +/- 0.03, sigma(M_H) = 0.11 +/- 0.03, and
sigma(M_Ks) = 0.19 +/- 0.04. We describe the first quantitative evidence for
correlations between the NIR absolute magnitudes and J-band light curve shapes,
and demonstrate their utility for distance estimation. The average residual in
the Hubble diagram for the training-set SNe at cz > 2000 km/s is 0.10 mag. The
new application of bootstrap cross-validation to SN Ia light curve inference
tests the sensitivity of the model fit to the finite sample and estimates the
prediction error at 0.15 mag. These results demonstrate that SN Ia NIR light
curves are as effective as optical light curves, and, because they are less
vulnerable to dust absorption, they have great potential as precise and
accurate cosmological distance indicators.
Comment: 24 pages, 15 figures, 4 tables. Accepted for publication in ApJ. Corrected typo, added references, minor edits.
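The kind of hierarchical inference described above can be sketched with a toy normal-normal Gibbs sampler: latent per-object magnitudes drawn around a population mean, observed with photometric noise. This is a minimal illustration only; all numbers are made up, and the intrinsic scatter is held fixed rather than sampled as in the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy peak magnitudes for n "supernovae" (values are illustrative,
# not the paper's measurements).
n = 40
true_mu, true_tau = -18.4, 0.15      # population mean and intrinsic scatter
M_true = rng.normal(true_mu, true_tau, n)
sigma = np.full(n, 0.10)             # per-object photometric errors
y = rng.normal(M_true, sigma)

# Gibbs sampler for the conjugate normal-normal hierarchy
#   M_i | mu ~ N(mu, tau^2),   y_i | M_i ~ N(M_i, sigma_i^2)
# with a flat prior on mu; tau is held fixed here for brevity (the full model
# would sample it too, e.g. from an inverse-gamma full conditional).
tau = true_tau
mu = y.mean()
mu_draws = []
for it in range(2000):
    # 1) latent magnitudes M_i from their Gaussian full conditional
    prec = 1.0 / sigma**2 + 1.0 / tau**2
    mean = (y / sigma**2 + mu / tau**2) / prec
    M = rng.normal(mean, 1.0 / np.sqrt(prec))
    # 2) population mean mu from its Gaussian full conditional
    mu = rng.normal(M.mean(), tau / np.sqrt(n))
    if it >= 500:                    # discard burn-in
        mu_draws.append(mu)

mu_hat = np.mean(mu_draws)
print(mu_hat)
```

Exploiting the conditional structure this way is what makes the Gibbs sweep cheap: each full conditional is a closed-form Gaussian draw.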
Non-Parametric Learning for Monocular Visual Odometry
This thesis addresses the problem of incremental localization from visual information, a scenario commonly known as visual odometry. Current visual odometry algorithms are heavily dependent on camera calibration, using a pre-established geometric model to provide the transformation between input (optical flow estimates) and output (vehicle motion estimates) information. A novel approach to visual odometry is proposed in this thesis in which the need for camera calibration, or even for a geometric model, is circumvented by the use of machine learning principles and techniques. A non-parametric Bayesian regression technique, the Gaussian Process (GP), is used to select the most probable transformation function hypothesis from input to output, based on training data collected prior to and during navigation. Besides eliminating the need for a geometric model and traditional camera calibration, this approach also allows for scale recovery even in a monocular configuration, and provides a natural treatment of uncertainties due to the probabilistic nature of GPs. Several extensions to the traditional GP framework are introduced and discussed in depth, and they constitute the core of the contributions of this thesis to the machine learning and robotics community. The proposed framework is tested in a wide variety of scenarios, ranging from urban and off-road ground vehicles to unconstrained 3D unmanned aircraft. The results show a significant improvement over traditional visual odometry algorithms, and also surpass results obtained using other sensors, such as laser scanners and IMUs. The incorporation of these results into a SLAM scenario, using an Exact Sparse Information Filter (ESIF), is shown to decrease global uncertainty by exploiting revisited areas of the environment. Finally, a technique for the automatic segmentation of dynamic objects is presented, as a way to increase the robustness of image information and further improve visual odometry results.
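The core idea, replacing a calibrated geometric model with GP regression from flow observations to motion estimates, can be sketched with the textbook GP equations on a scalar toy problem. The 1-D "flow feature", the sine transfer function, the kernel, and its length-scale are illustrative assumptions, not the thesis's actual multivariate setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Hypothetical training pairs: a 1-D "optical flow" feature -> vehicle speed.
# The thesis maps multivariate flow to multivariate motion; this scalar toy
# just shows the GP machinery.
X = rng.uniform(0.0, 3.0, (60, 1))
y = 2.0 * np.sin(X[:, 0]) + 0.05 * rng.normal(size=60)  # unknown transfer fn

# Standard GP regression: posterior mean and variance at test inputs.
noise = 0.05
K = rbf(X, X) + noise**2 * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Xs = np.array([[1.0], [2.0]])
Ks = rbf(Xs, X)
mean = Ks @ alpha                              # predictive mean
v = np.linalg.solve(L, Ks.T)
var = np.diag(rbf(Xs, Xs)) - (v**2).sum(0)     # predictive variance

print(mean, var)
```

The predictive variance is what gives the "natural treatment of uncertainties" mentioned above: far from training data it reverts to the prior, flagging unreliable motion estimates.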
Non-parametric Estimation of Stochastic Differential Equations with Sparse Gaussian Processes
The application of Stochastic Differential Equations (SDEs) to the analysis
of temporal data has attracted increasing attention, due to their ability to
describe complex dynamics with physically interpretable equations. In this
paper, we introduce a non-parametric method for estimating the drift and
diffusion terms of SDEs from a densely observed discrete time series. The use
of Gaussian processes as priors permits working directly in a function-space
view, so the inference takes place in that space. To cope with
the computational complexity entailed by the use of Gaussian processes, a
sparse Gaussian process approximation is provided. This approximation permits
the efficient computation of predictions for the drift and diffusion terms by
using a distribution over a small subset of pseudo-samples. The proposed method
has been validated using both simulated data and real data from economics and
paleoclimatology. The application of the method to real data demonstrates its
ability to capture the behaviour of complex systems.
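The drift-estimation step can be illustrated on a simulated Ornstein-Uhlenbeck process: under an Euler-Maruyama view the increments are noisy pointwise observations of the drift, and a GP with a few pseudo-inputs regresses them. This is a subset-of-regressors sketch standing in for the paper's sparse approximation; the OU process, pseudo-input grid, and kernel length-scale are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an Ornstein-Uhlenbeck process dX = -theta*X dt + s dW
# (an illustrative SDE with known drift, not data from the paper).
theta, s, dt, n = 1.5, 0.3, 0.01, 20000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = x[t] - theta * x[t] * dt + s * np.sqrt(dt) * rng.normal()

# Under Euler-Maruyama, dX/dt given X = x is a noisy observation of the
# drift f(x), so drift estimation reduces to a regression problem.
X, dXdt = x[:-1], np.diff(x) / dt

# Subset-of-regressors GP fit through a few pseudo-inputs Z, a simple
# stand-in for the sparse approximation used in the paper.
def rbf(A, B, ell=0.3):
    return np.exp(-0.5 * (A - B.T) ** 2 / ell**2)

Z = np.linspace(-0.6, 0.6, 15)[:, None]        # pseudo-input locations
noise = s / np.sqrt(dt)                        # std of the dX/dt increments
Kzz = rbf(Z, Z)
Kzx = rbf(Z, X[:, None])
Amat = noise**2 * Kzz + Kzx @ Kzx.T
w = np.linalg.solve(Amat + 1e-8 * np.eye(len(Z)), Kzx @ dXdt)

drift_04 = (rbf(np.array([[0.4]]), Z) @ w)[0]  # estimate of f(0.4) = -0.6
print(drift_04)
```

All heavy linear algebra involves only the 15 pseudo-inputs, which is the point of the sparse approximation: cost scales with the pseudo-sample count, not the length of the time series.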
Stochastic Variational Inference with Gradient Linearization
Variational inference has experienced a recent surge in popularity owing to
stochastic approaches, which have yielded practical tools for a wide range of
model classes. A key benefit is that stochastic variational inference obviates
the tedious process of deriving analytical expressions for closed-form variable
updates. Instead, one simply needs to derive the gradient of the log-posterior,
which is often much easier. Yet for certain model classes, the log-posterior
itself is difficult to optimize using standard gradient techniques. One such
example is random field models, where optimization based on gradient
linearization has proven popular, since it speeds up convergence significantly
and can avoid poor local optima. In this paper we propose stochastic
variational inference with gradient linearization (SVIGL). It is as
convenient as standard stochastic variational inference: all that is required
is a local linearization of the energy gradient. Its benefit over stochastic
variational inference with conventional gradient methods is a clear improvement
in convergence speed, while yielding comparable or even better variational
approximations in terms of KL divergence. We demonstrate the benefits of SVIGL
in three applications: Optical flow estimation, Poisson-Gaussian denoising, and
3D surface reconstruction.
Comment: To appear at CVPR 201
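Plain reparameterized stochastic variational inference, the baseline SVIGL improves on, can be sketched on a toy Gaussian target whose exact posterior is known. SVIGL would additionally replace the raw energy gradient with a local linearization; the clipped step sizes, learning-rate schedule, and target here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy target: posterior over the mean of Gaussian data with unit variance and
# a flat prior, so the exact posterior is N(ybar, 1/n).
y = rng.normal(2.0, 1.0, 100)
n, ybar = len(y), y.mean()

def grad_log_post(theta):
    return np.sum(y - theta)         # d/dtheta log p(theta | y)

# Reparameterized SVI for q = N(m, exp(log_s)^2): sample
# theta = m + exp(log_s)*eps and follow noisy ELBO gradients. SVIGL would
# linearize grad_log_post locally; here we use the raw gradient with
# clipped steps for stability.
m, log_s, lr = 0.0, 0.0, 0.01
for it in range(4000):
    s = np.exp(log_s)
    eps = rng.normal()
    g = grad_log_post(m + s * eps)
    m += np.clip(lr * g, -0.5, 0.5)
    log_s += np.clip(lr * (g * s * eps + 1.0), -0.1, 0.1)  # +1: entropy term
    lr = 0.01 / (1.0 + it / 500.0)

print(m, np.exp(log_s))              # should approach ybar and 1/sqrt(n)=0.1
```

Note that the only model-specific ingredient is `grad_log_post`, which is the convenience the abstract highlights: no closed-form updates need to be derived.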
Most Likely Separation of Intensity and Warping Effects in Image Registration
This paper introduces a class of mixed-effects models for joint modeling of
spatially correlated intensity variation and warping variation in 2D images.
Spatially correlated intensity variation and warp variation are modeled as
random effects, resulting in a nonlinear mixed-effects model that enables
simultaneous estimation of template and model parameters by optimization of the
likelihood function. We propose an algorithm for fitting the model which
alternates estimation of variance parameters and image registration. This
approach avoids the potential estimation bias in the template estimate that
arises when treating registration as a preprocessing step. We apply the model
to datasets of facial images and 2D brain magnetic resonance images to
illustrate the simultaneous estimation and prediction of intensity and warp
effects.
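The paper's alternation between registration and model estimation can be illustrated on 1-D toy signals, with integer shifts standing in for the smooth warps and a simple template mean standing in for the variance-parameter updates; everything here is an illustrative simplification of the actual mixed-effects model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1-D "images": integer-shifted, noisy copies of an unknown template
# (integer shifts stand in for the paper's smooth warps).
t = np.linspace(0.0, 1.0, 200)
template = np.exp(-0.5 * ((t - 0.5) / 0.05) ** 2)
shifts_true = rng.integers(-15, 16, size=8)
imgs = np.stack([np.roll(template, k) + 0.05 * rng.normal(size=t.size)
                 for k in shifts_true])

# Alternate (a) registration: best shift of each image against the current
# template estimate, and (b) template update: mean of the back-shifted
# images. Averaging without registration would blur the template, which is
# the kind of bias the joint model avoids.
est = imgs.mean(0)
for _ in range(5):
    shifts = np.array([min(range(-30, 31),
                           key=lambda k: ((np.roll(im, -k) - est) ** 2).sum())
                       for im in imgs])
    est = np.mean([np.roll(im, -k) for im, k in zip(imgs, shifts)], axis=0)

print(shifts - shifts_true)          # recovered shifts, up to a common offset
```

The shifts are only identifiable up to a common offset, a toy analogue of the identifiability constraints a full warp model must impose.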
The Eccentricity Distribution of Short-Period Planet Candidates Detected by Kepler in Occultation
We characterize the eccentricity distribution of a sample of ~50 short-period
planet candidates using transit and occultation measurements from NASA's Kepler
Mission. First, we evaluate the sensitivity of our hierarchical Bayesian
modeling and test its robustness to model misspecification using simulated
data. When analyzing actual data assuming a Rayleigh distribution for
eccentricity, we find that the posterior mode for the dispersion parameter is
…. We find that a two-component Gaussian mixture model for … and … provides a
better model than either a Rayleigh or a Beta distribution. Based on our
favored model, we find that … of planet candidates in our sample come from a
population with an eccentricity distribution characterized by a small
dispersion (…), and … come from a population with a larger dispersion (…).
Finally, we investigate how the eccentricity distribution
correlates with selected planet and host star parameters. We find evidence that
suggests systems around higher metallicity stars and planet candidates with
smaller radii come from a more complex eccentricity distribution.
Comment: Accepted for publication in Ap
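The single-population Rayleigh case can be sketched by recovering the dispersion of simulated eccentricities with a grid posterior. The dispersion value, sample size, and flat prior are illustrative assumptions; measurement noise and the mixture model from the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy population: eccentricities drawn from a Rayleigh distribution whose
# dispersion we then recover (sigma = 0.05 is illustrative, not the paper's
# result).
sigma_true = 0.05
e = rng.rayleigh(sigma_true, size=50)

# Grid posterior for sigma under a flat prior; the Rayleigh log-likelihood:
#   log p(e | sigma) = sum_i [ log(e_i / sigma^2) - e_i^2 / (2 sigma^2) ]
grid = np.linspace(0.01, 0.2, 500)
loglik = np.array([np.sum(np.log(e / s**2) - e**2 / (2 * s**2)) for s in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()

sigma_mode = grid[post.argmax()]
print(sigma_mode)
```

A full hierarchical treatment would additionally marginalize over per-planet measurement uncertainties rather than conditioning on exact eccentricities.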