An Emulator for the Lyman-alpha Forest
We present methods for interpolating between the 1-D flux power spectrum of
the Lyman-α forest, as output by cosmological hydrodynamic simulations.
Interpolation is necessary for cosmological parameter estimation due to the
limited number of simulations possible. We construct an emulator for the
Lyman-α forest flux power spectrum from small simulations using
Latin hypercube sampling and Gaussian process interpolation. We show that this
emulator has a typical accuracy of 1.5% and a worst-case accuracy of 4%, which
compares well to the current statistical error of 3 - 5% from BOSS
DR9. We compare to the previous state of the art, quadratic polynomial
interpolation. The Latin hypercube samples the entire volume of parameter
space, while quadratic polynomial emulation samples only lower-dimensional
subspaces. The Gaussian process provides an estimate of the emulation error and
we show using test simulations that this estimate is reasonable. We construct a
likelihood function and use it to show that the posterior constraints generated
using the emulator are unbiased. We show that our Gaussian process emulator has
lower emulation error than quadratic polynomial interpolation and thus produces
tighter posterior confidence intervals, which will be essential for future
Lyman-α surveys such as DESI.
Comment: 28 pages, 10 figures, accepted to JCAP with minor changes
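The emulator construction the abstract describes, a Latin hypercube design over parameter space followed by Gaussian process interpolation, can be sketched as follows. This is a minimal illustration, not the paper's pipeline: a toy scalar function stands in for the hydrodynamic simulation, and the dimensionality, kernel, and sample counts are placeholder assumptions.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Latin hypercube design: 30 points covering a toy 2-D parameter space,
# each coordinate stratified so the full volume is sampled.
sampler = qmc.LatinHypercube(d=2, seed=0)
theta = sampler.random(n=30)

def simulate(t):
    """Stand-in for an expensive simulation: a smooth scalar summary."""
    return np.sin(3.0 * t[:, 0]) + t[:, 1] ** 2

y = simulate(theta)

# Gaussian process emulator: interpolates between the sampled runs, and
# its predictive standard deviation doubles as an emulation-error estimate.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(theta, y)

# Query the emulator at new parameter points never simulated.
theta_new = sampler.random(n=5)
mean, std = gp.predict(theta_new, return_std=True)
```

In a real analysis the emulated quantity would be the flux power spectrum at each wavenumber rather than a single scalar, but the design-then-interpolate structure is the same.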
Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science
As the field of data science continues to grow, there will be an
ever-increasing demand for tools that make machine learning accessible to
non-experts. In this paper, we introduce the concept of tree-based pipeline
optimization for automating one of the most tedious parts of machine
learning---pipeline design. We implement an open source Tree-based Pipeline
Optimization Tool (TPOT) in Python and demonstrate its effectiveness on a
series of simulated and real-world benchmark data sets. In particular, we show
that TPOT can design machine learning pipelines that provide a significant
improvement over a basic machine learning analysis while requiring little to no
input nor prior knowledge from the user. We also address the tendency for TPOT
to design overly complex pipelines by integrating Pareto optimization, which
produces compact pipelines without sacrificing classification accuracy. As
such, this work represents an important step toward fully automating machine
learning pipeline design.
Comment: 8 pages, 5 figures, preprint to appear in GECCO 2016; edits not yet made from reviewer comments
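The Pareto-optimization step the abstract mentions, trading pipeline complexity against accuracy, can be illustrated with a short sketch. The candidate pipelines and scores below are invented placeholders, not TPOT's actual search output; the point is the non-dominated filtering that yields compact pipelines without sacrificing accuracy.

```python
# Hypothetical candidate pipelines: (name, number of operators, CV accuracy).
candidates = [
    ("logreg", 1, 0.86),
    ("pca+logreg", 2, 0.88),
    ("pca+poly+logreg", 3, 0.88),
    ("select+boost", 2, 0.91),
    ("select+poly+boost", 3, 0.90),
]

def pareto_front(cands):
    """Keep pipelines no other candidate dominates, i.e. none is at least as
    simple AND at least as accurate while strictly better on one axis."""
    front = []
    for name, size, acc in cands:
        dominated = any(
            s <= size and a >= acc and (s < size or a > acc)
            for _, s, a in cands
        )
        if not dominated:
            front.append((name, size, acc))
    return front

front = pareto_front(candidates)
```

Here the front keeps only the simplest pipeline and the most accurate one at each complexity level; the larger pipelines that gain no accuracy are discarded, which is exactly the pressure toward compactness described above.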
Mondrian Forests for Large-Scale Regression when Uncertainty Matters
Many real-world regression problems demand a measure of the uncertainty
associated with each prediction. Standard decision forests deliver efficient
state-of-the-art predictive performance, but high-quality uncertainty estimates
are lacking. Gaussian processes (GPs) deliver uncertainty estimates, but
scaling GPs to large-scale data sets comes at the cost of approximating the
uncertainty estimates. We extend Mondrian forests, first proposed by
Lakshminarayanan et al. (2014) for classification problems, to the large-scale
non-parametric regression setting. Using a novel hierarchical Gaussian prior
that dovetails with the Mondrian forest framework, we obtain principled
uncertainty estimates, while still retaining the computational advantages of
decision forests. Through a combination of illustrative examples, real-world
large-scale datasets, and Bayesian optimization benchmarks, we demonstrate that
Mondrian forests outperform approximate GPs on large-scale regression tasks and
deliver better-calibrated uncertainty assessments than decision-forest-based
methods.
Comment: Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS) 2016, Cadiz, Spain. JMLR: W&CP volume 51
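A rough feel for forest-based uncertainty estimates can be had by reading off the spread of per-tree predictions in an ordinary random forest. This is not the Mondrian forest construction itself (which couples a hierarchical Gaussian prior to the forest); it is a baseline sketch, and the way its spread can understate uncertainty far from the training data is the calibration weakness the paper addresses.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# One point inside the training range, one far outside it.
X_test = np.array([[0.0], [10.0]])

# Per-tree predictions; their mean is the usual forest output and their
# standard deviation is a crude, often under-dispersed, uncertainty proxy.
per_tree = np.stack([tree.predict(X_test) for tree in forest.estimators_])
mean = per_tree.mean(axis=0)
std = per_tree.std(axis=0)
```

At the out-of-range point every tree falls into its outermost leaf, so the per-tree spread reflects only bootstrap variation rather than genuine extrapolation uncertainty, whereas a GP (or a Mondrian forest with its Gaussian prior) would report growing uncertainty there.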