Bayesian Methods for Exoplanet Science
Exoplanet research is carried out at the limits of the capabilities of
current telescopes and instruments. The studied signals are weak, and often
embedded in complex systematics from instrumental, telluric, and astrophysical
sources. Combining repeated observations of periodic events, simultaneous
observations with multiple telescopes, different observation techniques, and
existing information from theory and prior research can help to disentangle the
systematics from the planetary signals, and offers synergistic advantages over
analysing observations separately. Bayesian inference provides a
self-consistent statistical framework that addresses both the necessity for
complex systematics models, and the need to combine prior information and
heterogeneous observations. This chapter offers a brief introduction to
Bayesian inference in the context of exoplanet research, with focus on time
series analysis, and finishes with an overview of a set of freely available
programming libraries.
Comment: Invited review
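The workflow the abstract describes — a parametric model of a periodic signal, priors on its parameters, and posterior sampling — can be sketched with a plain Metropolis sampler. The sinusoidal model, flat priors, and sampler settings below are illustrative assumptions for this sketch, not code from the chapter or its libraries.

```python
# Minimal sketch of Bayesian inference for a noisy periodic time series:
# fit the amplitude and offset of a sinusoid with a Metropolis sampler.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": periodic signal (known period, for simplicity)
# with unknown amplitude and offset, plus Gaussian noise.
t = np.linspace(0.0, 10.0, 200)
true_amp, true_off, sigma = 1.5, 0.3, 0.4
y = true_amp * np.sin(2 * np.pi * t / 5.0) + true_off \
    + rng.normal(0, sigma, t.size)

def log_posterior(theta):
    amp, off = theta
    # Flat priors within broad bounds (an assumption for this sketch).
    if not (-10 < amp < 10 and -10 < off < 10):
        return -np.inf
    model = amp * np.sin(2 * np.pi * t / 5.0) + off
    return -0.5 * np.sum((y - model) ** 2) / sigma**2

# Plain Metropolis sampling: propose a jump, accept with the usual ratio.
theta = np.array([0.0, 0.0])
lp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

posterior = np.array(samples[5000:])  # discard burn-in
print(posterior.mean(axis=0))
```

In practice the libraries the chapter surveys provide far better samplers (affine-invariant ensembles, nested sampling), but the accept/reject core is the same.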
Pseudospectral Model Predictive Control under Partially Learned Dynamics
Trajectory optimization of a controlled dynamical system is an essential part
of autonomy, however many trajectory optimization techniques are limited by the
fidelity of the underlying parametric model. In the field of robotics, a lack
of model knowledge can be overcome with machine learning techniques, utilizing
measurements to build a dynamical model from the data. This paper aims to take
the middle ground between these two approaches by introducing a semi-parametric
representation of the underlying system dynamics. Our goal is to leverage the
considerable information contained in a traditional physics based model and
combine it with a data-driven, non-parametric regression technique known as a
Gaussian Process. Integrating this semi-parametric model with model predictive
pseudospectral control, we demonstrate this technique on both a cart pole and
quadrotor simulation with unmodeled damping and parametric error. In order to
manage parametric uncertainty, we introduce an algorithm that utilizes Sparse
Spectrum Gaussian Processes (SSGP) for online learning after each rollout. We
implement this online learning technique on a cart pole and quadrotor, then
demonstrate the use of online learning and obstacle avoidance for Dubins
vehicle dynamics.
Comment: Accepted but withdrawn from AIAA Scitech 201
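The semi-parametric idea above can be illustrated in one dimension: a physics-based model captures most of the dynamics, and a Gaussian Process (a hand-rolled RBF regression here, for self-containment) learns the residual left by an unmodeled damping term. All system constants and kernel settings are illustrative assumptions, not the paper's models.

```python
# Semi-parametric dynamics sketch: parametric physics model + GP residual.
import numpy as np

rng = np.random.default_rng(1)

def true_accel(v):
    # "Real" system: includes a quadratic drag term the model omits.
    return -9.81 - 0.5 * v * np.abs(v)

def parametric_accel(v):
    # Physics-based model: gravity only, no damping (parametric error).
    return -9.81 + 0.0 * v

# Training data: measured accelerations minus the parametric prediction.
V = rng.uniform(-5, 5, 30)
resid = true_accel(V) - parametric_accel(V) + rng.normal(0, 0.01, V.size)

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# GP posterior mean of the residual (zero-mean prior, small noise jitter).
K = rbf(V, V) + 1e-4 * np.eye(V.size)
alpha = np.linalg.solve(K, resid)

def semi_parametric_accel(v):
    v = np.atleast_1d(v)
    return parametric_accel(v) + rbf(v, V) @ alpha

print(semi_parametric_accel(2.0), true_accel(np.array([2.0])))
```

The paper's Sparse Spectrum GP approximates this same posterior with random Fourier features so it can be updated cheaply online after each rollout.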
Learning Discriminative Stein Kernel for SPD Matrices and Its Applications
Stein kernel has recently shown promising performance on classifying images
represented by symmetric positive definite (SPD) matrices. It evaluates the
similarity between two SPD matrices through their eigenvalues. In this paper,
we argue that directly using the original eigenvalues may be problematic
because: i) Eigenvalue estimation becomes biased when the number of samples is
inadequate, which may lead to unreliable kernel evaluation; ii) More
importantly, eigenvalues only reflect the property of an individual SPD matrix.
They are not necessarily optimal for computing Stein kernel when the goal is to
discriminate different sets of SPD matrices. To address the two issues in one
shot, we propose a discriminative Stein kernel, in which an extra parameter
vector is defined to adjust the eigenvalues of the input SPD matrices. The
optimal parameter values are sought by optimizing a proxy of classification
performance. To show the generality of the proposed method, three different
kernel learning criteria that are commonly used in the literature are employed
respectively as a proxy. A comprehensive experimental study is conducted on a
variety of image classification tasks to compare our proposed discriminative
Stein kernel with the original Stein kernel and other commonly used methods for
evaluating the similarity between SPD matrices. The experimental results
demonstrate that the discriminative Stein kernel attains greater
discrimination and aligns better with classification tasks by altering the
eigenvalues, and therefore achieves higher classification performance than the
original Stein kernel and other commonly used methods.
Comment: 13 pages
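The eigenvalue-adjustment idea can be sketched directly: each SPD matrix is eigendecomposed, its eigenvalues re-weighted by a power vector, and the Stein kernel evaluated on the adjusted matrices. The power values below are fixed by hand for illustration; in the paper they are learned by optimizing a classification proxy.

```python
# Sketch of the Stein kernel with discriminative eigenvalue adjustment.
import numpy as np

def stein_divergence(X, Y):
    # S(X, Y) = log det((X + Y)/2) - (1/2) log det(X Y)
    _, ld_mid = np.linalg.slogdet((X + Y) / 2)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def adjust(X, alpha):
    # Re-scale the eigenvalues of an SPD matrix by component-wise powers,
    # keeping the eigenvectors: U diag(w**alpha) U^T.
    w, U = np.linalg.eigh(X)
    return (U * w**alpha) @ U.T

def stein_kernel(X, Y, theta=1.0, alpha=None):
    if alpha is not None:
        X, Y = adjust(X, alpha), adjust(Y, alpha)
    return np.exp(-theta * stein_divergence(X, Y))

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, -0.2], [-0.2, 1.2]])
print(stein_kernel(A, B))                              # original kernel
print(stein_kernel(A, B, alpha=np.array([1.2, 0.8])))  # adjusted kernel
```

The divergence is zero for identical inputs, so the kernel of a matrix with itself is exactly one, which the adjustment preserves.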
A survey of outlier detection methodologies
Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can also identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we present a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
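One of the classic statistical techniques in the families such surveys cover can be shown in a few lines: flag points whose robust z-score (based on the median and MAD rather than the mean and standard deviation, so the outliers themselves do not mask the detection) exceeds a threshold. The threshold of 3.5 is a common convention, not a rule, and the data are invented for illustration.

```python
# Robust z-score outlier detection (median/MAD based).
import numpy as np

def robust_zscore_outliers(x, threshold=3.5):
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    # 0.6745 scales the MAD to be consistent with the standard deviation
    # under a normal distribution.
    scores = 0.6745 * (x - med) / mad
    return np.abs(scores) > threshold

data = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0, 10.0]
print(robust_zscore_outliers(data))  # only 25.0 is flagged
```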
Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data
Due to their causal semantics, Bayesian networks (BNs) have been widely
employed to discover underlying data relationships in exploratory studies,
such as brain research. Despite their success in modeling the probability
distribution of variables, a BN is naturally a generative model, which is not
necessarily discriminative. This may cause subtle but critical network changes
that are of investigative value across populations to be overlooked. In this
paper, we
propose to improve the discriminative power of BN models for continuous
variables from two different perspectives. This brings two general
discriminative learning frameworks for Gaussian Bayesian networks (GBN). In the
first framework, we employ Fisher kernel to bridge the generative models of GBN
and the discriminative classifiers of SVMs, and convert the GBN parameter
learning to Fisher kernel learning via minimizing a generalization error bound
of SVMs. In the second framework, we employ the max-margin criterion and build
it directly upon GBN models to explicitly optimize the classification
performance of the GBNs. The advantages and disadvantages of the two frameworks
are discussed and experimentally compared. Both of them demonstrate strong
power in learning discriminative parameters of GBNs for neuroimaging based
brain network analysis, as well as maintaining reasonable representation
capacity. The contributions of this paper also include a new Directed Acyclic
Graph (DAG) constraint with theoretical guarantee to ensure the graph validity
of GBN.
Comment: 16 pages and 5 figures for the article (excluding appendix)
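The first framework above can be illustrated on a toy scale: a two-node Gaussian Bayesian network (X1 -> X2 with a linear-Gaussian CPD) whose Fisher score — the gradient of the per-sample log-likelihood with respect to the edge weight — maps each sample to a feature usable by a kernel classifier. The network size, parameters, and the simplified kernel are illustrative assumptions, not the paper's construction.

```python
# Toy Fisher-kernel sketch for a two-node Gaussian Bayesian network.
import numpy as np

b, sigma = 0.8, 1.0  # GBN parameters: X2 = b*X1 + N(0, sigma^2)

def fisher_score(x1, x2):
    # d/db log N(x2; b*x1, sigma^2) = (x2 - b*x1) * x1 / sigma^2:
    # the (one-dimensional) Fisher feature of the sample (x1, x2).
    return (x2 - b * x1) * x1 / sigma**2

def fisher_kernel(s, t):
    # Linear kernel on Fisher scores (the Fisher information normalization
    # is dropped here for simplicity).
    return fisher_score(*s) * fisher_score(*t)

sample_a, sample_b = (1.0, 1.5), (2.0, 1.0)
print(fisher_kernel(sample_a, sample_b))
```

With a multi-edge GBN, the score becomes a vector of partial derivatives over all edge weights, and an SVM trained on this kernel is what ties the generative GBN to a discriminative objective.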