40 research outputs found
Decomposing feature-level variation with Covariate Gaussian Process Latent Variable Models
The interpretation of complex high-dimensional data typically requires the
use of dimensionality reduction techniques to extract explanatory
low-dimensional representations. However, in many real-world problems these
representations may not be sufficient to aid interpretation on their own, and
it would be desirable to interpret the model in terms of the original features
themselves. Our goal is to characterise how feature-level variation depends on
latent low-dimensional representations, external covariates, and non-linear
interactions between the two. In this paper, we propose to achieve this through
a structured kernel decomposition in a hybrid Gaussian Process model which we
call the Covariate Gaussian Process Latent Variable Model (c-GPLVM). We
demonstrate the utility of our model on simulated examples and applications in
disease progression modelling from high-dimensional gene expression data in the
presence of additional phenotypes. In each setting we show how the c-GPLVM can
extract low-dimensional structures from high-dimensional data sets whilst
allowing a breakdown of feature-level variability that is not present in other
commonly used dimensionality reduction approaches.
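The structured kernel decomposition described above can be illustrated with a minimal sketch: an additive combination of a latent-variable kernel, a covariate kernel, and their elementwise (Schur) product for the non-linear interaction term. This is a toy illustration of the general idea, not the authors' c-GPLVM implementation; the function names and toy data are invented for this example.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential (RBF) kernel between two 1-D input arrays
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def structured_kernel(z, c):
    """Additive + interaction decomposition (schematic):
    k((z,c),(z',c')) = k_z(z,z') + k_c(c,c') + k_z(z,z') * k_c(c,c')."""
    Kz = rbf(z, z)          # variation explained by the latent coordinate
    Kc = rbf(c, c)          # variation explained by the external covariate
    return Kz + Kc + Kz * Kc  # interaction via the elementwise product

z = np.linspace(-1.0, 1.0, 5)           # toy latent coordinates
c = np.array([0.0, 0.0, 1.0, 1.0, 2.0])  # toy external covariate
K = structured_kernel(z, c)              # valid covariance: sums and
                                         # Schur products of PSD kernels are PSD
```

Because each component is itself a valid kernel, the decomposition lets one attribute feature-level variance to latent structure, covariates, or their interaction by inspecting the terms separately.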
First 230 GHz VLBI Fringes on 3C 279 using the APEX Telescope
We report on a 230 GHz very long baseline interferometry (VLBI) fringe
finder observation of the blazar 3C 279 with the APEX telescope in Chile, the
phased Submillimeter Array (SMA), and the SMT of the Arizona Radio Observatory
(ARO). We installed VLBI equipment and measured the APEX station position to 1
cm accuracy (1 sigma). We then observed 3C 279 on 2012 May 7 in a 5 hour 230
GHz VLBI track with baseline lengths of 2800 Mλ to 7200 Mλ and
a finest fringe spacing of 28.6 micro-arcseconds. Fringes were detected on all
baselines with SNRs of 12 to 55 in 420 s. The correlated flux density on the
longest baseline was ~0.3 Jy/beam, out of a total flux density of 19.8 Jy.
Visibility data suggest an emission region <38 uas in size, and at least two
components, possibly polarized. We find a lower limit of the brightness
temperature of the inner jet region of about 10^10 K. Lastly, we find an upper
limit of 20% on the linear polarization fraction at a fringe spacing of ~38
uas. With APEX the angular resolution of 230 GHz VLBI improves to 28.6 uas.
This allows one to resolve the last-photon ring around the Galactic Center
black hole event horizon, expected to be 40 uas in diameter, and probe radio
jet launching at unprecedented resolution, down to a few gravitational radii in
galaxies like M 87. To probe the structure in the inner parsecs of 3C 279 in
detail, follow-up observations with APEX and five other mm-VLBI stations have
been conducted (March 2013) and are being analyzed. Comment: accepted for publication in A&A.
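The quoted resolution follows directly from the baseline length expressed in wavelengths: the finest fringe spacing is approximately 1/B radians for a baseline of B wavelengths. A short sketch (illustrative only; the helper name is invented here) reproduces the 28.6 μas figure from the 7200 Mλ baseline:

```python
import math

# Micro-arcseconds per radian: (180/pi) deg * 3600 arcsec * 1e6
MUAS_PER_RAD = 180 / math.pi * 3600 * 1e6

def fringe_spacing_uas(baseline_megalambda):
    # Finest fringe spacing theta ~ 1/B, with B in units of the wavelength
    return MUAS_PER_RAD / (baseline_megalambda * 1e6)

print(fringe_spacing_uas(7200))  # longest baseline: ~28.6 uas
print(fringe_spacing_uas(2800))  # shortest baseline: ~73.7 uas
```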
Bayesian statistics and modelling
Bayesian statistics is an approach to data analysis based on Bayes' theorem, where available knowledge about parameters in a statistical model is updated with the information in observed data. The background knowledge is expressed as a prior distribution and combined with observational data in the form of a likelihood function to determine the posterior distribution. The posterior can also be used for making predictions about future events. This Primer describes the stages involved in Bayesian analysis, from specifying the prior and data models to deriving inference, model checking and refinement. We discuss the importance of prior and posterior predictive checking, selecting a proper technique for sampling from a posterior distribution, variational inference and variable selection. Examples of successful applications of Bayesian analysis across various research fields are provided, including in social sciences, ecology, genetics, medicine and more. We propose strategies for reproducibility and reporting standards, outlining an updated WAMBS (when to Worry and how to Avoid the Misuse of Bayesian Statistics) checklist. Finally, we outline the impact of Bayesian analysis on artificial intelligence, a major goal in the next decade.
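The prior-to-posterior update described above is easiest to see in a conjugate case. A minimal sketch (a standard Beta-Binomial example, not taken from the Primer itself): a Beta(a, b) prior on a success probability, combined with k successes in n trials, yields a Beta(a + k, b + n - k) posterior.

```python
# Conjugate Beta-Binomial update: prior Beta(a, b) on a success probability,
# likelihood = Binomial data with k successes in n trials.
def beta_posterior(a, b, k, n):
    # Posterior parameters follow by adding observed counts to the prior counts
    return a + k, b + (n - k)

# Uniform prior Beta(1, 1), then observe 7 successes in 10 trials
a_post, b_post = beta_posterior(1, 1, 7, 10)   # -> Beta(8, 4)
posterior_mean = a_post / (a_post + b_post)    # 8/12, between prior mean and data rate
```

The posterior mean sits between the prior mean (0.5) and the observed frequency (0.7), illustrating how the prior is updated by, but also tempers, the data.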
De Tribvs In Terra Testibvs Ad I. Joh. V. 8 / Praesidente Jo. Lavr. Moshemio S. S. Theol. D. Ad D. Et P. P. Ad D. VII. Jvlii M D CCXXV. ... Pvblice Dispvtabit Avctor Henr. Richard. Maertens S. Litter. Cvltor
Title page (1)
De Tribvs In Terra Testibvs (3)
Contribution (31)