A Noninformative Prior on a Space of Distribution Functions
In a given problem, the Bayesian statistical paradigm requires the
specification of a prior distribution that quantifies relevant information
about the unknowns of main interest external to the data. In cases where little
such information is available, the problem under study may possess an
invariance under a transformation group that encodes a lack of information,
leading to a unique prior---this idea was explored at length by E.T. Jaynes.
Previous successful examples have included location-scale invariance under
linear transformation, multiplicative invariance of the rate at which events in
a counting process are observed, and the derivation of the Haldane prior for a
Bernoulli success probability. In this paper we show that this method can be
extended, by generalizing Jaynes, in two ways: (1) to yield families of
approximately invariant priors, and (2) to the infinite-dimensional setting,
yielding families of priors on spaces of distribution functions. Our results
can be used to describe conditions under which a particular Dirichlet Process
posterior arises from an optimal Bayesian analysis, in the sense that
invariances in the prior and likelihood lead to one and only one posterior
distribution.
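As an illustrative sketch only (not the paper's construction), the Dirichlet Process posterior mentioned above can be sampled approximately via truncated stick-breaking; the function name `dp_posterior_sample`, the truncation level, and the base-measure sampler are all hypothetical names introduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_posterior_sample(data, alpha, base_sampler, n_atoms=200):
    """Draw one approximate sample from a Dirichlet Process posterior,
    DP(alpha + n, (alpha * G0 + sum of point masses at the data) / (alpha + n)),
    via a truncated stick-breaking (Sethuraman) construction."""
    n = len(data)
    total = alpha + n
    # Stick-breaking weights for the truncated representation.
    betas = rng.beta(1.0, total, size=n_atoms)
    weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    # Each atom comes from the posterior base measure: with probability
    # alpha/total from the prior base measure G0, otherwise uniformly
    # from the observed data points.
    from_prior = rng.random(n_atoms) < alpha / total
    atoms = np.where(from_prior,
                     base_sampler(n_atoms),
                     rng.choice(data, size=n_atoms))
    return atoms, weights

data = np.array([0.1, 0.2, 1.5, 1.7, 1.8])
atoms, weights = dp_posterior_sample(
    data, alpha=1.0, base_sampler=lambda k: rng.normal(size=k))
```

The truncation means the weights sum to slightly less than one; increasing `n_atoms` shrinks that gap.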
Eukaryotic translation initiation machinery can operate in a prokaryotic-like mode without eIF2
Unlike in prokaryotes, within all eukaryotic cells a specialized eukaryotic initiation factor 2 (eIF2), in the form of the ternary complex eIF2*GTP*Met-tRNAiMet, delivers the initiator tRNA to the ribosome [1]. Phosphorylation of eIF2 is known to be central to the global regulation of protein synthesis under stress conditions and infection [2]. Another distinctive feature of eukaryotic translation is scanning of mRNA 5'-leaders, whose evolutionary origin may be relevant to the appearance of eIF2 in eukaryotes. Translation initiation on the hepatitis C virus (HCV) internal ribosome entry site (IRES) occurs without scanning [3,4]. Whether these unique features of the HCV IRES account for the formation of the final 80S initiation complex is unknown. Here we show that HCV IRES-directed translation can occur without either eIF2 or its GTPase-activating protein eIF5. In addition to the general eIF2- and eIF5-dependent pathway of 80S complex assembly, the HCV IRES makes use of a prokaryotic-like pathway which involves eIF5B, the analogue of bacterial IF2 [5,6], instead of eIF2. This switch from a eukaryotic-like mode of AUG selection to a "bacterial" one occurs when eIF2 is inactivated by phosphorylation, a mechanism by which host cells counteract infection. The relative resistance of HCV IRES-directed translation to eIF2 phosphorylation may represent one more line of defense used by this virus against host antiviral responses and may contribute to the well-known resistance of HCV to interferon-based therapy.
Learning Contact Dynamics using Physically Structured Neural Networks
Learning physically structured representations of dynamical systems that include contact between different objects is an important problem for learning-based approaches in robotics. Black-box neural networks can learn to approximately represent discontinuous dynamics, but they typically require large quantities of data and often suffer from pathological behaviour when forecasting over longer time horizons. In this work, we use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects. We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations, in settings that are traditionally difficult for black-box approaches and recent physics-inspired neural networks. Our results indicate that an idealised form of touch feedback, which is heavily relied upon by biological systems, is a key component of making this learning problem tractable. Together with the inductive biases introduced through the network architectures, our techniques enable accurate learning of contact dynamics from observations.
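As a minimal illustration of the kind of hybrid smooth/discontinuous system this abstract describes (not the paper's learned model), consider an event-based simulation of a one-dimensional bouncing ball; `simulate_bounce` and its parameters are hypothetical names introduced here:

```python
import numpy as np

def simulate_bounce(h0, v0, dt, steps, e=0.8, g=9.81):
    """Event-based integration of a 1-D bouncing ball: smooth free-fall
    dynamics punctuated by discontinuous contact events (velocity
    reversal with coefficient of restitution e). Black-box models must
    approximate this discontinuity; structured models can build it in."""
    h, v = h0, v0
    traj = []
    for _ in range(steps):
        h += v * dt
        v -= g * dt
        if h < 0.0:        # contact event: clamp height, reflect and damp velocity
            h = 0.0
            v = -e * v
        traj.append((h, v))
    return np.array(traj)

traj = simulate_bounce(h0=1.0, v0=0.0, dt=1e-3, steps=5000)
```

The velocity is discontinuous at each contact, which is exactly the behaviour that makes long-horizon forecasting hard for unstructured networks.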
Stationary Kernels and Gaussian Processes on Lie Groups and their Homogeneous Spaces II: non-compact symmetric spaces
Gaussian processes are arguably the most important class of spatiotemporal
models within machine learning. They encode prior information about the modeled
function and can be used for exact or approximate Bayesian learning. In many
applications, particularly in physical sciences and engineering, but also in
areas such as geostatistics and neuroscience, invariance to symmetries is one
of the most fundamental forms of prior information one can consider. The
invariance of a Gaussian process's covariance to such symmetries gives rise to
the most natural generalization of the concept of stationarity to such spaces.
In this work, we develop constructive and practical techniques for building
stationary Gaussian processes on a very large class of non-Euclidean spaces
arising in the context of symmetries. Our techniques make it possible to (i)
calculate covariance kernels and (ii) sample from prior and posterior Gaussian
processes defined on such spaces, both in a practical manner. This work is
split into two parts, each involving different technical considerations: part I
studies compact spaces, while part II studies non-compact spaces possessing
certain structure. Our contributions make the non-Euclidean Gaussian process
models we study compatible with well-understood computational techniques
available in standard Gaussian process software packages, thereby making them
accessible to practitioners.
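A minimal Euclidean analogue of the stationary kernels and prior sampling the abstract describes (the paper itself treats non-compact symmetric spaces, which this sketch does not cover) might look like the following, assuming a Matérn-3/2 kernel on the real line:

```python
import numpy as np

def matern32_kernel(x, y, lengthscale=0.5):
    """Stationary Matérn-3/2 kernel k(x, y) = k(x - y) on the real line,
    the Euclidean special case of the stationary kernels the paper
    generalizes to non-compact symmetric spaces."""
    r = np.abs(x[:, None] - y[None, :]) / lengthscale
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 100)
K = matern32_kernel(xs, xs)
# Prior sample f ~ N(0, K); a small jitter stabilises the Cholesky factor.
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(xs)))
f = L @ rng.standard_normal(len(xs))
```

On the non-Euclidean spaces the paper studies, the kernel depends only on a group-invariant notion of distance rather than on `x - y`, but the sampling step via a Gram matrix factorisation is the same.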
- …