Both Ligand- and Cell-Specific Parameters Control Ligand Agonism in a Kinetic Model of G Protein–Coupled Receptor Signaling
G protein–coupled receptors (GPCRs) exist in multiple dynamic states (e.g., ligand-bound, inactive, G protein–coupled) that influence G protein activation and ultimately response generation. In quantitative models of GPCR signaling that incorporate these varied states, parameter values are often uncharacterized or varied over large ranges, making important parameters difficult to identify and signaling outcomes difficult to intuit. Here we identify the ligand- and cell-specific parameters that are important determinants of cell-response behavior in a dynamic model of GPCR signaling using parameter variation and sensitivity analysis. The character of the response (i.e., positive/neutral/inverse agonism) is, not surprisingly, significantly influenced by a ligand's ability to bias the receptor into an active conformation. We also find that several cell-specific parameters, including the ratio of active to inactive receptor species, the rate constant for G protein activation, and the expression levels of receptors and G proteins, dramatically influence agonism. Expressing either receptor or G protein at levels several fold above or below endogenous levels may result in system behavior inconsistent with that measured in endogenous systems. Finally, small variations in cell-specific parameters identified by sensitivity analysis as significant determinants of response behavior are found to change ligand-induced responses from positive to negative, a phenomenon termed protean agonism. Our findings offer an explanation for the protean agonism reported in β2-adrenergic and α2A-adrenergic receptor systems.
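A minimal sketch of the kind of two-state kinetic model and expression-level scan described above, not the paper's actual model: the species, rate constants, and parameter names below (alpha for ligand bias, Gtot for G protein expression) are illustrative assumptions, and the response is read out as active G protein at steady state.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, L, p):
    """Two-state receptor (R inactive, Rs active), free or ligand-bound,
    plus a single activated-G-protein pool Ga."""
    R, Rs, LR, LRs, Ga = y
    dR   = p["kdeact"]*Rs - p["kact"]*R + p["koff"]*LR - p["kon"]*L*R
    dRs  = p["kact"]*R - p["kdeact"]*Rs + p["koff"]*LRs - p["kon"]*L*Rs
    dLR  = p["kon"]*L*R - p["koff"]*LR + p["kdeact"]*LRs - p["alpha"]*p["kact"]*LR
    dLRs = p["kon"]*L*Rs - p["koff"]*LRs + p["alpha"]*p["kact"]*LR - p["kdeact"]*LRs
    dGa  = p["kG"]*(Rs + LRs)*(p["Gtot"] - Ga) - p["kGd"]*Ga
    return [dR, dRs, dLR, dLRs, dGa]

def response(L, p, t_end=500.0):
    """Active G protein at (approximate) steady state for ligand concentration L."""
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0, 0.0, 0.0],
                    args=(L, p), rtol=1e-8)
    return sol.y[4, -1]

# Hypothetical rate constants; alpha < 1 biases the bound receptor away from
# the active state, so this ligand behaves as an inverse agonist.
p = dict(kact=0.01, kdeact=1.0, kon=1.0, koff=0.1,
         alpha=0.5, kG=1.0, kGd=0.5, Gtot=1.0)

# Crude sensitivity scan: vary G protein expression several fold around a
# nominal level and compare the ligand-induced response with baseline.
for gtot in (0.1, 1.0, 10.0):
    p["Gtot"] = gtot
    base, lig = response(0.0, p), response(10.0, p)
    print(f"Gtot={gtot:5.1f}  baseline={base:.4f}  ligand={lig:.4f}  "
          f"delta={lig - base:+.4f}")
```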
Variational Sequential Monte Carlo
Many recent advances in large scale probabilistic inference rely on
variational methods. The success of variational approaches depends on (i)
formulating a flexible parametric family of distributions, and (ii) optimizing
the parameters to find the member of this family that most closely approximates
the exact posterior. In this paper we present a new approximating family of
distributions, the variational sequential Monte Carlo (VSMC) family, and show
how to optimize it in variational inference. VSMC melds variational inference
(VI) and sequential Monte Carlo (SMC), providing practitioners with flexible,
accurate, and powerful Bayesian inference. The VSMC family is a variational
family that can approximate the posterior arbitrarily well, while still
allowing for efficient optimization of its parameters. We demonstrate its
utility on state space models, stochastic volatility models for financial data,
and deep Markov models of brain neural circuits.
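A minimal sketch of the VSMC objective on a toy one-dimensional linear-Gaussian state space model, assuming a simple Gaussian proposal with hypothetical parameters lam: it runs SMC and returns the log marginal-likelihood estimate, whose expectation is the lower bound that VSMC maximizes. In practice the reparameterized proposal noise would flow through an autodiff framework so the objective can be differentiated with respect to lam.

```python
import numpy as np

def log_gauss(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def vsmc_objective(y, lam, n_particles=100, rng=None):
    """One sample of log Z_hat with proposal x_t ~ N(lam0*x_prev + lam1*y_t, lam2^2)."""
    if rng is None:
        rng = np.random.default_rng(0)
    a, b, s2 = 0.9, 1.0, 1.0        # model: x_t ~ N(a*x_{t-1}, s2), y_t ~ N(b*x_t, 1)
    x = np.zeros(n_particles)       # particles (x_0 taken to be 0)
    logZ = 0.0
    for yt in y:
        mu = lam[0] * x + lam[1] * yt                  # reparameterized proposal
        xp = mu + lam[2] * rng.standard_normal(n_particles)
        logw = (log_gauss(xp, a * x, s2)               # transition
                + log_gauss(yt, b * xp, 1.0)           # likelihood
                - log_gauss(xp, mu, lam[2] ** 2))      # proposal
        m = logw.max()
        logZ += m + np.log(np.mean(np.exp(logw - m)))  # log mean weight this step
        w = np.exp(logw - m)
        w /= w.sum()
        x = xp[rng.choice(n_particles, size=n_particles, p=w)]  # multinomial resampling
    return logZ

y = np.random.default_rng(1).standard_normal(25)  # toy observations
print(vsmc_objective(y, lam=np.array([0.9, 0.5, 1.0])))
```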
A Nonparametric Bayesian Approach to Uncovering Rat Hippocampal Population Codes During Spatial Navigation
Rodent hippocampal population codes represent important spatial information
about the environment during navigation. Several computational methods have
been developed to uncover the neural representation of spatial topology
embedded in rodent hippocampal ensemble spike activity. Here we extend our
previous work and propose a nonparametric Bayesian approach to infer rat
hippocampal population codes during spatial navigation. To tackle the model
selection problem (i.e., choosing the number of latent states), we leverage a
nonparametric Bayesian model. Specifically, to
analyze rat hippocampal ensemble spiking activity, we apply a hierarchical
Dirichlet process-hidden Markov model (HDP-HMM) using two Bayesian inference
methods, one based on Markov chain Monte Carlo (MCMC) and the other based on
variational Bayes (VB). We demonstrate the effectiveness of our Bayesian
approaches on recordings from a freely behaving rat navigating in an open field
environment. We find that MCMC-based inference with Hamiltonian Monte Carlo
(HMC) hyperparameter sampling is flexible and efficient, and outperforms VB and
MCMC approaches with hyperparameters set by empirical Bayes.
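A minimal sketch of the weak-limit (finite-truncation) approximation to the HDP-HMM generative model on synthetic data, with Poisson emissions standing in for ensemble spike counts: all dimensions and hyperparameters below are illustrative, and a real analysis would wrap the forward pass at the end in MCMC or VB updates of the states and hyperparameters.

```python
import numpy as np
from scipy.special import logsumexp, gammaln

rng = np.random.default_rng(0)
K, C, T = 10, 30, 200        # truncated state count, cells, time bins
gamma, alpha = 3.0, 5.0      # HDP concentration hyperparameters (illustrative)

# Weak-limit HDP prior: shared top-level weights, then coupled transition rows.
beta = rng.dirichlet(np.full(K, gamma / K))
Pi = np.stack([rng.dirichlet(alpha * beta) for _ in range(K)])
rates = rng.gamma(2.0, 1.0, size=(K, C))     # per-state, per-cell firing rates

# Simulate latent states and spike counts from the generative model.
z = np.zeros(T, dtype=int)
for t in range(1, T):
    z[t] = rng.choice(K, p=Pi[z[t - 1]])
spikes = rng.poisson(rates[z])               # (T, C) spike-count matrix

# Forward algorithm in the log domain: Poisson log-likelihood per (time, state),
# then the recursion; inside MCMC this pairs with backward sampling of z.
loglik = (spikes @ np.log(rates).T - rates.sum(axis=1)
          - gammaln(spikes + 1).sum(axis=1, keepdims=True))
logA = np.log(Pi + 1e-300)                   # guard against log(0) underflow
la = -np.log(K) + loglik[0]                  # uniform initial state distribution
for t in range(1, T):
    la = logsumexp(la[:, None] + logA, axis=0) + loglik[t]
print("log p(spikes | params) =", logsumexp(la))
```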
Reparameterizing the Birkhoff Polytope for Variational Permutation Inference
Many matching, tracking, sorting, and ranking problems require probabilistic
reasoning about possible permutations, a set that grows factorially with
dimension. Combinatorial optimization algorithms may enable efficient point
estimation, but fully Bayesian inference poses a severe challenge in this
high-dimensional, discrete space. To surmount this challenge, we start with the
usual step of relaxing a discrete set (here, of permutation matrices) to its
convex hull, which here is the Birkhoff polytope: the set of all
doubly stochastic matrices. We then introduce two novel transformations: first,
an invertible and differentiable stick-breaking procedure that maps
unconstrained space to the Birkhoff polytope; second, a map that rounds points
toward the vertices of the polytope. Both transformations include a temperature
parameter that, in the limit, concentrates the densities on permutation
matrices. We then exploit these transformations and reparameterization
gradients to introduce variational inference over permutation matrices, and we
demonstrate its utility in a series of experiments.
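A minimal sketch of the stick-breaking transformation just described, with hypothetical variable names: entries of the doubly stochastic matrix are filled left to right, top to bottom, each squashed by a temperature-scaled sigmoid into the interval that keeps every row and column sum completable to one, with the last row and column then determined by the constraints. As the temperature approaches zero the sigmoid saturates and mass concentrates near the interval endpoints, in the spirit of concentrating density on permutation matrices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stick_break_birkhoff(X, temp=1.0):
    """Map an (N-1)x(N-1) unconstrained real matrix X to an NxN doubly stochastic B."""
    n = X.shape[0] + 1
    B = np.zeros((n, n))
    for i in range(n - 1):
        row_left = 1.0                           # row mass still to place
        for j in range(n - 1):
            col_cap = 1.0 - B[:i, j].sum()       # what column j can still absorb
            later_cap = sum(1.0 - B[:i, jj].sum() for jj in range(j + 1, n))
            ub = min(row_left, col_cap)          # can't exceed row or column budget
            lb = max(0.0, row_left - later_cap)  # later columns must absorb the rest
            B[i, j] = lb + (ub - lb) * sigmoid(X[i, j] / temp)
            row_left -= B[i, j]
        B[i, n - 1] = row_left                   # last column takes the remainder
    B[n - 1] = 1.0 - B[:n - 1].sum(axis=0)       # last row fixed by column sums
    return B

B = stick_break_birkhoff(np.random.default_rng(0).standard_normal((3, 3)))
print(np.round(B, 3))
print(B.sum(axis=0), B.sum(axis=1))  # rows and columns each sum to 1
```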
