Active Bayesian Optimization: Minimizing Minimizer Entropy
The ultimate goal of optimization is to find the minimizer of a target function. However, typical criteria for active optimization often ignore the uncertainty about the minimizer. We propose a novel criterion for global optimization and an associated sequential active learning strategy using Gaussian processes. Our criterion is the reduction of uncertainty in the posterior distribution of the function minimizer. It can also flexibly incorporate multiple global minimizers. We implement a tractable approximation of the criterion and demonstrate that it locates the global minimizer more accurately than conventional Bayesian optimization criteria.
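A minimal sketch of the idea, assuming a one-dimensional search over a finite candidate grid: draw functions from the Gaussian process posterior, histogram their argmins to estimate the minimizer distribution, and choose the next query by one-step lookahead on the entropy of that distribution. The function names and the fantasy-averaging scheme below are illustrative, not the paper's tractable approximation:

```python
# Sketch only: Monte Carlo minimizer-entropy acquisition on a candidate grid.
# `minimizer_entropy` and the fantasy loop are illustrative, not the paper's
# tractable approximation of the criterion.
import numpy as np
from scipy.stats import entropy
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def minimizer_entropy(gp, grid, n_draws=200):
    """Entropy of p(argmin f) estimated from GP posterior draws on `grid`."""
    samples = gp.sample_y(grid, n_samples=n_draws)   # shape (len(grid), n_draws)
    probs = np.bincount(samples.argmin(axis=0), minlength=len(grid)) / n_draws
    return entropy(probs)

def next_query(X, y, grid, n_fantasy=5, seed=0):
    """One-step lookahead: query the point whose fantasised observation
    gives the lowest expected minimizer entropy."""
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    scores = []
    for i in range(len(grid)):
        ent = 0.0
        for _ in range(n_fantasy):                   # average over fantasy outcomes
            y_f = rng.normal(mu[i], sd[i])
            gp_f = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6)
            gp_f.fit(np.vstack([X, grid[i:i+1]]), np.append(y, y_f))
            ent += minimizer_entropy(gp_f, grid) / n_fantasy
        scores.append(ent)
    return grid[int(np.argmin(scores))]
```

Exhaustive refitting like this costs one GP fit per candidate per fantasy, which is exactly why a tractable approximation of the criterion is needed in practice.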
Bayesian Entropy Estimation for Countable Discrete Distributions
We consider the problem of estimating Shannon's entropy from discrete
data, in cases where the number of possible symbols is unknown or even
countably infinite. The Pitman-Yor process, a generalization of the Dirichlet process, provides a tractable prior distribution over the space of countably
infinite discrete distributions, and has found major applications in Bayesian
non-parametric statistics and machine learning. Here we show that it also
provides a natural family of priors for Bayesian entropy estimation, due to the fact that moments of the induced posterior distribution over entropy can be computed analytically. We derive formulas for the posterior mean (Bayes' least
squares estimate) and variance under Dirichlet and Pitman-Yor process priors.
Moreover, we show that a fixed Dirichlet or Pitman-Yor process prior implies a narrow prior distribution over entropy, meaning the prior strongly determines the entropy estimate in the under-sampled regime. We derive a family of continuous
mixing measures such that the resulting mixture of Pitman-Yor processes
produces an approximately flat prior over entropy. We show that the resulting
Pitman-Yor Mixture (PYM) entropy estimator is consistent for a large class of
distributions. We explore the theoretical properties of the resulting
estimator, and show that it performs well both in simulation and in application
to real data.
Comment: 38 pages, LaTeX. Revised and resubmitted to JML
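For the finite-alphabet Dirichlet case, the Bayes' least squares estimate mentioned above has a well-known closed form (Wolpert and Wolf, 1995): with counts n_i, N = Σ n_i, and a symmetric Dirichlet(a) prior over K bins, E[H | n] = ψ(N + Ka + 1) − Σ_i (n_i + a)/(N + Ka) · ψ(n_i + a + 1). A sketch, assuming a known finite K (the countably infinite Pitman-Yor formulas follow the same pattern but are not reproduced here):

```python
# Posterior mean of Shannon entropy (in nats) under a symmetric Dirichlet(a)
# prior on a known, finite alphabet of size K -- the finite-K analogue of the
# Bayes' least-squares estimate discussed above (Pitman-Yor formulas differ).
import numpy as np
from scipy.special import digamma

def dirichlet_entropy_mean(counts, K, a=1.0):
    counts = np.asarray(counts, dtype=float)   # counts of observed symbols only
    N = counts.sum()
    A = N + K * a                              # total posterior concentration
    obs = ((counts + a) / A * digamma(counts + a + 1)).sum()
    # the K - len(counts) unobserved symbols contribute identical n_i = 0 terms
    unobs = (K - len(counts)) * (a / A) * digamma(a + 1)
    return digamma(A + 1) - obs - unobs
```

For example, dirichlet_entropy_mean([10, 5, 1], K=100, a=0.1) returns the posterior mean entropy in nats; the abstract's point is that for any fixed a this estimate is dominated by the prior when N is much smaller than K.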
Bayesian entropy estimators for spike trains
Il Memming Park and Jonathan Pillow are with the Institute for Neuroscience and Department of Psychology, The University of Texas at Austin, TX 78712, USA -- Evan Archer is with the Institute for Computational and Engineering Sciences, The University of Texas at Austin, TX 78712, USA -- Jonathan Pillow is with the Division of Statistics and Scientific Computation, The University of Texas at Austin, Austin, TX 78712, USA
Poster presentation:
Information theoretic quantities have played a central role in neuroscience for quantifying neural codes [1]. Entropy and mutual information can be used to measure the maximum encoding capacity of a neuron, quantify the amount of noise and the spatial and temporal functional dependence, characterize learning processes, and provide fundamental limits for neural coding. Unfortunately, estimating entropy or mutual information is notoriously difficult--especially when the number of observations N is less than the number of possible symbols K [2]. For neural spike trains this is often the case, due to the combinatorial nature of the symbols: for n simultaneously recorded neurons observed over m time bins, the number of possible binary words is K = 2^(nm). The question, therefore, is how to extrapolate from a severely under-sampled distribution.
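To make the severity concrete (illustrative numbers, assuming each of the nm neuron-bin entries is binary):

```python
# Illustrative only: 10 neurons x 10 time bins gives binary words over
# 100 neuron-bin entries, so K = 2**100 possible symbols -- astronomically
# larger than any realistic number of observations N.
n, m = 10, 10
K = 2 ** (n * m)
print(f"K = {K:.3e}")   # K = 1.268e+30
```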
Here we describe a couple of recent advances in Bayesian entropy estimation for spike trains. Our approach follows that of Nemenman et al. [2], who formulated a Bayesian entropy estimator using a mixture-of-Dirichlet prior over the space of discrete distributions on K bins. We extend this approach to formulate two Bayesian estimators with different strategies to deal with severe under-sampling.
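As a point of reference, the Nemenman et al. construction [2] can be sketched compactly for a finite alphabet of size K: each Dirichlet concentration a is weighted by the model evidence times dE[H | a]/da, which makes the induced prior over entropy approximately flat, and the per-a posterior mean entropies are then averaged. The grid bounds and function name below are illustrative, and both estimators described next replace pieces of this recipe:

```python
# Sketch of the mixture-of-Dirichlet (NSB-style) estimator [2] on a finite
# alphabet of size K; `counts` holds only the observed symbols (unseen bins
# have n_i = 0 and drop out of the evidence).
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import digamma, gammaln, polygamma

def nsb_entropy(counts, K, n_grid=400):
    counts = np.asarray(counts, dtype=float)
    N, a = counts.sum(), np.logspace(-4, 3, n_grid)    # concentration grid
    A = N + K * a
    # log evidence of the Dirichlet-multinomial model at each a
    log_ev = (gammaln(K * a) - gammaln(A)
              + (gammaln(counts[:, None] + a) - gammaln(a)).sum(axis=0))
    # flat-entropy prior density: d/da [psi(K*a + 1) - psi(a + 1)]
    log_w = log_ev + np.log(K * polygamma(1, K * a + 1) - polygamma(1, a + 1))
    w = np.exp(log_w - log_w.max())
    w /= trapezoid(w, a)
    # posterior mean entropy at each a (Wolpert-Wolf), unseen bins included
    H = (digamma(A + 1)
         - ((counts[:, None] + a) / A * digamma(counts[:, None] + a + 1)).sum(axis=0)
         - (K - len(counts)) * a / A * digamma(a + 1))
    return trapezoid(w * H, a)
```

Here unseen words enter only through K (e.g. nsb_entropy(word_counts, K=2**(n*m))); the first estimator described below replaces the finite-K Dirichlet family with Pitman-Yor processes so that K need not be known at all.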
For the first estimator, we design a novel mixture prior over countable distributions using the Pitman-Yor (PY) process [3]. The PY process is useful when the number of parameters is unknown a priori, and as a result it finds many applications in Bayesian nonparametrics; it can also model the heavy, power-law tails which often occur in neural data. To reduce the bias of the estimator, we analytically derive a set of mixing weights such that the resulting improper prior over entropy is approximately flat. We consider the posterior over entropy given a dataset (which contains some observed number of words but an unknown number of unobserved words), and show that the posterior mean can be efficiently computed via a simple numerical integral.
The second estimator incorporates prior knowledge about spike trains. We use a simple Bernoulli process as a parametric model of the spike trains, and use a Dirichlet process to allow arbitrary deviation from the Bernoulli process. Under this model, very sparse spike trains are a priori orders of magnitude more likely than those with many spikes. Both estimators are computationally efficient and statistically consistent. We applied these estimators to spike trains from the early visual system to quantify neural coding.
Water splitting with polyoxometalate-treated photoanodes: Enhancing performance through sensitizer design
Visible light driven water oxidation has been demonstrated at near-neutral pH using photoanodes based on nanoporous films of TiO2, the polyoxometalate (POM) water oxidation catalyst [{Ru4O4(OH)2(H2O)4}(γ-SiW10O36)2]10- (1), and both the known photosensitizer [Ru(bpy)2(H4dpbpy)]2+ (P2) and the novel crown ether functionalized dye [Ru(5-crownphen)2(H2dpbpy)] (H22). Both triads, containing catalyst 1, and catalyst-free dyads produce O2 with high faradaic efficiencies (80 to 94%), but the presence of the catalyst enhances the quantum yield by up to 190% (maximum 0.39%). The new sensitizer H22 absorbs light more strongly than P2 and increases O2 quantum yields by up to 270%. TiO2-2 based photoelectrodes are also more stable to desorption of active species than TiO2-P2: losses of catalyst 1 are halved when pH > the TiO2 point-of-zero charge (pzc), and losses of sensitizer are reduced below the pzc (no catalyst is lost when pH < pzc). For the triads, quantum yields of O2 are higher at pH 5.8 than at pH 7.2, opposing the trend observed for 1 under homogeneous conditions. This is ascribed to lower stability of the dye's oxidized states at higher pH and less efficient electron transfer to TiO2, and is also consistent with the 4th 1-to-dye electron transfer limiting performance rather than the catalyst's TOFmax. Transient absorption reveals that TiO2-2-1 has similar 1st electron transfer dynamics to TiO2-P2-1, with rapid (ps timescale) formation of long-lived TiO2(e-)-2-1(h+) charge-separated states, and demonstrates that metallation of the crown ether groups (Na+/Mg2+) has little or no effect on electron transfer from 1 to 2. The most widely relevant findings of this study are therefore: (i) increased dye extinction coefficients and binding stability significantly improve performance in dye-sensitized water splitting systems; (ii) binding of POMs to electrode surfaces can be stabilized through the use of recognition groups; (iii) the optimal homogeneous and TiO2-bound operating pHs of a catalyst may not be the same; and (iv) dye-sensitized TiO2 can oxidize water without a catalyst.
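For reference, the faradaic efficiencies quoted above follow from the four-electron stoichiometry of water oxidation (2H2O → O2 + 4H+ + 4e-), FE = 4·F·n(O2)/Q. A quick check with made-up numbers (not data from this study):

```python
# Faradaic efficiency for O2 evolution: four electrons pass per O2 molecule,
# so FE = 4 * F * n(O2) / Q. Values below are illustrative, not the paper's data.
F = 96485.3            # Faraday constant, C/mol
n_O2 = 1.8e-8          # mol O2 detected (hypothetical)
Q = 8.0e-3             # charge passed, C (hypothetical)
fe = 4 * F * n_O2 / Q
print(f"faradaic efficiency = {fe:.1%}")   # ~86.8%
```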
