The Army of One (Sample): the Characteristics of Sampling-based Probabilistic Neural Representations
There is growing evidence that humans and animals represent the uncertainty associated with sensory stimuli and utilize this uncertainty during planning and decision making in a statistically optimal way. Recently, a nonparametric framework for representing probabilistic information has been proposed whereby neural activity encodes samples from the distribution over external variables. Although such sample-based probabilistic representations have strong empirical and theoretical support, two major issues need to be clarified before they can be considered as viable candidate theories of cortical computation. First, in a fluctuating natural environment, can neural dynamics provide sufficient samples to accurately estimate a stimulus? Second, can such a code support accurate learning over biologically plausible time-scales? Although it is well known that sampling is statistically optimal if the number of samples is unlimited, biological constraints mean that estimation and learning in the cortex must be supported by a relatively small number of possibly dependent samples. We explored these issues in a cue combination task by comparing a neural circuit that employed a sampling-based representation to an optimal estimator. For static stimuli, we found that a single sample is sufficient to obtain an estimator with less than twice the optimal variance, and that performance improves with the inverse square root of the number of samples. For dynamic stimuli with linear-Gaussian evolution, we found that the efficiency of the estimation improves significantly, both because temporal information stabilizes the estimate and because sampling does not require a burn-in phase. Finally, we found that using a single sample, the dynamic model can accurately learn the parameters of the input neural populations up to a general scaling factor, which disappears for modest sample sizes.
These results suggest that sample-based representations can support estimation and learning using a relatively small number of samples and are therefore highly feasible alternatives for performing probabilistic cortical computations.
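The single-sample result above can be illustrated with a small simulation. The sketch below, in Python, compares the optimal precision-weighted cue-combination estimator to one that averages n samples drawn from the posterior; the stimulus value, cue variances, and trial count are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative cue-combination setup: two noisy cues x1, x2 about a
# stimulus s, with known noise variances (values are arbitrary).
s_true = 1.0
var1, var2 = 0.5, 1.0
n_trials = 200_000

x1 = rng.normal(s_true, np.sqrt(var1), n_trials)
x2 = rng.normal(s_true, np.sqrt(var2), n_trials)

# Optimal (posterior-mean) estimator under a flat prior:
# the precision-weighted average of the two cues.
post_var = 1.0 / (1.0 / var1 + 1.0 / var2)
post_mean = post_var * (x1 / var1 + x2 / var2)

def sample_estimator(n):
    """Sampling-based estimator: average of n posterior samples."""
    samples = rng.normal(post_mean[:, None], np.sqrt(post_var), (n_trials, n))
    return samples.mean(axis=1)

mse_opt = np.mean((post_mean - s_true) ** 2)            # ~ post_var
mse_1 = np.mean((sample_estimator(1) - s_true) ** 2)    # ~ 2 * post_var
mse_10 = np.mean((sample_estimator(10) - s_true) ** 2)  # ~ 1.1 * post_var

print(mse_opt, mse_1, mse_10)
```

Averaging n posterior samples has mean-squared error of roughly (1 + 1/n) times the optimal value, so a single sample costs a factor of two and the excess shrinks as more samples are drawn, consistent with the abstract's claim.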

Computationally Efficient Nonparametric Importance Sampling
The variance reduction achieved by importance sampling strongly depends on
the choice of the importance sampling distribution. A good choice is often
hard to achieve, especially for high-dimensional integration problems.
Nonparametric estimation of the optimal importance sampling distribution
(known as nonparametric importance sampling) is a reasonable alternative to
parametric approaches. In this article, nonparametric variants of both the
self-normalized and the unnormalized importance sampling estimators are
proposed and investigated. A common critique of nonparametric importance
sampling is its increased computational burden compared to parametric
methods. We solve this
problem to a large degree by utilizing the linear blend frequency polygon
estimator instead of a kernel estimator. Mean-square error convergence
properties are investigated, leading to recommendations for the efficient
application of nonparametric importance sampling. In particular, we show that
nonparametric importance sampling asymptotically attains optimal importance
sampling variance. The efficiency of nonparametric importance sampling
algorithms heavily relies on the computational efficiency of the employed
nonparametric estimator. The linear blend frequency polygon outperforms kernel
estimators in terms of certain criteria such as efficient sampling and
evaluation. Furthermore, it is compatible with the inversion method for
sample generation, which allows our algorithms to be combined with other
variance reduction techniques such as stratified sampling. Empirical evidence
for the usefulness of the suggested algorithms is obtained by means of three
benchmark integration problems. As an application, we estimate the
distribution of the queue length of a spam-filter queueing system based on
real data.

Comment: 29 pages, 7 figures
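The self-normalized estimator discussed above can be sketched in a few lines. For brevity this toy uses a fixed Gaussian proposal rather than the nonparametric (frequency-polygon) proposal the article studies; the target, test function, and parameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy self-normalized importance sampling: estimate E[f(X)] for X ~ N(3, 1)
# with f(x) = x^2, whose true value is 3^2 + 1 = 10.
def p_unnorm(x):
    # Unnormalized target density N(3, 1); self-normalization means the
    # normalizing constant is never needed.
    return np.exp(-0.5 * (x - 3.0) ** 2)

def f(x):
    return x ** 2

n = 100_000
# Proposal q: a wider Gaussian N(0, 3^2), cheap to sample and evaluate.
q_samples = rng.normal(0.0, 3.0, n)
q_density = np.exp(-0.5 * (q_samples / 3.0) ** 2) / (3.0 * np.sqrt(2 * np.pi))

w = p_unnorm(q_samples) / q_density               # importance weights
estimate = np.sum(w * f(q_samples)) / np.sum(w)   # self-normalized estimator

print(estimate)  # close to 10
```

Because the weights are normalized by their own sum, the estimator is usable whenever the target density is known only up to a constant, which is the setting where the self-normalized variant is preferred over the unnormalized one.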
Event-Based State Estimation Using an Improved Stochastic Send-on-Delta Sampling Scheme
Event-based sensing and communication hold the promise of lower resource utilization and/or better performance for remote state estimation applications found in, e.g., networked control systems. Recently, stochastic event-triggering rules have been proposed as a means to avoid the complexity that normally arises in event-based estimator design. By using a scaled Gaussian function in the stochastic triggering scheme, the optimal remote state estimator becomes a linear Kalman filter with a case-dependent measurement update. In this paper we propose a modified version of the stochastic send-on-delta triggering rule. The idea is to use a very simple predictor in the sensor, which allows the communication rate to be reduced while preserving estimation performance compared to regular stochastic send-on-delta sampling. We derive the optimal mean-square error estimator for the new scheme and present upper and lower bounds on the error covariance. The proposed scheme is evaluated in numerical examples, where it compares favorably to previous stochastic sampling approaches and is shown to preserve estimation performance well even at large reductions in communication rate.
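A minimal sketch of the basic stochastic send-on-delta trigger referred to above: the sensor transmits with a probability given by a scaled Gaussian of the deviation from the last transmitted value. The signal model and triggering parameter below are illustrative assumptions; the paper's modified scheme would replace the last-sent reference with a simple sensor-side prediction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stochastic send-on-delta: transmit y_k with probability
# 1 - exp(-delta^2 / (2 * theta)), where delta is the deviation
# from the last transmitted value.
theta = 0.5  # triggering parameter: larger theta -> fewer transmissions

y = np.cumsum(rng.normal(0.0, 0.2, 500))  # a random-walk measurement signal
last_sent = y[0]
sent = 0
for y_k in y:
    delta = y_k - last_sent
    if rng.random() > np.exp(-delta ** 2 / (2 * theta)):
        last_sent = y_k  # transmit and update the reference
        sent += 1

rate = sent / len(y)
print(f"communication rate: {rate:.2f}")
```

Small deviations are rarely transmitted while large ones almost always are, and the randomized decision is what keeps the remote estimator's measurement update Gaussian, hence the linear Kalman-filter structure mentioned in the abstract.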
Periodic Splines and Gaussian Processes for the Resolution of Linear Inverse Problems
This paper deals with the resolution of inverse problems in a periodic
setting or, in other terms, the reconstruction of periodic continuous-domain
signals from their noisy measurements. We focus on two reconstruction
paradigms: variational and statistical. In the variational approach, the
reconstructed signal is the solution to an optimization problem that
establishes a tradeoff between fidelity to the data and smoothness conditions
via a quadratic regularization associated with a linear operator. In the
statistical approach, the signal is modeled as a stationary random process
defined from a Gaussian white noise and a whitening operator; one then looks
for the optimal estimator in the mean-square sense. We give a generic form of
the reconstructed signals for both approaches, allowing for a rigorous
comparison of the two. We fully characterize the conditions under which the
two formulations yield the same solution, which is a periodic spline in the
case of sampling measurements. We also show that this equivalence between the
two approaches remains valid in simulations for a broad class of problems.
This extends the practical range of applicability of the variational method.
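The equivalence between the variational and statistical (MMSE) solutions can be checked in a finite-dimensional analogue of the setting above. In the sketch below, the circulant second-difference operator, the regularization constant, and the sampling matrix are illustrative stand-ins for the continuous-domain theory.

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete periodic analogue: reconstruct c in R^N from M noisy samples.
N, M = 64, 16
lam, sigma2 = 0.1, 0.05

# Circulant second-difference operator L on a periodic grid, mildly
# regularized so L^T L is invertible and can act as a prior precision.
L = -2 * np.eye(N) + np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
R = L.T @ L + 0.01 * np.eye(N)

A = np.zeros((M, N))  # sampling measurements at M distinct grid points
A[np.arange(M), rng.choice(N, M, replace=False)] = 1.0
y = rng.normal(0.0, 1.0, M)  # noisy measurement vector

# Variational solution: argmin ||y - A c||^2 + lam * c^T R c
c_var = np.linalg.solve(A.T @ A + lam * R, A.T @ y)

# Statistical solution: MMSE estimator for the Gaussian prior
# c ~ N(0, (sigma2 / lam) * R^{-1}) with noise variance sigma2.
Sigma = (sigma2 / lam) * np.linalg.inv(R)
c_mmse = Sigma @ A.T @ np.linalg.solve(A @ Sigma @ A.T + sigma2 * np.eye(M), y)

print(np.allclose(c_var, c_mmse))  # expect True (up to numerical precision)
```

The agreement follows from the identity (AᵀA + λR)⁻¹Aᵀ = ΣAᵀ(AΣAᵀ + σ²I)⁻¹ when the prior covariance is Σ = (σ²/λ)R⁻¹, which is the discrete counterpart of matching the regularization operator to the whitening operator.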