The Computational Cost of Asynchronous Neural Communication
Biological neural computation is inherently asynchronous due to large variations in neuronal spike timing and transmission delays. So far, most theoretical work on neural networks has assumed the synchronous setting, where neurons fire simultaneously in discrete rounds. In this work we aim to understand the barriers of asynchronous neural computation from an algorithmic perspective. We consider an extension of the widely studied model of synchronized spiking neurons [Maass, Neural Networks '97] to the asynchronous setting by taking into account edge and node delays.
- Edge Delays: We define an asynchronous model for spiking neurons in which the latency values (i.e., transmission delays) of non-self-loop edges vary adversarially over time. This extends the recent work of [Hitron and Parter, ESA '19], in which the latency values are restricted to be fixed over time. Our first contribution is an impossibility result implying that the assumption that self-loop edges have no delays (as assumed in Hitron and Parter) is indeed necessary. Interestingly, in real biological networks self-loop edges (a.k.a. autapses) are indeed free of delays, and neuroscientists have noted that this property is crucial for network synchronization.
To capture the computational challenges in this setting, we first consider the implementation of a single NOT gate. This simple function already captures the fundamental difficulties of the asynchronous setting. Our key technical results are space and time upper and lower bounds for the NOT function; our time bounds are tight. In the spirit of distributed synchronizers [Awerbuch and Peleg, FOCS '90], and following [Hitron and Parter, ESA '19], we then provide a general synchronizer machinery. Our construction is highly modular and is based on an efficient circuit implementation of threshold gates. The complexity of our scheme is measured by the overhead in the number of neurons and the computation time, both of which are shown to be polynomial in the largest latency value and the largest incoming degree Δ of the original network.
- Node Delays: We introduce the study of asynchronous communication due to variations in the response rates of the neurons in the network. In real brain networks, the round duration varies between different neurons. Our key result is a simulation methodology that allows one to transform the above-mentioned synchronized solution under edge delays into a synchronized solution under node delays, while incurring a small overhead with respect to space and time.
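As background, the synchronous spiking model referenced above evaluates threshold gates in discrete rounds; a NOT gate is the simplest such construction. The sketch below shows the synchronous version only (names and weights are illustrative, not taken from the paper, whose contribution concerns the much harder asynchronous case):

```python
def fires(weights, threshold, spikes):
    """One synchronous round of a threshold (spiking) gate: the neuron
    fires iff the weighted sum of incoming spikes reaches the threshold."""
    return sum(w * s for w, s in zip(weights, spikes)) >= threshold

def not_gate(x):
    # NOT as a threshold gate (illustrative wiring): a constant "always
    # firing" bias input with weight +1, the real input with weight -1,
    # and threshold 1, so the output fires exactly when the input is silent.
    return fires([1, -1], 1, [1, x])
```

Under adversarially varying edge delays, the bias spike and the input spike need not arrive in the same round, which is precisely why even this one-gate function becomes nontrivial in the asynchronous model.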
Importance Sampling: Intrinsic Dimension and Computational Cost
The basic idea of importance sampling is to use independent samples from a
proposal measure in order to approximate expectations with respect to a target
measure. It is key to understand how many samples are required in order to
guarantee accurate approximations. Intuitively, some notion of distance between
the target and the proposal should determine the computational cost of the
method. A major challenge is to quantify this distance in terms of parameters
or statistics that are pertinent for the practitioner. The subject has
attracted substantial interest from within a variety of communities. The
objective of this paper is to overview and unify the resulting literature by
creating an overarching framework. A general theory is presented, with a focus
on the use of importance sampling in Bayesian inverse problems and filtering.
Comment: Statistical Science
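The basic idea described above can be sketched in a few lines. This is a standard self-normalized importance sampling estimator under assumed densities (it is not the paper's framework); log-densities may be unnormalized, since constants cancel in the self-normalization:

```python
import math
import random

def snis_mean(f, target_logpdf, proposal_logpdf, draw_proposal, n, seed=0):
    """Self-normalized importance sampling estimate of E_target[f(X)]:
    draw n independent samples from the proposal, weight each by the
    (unnormalized) density ratio target/proposal, and average f."""
    rng = random.Random(seed)
    xs = [draw_proposal(rng) for _ in range(n)]
    logw = [target_logpdf(x) - proposal_logpdf(x) for x in xs]
    m = max(logw)                          # stabilize the exponentials
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * f(x) for wi, x in zip(w, xs)) / sum(w)

# Example: target N(1, 1), proposal N(0, 2); normalizing constants cancel.
target = lambda x: -0.5 * (x - 1.0) ** 2
proposal = lambda x: -0.5 * (x / 2.0) ** 2
est = snis_mean(lambda x: x, target, proposal,
                lambda rng: rng.gauss(0.0, 2.0), 20000)
```

The number of samples needed for a given accuracy grows with the mismatch between target and proposal, which is exactly the "distance" the paper seeks to quantify.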
Automatic Environmental Sound Recognition: Performance versus Computational Cost
In the context of the Internet of Things (IoT), sound sensing applications
are required to run on embedded platforms where notions of product pricing and
form factor impose hard constraints on the available computing power. Whereas
Automatic Environmental Sound Recognition (AESR) algorithms are most often
developed with limited consideration for computational cost, this article seeks
which AESR algorithm can make the most of a limited amount of computing power
by comparing sound classification performance as a function of
computational cost. Results suggest that Deep Neural Networks yield the best
ratio of sound classification accuracy to computational cost across a range of
costs, while Gaussian Mixture Models offer reasonable accuracy at a
consistently small cost, and Support Vector Machines stand between the two in
terms of the trade-off between accuracy and computational cost.
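Comparisons of this kind typically express computational cost as operations per input frame. The formulas below are a hedged back-of-the-envelope sketch (assumed cost models, not the article's measurements):

```python
def gmm_ops(n_components, dim):
    """Rough per-frame operation count for a diagonal-covariance GMM
    (assumed model): ~3 ops per dimension per component for the
    exponent, plus weight and accumulation ops."""
    return n_components * (3 * dim + 2)

def dnn_ops(layer_sizes):
    """Rough per-frame operation count for a fully connected DNN
    (assumed model): one multiply-accumulate per weight."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
```

Counts like these let classifiers of very different families be placed on a common accuracy-versus-cost axis.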
Optimizing the remeshing procedure by computational cost estimation of adaptive FEM technique
The objective of adaptive techniques is to obtain a mesh that is optimal in the sense that the computational cost involved is minimal, under the constraint that the error in the finite element solution stays within an acceptable limit. However, the adaptive FEM procedure imposes extra computational cost on the solution: if the adaptive process is repeated without any limit, it reduces the efficiency of the remeshing procedure. Sometimes it is better to start from a very fine initial mesh instead of performing multilevel mesh refinement. It is therefore necessary to estimate the computational cost of the adaptive finite element technique and compare it with the computational cost of plain FEM. The remeshing procedure can be optimized by balancing these computational costs.
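The trade-off in the abstract above reduces to a stopping rule. A minimal sketch, assuming the two costs can be estimated up front (the function and its inputs are hypothetical, not from the paper):

```python
def should_refine_again(adaptive_cost_so_far, next_refinement_cost,
                        fine_mesh_cost):
    """Continue adaptive remeshing only while the cumulative adaptive cost
    (plus the next refinement step) stays below the cost of solving once
    on a very fine mesh -- the balance point the abstract describes."""
    return adaptive_cost_so_far + next_refinement_cost < fine_mesh_cost
```

Once this inequality fails, the direct fine-mesh solve is the cheaper path and further remeshing wastes effort.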
Observation of large-scale multi-agent based simulations
The computational cost of large-scale multi-agent based simulations (MABS)
can be extremely important, especially if simulations have to be monitored for
validation purposes. In this paper, two methods, based on self-observation and
statistical survey theory, are introduced in order to optimize the computation
of observations in MABS. An empirical comparison of the computational cost of
these methods is performed on a toy problem.
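The survey-theory idea is to observe a random sample of agents rather than the whole population each monitoring step. A minimal sketch under assumed names (the `Agent` class and attribute are illustrative, not from the paper):

```python
import random

class Agent:
    """Toy agent carrying one observable attribute (illustrative)."""
    def __init__(self, wealth):
        self.wealth = wealth

def survey_estimate(agents, attribute, sample_size, seed=0):
    """Estimate the population mean of an attribute by polling only a
    random sample of agents, instead of observing every agent -- the
    statistical-survey-style observation the paper compares against
    full self-observation."""
    rng = random.Random(seed)
    sample = rng.sample(agents, sample_size)
    return sum(getattr(a, attribute) for a in sample) / sample_size
```

The observation cost then scales with the sample size rather than the (possibly very large) agent population, at the price of sampling error.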
Optimal detection of changepoints with a linear computational cost
We consider the problem of detecting multiple changepoints in large data
sets. Our focus is on applications where the number of changepoints will
increase as we collect more data: for example in genetics as we analyse larger
regions of the genome, or in finance as we observe time-series over longer
periods. We consider the common approach of detecting changepoints through
minimising a cost function over possible numbers and locations of changepoints.
This includes several established procedures for detecting changepoints,
such as penalised likelihood and minimum description length. We introduce a new
method for finding the minimum of such cost functions and hence the optimal
number and location of changepoints that has a computational cost which, under
mild conditions, is linear in the number of observations. This compares
favourably with existing methods for the same problem whose computational cost
can be quadratic or even cubic. In simulation studies we show that our new
method can be orders of magnitude faster than these alternative exact methods.
We also compare with the Binary Segmentation algorithm for identifying
changepoints, showing that the exactness of our approach can lead to
substantial improvements in the accuracy of the inferred segmentation of the
data.
Comment: 25 pages, 4 figures. To appear in the Journal of the American Statistical Association.
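The penalised-cost formulation above can be minimised exactly by dynamic programming. The sketch below is the classical O(n^2) optimal-partitioning recursion with an illustrative squared-error segment cost; it is the baseline that the paper's pruning idea accelerates to linear time under mild conditions, not the paper's algorithm itself:

```python
def optimal_partition(data, penalty):
    """Exact penalised-cost changepoint detection by dynamic programming.
    F[t] is the best cost of segmenting data[:t]; each candidate last
    segment data[s:t] pays its fit cost plus one penalty."""
    def seg_cost(xs):
        # Illustrative cost: sum of squared deviations from the segment mean.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)

    n = len(data)
    F = [0.0] + [float("inf")] * n
    back = [0] * (n + 1)
    for t in range(1, n + 1):
        for s in range(t):
            c = F[s] + seg_cost(data[s:t]) + penalty
            if c < F[t]:
                F[t], back[t] = c, s
    # Backtrack to recover the changepoint locations.
    cps, t = [], n
    while t > 0:
        t = back[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)
```

The inner loop over all candidates s is what makes this quadratic; pruning candidates that can never be optimal is what brings the cost down to linear in the number of observations.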