
    Network Tomography: Identifiability and Fourier Domain Estimation

    The statistical problem of network tomography is to infer the distribution of X, a vector with mutually independent components, from the measurement model Y = AX, where A is a given binary matrix representing the routing topology of the network under consideration. The challenge is that the dimension of X is much larger than that of Y, so the problem is often called ill-posed. This paper studies some statistical aspects of network tomography. We first address the identifiability issue and prove that the distribution of X is identifiable up to a shift parameter under mild conditions. We then use a mixture model of characteristic functions to derive a fast algorithm for estimating the distribution of X based on the Generalized Method of Moments. Through extensive model simulations and real Internet-trace-driven simulations, the proposed approach is shown to compare favorably with previous methods that use simple discretization for inferring link delays in a heterogeneous network.
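
    As a concrete illustration, the NumPy sketch below simulates a toy two-path, three-link topology with exponential link delays (the topology and the parametric delay family are illustrative assumptions, not the paper's setup) and checks the characteristic-function factorization that Fourier-domain estimators of this kind exploit.

```python
import numpy as np

# Hypothetical 2-path / 3-link topology: both paths traverse link 0.
A = np.array([[1, 1, 0],
              [1, 0, 1]])                        # binary routing matrix (paths x links)

rng = np.random.default_rng(0)
true_rates = np.array([2.0, 5.0, 3.0])           # assumed exponential link-delay rates
X = rng.exponential(1.0 / true_rates, size=(100_000, 3))
Y = X @ A.T                                      # end-to-end path delays, Y = A X

# Identity exploited in the Fourier domain: because the components of X are
# independent, the characteristic function of each path factorizes over the
# links it traverses:  E[exp(i t Y_j)] = prod_k phi_k(t)^(A_jk).
t = 1.5
empirical_cf = np.mean(np.exp(1j * t * Y[:, 0]))
link_cfs = true_rates / (true_rates - 1j * t)    # CF of an Exp(rate) delay
print(empirical_cf, np.prod(link_cfs ** A[0]))   # the two values should agree closely
```

    Matching empirical and model characteristic functions over a grid of t values is, in essence, the kind of moment condition a GMM-style estimator would fit.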

    Inferring Synaptic Structure in presence of Neural Interaction Time Scales

    Biological networks display a variety of activity patterns reflecting a web of interactions that is complex both in space and time. Yet inference methods have mainly focused on reconstructing the spatial structure from the network's activity, by assuming equilibrium conditions or, more recently, a probabilistic dynamics with a single arbitrary time-step. Here we show that, under this latter assumption, the inference procedure fails to reconstruct the synaptic matrix of a network of integrate-and-fire neurons when the chosen time scale of interaction does not closely match the synaptic delay, or when no single time scale for the interaction can be identified; moreover, such failure exposes a distinctive bias of the inference method that can lead it to infer as inhibitory those excitatory synapses whose interaction time scales are longer than the model's time-step. We therefore introduce a new two-step method that first infers the delay structure of the network through cross-correlation profiles and then reconstructs the synaptic matrix, and we successfully test it on networks with different topologies and in different activity regimes. Although step one accurately recovers the delay structure of the network, thus removing any a priori guess about the time scales of the interaction, the inference method nonetheless introduces an arbitrary time scale: the time-bin dt used to binarize the spike trains. We therefore study, analytically and numerically, how the choice of dt affects the inference in our network model, finding that the relationship between the inferred couplings and the real synaptic efficacies, albeit quadratic in both cases, depends critically on dt for the excitatory synapses only, while being essentially independent of it for the inhibitory ones.
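
    A minimal sketch of the first, delay-estimation step on toy binarized spike trains is shown below; the firing rates, synaptic delay, and connection reliability are made-up illustrative values rather than the paper's integrate-and-fire network.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.001                                       # assumed 1 ms bin for binarization
T = 200.0                                        # seconds of simulated activity
n_bins = int(T / dt)

# Toy pre/post spike trains: the "postsynaptic" neuron tends to fire 5 ms
# after the "presynaptic" one (hypothetical delay to be recovered).
pre = rng.random(n_bins) < 0.01                  # ~10 Hz presynaptic firing
delay_bins = 5
post = np.roll(pre, delay_bins) & (rng.random(n_bins) < 0.6)
post |= rng.random(n_bins) < 0.005               # independent background firing

# Step one: locate the peak of the cross-correlation profile to estimate the
# interaction delay before any couplings are fitted.
max_lag = 20
lags = np.arange(1, max_lag + 1)
xcorr = [np.mean(pre[:-lag] & post[lag:]) for lag in lags]
print("estimated delay:", lags[np.argmax(xcorr)] * dt, "s")   # ~0.005 s
```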

    Research on Network Tomography Measurement Technique


    Model-free reconstruction of neuronal network connectivity from calcium imaging signals

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct approximations to network structural connectivities from network activity monitored through calcium fluorescence imaging. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the effective network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (e.g., bursting or non-bursting). We thus demonstrate how conditioning with respect to the global mean activity improves the performance of our method. [...] Compared to other reconstruction strategies, such as cross-correlation or Granger causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good reconstruction of the network clustering coefficient, allowing us to discriminate between weakly and strongly clustered topologies, whereas an approach based on cross-correlations would invariably detect artificially high levels of clustering. Finally, we demonstrate the applicability of our method to real recordings of in vitro cortical cultures, and find that these networks are characterized by an elevated level of clustering compared to a random graph (although not extreme) and by markedly non-local connectivity.
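
    For orientation, the sketch below implements a plain plug-in estimator of pairwise transfer entropy on quantile-discretized signals with one-step histories; the improved, generalized variant described above additionally handles same-bin interactions and conditions on the global mean activity, which this toy version omits.

```python
import numpy as np

def transfer_entropy(x, y, bins=2):
    """Plug-in transfer entropy TE(x -> y), one-step histories,
    estimated from joint histograms of quantile-discretized signals."""
    x = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    y = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    y_next, y_past, x_past = y[1:], y[:-1], x[:-1]
    joint = np.zeros((bins, bins, bins))
    np.add.at(joint, (y_next, y_past, x_past), 1)          # counts of (y_next, y_past, x_past)
    joint /= joint.sum()
    p_yy = joint.sum(axis=2)                               # p(y_next, y_past)
    p_px = joint.sum(axis=0)                               # p(y_past, x_past)
    p_p = joint.sum(axis=(0, 2))                           # p(y_past)
    te = 0.0
    for i, j, k in np.ndindex(joint.shape):
        if joint[i, j, k] > 0:
            te += joint[i, j, k] * np.log2(
                joint[i, j, k] * p_p[j] / (p_yy[i, j] * p_px[j, k]))
    return te

# Toy check: y is driven by the past of x, so TE(x -> y) >> TE(y -> x).
rng = np.random.default_rng(2)
x = rng.standard_normal(50_000)
y = 0.8 * np.roll(x, 1) + 0.6 * rng.standard_normal(50_000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```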

    Training deep neural density estimators to identify mechanistic models of neural dynamics

    Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool that uses deep neural density estimators, trained using model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
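
    The core training loop behind such an approach can be sketched as follows, with a toy one-parameter simulator and a conditional Gaussian network standing in for the mixture-density and flow estimators actually used; all names, sizes, and settings here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-in for a mechanistic simulator: parameter theta -> noisy observation x.
def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

# Conditional density estimator q(theta | x); a single Gaussian here, whereas the
# real tool uses mixture density networks / normalizing flows.
class PosteriorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))

    def log_prob(self, theta, x):
        mean, log_std = self.net(x).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp()).log_prob(theta)

prior = torch.distributions.Uniform(-2.0, 2.0)
theta = prior.sample((10_000, 1))                # 1. draw parameters from the prior
x = simulator(theta)                             # 2. simulate data for each draw

q = PosteriorNet()                               # 3. train q to maximize log q(theta | x)
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
for _ in range(2_000):
    opt.zero_grad()
    loss = -q.log_prob(theta, x).mean()
    loss.backward()
    opt.step()

# 4. Amortized inference: evaluate the learned posterior at a new observation x_o.
x_o = torch.tensor([[0.5]])
grid = torch.linspace(-2.0, 2.0, 9).unsqueeze(1)
print(q.log_prob(grid, x_o.expand_as(grid)).exp())   # density peaks near theta = 0.5
```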

    A role for recurrent processing in object completion: neurophysiological, psychophysical and computational evidence

    Recognition of objects from partial information presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. We combined neurophysiological recordings in human cortex with psychophysical measurements and computational modeling to investigate the mechanisms involved in object completion. We recorded intracranial field potentials from 1,699 electrodes in 18 epilepsy patients to measure the timing and selectivity of responses along human visual cortex to whole and partial objects. Responses along the ventral visual stream remained selective even when only 9-25% of the object was shown. However, these visually selective signals emerged ~100 ms later for partial than for whole objects. The processing delays were particularly pronounced in higher visual areas within the ventral stream, suggesting the involvement of additional recurrent processing. In separate psychophysics experiments, disrupting this recurrent computation with a backward mask at ~75 ms significantly impaired recognition of partial, but not whole, objects. Additionally, computational modeling shows that the performance of a purely bottom-up architecture is impaired by heavy occlusion and that this effect can be partially rescued by incorporating top-down connections. These results provide spatiotemporal constraints on theories of object recognition that involve recurrent processing to recognize objects from partial information.
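
    To make the modeling component concrete, the PyTorch sketch below adds a hypothetical top-down feedback pathway to a purely bottom-up recognizer and refines the representation over a few recurrent steps; the architecture, layer sizes, and occlusion scheme are illustrative assumptions, not the model evaluated in the study.

```python
import torch
import torch.nn as nn

class RecurrentCompletion(nn.Module):
    """Bottom-up encoder plus a top-down projection from the class readout
    back to the feature layer, iterated for a few recurrent steps."""
    def __init__(self, n_features=256, n_classes=10, n_steps=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(28 * 28, n_features), nn.ReLU())
        self.readout = nn.Linear(n_features, n_classes)
        self.top_down = nn.Linear(n_classes, n_features)   # feedback pathway
        self.n_steps = n_steps

    def forward(self, image):
        h = self.encoder(image)                  # purely bottom-up pass
        logits = self.readout(h)
        for _ in range(self.n_steps):            # recurrent refinement
            h = torch.relu(h + self.top_down(logits))
            logits = self.readout(h)
        return logits

# Untrained forward pass on heavily occluded toy images (~70% of pixels zeroed).
model = RecurrentCompletion()
mask = (torch.rand(8, 1, 28, 28) > 0.7).float()
occluded = torch.rand(8, 1, 28, 28) * mask
print(model(occluded).shape)                     # torch.Size([8, 10])
```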