
    Measuring information-transfer delays

    In complex networks such as gene networks, traffic systems, or brain circuits, it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions about the interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
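
    The sketch below (not the authors' estimator) illustrates the idea: a delayed transfer entropy TE(u) = I(Y_t ; X_{t-u} | Y_{t-1}) is evaluated for each candidate delay u while keeping the conditioning on the target's immediately preceding state, and the delay estimate is the u that maximises TE(u). The plug-in estimator on equal-width bins, the bin count, and all function names are illustrative assumptions.

import numpy as np

def discretize(x, n_bins=4):
    """Map a real-valued series to equal-width integer bin labels."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    return np.digitize(x, edges)

def joint_entropy(*variables):
    """Plug-in joint Shannon entropy (in nats) of discrete variables."""
    _, counts = np.unique(np.stack(variables, axis=1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def delayed_te(source, target, u, n_bins=4):
    """Plug-in estimate of TE(u) = I(Y_t ; X_{t-u} | Y_{t-1}) in nats."""
    x, y = discretize(source, n_bins), discretize(target, n_bins)
    start = max(1, u)                 # need both y_{t-1} and x_{t-u} to exist
    y_now  = y[start:]                # target state Y_t
    y_past = y[start - 1:-1]          # target's immediately preceding state Y_{t-1}
    x_lag  = x[start - u:len(x) - u]  # source state u samples in the past, X_{t-u}
    return (joint_entropy(y_now, y_past) + joint_entropy(x_lag, y_past)
            - joint_entropy(y_past) - joint_entropy(y_now, x_lag, y_past))

def estimate_delay(source, target, max_delay=20):
    """Scan candidate delays u and return the maximiser of the delayed TE."""
    te = [delayed_te(source, target, u) for u in range(1, max_delay + 1)]
    return int(np.argmax(te)) + 1, te

# Toy check: the target is a noisy copy of the source delayed by 5 samples
# (the wrap-around introduced by np.roll affects only the first 5 samples).
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = 0.8 * np.roll(x, 5) + 0.2 * rng.standard_normal(5000)
delay_hat, _ = estimate_delay(x, y)
print("estimated interaction delay:", delay_hat)  # expected to be close to 5

    Conditioning on the target's immediately previous state is what separates genuinely delayed source influence from what the target's own past already predicts, which is why the scan over u peaks at the interaction delay.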

    Self-organizing input space for control of structures

    We propose a novel type of neural network for structural control that comprises an adaptive input space. This feature is purposefully designed for sequential input selection during adaptive identification and control of nonlinear systems, allowing the input space to be organized dynamically while the excitation is occurring. The neural network has the main advantages of (1) automating the input selection process for time series that are not known a priori; (2) adapting the representation to nonstationarities; and (3) using limited observations. The algorithm designed for the adaptive input space assumes local quasi-stationarity of the time series and embeds local maps sequentially in a delay vector using the embedding theorem. The input space of the representation, which in our case is a wavelet neural network, is subsequently updated. We demonstrate that the neural network has the potential to significantly improve convergence of a black-box model in adaptive tracking of a nonlinear system. Its performance is further assessed in a full-scale simulation of an existing civil structure subjected to nonstationary excitations (wind and earthquakes), which demonstrates the superiority of the proposed method.
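
    As a rough illustration of the delay-embedding step described above (not the paper's algorithm), the sketch below builds a delay-vector input space from one locally quasi-stationary window of a time series. The autocorrelation-based lag heuristic, the fixed embedding dimension, and all names are assumptions made for illustration.

import numpy as np

def choose_lag(window, max_lag=50):
    """Heuristic delay: first lag where the autocorrelation falls below 1/e."""
    w = window - window.mean()
    acf = np.correlate(w, w, mode="full")[len(w) - 1:]
    acf = acf / acf[0]
    below = np.where(acf[:max_lag] < 1.0 / np.e)[0]
    return int(below[0]) if below.size else max_lag

def delay_vectors(window, dim=4, lag=None):
    """Build rows [x_t, x_{t-lag}, ..., x_{t-(dim-1)lag}] from one local window."""
    if lag is None:
        lag = choose_lag(window)
    start = (dim - 1) * lag
    if len(window) <= start:
        raise ValueError("window too short for this (dim, lag) pair")
    cols = [window[start - d * lag:len(window) - d * lag] for d in range(dim)]
    return lag, np.column_stack(cols)

# Usage on a nonstationary (chirp-like) signal: the most recent window defines
# the current input matrix on which a model would be updated.
t = np.linspace(0.0, 20.0, 4000)
signal = np.sin(2.0 * np.pi * (0.5 + 0.05 * t) * t)
lag, inputs = delay_vectors(signal[-500:], dim=4)
print("chosen lag:", lag, "input matrix shape:", inputs.shape)

    In a sequential setting, this construction would be repeated on each new window so that the input space tracks nonstationarities while the excitation is occurring.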