
    Examples of highly nonlinear phenomena extracted from fMRI data (in systems with M = 10 states, no external inputs).

    A. PLRNN-BOLD-SSM with 3 stable limit cycles (LC) estimated from one subject (top: subspace of state space for 3 selected states; bottom: time graphs). B. PLRNN with 2 stable limit cycles and one chaotic attractor, estimated from another subject. C. PLRNN with one stable limit cycle and one stable fixed point. D. Increase in average (log Euclidean) distance between initially infinitesimally close trajectories over time for the chaotic attractor in B. (In A and B, states diverging towards −∞ were removed; by virtue of the ReLU transformation they do not affect the other states and hence the overall dynamics.)
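    The remark about states diverging towards −∞ can be made concrete with a minimal sketch of one PLRNN latent update (a generic ReLU-RNN step with diagonal self-connections, not the paper's exact parameterization; all names and values below are hypothetical): a state held at a very negative value passes 0 through the ReLU, so it cannot influence the other states via the recurrent weights.

```python
import numpy as np

def plrnn_step(z, A, W, h):
    """One deterministic PLRNN-style update: z' = A z + W relu(z) + h."""
    return A @ z + W @ np.maximum(0.0, z) + h

rng = np.random.default_rng(0)
M = 4
A = np.diag(rng.uniform(0.2, 0.8, M))   # diagonal self-connections
W = rng.normal(0.0, 0.3, (M, M))
np.fill_diagonal(W, 0.0)                # coupling only via off-diagonals
h = rng.normal(0.0, 0.1, M)

z = rng.normal(0.0, 1.0, M)
z[0] = -0.5                             # state 0 already below threshold
z_div = z.copy()
z_div[0] = -1e6                         # state 0 "diverged" towards -inf

# relu(-0.5) = relu(-1e6) = 0, and A is diagonal, so the other states
# receive identical input in both cases.
out = plrnn_step(z, A, W, h)
out_div = plrnn_step(z_div, A, W, h)
print(np.allclose(out[1:], out_div[1:]))  # True
```

    With a diagonal A, a negative state feeds back only onto itself, which is why removing such runaway states leaves the remaining dynamics untouched.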

    Model evaluation on experimental data.

    A. Association between KL divergence measures in observation (KLx) vs. latent space (KLz) for the Lorenz system; y-axis in log scale. B. Association between KLz (Eq 11; in log scale) and the correlation between generated and inferred state series, for models with inputs (top, shades of blue for M = 1…10) and models without inputs (bottom, shades of red for M = 1…10). C. Distributions of KLz (y-axis) in an experimental sample of n = 26 subjects for different latent state dimensions (x-axis), for models including (top) or excluding (bottom) external inputs. D. Mean squared error (MSE) between generated and true observations for the PLRNN-BOLD-SSM (squares) and the LDS-BOLD-SSM (triangles) as a function of ahead-prediction step, for models including (left) or excluding (right) external inputs. The PLRNN-BOLD-SSM starts to robustly outperform the LDS-BOLD-SSM for predictions more than about 3 time steps ahead; from that step onward the LDS-BOLD-SSM, in contrast to the PLRNN-BOLD-SSM, exhibits a strongly nonlinear rise in prediction errors. The LDS-BOLD-SSM also does not seem to profit as much from increasing the latent state dimensionality. E. Same as D for the MSE between generated and inferred states as a function of ahead-prediction step, showing that the comparatively sharp rise in prediction errors for the LDS-BOLD-SSM is accompanied by a sharp increase in the discrepancy between generated and inferred state trajectories after the 3rd prediction step. Globally unstable system estimates were removed from D and E.
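    The ahead-prediction MSE curves in panels D/E come from iterating a fitted one-step model k steps forward and comparing against the held-out series. A toy version of that computation, with a synthetic AR(2) predictor on a noisy oscillation standing in for the trained SSMs (data and model below are illustrative, not the paper's fits):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300)
x = np.sin(2 * np.pi * t / 25) + 0.05 * rng.normal(size=t.size)

# Fit a one-step predictor x[t] ~ a*x[t-1] + b*x[t-2] by least squares.
X = np.column_stack([x[1:-1], x[:-2]])
(a, b), *_ = np.linalg.lstsq(X, x[2:], rcond=None)

def k_step_mse(x, a, b, k):
    """MSE of k-step-ahead predictions from every admissible start point."""
    last, prev = x[1:-k].copy(), x[:-k - 1].copy()
    for _ in range(k):                     # iterate the one-step model
        last, prev = a * last + b * prev, last
    return float(np.mean((last - x[k + 1:]) ** 2))

mses = [k_step_mse(x, a, b, k) for k in range(1, 8)]
print([round(m, 4) for m in mses])  # errors accumulate with the horizon
```

    Plotting such an MSE-vs-horizon curve for two competing models is exactly the comparison shown in panel D.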

    Decoding task conditions from model trajectories.

    A. Relative LDA classification error on different task phases based on the inferred states (top) and freely generated states (bottom) from the PLRNN-BOLD-SSM (solid lines) and LDS-BOLD-SSM (dashed lines), for models including (blue) or excluding (red) stimulus inputs. Black lines indicate classification results for random state permutations. Except for M = 2, the classification error for the PLRNN-BOLD-SSM based on generated states, drawn from the prior model pgen(Z), is significantly lower than for the permutation bootstraps (all p < .01), indicating that the prior dynamics contain task-related information. In contrast, the LDS-BOLD-SSM produced substantially higher discrimination errors for the generated trajectories (close to chance level when stimulus information was excluded), and even for the inferred trajectories. Globally unstable system estimates were removed from the analysis. B. Typical example of inferred (left) and generated (right) state space trajectories from a PLRNN-BOLD-SSM, projected down to the first 3 principal components for visualization, color-coded according to task phases (see legend). C. Same as B for an example from a trained LDS-BOLD-SSM. The simulated (generated) states usually converged to a fixed point in this case.
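    The logic of the permutation comparison in panel A can be sketched in a few lines. A nearest class-mean classifier stands in here for LDA (a simplification, not the paper's exact decoder), and states and task-phase labels are synthetic: if the states carry phase information, the decoding error should fall well below the label-shuffled baseline.

```python
import numpy as np

rng = np.random.default_rng(2)
phases = rng.integers(0, 3, 240)                  # 3 task phases
centers = np.array([[0, 0], [2.5, 0], [0, 2.5]])  # phase-specific means
Z = centers[phases] + rng.normal(0.0, 1.0, (240, 2))

def classification_error(Z, y):
    """Nearest class-mean decoding error (stand-in for LDA)."""
    means = np.stack([Z[y == c].mean(axis=0) for c in np.unique(y)])
    pred = np.argmin(((Z[:, None, :] - means) ** 2).sum(-1), axis=1)
    return float(np.mean(pred != y))

err = classification_error(Z, phases)
perm_errs = [classification_error(Z, rng.permutation(phases))
             for _ in range(200)]
# Task-informative states beat shuffled labels by a wide margin:
print(err < np.percentile(perm_errs, 1))  # True
```

    The black permutation lines in panel A play the role of `perm_errs` here; generated states from a good model should behave like `Z` rather than like shuffled data.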

    Example time series from an LDS-SSM and a PLRNN-SSM trained on the vdP system.

    A. Example time graph (left) and state space (right) for a trajectory generated by an LDS-SSM (red) trained on the vdP system (true vdP trajectories in green). Trajectories from an LDS almost inevitably decay toward a fixed point over time (or diverge). B. Trajectories generated by a trained PLRNN-SSM, in contrast, closely follow the vdP system's original limit cycle.
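    For reference, the van der Pol (vdP) system itself can be simulated with a basic RK4 integrator (mu = 1 and the step size are assumptions for illustration; the caption does not specify the settings used). Unlike any linear system, its orbit neither decays to the fixed point at the origin nor diverges, but settles onto a stable limit cycle:

```python
import numpy as np

def vdp(state, mu=1.0):
    """Van der Pol vector field: x'' - mu*(1 - x^2)*x' + x = 0."""
    x, y = state
    return np.array([y, mu * (1 - x ** 2) * y - x])

def rk4(f, state, dt):
    """One classical Runge-Kutta 4th-order step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.1, 0.0])          # start near the unstable fixed point
traj = []
for _ in range(20000):
    state = rk4(vdp, state, 0.01)
    traj.append(state)
traj = np.array(traj)

late = traj[-5000:]                   # well past the transient
print(late.min(0), late.max(0))       # bounded, x amplitude ~2
```

    An LDS trained on such data can only mimic the cycle transiently before spiraling in or out, which is the failure mode shown in panel A.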

    Illustration of DS reconstruction measures defined in state space (KLx) vs. on the time series (mean squared error; MSE).

    A. Two noise-free time series from the Lorenz equations started from slightly different initial conditions. Although the two time series (blue and yellow) initially stay close together (low MSE), they quickly diverge, yielding a very large discrepancy in terms of the MSE, although in truth they come from the very same system with the very same parameters. These problems are aggravated once noise is added to the system and initial conditions are not tightly matched (as is almost inevitable for empirically observed systems), rendering any measure based on direct matching between time series a relatively poor choice for assessing dynamical systems reconstruction beyond a couple of initial time steps. B. Example time series and state spaces from trained PLRNN-SSMs which either capture the chaotic structure of the Lorenz attractor quite well (left) or produce a simple limit cycle but not chaos (right). The reconstruction quality is correctly indicated by KLx (low on the left but high on the right), while the MSE between true (grey) and generated (orange) time series, on the contrary, would wrongly suggest that the right reconstruction (MSE = 1.4) is better than the left one (MSE = 2.48).
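    Panel A's point is easy to reproduce numerically: integrate the Lorenz system twice from initial conditions differing by 1e-6 (standard parameters, plain RK4; step size chosen for illustration) and watch the pointwise MSE jump from essentially zero to the scale of the attractor itself.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 3000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])       # near-identical initial condition
traj_a, traj_b = [], []
for _ in range(steps):
    a, b = rk4(lorenz, a, dt), rk4(lorenz, b, dt)
    traj_a.append(a); traj_b.append(b)
traj_a, traj_b = np.array(traj_a), np.array(traj_b)

sq = ((traj_a - traj_b) ** 2).mean(axis=1)   # pointwise squared error
early_mse, late_mse = float(sq[:200].mean()), float(sq[-200:].mean())
print(early_mse, late_mse)  # tiny at first, then on the attractor's scale
```

    Both trajectories come from the identical system, yet the late-time MSE is enormous, which is exactly why a state-space distribution measure like KLx is used instead.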

    Links between properties of system dynamics captured by the PLRNN-BOLD-SSM and behavioral task performance.

    A. Average power spectra for PLRNN-generated time series when external inputs were excluded (left) and included (right), and for the original BOLD traces (yellow). M = 9 latent states were used in this analysis, as at this M the number of stable and unstable objects appeared to roughly plateau (S2A Fig). The left grey line marks the frequency of one entire task-sequence cycle (period 3⋅72 s = 216 s, i.e. .0046 Hz) and the right grey line that of one task and resting block (period 36 s + 36 s = 72 s, i.e. .0139 Hz). The peaks in the power spectra of the model-generated time series at these points indicate that the PLRNN has captured the periodic recurrence of single task blocks as well as of the whole task-block sequence in its limit cycle activity. B. Relation of the number of stable and unstable dynamical objects (see Methods) to behavioral performance for models without external inputs (M = 9; see S2B Fig for data pooled across M = 2…10). Low- and high-performance groups were formed by median splits over correct responses during the CMT. A repeated-measures ANOVA with between-subject factor 'performance' ('low' vs. 'high' percentage of correct responses) and within-subject factor 'stability' ('stable' vs. 'unstable' objects) revealed a significant 2-way 'performance × stability' interaction (F(1,24) = 5.28, p = .031). We focused on the CMT for this analysis since performance on the other two tasks was close to ceiling (although results still hold when averaging across tasks, p = .012).
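    The spectral-peak check in panel A can be sketched with a synthetic signal carrying the two task periodicities from the caption (216 s and 72 s), sampled at an assumed fMRI TR of 2 s (the TR is an assumption for this sketch, not taken from the caption). The two strongest FFT frequencies land at 1/216 ≈ .0046 Hz and 1/72 ≈ .0139 Hz:

```python
import numpy as np

tr = 2.0                                    # assumed sampling interval (s)
t = np.arange(0, 216 * 10, tr)              # ten full task-sequence cycles
sig = (np.sin(2 * np.pi * t / 216) +        # whole-sequence periodicity
       0.5 * np.sin(2 * np.pi * t / 72))    # single task+rest block

freqs = np.fft.rfftfreq(t.size, d=tr)       # frequency axis in Hz
power = np.abs(np.fft.rfft(sig)) ** 2
top = freqs[np.argsort(power)[-2:]]         # two strongest frequencies
print(sorted(np.round(top, 4)))             # ≈ [0.0046, 0.0139]
```

    Finding these same peaks in the model-generated spectra is what licenses the conclusion that the PLRNN's limit cycles encode the block structure of the task.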

    Evaluation of training protocol and KL measure on dynamical systems benchmarks.

    A. True trajectory from the chaotic Lorenz attractor (with parameters s = 10, r = 28, b = 8/3). B. Distribution of KLx (Eq 9) across all samples, binned at .05, for the PLRNN-SSM (black) and LDS-SSM (red). For the PLRNN-SSM, around 26% of these samples (grey shaded area, pooled across different numbers of latent states M) captured the butterfly structure of the Lorenz attractor well (see also D). Unsurprisingly, the LDS completely failed to reconstruct the Lorenz attractor. C. Estimated Lyapunov exponents of reconstructed Lorenz systems for the PLRNN-SSM (black) and LDS-SSM (red) (estimated exponent for the true Lorenz system ≈ .9, cyan line). A significant positive correlation between the absolute deviation in Lyapunov exponents of true and reconstructed systems and KLx (r = .27) indicates that KLx (Eq 9) captures the dynamics (we note that measuring the correlation between power spectra comes with its own problems, however). For the LDS-SSM, in contrast, all power-spectra correlations and KLx measures were poor. H. Same as D for the van der Pol system. Note that even reconstructed systems with high KLx values may capture the limit cycle behavior and thus the basic topological structure of the underlying true system (in general, the 2-dimensional vdP system is likely easier to reconstruct than the chaotic Lorenz system); vice versa, low KLx values do not guarantee that the reconstructed system exhibits the same frequencies.
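    The reference value ≈ .9 in panel C is the largest Lyapunov exponent of the Lorenz system, which can be estimated with a standard Benettin-style two-trajectory method (the caption does not say which estimator was used; this is a generic sketch): evolve a reference and a slightly perturbed trajectory, accumulate the log growth of their separation, and renormalize the perturbation after every step.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(f, s, dt):
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, d0 = 0.01, 1e-8
ref = np.array([1.0, 1.0, 1.0])
for _ in range(2000):                      # discard transient onto attractor
    ref = rk4(lorenz, ref, dt)

pert = ref + np.array([d0, 0.0, 0.0])
log_growth, n = 0.0, 30000
for _ in range(n):
    ref, pert = rk4(lorenz, ref, dt), rk4(lorenz, pert, dt)
    d = np.linalg.norm(pert - ref)
    log_growth += np.log(d / d0)
    pert = ref + (pert - ref) * (d0 / d)   # renormalize the separation
lyap = log_growth / (n * dt)
print(round(lyap, 2))                      # ≈ .9
```

    Comparing such an estimate for a reconstructed system against the true system's value is the deviation plotted against KLx in panel C.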

    Analysis pipeline.

    Top: analysis pipeline for simulated data. From the two benchmark systems (van der Pol and Lorenz), noisy trajectories were drawn and handed to the PLRNN-SSM inference algorithm. With the inferred model parameters, completely new trajectories were generated and compared to the state space distribution over true trajectories via the Kullback-Leibler divergence KLx (see Eq 9). Bottom: analysis pipeline for experimental data. We used preprocessed fMRI data from human subjects undergoing a classic working memory n-back paradigm. First, nuisance variables, in this case related to movement, were collected. Then time series from regions of interest (ROI) were extracted, standardized, and filtered (in agreement with the study design). From these preprocessed time series we derived the first principal components and handed them to the inference algorithm (once including and once excluding variables indicating external stimulus presentations during the experiment). With the inferred parameters, the system was then run freely to produce new trajectories, which were compared to the state space distribution over the inferred trajectories via the Kullback-Leibler divergence KLz (see Eq 11).
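    The final comparison step of the pipeline, a KL divergence between state-space distributions, can be sketched with binned occupancy histograms (this captures the spirit of the KLx/KLz measures, not the paper's exact estimator): two trajectory sets from the same limit-cycle dynamics give a near-zero value, while a fixed-point trajectory gives a large one.

```python
import numpy as np

def binned_kl(traj_true, traj_gen, bins=10, eps=1e-10):
    """KL divergence between binned state-space occupancy histograms."""
    lo = np.minimum(traj_true.min(0), traj_gen.min(0))
    hi = np.maximum(traj_true.max(0), traj_gen.max(0))
    edges = [np.linspace(l, h, bins + 1) for l, h in zip(lo, hi)]
    p, _ = np.histogramdd(traj_true, bins=edges)
    q, _ = np.histogramdd(traj_gen, bins=edges)
    p = p.ravel() / p.sum() + eps
    q = q.ravel() / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(3)
t = np.linspace(0, 40 * np.pi, 4000)
cycle = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, .05, (4000, 2))
cycle2 = np.column_stack([np.cos(t + 1), np.sin(t + 1)]) + rng.normal(0, .05, (4000, 2))
point = rng.normal(0, .05, (4000, 2))     # trajectory stuck at a fixed point

kl_same, kl_diff = binned_kl(cycle, cycle2), binned_kl(cycle, point)
print(kl_same, kl_diff)  # small for matching dynamics, large otherwise
```

    Crucially, the phase-shifted second cycle would score terribly under a time-series MSE, yet its state-space distribution, and hence its KL value, matches almost perfectly.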

    Exemplary DS reconstruction in a sample subject.

    A. Top: latent trajectories generated by the prior model, projected down to the first 3 principal components for visualization, for a model including external inputs and M = 6 latent states. Task separation is clearly visible in the generated state space (color-coded as in the legend), i.e. different cognitive demands are associated with different regions of state space (hard step-like changes in state are caused by the external inputs). Bottom: observed time series (black) and their predictions based on the generated trajectories (red, with 90% CI in grey) for the same subject. See also S1 Video. B. Same as A for the same subject in a PLRNN without external inputs. *BA = Brodmann area, Le/Re = left/right, CRT = choice reaction task, CDRT = continuous delayed response task, CMT = continuous matching task.
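    The projection onto the first 3 principal components used for these visualizations is a generic SVD-based PCA step (the trajectory below is synthetic, purely to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(4)
T, M = 500, 6
Z = rng.normal(size=(T, M)) @ rng.normal(size=(M, M))  # correlated states

Zc = Z - Z.mean(axis=0)                  # center each latent dimension
U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
proj = Zc @ Vt[:3].T                     # T x 3 trajectory for 3-D plotting

explained = float((S[:3] ** 2).sum() / (S ** 2).sum())
print(proj.shape, round(explained, 2))   # (500, 3) and the variance kept
```

    Since only 3 of M dimensions are kept, the plotted trajectories are a lossy view of the latent dynamics; the fraction of variance retained indicates how faithful that view is.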