
Performance of decoding-based estimators depends on the dimensionality of the response trajectories and on the number of response trajectory samples.


Performance of various model-free decoding estimators (colored lines) for Examples 1 (A, D), 2 (B, E), and 3 (C, F), compared to the MAP bound, IMAP (black line), as a function of input trajectory dimension, d (at fixed N = 1000), in A, B, C; or as a function of the number of samples per input condition, N (at fixed d = 100), in D, E, F. Error bars are std over 20 replicate estimations. Decoding estimators: linear SVM, ISVM(lin) (orange); radial basis function SVM, ISVM(rbf) (blue); Gaussian decoder with diagonal regularization, IGD (yellow; see S2 Fig for the effects of covariance matrix regularization and signal filtering on Gaussian decoder estimates); multi-layer perceptron neural network, INN (green). The dashed vertical orange line marks the d ≤ 100 regime typical of current experiments. Note that while the amount of information must in principle increase monotonically with d, the amount that decoders can actually extract from a limited number of samples, N, has no such guarantee. The decrease at high d in the Gaussian decoder estimate (A, B, C) and the neural network estimate (C) occurs because the number of decoder parameters grows with d while the number of samples stays fixed, leading to overfitting that regularization cannot fully compensate for, and thus to a loss of performance on the test data.
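The decoding estimators in the figure share a common recipe: train a classifier to decode the input condition from the response trajectory, then convert its held-out confusion matrix into a lower-bound estimate of the mutual information. The following is a minimal illustrative sketch of that recipe using a linear SVM on synthetic data; the function name, the synthetic data, and the train/test split are assumptions for the example, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def decoding_info_estimate(X, y, seed=0):
    """Lower-bound information estimate (bits) from a decoder.

    Trains a linear SVM on half the data, then computes the mutual
    information between true and decoded labels on the held-out half.
    Hypothetical illustration of the decoding-estimator recipe.
    """
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.5, random_state=seed)
    clf = SVC(kernel="linear").fit(Xtr, ytr)
    yhat = clf.predict(Xte)
    # Empirical joint distribution of (true label, decoded label).
    k = len(np.unique(y))
    joint = np.zeros((k, k))
    for t, p in zip(yte, yhat):
        joint[t, p] += 1
    joint /= joint.sum()
    pt = joint.sum(axis=1, keepdims=True)  # marginal over true labels
    pp = joint.sum(axis=0, keepdims=True)  # marginal over decoded labels
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pt @ pp)[nz])).sum())

# Synthetic demo: two input conditions (at most 1 bit of information),
# d-dimensional responses with signal along a single dimension.
rng = np.random.default_rng(0)
d, N = 100, 1000
y = rng.integers(0, 2, size=N)
X = rng.normal(size=(N, d))
X[:, 0] += 3.0 * y  # signal lives in the first dimension only
print(decoding_info_estimate(X, y))
```

Because the estimate is computed on held-out data, it exhibits exactly the behavior described above: as d grows at fixed N, the classifier has more parameters to fit, overfits the training half, and the extracted information can decrease even though the true information cannot.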
