Overview of the datasets with training and test data used in the competition.
Different spike inference metrics reach similar conclusions.
A. Area under the curve (AUC) of the inferred spike rate used as a binary predictor for the presence of spikes (evaluated at 25 Hz, 40 ms bins) on the test set. Colors indicate different datasets. Black dots are means across all N = 32 cells in the test set. Colored dots are jittered for better visibility. STM: spike-triggered mixture model [15]; f-oopsi: fast non-negative deconvolution [9]. B. Information gain of the inferred spike rate about the true spike rate on the test set (evaluated at 25 Hz, 40 ms bins).
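To make the AUC metric concrete, here is a minimal sketch assuming NumPy and scikit-learn; the 100 Hz input sampling rate and the function names are illustrative placeholders, not taken from the paper. The inferred rate and the true spike train are summed into 40 ms bins, each bin is labeled positive if it contains at least one spike, and the binned rate is scored as a binary predictor of spike presence.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bin_to_rate(x, fs_in=100.0, fs_out=25.0):
    """Down-bin a signal sampled at fs_in to fs_out by summing within bins."""
    factor = int(round(fs_in / fs_out))   # e.g. 100 Hz -> 25 Hz: 4 samples per 40 ms bin
    n = (len(x) // factor) * factor       # drop a trailing partial bin, if any
    return x[:n].reshape(-1, factor).sum(axis=1)

def spike_detection_auc(inferred_rate, true_spikes, fs_in=100.0):
    """AUC of the inferred rate as a binary predictor of spike presence in 40 ms bins."""
    rate = bin_to_rate(inferred_rate, fs_in)
    has_spike = bin_to_rate(true_spikes, fs_in) > 0  # bin is positive if it contains any spike
    return roc_auc_score(has_spike, rate)
```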
Top algorithms make highly correlated predictions.
A, B. Example cells from the test set for dataset 1 (OGB-1) and dataset 3 (GCaMP6s) show highly similar predictions for most algorithms. C. Average correlation coefficients between the predictions of different algorithms across all cells in the test set at 25 Hz (40 ms bins).
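A sketch of how such a prediction-similarity matrix could be computed, assuming NumPy; the function and variable names are illustrative. For each cell, the algorithms' binned predictions are stacked and pairwise Pearson correlations taken; the per-cell matrices would then be averaged across the test set.

```python
import numpy as np

def pairwise_prediction_correlation(preds):
    """Pearson correlations between each pair of algorithms' predictions for one cell.

    preds: dict mapping algorithm name -> 1-D array of inferred rates,
    already binned to 25 Hz (40 ms bins).
    Returns the algorithm names and the correlation matrix in that order.
    """
    names = sorted(preds)
    mat = np.corrcoef(np.vstack([preds[n] for n in names]))  # rows = algorithms
    return names, mat
```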
Summary of algorithm performance.
Δ correlation is computed as the mean difference in correlation coefficient compared to the STM algorithm. Δ var. exp. in % is computed as the mean relative improvement in variance explained (r²). Note that since variance explained is a nonlinear function of correlation, algorithms can be ranked differently according to the two measures. All means are taken over N = 32 recordings in the test set, except for training correlation, which is computed over N = 60 recordings in the training set.
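A small numerical example of why the two measures can rank algorithms differently: the same absolute gain in correlation translates into a different relative gain in variance explained depending on the baseline. The numbers below are illustrative, not taken from the competition results.

```python
# Same Δ correlation, different Δ var. exp.:
r_stm, r_new = 0.40, 0.50
print(r_new - r_stm)                            # Δ correlation = 0.10
print(100 * (r_new**2 - r_stm**2) / r_stm**2)   # Δ var. exp. ≈ +56%

r_stm, r_new = 0.80, 0.90
print(r_new - r_stm)                            # Δ correlation = 0.10
print(100 * (r_new**2 - r_stm**2) / r_stm**2)   # Δ var. exp. ≈ +27%
```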
Overview of the different strategies used by DNN-based algorithms.
Architecture briefly summarizes the main components. conv: convolutional layers, typically with a non-linearity; lstm: recurrent long short-term memory unit; residual: residual blocks; max: max-pooling layers; inception: inception cells. For details, refer to the descriptions of the algorithms in the supplementary material.
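For orientation only, a minimal PyTorch sketch of one such conv + lstm combination; the layer sizes, kernel widths, and class name are invented here and do not correspond to any specific entrant's network.

```python
import torch
import torch.nn as nn

class ConvLSTMSpikeNet(nn.Module):
    """Hypothetical conv + lstm spike-inference network (illustrative sizes)."""

    def __init__(self, hidden=64):
        super().__init__()
        # 1-D convolutions over the calcium trace, each followed by a non-linearity
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=31, padding=15), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=15, padding=7), nn.ReLU(),
        )
        # recurrent LSTM integrates temporal context across the trace
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        # per-time-step readout; softplus keeps the predicted rate non-negative
        self.readout = nn.Sequential(nn.Linear(hidden, 1), nn.Softplus())

    def forward(self, x):                    # x: (batch, time) calcium trace
        h = self.conv(x.unsqueeze(1))        # -> (batch, 32, time)
        h, _ = self.lstm(h.transpose(1, 2))  # -> (batch, time, hidden)
        return self.readout(h).squeeze(-1)   # -> (batch, time) inferred rate
```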