
    Role of the fast synaptic dynamics: depending on the speed of the synaptic dynamics defined by , the locomotion properties change drastically.

    Depicted is the distance traveled by the robot in 10 min of simulated time on an empty plane. The inset gives a close-up view for low , demonstrating that locomotion starts only once a certain threshold value is exceeded. Shown are the mean and standard deviation of 10 runs each. Update frequency 25 Hz.

    Parameter similarity for the behavior in different environments (Fig. 9).

    Plotted are the results of a hierarchical clustering based on the difference between the parameters in each of the simulations (averaged over time). For each of the four environments there are three initial poses: (straight upright) and slanted to the front. The parameters for runs in the same environment are clustered together. This supports the observation that the embodiment plays an essential role in the generation of behavior. More importantly, the physical conditions are reflected in the parameters and are thus internalized. As the distance we used the squared norm of the difference of the absolute values of the matrix elements; the absolute values were used because rotation matrices are a common structure in the parameters, and there the same qualitative behavior is obtained with inverted signs. Parameters: , , update frequency 50 Hz.
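    The clustering described in this caption can be reproduced in outline. The following Python sketch computes the sign-invariant distance (squared norm of the difference of the absolute parameter values) between time-averaged parameter matrices and feeds it to an average-linkage hierarchical clustering; the matrices, run labels, and linkage method are illustrative placeholders, not the paper's data or exact settings.

```python
# Sketch: hierarchical clustering of controller parameter matrices using the
# distance described in the caption, i.e. the squared norm of the difference
# of the absolute values of the matrix elements (sign-invariant).
# All data below are synthetic placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def param_distance(A, B):
    """Squared Frobenius norm of |A| - |B|, insensitive to inverted signs."""
    return np.sum((np.abs(A) - np.abs(B)) ** 2)

# Hypothetical time-averaged parameter matrices, one per (environment, pose) run.
rng = np.random.default_rng(0)
runs = {f"env{e}_pose{p}": rng.normal(size=(4, 4)) for e in range(4) for p in range(3)}

labels = list(runs)
n = len(labels)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = param_distance(runs[labels[i]], runs[labels[j]])

# Average-linkage clustering on the condensed distance matrix.
Z = linkage(squareform(D, checks=False), method="average")
dendrogram(Z, labels=labels, no_plot=True)  # inspect Z, or plot with matplotlib
```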

    Probability density distributions for different observation time windows of a stochastic process in an asymmetric double-well potential.

    The mean first passage time for switching between the wells is one characteristic time constant of the process [53] (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0063400#pone.0063400-Risken1), increasing exponentially with the barrier height. When the process is observed in a window of length , the distribution shown in (A) is obtained. In that situation, the TiPI is maximal if the wells are of equal depth (). However, with windows of length , the system state will predominantly be in one of the wells, generating the distributions shown in (B) and (C). Gradient ascent on the TiPI decreases the well depth as long as the probability mass is still concentrated in that well. This is what drives the hysteresis cycle depicted in Fig. 2.
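    A minimal simulation can illustrate the effect described here. The sketch below integrates an overdamped Langevin process in an assumed asymmetric double-well potential with the Euler-Maruyama scheme and histograms the state over a short and a long observation window; the potential, noise strength, and window lengths are assumptions chosen only to reproduce the qualitative picture, not the paper's parameters.

```python
# Sketch: empirical state distributions of an overdamped Langevin process in an
# asymmetric double-well potential, observed through windows of different
# lengths. The potential U and all constants are assumptions for illustration.
import numpy as np

def drift(x, a=0.25):
    # -dU/dx for U(x) = x**4/4 - x**2/2 + a*x  (a != 0 makes the wells unequal)
    return -(x**3 - x + a)

def simulate(n_steps, dt=1e-3, noise=0.5, x0=-1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        x[t] = x[t - 1] + drift(x[t - 1]) * dt + noise * np.sqrt(dt) * rng.normal()
    return x

traj = simulate(500_000)

# Short window: samples stay near one well (as in panels B/C); long window:
# both wells are visited (as in panel A). Printed is the modal bin edge.
for window in (5_000, 500_000):
    hist, edges = np.histogram(traj[:window], bins=60, range=(-2, 2), density=True)
    print(window, edges[np.argmax(hist)])
```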

    The time window and the error propagation dynamics used for calculating the TiPI, eq. (11).

    In principle, the process is considered many times, always with the same starting value but with different realizations of the noise . Note that, when using the one-shot gradients, only one realization is needed.
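    To make the ensemble picture concrete, here is a toy sketch of rolling out a stochastic process many times from the same starting value with independent noise realizations, versus using a single rollout; the linear dynamics and constants are placeholders and do not implement the paper's eq. (11).

```python
# Sketch: ensemble of rollouts from a fixed starting state with independent
# noise realizations (the conceptual object behind the windowed quantity),
# contrasted with a single "one-shot" rollout. Dynamics f and sigma are
# illustrative placeholders, not the paper's system.
import numpy as np

def rollout(x0, steps, rng, f=lambda x: 0.9 * x, sigma=0.1):
    x, xs = x0, []
    for _ in range(steps):
        x = f(x) + sigma * rng.normal()
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(1)
x0, steps = 1.0, 20
ensemble = np.array([rollout(x0, steps, rng) for _ in range(1000)])  # many realizations
one_shot = rollout(x0, steps, rng)                                   # single realization
print(ensemble.var(axis=0)[-1], one_shot[-1])  # ensemble statistic vs. one sample
```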

    The Armband.

    The robot has 12 hinge and 6 slider joints, each actuated by a servo motor and equipped with a proprioceptive sensor measuring the joint angle or slider length. The robot is strongly underactuated, so it cannot take on a wheel-like shape in which locomotion would be trivial.

    The Humanoid robot in four different scenarios.

    (A) Normal environment with flat ground. (B) The robot hangs from a bungee-like spring. (C) The robot is attached to a high bar. (D) The robot has fallen into a narrow pit.

    Dimensionality of behavior on different time scales.

    Humanoid robot in the bungee setup running for 40 min with different control settings. The sensor data are partitioned into chunks of fixed length; the graph depicts the effective dimension as a function of chunk length for the different settings. To test the method we start with a uniformly distributed noise signal for the motor commands (“noise signal”); as expected, the observed dimension is maximal. The sensor values produced by that random controller show a lower dimension (“noise ctrl.”), as expected from the low-pass filtering property of the mechanical system. All other cases use the TiPI-maximization controller with different update rates . In particular, the comparison with the case demonstrates that the exploration dynamics produces more complex behaviors than any fixed controller.
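    The chunk-wise dimensionality analysis can be sketched as follows, assuming a participation-ratio estimate of effective dimension (the caption does not specify the exact estimator used); the synthetic sensor data and channel count are placeholders.

```python
# Sketch: effective dimension of sensor data as a function of chunk length,
# assuming a participation-ratio definition of dimensionality.
import numpy as np

def participation_ratio(chunk):
    """(sum lambda)^2 / sum lambda^2 over covariance eigenvalues of one chunk."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(chunk, rowvar=False)), 0.0, None)
    return lam.sum() ** 2 / np.maximum((lam ** 2).sum(), 1e-12)

def dimension_vs_chunk_length(data, chunk_lengths):
    """data: (time_steps, n_sensors); returns mean effective dimension per length."""
    out = {}
    for L in chunk_lengths:
        chunks = [data[i:i + L] for i in range(0, len(data) - L + 1, L)]
        out[L] = float(np.mean([participation_ratio(c) for c in chunks]))
    return out

# Hypothetical sensor recording: 40 min at 50 Hz with 20 proprioceptive channels.
rng = np.random.default_rng(0)
sensors = rng.normal(size=(40 * 60 * 50, 20))
print(dimension_vs_chunk_length(sensors, chunk_lengths=[50, 500, 5000]))
```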

    Spike-history dependencies affect decoding performance.

    A: Shuffles of responses to repeated stimulus presentations remove different types of correlations but preserve average locking to the stimulus (PSTH), and thus stimulus-induced correlations. B: A repeated stimulus fragment (red trace), nonlinear kernelized decoder predictions using real responses (green), and using responses without different types of correlations (gray); shown is the prediction mean ± SD over repeats. C: Increase in decoding error (MSE) when spike-history dependencies or noise correlations are removed (average ± SEM across sites); percentages report fractional differences relative to the original performance. D: Spike count distributions for a single example cell. Removing spike-history dependencies broadens the distributions, in particular in constant epochs. Dashed line = expectation for a fully randomized spike train with a matched firing rate. E: Variance-to-mean ratio F of spike count distributions for spike trains with and without spike-history dependencies. Each point is a cell that contributes most to decoding at a particular site (when the same cell contributes to multiple sites, average ± SD across sites is shown).
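    The shuffle and the variance-to-mean ratio can be illustrated with a small sketch; the array layout (repeats × time bins × cells), bin counts, and Poisson surrogate data are assumptions, and the shuffle shown is a generic across-repeat permutation that preserves each cell's PSTH rather than the paper's specific shuffling procedures.

```python
# Sketch: removing correlations by shuffling responses across repeated stimulus
# presentations while preserving the PSTH, plus the variance-to-mean ratio F of
# spike counts. Array layout (repeats x time_bins x cells) is an assumption.
import numpy as np

def shuffle_across_repeats(spikes, rng):
    """Independently permute the repeat index in every (time_bin, cell) slot.

    This destroys within-trial noise correlations and spike-history dependencies
    but leaves each cell's trial-averaged rate (PSTH) unchanged.
    """
    shuffled = spikes.copy()
    n_rep, n_bins, n_cells = spikes.shape
    for b in range(n_bins):
        for c in range(n_cells):
            shuffled[:, b, c] = spikes[rng.permutation(n_rep), b, c]
    return shuffled

def fano_factor(counts):
    """Variance-to-mean ratio of spike counts across repeats, per cell."""
    return counts.var(axis=0) / np.maximum(counts.mean(axis=0), 1e-12)

rng = np.random.default_rng(0)
spikes = rng.poisson(0.2, size=(30, 1000, 5))   # hypothetical binned spike trains
shuffled = shuffle_across_repeats(spikes, rng)

assert np.allclose(spikes.mean(axis=0), shuffled.mean(axis=0))  # PSTH preserved
print(fano_factor(spikes.sum(axis=1)))          # F per cell from spike counts
```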

    Nonlinear decoding outperforms linear decoding.

    A: Luminance trace (red) with linear (blue), nonlinear KRR (green), and neural network (grey) predictions. B: Average decoder performance (± SD across sites) achievable using increasing numbers of cells with the highest L1 filter norm. For nonlinear decoding, “All” is the optimal subset that maximizes performance (S7 Fig, http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1006057#pcbi.1006057.s007). Since the neural network (grey point with an error bar) simultaneously decodes the movie at all sites, it only makes sense to train it using “All” cells. C: Average ROC across all testing movie frames. D: Fractional improvement (average ± SEM across sites) of nonlinear KRR versus linear decoders for test stimuli with different numbers of discs. All decoders were trained only on the 10-disc stimulus. E: Decoding error (MSE; average ± SEM across sites) in fluctuating and constant epochs is significantly larger for linear decoders (p<0.001) relative to nonlinear KRR and the neural network.
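    A minimal comparison of a linear and a kernelized nonlinear decoder, in the spirit of this figure, might look like the following; it uses scikit-learn's Ridge and KernelRidge as stand-ins for the paper's decoders, and the spike counts, target trace, and hyperparameters are synthetic.

```python
# Sketch: linear versus nonlinear kernelized decoding of a luminance trace from
# binned population responses. Data and hyperparameters are synthetic; Ridge and
# KernelRidge are stand-ins, not the paper's exact decoders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_frames, n_cells = 3000, 91
X = rng.poisson(1.0, size=(n_frames, n_cells)).astype(float)          # binned spike counts
true_w = rng.normal(size=n_cells)
y = np.tanh(X @ true_w / n_cells) + 0.05 * rng.normal(size=n_frames)  # luminance-like target

split = 2400
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

linear = Ridge(alpha=1.0).fit(X_tr, y_tr)
nonlinear = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0 / n_cells).fit(X_tr, y_tr)

for name, model in [("linear", linear), ("nonlinear KRR", nonlinear)]:
    print(name, mean_squared_error(y_te, model.predict(X_te)))
```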

    Linear decoding of a complex movie.

    A: An example stimulus frame. At each site (red dots = partially shown 20×20 grid) the stimulus was convolved with a spatial Gaussian filter (red circle = 1σ). Typical RGC receptive field center size shown in gray. B: Responses of 91 RGCs, with the 750 ms decoding window overlaid in blue. C: Three example luminance traces (red) and the linear decoders’ predictions (blue). D: Decoded frame (same as in A) reconstructed from 20×20 separately decoded traces. Disc contours of the original frame shown for reference in green. E: RF centers of the 91 cells (black dots = centers of fitted ellipses). RF centers overlapping a chosen site (red dot) are highlighted in blue. F: Performance of the linear decoders across space, as Fraction of Variance Explained (FVE). Black dots as in E; black contour is the boundary FVE = 0.4. G: Performance of the linear decoders (FVE) across sites as a function of cell coverage (grayscale = conditional histograms, red dots = means, error bars = ± SD). H: Average decoding error across sites (MSE ± SD) of 10-disc-trained decoders, tested on withheld stimuli with different numbers of discs. I: Cells (black dots = RF center positions) contributing to the decoding at two example sites (red circles); decoding filters shown below. For each site, contributing cells (highlighted in red and joined to the site) account for at least half of the total L1 norm. J: Decoding field of a single cell (here, evaluated over a denser 50×50 grid and normalized to unit maximal variance); the cell’s RF center shown in black.
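    A per-site linear decoder with a finite decoding window and the FVE score can be sketched as below; the bin size, window length (15 bins as a stand-in for the 750 ms window), least-squares fit, and synthetic data are assumptions for illustration.

```python
# Sketch: per-site linear decoding with a finite decoding window, scored by the
# Fraction of Variance Explained (FVE). All data and settings are synthetic.
import numpy as np

def lagged_design(spikes, n_lags):
    """Stack spike counts from the past n_lags bins for every cell.

    spikes: (time_bins, n_cells) -> X: (time_bins - n_lags + 1, n_cells * n_lags)
    """
    T, n_cells = spikes.shape
    cols = [spikes[lag:T - n_lags + 1 + lag] for lag in range(n_lags)]
    return np.concatenate(cols, axis=1)

def fve(y_true, y_pred):
    """Fraction of Variance Explained: 1 - MSE / Var(y_true)."""
    return 1.0 - np.mean((y_true - y_pred) ** 2) / np.var(y_true)

rng = np.random.default_rng(0)
T, n_cells, n_lags = 6000, 91, 15               # e.g. 15 bins of 50 ms ~ 750 ms window
spikes = rng.poisson(0.5, size=(T, n_cells)).astype(float)
luminance = rng.normal(size=T)                  # hypothetical target trace at one site

X = np.c_[lagged_design(spikes, n_lags), np.ones(T - n_lags + 1)]  # filters + offset
y = luminance[n_lags - 1:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ w

print("FVE:", fve(y, y_hat))
cell_l1 = np.abs(w[:-1]).reshape(n_lags, n_cells).sum(axis=0)      # L1 norm per cell
print("top contributing cells:", np.argsort(cell_l1)[-5:])
```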