
    The visual perception experiment of [<b>21</b>] that demonstrates “explaining away” and its corresponding Bayesian network model.

<p><b>A</b>) Two visual stimuli, each exhibiting the same luminance profile in the horizontal direction, differ only in their contours, which suggest different 3D shapes (flat versus cylindrical). This in turn influences our perception of the reflectance of the two halves of each stimulus (a step in the reflectance at the middle line, versus uniform reflectance): the cylindrical 3D shape “explains away” the reflectance step. <b>B</b>) The Bayesian network that models this effect represents the joint probability distribution of four binary RVs. The relative reflectance of the two halves is either different (value 1) or the same (value 0). The perceived 3D shape can be cylindrical (value 1) or flat (value 0). The relative reflectance and the 3D shape are direct causes of the shading (luminance change) of the surfaces, which either has the profile shown in panel A (value 1) or a different one (value 0). The 3D shape of the surfaces causes different perceived contours, flat (value 0) or cylindrical (value 1). The observed variables (evidence) are the contour and the shading. Subjects infer the marginal posterior probability distributions of the relative reflectance and the 3D shape based on this evidence. <b>C</b>) The RVs are represented in our neural implementations by principal neurons. Each spike of a principal neuron sets its RV to 1 for a time period of fixed length. <b>D</b>) The structure of a network of spiking neurons that performs probabilistic inference for the Bayesian network of panel B through sampling from conditionals of the underlying distribution. Each principal neuron employs preprocessing to satisfy the NCC (neural computability condition), either through dendritic processing or through a dedicated preprocessing circuit.</p>
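Explaining away can be reproduced with a few lines of exact inference over this four-variable network. The following is a minimal sketch with made-up conditional probabilities (the paper's actual values are given in the table for Computer Simulation I below); the variable names r, s, l, c for reflectance, shape, shading and contour are ours, not the paper's.

```python
import itertools

# Illustrative CPDs (placeholder numbers) for the four binary RVs:
# r = reflectance step, s = cylindrical 3D shape, l = shading, c = contour.
def p_l(l, r, s):               # p(shading | reflectance, shape)
    p1 = 0.9 if (r or s) else 0.1
    return p1 if l == 1 else 1.0 - p1

def p_c(c, s):                  # p(contour | shape)
    p1 = 0.9 if s == 1 else 0.1
    return p1 if c == 1 else 1.0 - p1

def joint(r, s, l, c):          # uniform priors on r and s
    return 0.25 * p_l(l, r, s) * p_c(c, s)

def posterior_r(l_obs, c_obs):
    """Marginal posterior p(r = 1 | shading, contour) by enumeration."""
    num = sum(joint(1, s, l_obs, c_obs) for s in (0, 1))
    den = sum(joint(r, s, l_obs, c_obs)
              for r, s in itertools.product((0, 1), repeat=2))
    return num / den

# The shading alone suggests a reflectance step; the cylindrical contour
# "explains it away" by making the 3D-shape cause more probable.
print(posterior_r(l_obs=1, c_obs=0))   # flat contour        -> ~0.83
print(posterior_r(l_obs=1, c_obs=1))   # cylindrical contour -> ~0.52
```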

    Implementation 2 for the explaining away motif of the Bayesian network from <b>Fig. 1B</b>.

<p>Implementation 2 is the neural implementation with auxiliary neurons that uses the Markov blanket expansion of the log-odds ratio. There are 4 auxiliary neurons, one for each possible value assignment to the two RVs in the Markov blanket of the sampled RV. Each principal neuron of the Markov blanket connects to an auxiliary neuron directly if its RV has value 1 in the corresponding assignment, or via an inhibitory interneuron if its RV has value 0 in that assignment. The auxiliary neurons connect with a strong excitatory connection to the principal neuron of the sampled RV, and drive it to fire whenever any one of them fires. The larger gray circle represents the lateral inhibition between the auxiliary neurons.</p>
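The Markov blanket expansion behind this circuit writes the log-odds of the sampled RV as a sum with exactly one active term, namely the term for the current value assignment of the blanket RVs; each auxiliary neuron signals one such assignment. A minimal sketch, with a made-up conditional probability table and our own variable names:

```python
import itertools
import math

# Placeholder conditional p(x = 1 | b1, b2) for a binary RV x whose Markov
# blanket consists of the binary RVs b1, b2 (names and numbers are ours).
P_X1 = {(0, 0): 0.2, (0, 1): 0.9, (1, 0): 0.1, (1, 1): 0.5}

# One log-odds weight per auxiliary neuron, i.e. per blanket assignment.
log_odds = {a: math.log(P_X1[a] / (1.0 - P_X1[a]))
            for a in itertools.product((0, 1), repeat=2)}

def membrane_potential(b1, b2):
    # Exactly one indicator is active: the auxiliary neuron whose assignment
    # matches the current values of the blanket RVs.
    return sum(w * (a == (b1, b2)) for a, w in log_odds.items())

sigma = lambda u: 1.0 / (1.0 + math.exp(-u))
print(sigma(membrane_potential(0, 1)))  # recovers p(x = 1 | b1 = 0, b2 = 1) = 0.9
```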

    Values for the conditional probabilities in the ASIA Bayesian network used in Computer Simulation II.

<p>Values for the conditional probabilities in the ASIA Bayesian network used in Computer Simulation II.</p>

    The randomly generated Bayesian network used in Computer Simulation III.

<p>It contains 20 nodes, each with up to 8 parents. We consider the generic but more difficult case for probabilistic inference, in which evidence is entered at nodes in the lower part of the directed graph. The conditional probability tables of all RVs were also randomly generated.</p>
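A network of this kind is easy to construct programmatically. The sketch below is our own construction, not the paper's generator: drawing each node's parents only from earlier nodes in a fixed order guarantees the graph is acyclic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, max_parents = 20, 8

# Parents of node k are drawn from nodes 0..k-1, so the graph is a DAG.
parents, cpt = {}, {}
for k in range(n_nodes):
    n_par = int(rng.integers(0, min(k, max_parents) + 1))
    parents[k] = sorted(rng.choice(k, size=n_par, replace=False)) if n_par else []
    # One Bernoulli parameter p(x_k = 1 | parent assignment) per assignment.
    cpt[k] = rng.uniform(0.05, 0.95, size=2 ** n_par)

def ancestral_sample():
    """Draw one joint sample by sampling each node given its parents' values."""
    x = np.zeros(n_nodes, dtype=int)
    for k in range(n_nodes):
        idx = sum(int(x[p]) << i for i, p in enumerate(parents[k]))
        x[k] = rng.random() < cpt[k][idx]
    return x

print(ancestral_sample())
```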

    Spike raster of the spiking activity in one of the simulation trials described in <b>Fig. 7</b>.

<p>The spiking activity is from a simulation trial with the network of spiking neurons with alpha-shaped EPSPs. The evidence was switched after 3 s (red vertical line) by clamping the x-ray test RV to 1. In each block of rows the lowest spike train shows the activity of a principal neuron (see the left-hand side for the label of the associated RV), and the spike trains above it show the firing activity of the associated auxiliary neurons. After the switch the activity of the neurons for the x-ray test RV is not shown, since during this period the RV is clamped and the firing rate of its principal neuron is induced externally.</p>
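Under the convention of Fig. 1C, such a raster translates directly into samples: an RV has value 1 at time t exactly if its principal neuron fired within the preceding window of length τ. A minimal sketch of this readout, with made-up spike times, RV names, and window length:

```python
# Hypothetical window length tau and spike times; the readout convention
# (a spike sets the RV to 1 for tau seconds) is the one stated in Fig. 1C.
tau = 0.02  # seconds

def rv_value(spike_times, t):
    """RV is 1 at time t iff its principal neuron spiked in (t - tau, t]."""
    return int(any(t - tau < ts <= t for ts in spike_times))

spikes = {"disease_a": [0.011, 0.052, 0.081], "disease_b": [0.040]}

# Reading the joint state on a regular time grid yields the samples from
# which marginal posteriors are estimated.
grid = [0.005 * i for i in range(1, 21)]
samples = [{rv: rv_value(ts, t) for rv, ts in spikes.items()} for t in grid]
print(sum(s["disease_a"] for s in samples) / len(samples))  # estimated marginal
```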

    Implementation 5 for the Bayesian network shown in <b>Fig. 1B</b>.

<p>Implementation 5 is the implementation with dendritic computation that is based on the factorized expansion of the log-odds ratio. One of the RVs occurs in two factors of the distribution, and its principal neuron therefore receives the synaptic inputs belonging to each factor on separate groups of dendritic branches. Altogether the synaptic connections of this network of spiking neurons implement the graph structure of <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002294#pcbi-1002294-g001" target="_blank">Fig. 1D</a>.</p>
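The factorized expansion splits the log-odds of an RV given its Markov blanket into one additive term per factor in which the RV occurs, so each term can be computed on its own group of dendritic branches. A minimal sketch for the 3D-shape RV of the explaining-away motif, reusing the placeholder CPDs from the first sketch above (names and numbers are ours, not the paper's):

```python
import math

# Placeholder CPDs for the explaining-away motif (same as the earlier sketch):
# r = reflectance step, s = cylindrical 3D shape, l = shading, c = contour.
def p_l(l, r, s):               # p(shading | reflectance, shape)
    p1 = 0.9 if (r or s) else 0.1
    return p1 if l == 1 else 1.0 - p1

def p_c(c, s):                  # p(contour | shape)
    p1 = 0.9 if s == 1 else 0.1
    return p1 if c == 1 else 1.0 - p1

def log_odds_s(r, l, c):
    # s occurs in the factors p(s), p(l | r, s) and p(c | s); each log-ratio
    # term corresponds to one group of dendritic branches.
    prior = math.log(0.5 / 0.5)                     # factor p(s)
    shade = math.log(p_l(l, r, 1) / p_l(l, r, 0))   # factor p(l | r, s)
    cont  = math.log(p_c(c, 1) / p_c(c, 0))         # factor p(c | s)
    return prior + shade + cont

sigma = lambda u: 1.0 / (1.0 + math.exp(-u))
print(sigma(log_odds_s(r=0, l=1, c=1)))  # p(s = 1 | r = 0, l = 1, c = 1) ~ 0.99
```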

    Implementation 4 for the same explaining away motif as in <b>Fig. 2</b> and <b>4</b>.

<p>Implementation 4 is the neural implementation with auxiliary neurons and dendritic branches that uses the factorized expansion of the log-odds ratio. As in <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002294#pcbi-1002294-g002" target="_blank">Fig. 2</a> there is one auxiliary neuron for each possible value assignment to the two RVs in the Markov blanket. The connections from the principal neurons that carry the current values of these RVs to the auxiliary neurons are the same as in <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002294#pcbi-1002294-g002" target="_blank">Fig. 2</a>, and when these RVs change their value, the auxiliary neuron that corresponds to the new assignment fires. Each auxiliary neuron connects to the principal neuron at a separate dendritic branch, and an inhibitory interneuron connects to the same branch; the remaining auxiliary neurons connect to this interneuron. The function of the inhibitory interneuron is to shunt the active EPSP caused by a recent spike of the branch's auxiliary neuron when the value assignment of the Markov blanket RVs changes.</p>

    Results of Computer Simulation II.

<p>Probabilistic inference in the ASIA network with networks of spiking neurons that use different shapes of EPSPs. The simulated neural networks correspond to Implementation 2. The evidence is changed after 3 s by clamping the x-ray test RV to 1. The probabilistic inference query is to estimate the marginal posterior probabilities of the three disease RVs. <b>A</b>) The ASIA Bayesian network. <b>B</b>) The three different shapes of EPSPs: an alpha shape (green curve), a smooth plateau shape (blue curve) and the optimal rectangular shape (red curve). <b>C</b>) and <b>D</b>) Estimated marginal probabilities for each of the diseases, calculated from the samples generated during the first 800 ms of the respective inference query with alpha-shaped (green bars), plateau-shaped (blue bars) and rectangular (red bars) EPSPs, compared with the corresponding correct marginal posterior probabilities (black bars), for the evidence before the switch in C) and after the switch in D). The results are averaged over 20 simulations with different random initial conditions. The error bars show the unbiased estimate of the standard deviation. <b>E</b>) and <b>F</b>) The sum of the Kullback-Leibler divergences between the correct and the estimated marginal posterior probability for each of the diseases using alpha-shaped (green curve), plateau-shaped (blue curve) and rectangular (red curve) EPSPs, for the evidence before the switch in E) and after the switch in F). The results are averaged over 20 simulation trials, and the light green and light blue areas show the unbiased estimate of the standard deviation for the green and blue curves respectively (the standard deviation for the red curve is not shown). The estimated marginal posteriors are calculated at each time point from the samples gathered since the beginning of the simulation (or, for the second inference query, since the evidence switch).</p>
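Panels E and F amount to a running estimate of each disease marginal from the accumulated samples, compared to the exact values via a summed Kullback-Leibler divergence. A minimal sketch of that analysis (our reconstruction, not the paper's code; the sample array and the "correct" marginals below are placeholders):

```python
import numpy as np

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q), elementwise."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

rng = np.random.default_rng(1)
correct = np.array([0.1, 0.3, 0.7])        # placeholder posteriors, 3 diseases
samples = rng.random((4000, 3)) < correct  # placeholder binary samples over time

# Running marginal estimates from all samples since the start, and the summed
# KL divergence to the correct marginals as a function of time.
estimates = np.cumsum(samples, axis=0) / np.arange(1, 4001)[:, None]
kl_sum = kl_bernoulli(correct[None, :], estimates).sum(axis=1)
print(kl_sum[::1000])                      # divergence shrinks as samples accrue
```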

    Implementation 3 for the same explaining away motif as in <b>Fig. 2</b>.

<p>Implementation 3 is the neural implementation with dendritic computation that uses the Markov blanket expansion of the log-odds ratio. The principal neuron has 4 dendritic branches, one for each possible assignment of values to the two RVs in the Markov blanket of the sampled RV. The dendritic branches receive synaptic inputs from the principal neurons of the Markov blanket either directly or via an interneuron (analogously to <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002294#pcbi-1002294-g002" target="_blank">Fig. 2</a>). It is required that at any moment in time exactly one of the dendritic branches (the one whose index agrees with the current firing states of the Markov blanket neurons) generates dendritic spikes, whose amplitude at the soma determines the current firing probability of the principal neuron.</p>

    Values for the conditional probabilities in the Bayesian network in Fig. 1B used in Computer Simulation I.

<p>Values for the conditional probabilities in the Bayesian network in <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002294#pcbi-1002294-g001" target="_blank">Fig. 1B</a> used in Computer Simulation I.</p>