
    The visual perception experiment of [<b>21</b>] that demonstrates “explaining away” and its corresponding Bayesian network model.

    <b>A</b>) Two visual stimuli, each exhibiting the same luminance profile in the horizontal direction, differ only with regard to their contours, which suggest different 3D shapes (flat versus cylindrical). This in turn influences our perception of the reflectance of the two halves of each stimulus (a step in the reflectance at the middle line, versus uniform reflectance): the cylindrical 3D shape “explains away” the reflectance step. <b>B</b>) The Bayesian network that models this effect represents the probability distribution $p(z_1, z_2, z_3, z_4)$. The relative reflectance ($z_1$) of the two halves is either different ($z_1 = 1$) or the same ($z_1 = 0$). The perceived 3D shape can be cylindrical ($z_2 = 1$) or flat ($z_2 = 0$). The relative reflectance and the 3D shape are direct causes of the shading (luminance change) of the surfaces ($z_3$), which can have a profile like the one in panel A ($z_3 = 1$) or a different one ($z_3 = 0$). The 3D shape of the surfaces causes different perceived contours, flat ($z_4 = 0$) or cylindrical ($z_4 = 1$). The observed variables (evidence) are the contour ($z_4$) and the shading ($z_3$). Subjects infer the marginal posterior probability distributions of the relative reflectance and the 3D shape based on the evidence. <b>C</b>) The RVs are represented in our neural implementations by principal neurons $\nu_1, \ldots, \nu_4$. Each spike of $\nu_k$ sets the RV $z_k$ to 1 for a time period of length $\tau$. <b>D</b>) The structure of a network of spiking neurons that performs probabilistic inference for the Bayesian network of panel B through sampling from conditionals of the underlying distribution. Each principal neuron employs preprocessing to satisfy the neural computability condition (NCC), either by dendritic processing or by a preprocessing circuit.
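    To make the “explaining away” effect concrete, the following minimal sketch enumerates the 16 joint states of a network with this structure. The CPT values are hypothetical stand-ins (the paper's actual values appear in the table for Computer Simulation I); the variable names z1..z4 follow the caption above.

```python
import itertools

# Hypothetical priors and conditionals for the four binary RVs of Fig. 1B:
# z1 = reflectance step, z2 = 3D shape, z3 = shading, z4 = contour.
p_z1 = {1: 0.5, 0: 0.5}
p_z2 = {1: 0.5, 0: 0.5}
p_z3 = {(0, 0): 0.05, (0, 1): 0.90, (1, 0): 0.90, (1, 1): 0.95}  # p(z3=1 | z1, z2)
p_z4 = {0: 0.1, 1: 0.9}                                          # p(z4=1 | z2)

def joint(z1, z2, z3, z4):
    """p(z1, z2, z3, z4) = p(z1) p(z2) p(z3 | z1, z2) p(z4 | z2)."""
    p3 = p_z3[(z1, z2)] if z3 == 1 else 1.0 - p_z3[(z1, z2)]
    p4 = p_z4[z2] if z4 == 1 else 1.0 - p_z4[z2]
    return p_z1[z1] * p_z2[z2] * p3 * p4

def posterior_z1(evidence):
    """Marginal posterior p(z1 = 1 | evidence) by exact enumeration."""
    num = den = 0.0
    for z in itertools.product([0, 1], repeat=4):
        assign = dict(zip(['z1', 'z2', 'z3', 'z4'], z))
        if any(assign[k] != v for k, v in evidence.items()):
            continue
        p = joint(*z)
        den += p
        if assign['z1'] == 1:
            num += p
    return num / den

# Shading alone suggests a reflectance step; adding the cylindrical
# contour as evidence lowers that belief -- explaining away.
print(posterior_z1({'z3': 1}))           # ~0.66
print(posterior_z1({'z3': 1, 'z4': 1}))  # ~0.54
```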

    Values for the conditional probabilities in the ASIA Bayesian network used in Computer Simulation II.


    The randomly generated Bayesian network used in Computer Simulation III.

    It contains 20 nodes. Each node has up to 8 parents. We consider the generic but, for probabilistic inference, more difficult case in which evidence is entered for nodes in the lower part of the directed graph. The conditional probability tables were also randomly generated for all RVs.
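    As an illustration of how such a network could be constructed, here is a minimal sketch that draws a random DAG over 20 binary RVs with at most 8 parents per node and fills in random CPTs. The generation procedure (seed, parent-count distribution) is an assumption, not the paper's.

```python
import itertools
import random

random.seed(0)
N, MAX_PARENTS = 20, 8

# Draw parents only from earlier nodes so the graph is acyclic.
parents = {
    k: sorted(random.sample(range(k), min(k, random.randint(0, MAX_PARENTS))))
    for k in range(N)
}

# Random CPTs: one Bernoulli parameter p(z_k = 1 | parents) for every
# assignment of values to the parents of node k.
cpt = {
    k: {pa: random.random()
        for pa in itertools.product([0, 1], repeat=len(parents[k]))}
    for k in range(N)
}

def sample_forward():
    """Ancestral sampling of one joint configuration from the network."""
    z = {}
    for k in range(N):
        pa = tuple(z[j] for j in parents[k])
        z[k] = int(random.random() < cpt[k][pa])
    return z

print(sample_forward())
```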

    Implementation 2 for the explaining away motif of the Bayesian network from <b>Fig. 1B</b>.

    Implementation 2 is the neural implementation with auxiliary neurons that uses the Markov blanket expansion of the log-odd ratio. There are 4 auxiliary neurons, one for each possible value assignment to the RVs $z_2$ and $z_3$ in the Markov blanket of $z_1$. The principal neuron $\nu_2$ ($\nu_3$) connects to an auxiliary neuron directly if $z_2$ ($z_3$) has value 1 in the assignment encoded by that auxiliary neuron, or via an inhibitory interneuron if it has value 0. The auxiliary neurons connect with a strong excitatory connection to the principal neuron $\nu_1$, and drive it to fire whenever any one of them fires. The larger gray circle represents the lateral inhibition between the auxiliary neurons.
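    The computational content of this wiring can be sketched as follows: lateral inhibition ensures that exactly one auxiliary unit, the one matching the current blanket state $(z_2, z_3)$, is active, and that unit supplies the log-odd ratio $\log p(z_1{=}1 \mid z_2, z_3) / p(z_1{=}0 \mid z_2, z_3)$ required by the NCC. The conditional probabilities below are hypothetical.

```python
import math

def log_odds_table(cond_p1):
    """Map each blanket assignment (z2, z3) to log p(z1=1|.)/p(z1=0|.)."""
    return {a: math.log(p / (1.0 - p)) for a, p in cond_p1.items()}

# Hypothetical conditionals p(z1 = 1 | z2, z3) for the explaining away motif.
cond = {(0, 0): 0.50, (0, 1): 0.85, (1, 0): 0.30, (1, 1): 0.60}
table = log_odds_table(cond)

def active_auxiliary(z2, z3):
    """Lateral inhibition leaves only the unit matching (z2, z3) active;
    it contributes the log-odd ratio that principal neuron nu_1 needs."""
    return (z2, z3), table[(z2, z3)]

print(active_auxiliary(1, 1))  # -> ((1, 1), log(0.6 / 0.4))
```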

    Implementation 5 for the Bayesian network shown in <b>Fig. 1B</b>.

    Implementation 5 is the implementation with dendritic computation that is based on the factorized expansion of the log-odd ratio. RV $z_2$ occurs in two factors, $p(z_3|z_1,z_2)$ and $p(z_4|z_2)$, and therefore its principal neuron receives synaptic inputs from $\nu_1, \nu_3$ and from $\nu_4$ on separate groups of dendritic branches. Altogether the synaptic connections of this network of spiking neurons implement the graph structure of <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002294#pcbi-1002294-g001" target="_blank">Fig. 1D</a>.
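    Why separate branch groups suffice: the log-odd ratio of $z_2$ decomposes into one additive term per factor containing $z_2$, so each group can compute its term locally and the soma sums them. A minimal numeric sketch, with hypothetical CPT values:

```python
import math

log = math.log
p2 = 0.5                                                       # prior p(z2 = 1), assumed
p3 = {(0, 0): 0.05, (0, 1): 0.90, (1, 0): 0.90, (1, 1): 0.95}  # p(z3=1 | z1, z2), assumed
p4 = {0: 0.1, 1: 0.9}                                          # p(z4=1 | z2), assumed

def log_odds_z2(z1, z3, z4):
    """log p(z2=1 | z1, z3, z4) / p(z2=0 | z1, z3, z4), one term per factor."""
    prior = log(p2 / (1.0 - p2))
    # Branch group 1: factor p(z3 | z1, z2), inputs from nu_1 and nu_3.
    lik3_1 = p3[(z1, 1)] if z3 else 1.0 - p3[(z1, 1)]
    lik3_0 = p3[(z1, 0)] if z3 else 1.0 - p3[(z1, 0)]
    # Branch group 2: factor p(z4 | z2), input from nu_4.
    lik4_1 = p4[1] if z4 else 1.0 - p4[1]
    lik4_0 = p4[0] if z4 else 1.0 - p4[0]
    return prior + log(lik3_1 / lik3_0) + log(lik4_1 / lik4_0)

print(log_odds_z2(z1=0, z3=1, z4=1))  # soma sums the branch contributions
```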

    Results of Computer Simulation II.

    Probabilistic inference in the ASIA network with networks of spiking neurons that use different shapes of EPSPs. The simulated neural networks correspond to Implementation 2. The evidence is changed at $t = 800\,\mathrm{ms}$ by clamping the x-ray test RV to 1. The probabilistic inference query is to estimate the marginal posterior probabilities of the three disease RVs (tuberculosis, lung cancer, and bronchitis). <b>A</b>) The ASIA Bayesian network. <b>B</b>) The three different shapes of EPSPs: an alpha shape (green curve), a smooth plateau shape (blue curve) and the optimal rectangular shape (red curve). <b>C</b>) and <b>D</b>) Estimated marginal probabilities for each of the diseases, calculated from the samples generated during the first 800 ms of the simulation with alpha-shaped (green bars), plateau-shaped (blue bars) and rectangular (red bars) EPSPs, compared with the corresponding correct marginal posterior probabilities (black bars), for the evidence before the change in C) and after the change in D). The results are averaged over 20 simulations with different random initial conditions. The error bars show the unbiased estimate of the standard deviation. <b>E</b>) and <b>F</b>) The sum of the Kullback-Leibler divergences between the correct and the estimated marginal posterior probability for each of the diseases using alpha-shaped (green curve), plateau-shaped (blue curve) and rectangular (red curve) EPSPs, for the evidence before the change in E) and after the change in F). The results are averaged over 20 simulation trials, and the light green and light blue areas show the unbiased estimate of the standard deviation for the green and blue curves respectively (the standard deviation for the red curve is not shown). The estimated marginal posteriors are calculated at each time point from the samples gathered from the beginning of the simulation (or from $t = 800\,\mathrm{ms}$ on for the second inference query).
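    The error measure in panels E) and F) can be sketched as follows: estimate each disease marginal from the accumulated binary samples and sum the Bernoulli KL divergences to the correct posteriors. The correct values and sample states below are invented placeholders, not the paper's numbers.

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """KL(p || q) between two Bernoulli distributions."""
    q = min(max(q, eps), 1.0 - eps)
    kl = 0.0
    if p > 0.0:
        kl += p * math.log(p / q)
    if p < 1.0:
        kl += (1.0 - p) * math.log((1.0 - p) / (1.0 - q))
    return kl

# Invented placeholders: correct posteriors and binary samples per disease.
correct = {'tuberculosis': 0.10, 'lung_cancer': 0.30, 'bronchitis': 0.70}
samples = {'tuberculosis': [0, 0, 1, 0], 'lung_cancer': [1, 0, 0, 1],
           'bronchitis': [1, 1, 0, 1]}

estimated = {k: sum(v) / len(v) for k, v in samples.items()}
print(sum(kl_bernoulli(correct[k], estimated[k]) for k in correct))
```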

    Values for the remaining probabilities in the ASIA Bayesian network used in Computer Simulation II.


    Results of Computer Simulation III.

    Neural emulation of probabilistic inference through neural sampling in the fairly large and complex, randomly generated Bayesian network shown in <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002294#pcbi-1002294-g009" target="_blank">Fig. 9</a>. <b>A</b>) The sum of the Kullback-Leibler divergences between the correct and the estimated marginal posterior probability for each of the unobserved random variables, calculated from the generated samples (spikes) from the beginning of the simulation up to the current time indicated on the x-axis, for simulations with a neuron model with a relative refractory period. Separate curves with different colors are shown for each of the 10 trials with different (randomly chosen) initial conditions. The bold black curve corresponds to the simulation for which the spiking activity is shown in C) and D). <b>B</b>) As in A), but the mean over the 10 trials is shown, for simulations with a neuron model with a relative refractory period (solid curve) and an absolute refractory period (dashed curve). The gray area around the solid curve shows the unbiased estimate of the standard deviation calculated over the 10 trials. <b>C</b>) and <b>D</b>) The spiking activity of the 12 principal neurons during a segment of the simulation, for one of the 10 simulations (neurons with a relative refractory period). The neural network enters and remains in different network states (indicated by different colors), corresponding to different modes of the posterior probability distribution.
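    The running estimates behind the curves in A) and B) can be sketched as follows: the network state is read off the spikes (an RV is 1 if its neuron fired within the last $\tau$), and at each time point the marginals are re-estimated from all states gathered so far. The spike times, window length, and neuron count below are arbitrary stand-ins for the simulation output.

```python
import numpy as np

tau, dt, T = 0.02, 0.001, 1.0     # 20 ms window, 1 ms grid, 1 s run (assumed)
n_neurons = 12
rng = np.random.default_rng(0)
# Arbitrary stand-in spike trains; in the simulation these come from the network.
spikes = [np.sort(rng.uniform(0.0, T, rng.integers(5, 40)))
          for _ in range(n_neurons)]

times = np.arange(0.0, T, dt)
# z_k(t) = 1 iff neuron k fired within the last tau before t.
states = np.array([[np.any((s <= t) & (t < s + tau)) for s in spikes]
                   for t in times], dtype=float)
# Running estimate of each marginal from all samples up to time t.
running = np.cumsum(states, axis=0) / np.arange(1, len(times) + 1)[:, None]
print(running[-1])                # estimated marginals at the end of the run
```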

    Implementation 3 for the same explaining away motif as in <b>Fig. 2</b>.

    Implementation 3 is the neural implementation with dendritic computation that uses the Markov blanket expansion of the log-odd ratio. The principal neuron $\nu_1$ has 4 dendritic branches, one for each possible assignment of values to the RVs $z_2$ and $z_3$ in the Markov blanket of $z_1$. The dendritic branches of neuron $\nu_1$ receive synaptic inputs from the principal neurons $\nu_2$ and $\nu_3$, either directly or via an interneuron (analogously to <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1002294#pcbi-1002294-g002" target="_blank">Fig. 2</a>). It is required that at any moment in time exactly one of the dendritic branches (the one whose index agrees with the current firing states of $\nu_2$ and $\nu_3$) generates dendritic spikes, whose amplitude at the soma determines the current firing probability of $\nu_1$.
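    A sketch of that branch selection: one somatic amplitude per blanket assignment, with the active branch chosen by the current values of $(z_2, z_3)$ and the firing probability obtained through a sigmoidal activation. Both the amplitudes and the activation function are assumptions for illustration.

```python
import math

# One somatic amplitude per assignment of the blanket RVs (z2, z3); values
# are hypothetical and would be chosen so that the firing probability
# matches p(z1 = 1 | z2, z3).
amplitude = {(0, 0): 0.0, (0, 1): 1.7, (1, 0): -0.8, (1, 1): 0.4}

def firing_probability(z2, z3):
    u = amplitude[(z2, z3)]            # only the matching branch spikes
    return 1.0 / (1.0 + math.exp(-u))  # sigmoidal activation (assumed)

for blanket in sorted(amplitude):
    print(blanket, round(firing_probability(*blanket), 3))
```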

    Values for the conditional probabilities in the Bayesian network in Fig. 1B used in Computer Simulation I.
