12 research outputs found

    Pooling of first-order inputs in second-order vision

    The processing of texture patterns has been characterized by a model that first filters the image to isolate one texture, then applies a rectifying nonlinearity that converts texture variation into intensity variation, and finally processes the resulting pattern with mechanisms similar to those used in processing luminance-defined images (spatial-frequency- and orientation-tuned filters). This model, known as FRF for filter-rectify-filter, has the appeal of explaining sensitivity to second-order patterns in terms of mechanisms known to exist for processing first-order patterns. The model implies an unexpected interaction between the first and second stages of filtering: if the first-stage filter consists of narrowband mechanisms tuned to detect the carrier texture, then sensitivity to high-frequency texture modulations should be much lower than is observed in humans. We propose that the human visual system must pool over first-order channels tuned to a wide range of spatial frequencies and orientations to achieve texture demodulation, and provide psychophysical evidence for pooling in a cross-carrier adaptation experiment and in an experiment that measures modulation contrast sensitivity at very low first-order contrast.

    Figure 1: Schematic FRF model (1st filter → rectification → 2nd filter). The first stage consists of a bank of linear filters selective for one of the image's carrier textures. Their responses are then rectified, creating a texture-intensity image. Finally, this texture-intensity image is processed by typical spatial-frequency- and orientation-tuned linear filters to detect any texture modulation.
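The filter-rectify-filter cascade described above can be sketched in a few lines. This is a minimal 1-D sketch, not code from the paper: moving-average filters stand in for the spatial-frequency- and orientation-tuned linear filters, and the function names and parameters are illustrative assumptions.

```python
import numpy as np

def smooth(signal, width):
    """Moving-average low-pass filter (a crude stand-in for a
    Gaussian or Gabor filter)."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def frf_response(signal, carrier_width=3, envelope_width=31):
    """Minimal 1-D filter-rectify-filter (FRF) cascade sketch."""
    # Stage 1: band-pass filter tuned near the carrier
    # (signal minus its local mean keeps only fast variation).
    stage1 = signal - smooth(signal, carrier_width)
    # Rectifying nonlinearity: texture variation -> intensity variation.
    rectified = np.abs(stage1)
    # Stage 2: coarse-scale filter detects the slow texture modulation.
    return smooth(rectified, envelope_width)

# Contrast-modulated noise: a broadband carrier whose contrast
# varies slowly along the signal.
rng = np.random.default_rng(0)
envelope = np.where(np.arange(1000) < 500, 0.2, 1.0)  # slow modulation
carrier = rng.standard_normal(1000)                    # broadband texture
demod = frf_response(envelope * carrier)
# demod tracks the envelope: larger where carrier contrast is high.
```

The second-stage output recovers the envelope rather than the carrier, which is the sense in which the cascade "demodulates" the texture.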

    Neuron-specific:

    No full text
    Adaptation to an oriented pattern leads to orientation-tuned suppression and repulsion of tuning curves in V1. (Figure: neural response plotted against stimulus orientation, with θ = adapting orientation; tuning changes characterized as suppression and repulsion.)

    Schematic representation of design matrix construction.

    No full text
    <p>Here, the circles and hexagons correspond to different species. Bidirectional arrows represent orthology information and dotted arrows represent putative interactions between TFs and genes. Rectangles under “Data” represent TF × condition matrices of gene expression values in species 1 (top row) and species 2 (bottom row), colored in correspondence with the gene orthology diagram, with bidirectional arrows representing orthology between the two species. Rectangles under “Weights” represent gene regulatory interactions in each species, with lines linking coefficients that are fused due to the orthology information shown to the left. Networks associated with genes in each species can be solved independently unless there exists a fusion constraint linking their coefficients, or a path of such constraints. When such a path exists, these genes must be solved simultaneously. In this example, genes <i>B</i>, <i>C</i>, and <i>B</i>′ must be solved simultaneously; the lower right corner shows a representation of the design matrix necessary to solve this fused regression problem.</p>
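The "path of constraints" rule in the caption amounts to finding connected components of the fusion graph: genes joined by any chain of fusion constraints share a component and must be fit jointly, while singletons can be fit independently. A minimal union-find sketch (the gene names come from the figure; the helper itself is illustrative, not the paper's code):

```python
def fusion_components(genes, fusion_pairs):
    """Group genes into connected components under fusion constraints.

    Genes in the same component are linked by a path of fused
    coefficients and must be solved simultaneously; singleton
    components can be solved independently.
    """
    parent = {g: g for g in genes}

    def find(g):
        # Walk to the root, halving paths as we go.
        while parent[g] != g:
            parent[g] = parent[parent[g]]
            g = parent[g]
        return g

    for a, b in fusion_pairs:
        parent[find(a)] = find(b)

    components = {}
    for g in genes:
        components.setdefault(find(g), set()).add(g)
    return list(components.values())

# Two fusion constraints chain B, C, and B' into one component;
# A and A' remain independent problems.
components = fusion_components(["A", "B", "C", "A'", "B'"],
                               [("B", "C"), ("C", "B'")])
```

Each component then corresponds to one block of the fused design matrix shown in the figure's lower-right corner.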

    Demonstrates the integration of transcription factor activity (TFA) in fused network inference.

    No full text
    <p>The procedure was identical to <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1005157#pcbi.1005157.g005" target="_blank">Fig 5a</a>, except for the additional pre-processing step of transforming transcription factor abundances into an estimate of their activity (see <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1005157#sec011" target="_blank">Methods</a>). We then compared performance on the main <i>B. subtilis</i> strain with and without fusion to the second strain, and with and without TFA. TFA outperforms both fused-L2 and unfused inference based on transcription factor abundance, and TFA combined with fusion dramatically outperforms all three methods.</p>

    Demonstration of the application of fused-L2 to intra-species network inference in <i>B. subtilis</i>.

    No full text
    <p>In each example, λ<sub><i>R</i></sub> is optimized separately without fusion, and 10-fold cross-validation is used when fitting networks (although in <b>A</b> the gold-standard was not used in fitting the network and did not vary across folds). <b>A.</b> We compared the performance of independently fitting our main <i>B. subtilis</i> dataset against two methods for incorporating data from another strain of <i>B. subtilis</i>. We evaluated performance on a gold-standard of known interactions. Adaptive fusion outperforms both independently fitting the first <i>B. subtilis</i> dataset and fitting both <i>B. subtilis</i> datasets and then rank-combining the results, as in Marbach et al. <b>B.</b> We demonstrate the application of a prior based on operon membership. We generated fusion constraints between pairs of interactions for which both the TF and the gene belonged to the same operon. We then held out half of the gold-standard and used it as a prior on individual interactions, as in [<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1005157#pcbi.1005157.ref033" target="_blank">33</a>]. We fit the <i>B. subtilis</i> network with and without fusion, then evaluated on the remaining gold-standard. In this example, using fusion constraints to enforce a prior based on co-regulation of genes in the same operon improved network inference performance.</p>

    Adaptive fusion loss function (A) and derivative of loss function (B).

    No full text
    <p><b>A.</b> Adaptive fusion is a quadratic around the origin, begins to taper at <i>a</i>/2, and plateaus at <i>a</i>. After the plateau, increasing the difference in interaction weight of fused interactions does not further affect the penalty incurred through fusion. As a result, interaction weights in this zone are effectively unfused from one another (the fusion penalty behaves like a constant). <b>B.</b> Shows the derivative of the adaptive fusion penalty, which is used to implement adaptive fusion through local quadratic approximation. The adaptive fusion penalty is modified from SCAD (smoothly clipped absolute deviation) and MCP (minimax concave penalty) functions and like these penalties has a zero derivative far from the origin.</p>
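The shape described in panel A (quadratic near the origin, tapering from a/2, a plateau at a, and zero derivative beyond it) can be reproduced by a smoothly clipped quadratic in the MCP/SCAD family. The piecewise coefficients below are an illustrative reconstruction consistent with that description, not the paper's exact function:

```python
import numpy as np

def adaptive_fusion_penalty(d, a):
    """Smoothly clipped quadratic matching the described shape:
    quadratic for |d| <= a/2, tapering for a/2 < |d| < a, and a
    constant plateau (zero slope) for |d| >= a. The specific
    coefficients are an illustrative reconstruction."""
    d = np.abs(d)
    quad = d ** 2                      # |d| <= a/2: quadratic
    taper = a ** 2 / 2 - (d - a) ** 2  # a/2 < |d| < a: taper
    plateau = a ** 2 / 2               # |d| >= a: constant
    return np.where(d <= a / 2, quad, np.where(d < a, taper, plateau))

def adaptive_fusion_derivative(d, a):
    """Derivative used in the local quadratic approximation:
    2d near the origin, falling linearly to exactly zero at
    |d| = a, and zero thereafter (as with SCAD and MCP)."""
    s = np.sign(d)
    d = np.abs(d)
    grad = np.where(d <= a / 2, 2 * d,
                    np.where(d < a, 2 * (a - d), 0.0))
    return s * grad
```

The pieces join continuously in both value and slope at a/2 and at a, so once a pair of fused weights differs by more than a, the penalty is flat and the pair is effectively unfused.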