The tame-wild principle for discriminant relations for number fields
Consider tuples of separable algebras over a common local or global number
field, related to each other by specified resolvent constructions. Under the
assumption that all ramification is tame, simple group-theoretic calculations
give best possible divisibility relations among the discriminants. We show that
for many resolvent constructions, these divisibility relations continue to hold
even in the presence of wild ramification.
Comment: 31 pages, 11 figures. Version 2 fixes a normalization error: |G| is corrected to n in Section 7.5. Version 3 fixes an off-by-one error in Section 6.
Towards a Unified Computational Model of Contextual Interactions across Visual Modalities
The perception of a stimulus is largely determined by its surroundings. Examples abound from color (Land and McCann, 1971), disparity (Westheimer, 1986) and motion induction (Anstis and Casco, 2006) to orientation tilt effects (O'Toole and Wenderoth, 1976). Some of these phenomena have been studied individually using monkey neurophysiology techniques. In these experiments, a center stimulus is typically used to probe a cell's classical "center" receptive field (cRF), whose activity is then modulated by an annular "surround" (extra-cRF) stimulus. While this center-surround integration (CSI) has been well characterized, a theoretical framework which unifies these different phenomena across visual modalities is lacking. Here, we present an extension of a popular cortical inhibition circuit, divisive normalization (Carandini and Heeger, 2011), which yields a computational model that is consistent with experimental data across visual modalities. We have found that a common characteristic of CSI across modalities is a shift in neural population responses induced by surround activity. Typical implementations of the divisive normalization model rely on gain control mechanisms from an "untuned" suppressive pool of cells; that is, the identity of that pool is the same for every cell being suppressed. As such, the circuit cannot account for the selective shift in population response curves observed in contextual effects. In contrast, we show that the addition of an extra-classical suppressive "tuned" pool of cells which selectively inhibits different parts of a population response curve is sufficient to explain complex shifts in population tuning responses. Overall, our results suggest that a normalization circuit based on two forms of inhibition, gain control and selective suppression, captures shifts in population responses associated with CSI and yields a model that seems consistent with contextual phenomena across visual modalities.
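The two-pool normalization circuit sketched in this abstract can be illustrated with a minimal toy model. Everything below (the orientation-tuned population, the von Mises-style drive, and the pool weights) is a hypothetical stand-in for illustration, not the paper's actual model; it only shows how adding a surround-selective "tuned" pool to the standard divisive denominator shifts the population response peak away from the surround orientation.

```python
import numpy as np

# Hypothetical population of N orientation-tuned cells (illustrative only).
N = 64
prefs = np.linspace(0.0, np.pi, N, endpoint=False)  # preferred orientations

def drive(theta, kappa=2.0):
    # Feedforward drive of each cell to a grating at orientation theta
    # (von Mises-style tuning; kappa is an assumed tuning width).
    return np.exp(kappa * (np.cos(2 * (prefs - theta)) - 1))

def normalize(center_theta, surround_theta=None, sigma=0.1,
              w_untuned=1.0, w_tuned=2.0):
    E = drive(center_theta)
    # Untuned pool: identical for every cell, so it rescales but cannot
    # shift the population response.
    pool = w_untuned * E.mean()
    if surround_theta is not None:
        # Tuned extra-classical pool: suppresses cells near the surround
        # orientation more strongly, which can shift the population peak.
        pool = pool + w_tuned * drive(surround_theta)
    return E / (sigma + pool)

theta_c = np.pi / 4
r_alone = normalize(theta_c)                           # center alone
r_ctx = normalize(theta_c, surround_theta=theta_c + 0.2)  # with surround
```

With only the untuned pool the denominator is constant across cells, so the peak stays at the center orientation; the tuned surround pool suppresses cells near the surround orientation, producing a repulsive shift of the population peak, qualitatively like the tilt effect described above.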
Towards a Unified Model of Classical and Extra-Classical Receptive Fields
One of the major goals in neuroscience is to understand how the cortex processes information. A substantial effort has thus gone into mapping classical receptive fields (cRF) across areas of the visual cortex and characterizing input-output relationships through linear-nonlinear response functions. Recently, there has been a lot of interest in mapping the extra-classical receptive field (extra-cRF) as well, by using contextual stimuli. The extra-cRF is a region outside the cRF that modulates a cell's response but that is incapable of driving it on its own. However, existing models typically focus on one particular visual modality (form, motion, disparity or color), and do not offer a coherent computational role for the extra-cRF. Meanwhile, because of the sheer diversity of contextual effects, we still lack a single model that is consistent with the known anatomy and physiology of the visual cortex.
Here, we present an integrated computational model of early vision that comprehensively describes neural responses in the primary visual cortex across modalities (form, motion, disparity and color). The basic circuit combines "untuned" recurrent connections within the cRF with "tuned" recurrent interactions in the extra-cRF. The circuit offers a characterization of disparate contextual phenomena across visual modalities as general induction phenomena. We show that the resulting circuit seems sufficient to capture the extent of psychophysical data on color constancy, offering a possible computational-level justification for the observed center-surround interactions.
Complete intersections and mod p cochains
We give homotopy invariant definitions corresponding to three well known
properties of complete intersections, for the ring, the module theory and the
endomorphisms of the residue field, and we investigate them for the mod p
cochains on a space, showing that suitable versions of the second and third are
equivalent and that the first is stronger. We are particularly interested in
classifying spaces of groups, and we give a number of examples.
This paper follows on from arXiv:0906.4025 which considered the classical
case of a commutative ring and arXiv:0906.3247 which considered the case of
rational homotopy theory.
Comment: To appear in AG
Neural representation of action sequences: how far can a simple snippet-matching model take us?
The macaque Superior Temporal Sulcus (STS) is a brain area that receives and integrates inputs from both the ventral and dorsal visual processing streams (thought to specialize in form and motion processing respectively). For the processing of articulated actions, prior work has shown that even a small population of STS neurons contains sufficient information for the decoding of actor invariant to action, action invariant to actor, as well as the specific conjunction of actor and action. This paper addresses two questions. First, what are the invariance properties of individual neural representations (rather than the population representation) in STS? Second, what are the neural encoding mechanisms that can produce such individual neural representations from streams of pixel images? We find that a baseline model, one that simply computes a linear weighted sum of ventral and dorsal responses to short action "snippets", produces surprisingly good fits to the neural data. Interestingly, even using inputs from a single stream, both actor-invariance and action-invariance can be produced simply by having different linear weights.
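The snippet-matching baseline described above amounts to a linear regression from stream responses onto a neuron's firing. The sketch below uses entirely synthetic stand-in data (random "ventral" and "dorsal" snippet features and a simulated target neuron); only the structure, a single linear readout over concatenated stream features fit by least squares, reflects the baseline model named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: responses of ventral ("form") and dorsal ("motion")
# model units to short action snippets. Dimensions are arbitrary choices.
n_snippets = 200
F_ventral = rng.normal(size=(n_snippets, 30))
F_dorsal = rng.normal(size=(n_snippets, 30))
X = np.hstack([F_ventral, F_dorsal])   # concatenated stream features

# Simulated target neuron: an unknown linear readout plus noise.
w_true = rng.normal(size=X.shape[1])
y = X @ w_true + 0.1 * rng.normal(size=n_snippets)

# The baseline model: fit one linear weight per feature by least squares.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Goodness of fit of the weighted-sum prediction.
r = np.corrcoef(X @ w_hat, y)[0, 1]
```

The abstract's observation that different invariance profiles can arise "simply by having different linear weights" corresponds here to refitting `w_hat` against different target responses over the same feature matrix `X`.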