    Selection Using Non-symmetric Context Areas

    Yukawa couplings and masses of non-chiral states for the Standard Model on D6-branes on T6/Z6'

    The perturbative leading-order open string three-point couplings for the Standard Model with hidden USp(6) on fractional D6-branes on T6/Z6' from arXiv:0806.3039 [hep-th] and arXiv:0910.0843 [hep-th] are computed. Physical Yukawa couplings, consisting of holomorphic Wilsonian superpotential terms times a non-holomorphic prefactor involving the corresponding classical open string Kähler metrics, are given, and mass terms for all non-chiral matter states are derived. The lepton Yukawa interactions are flavour diagonal at leading order, while the quark sector displays a more intricate pattern of mixings. While the N=2 supersymmetric sectors acquire masses via only two D6-brane displacements (which also provide the hierarchies between up- and down-type Yukawas within one quark or lepton generation), the remaining vector-like states receive masses via perturbative three-point couplings to Standard Model singlet fields with vevs along flat directions. Couplings to the hidden sector and to messengers of supersymmetry breaking are briefly discussed.
    Comment: 52 pages (including 8-page appendix); 5 figures; 14 tables; v2: discussion in section 4.1.3 extended, footnote 5 added, typos corrected; accepted by JHEP
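
    For orientation, the prefactor structure described in this abstract matches the generic N=1 supergravity relation between physical and holomorphic Wilsonian Yukawa couplings. The formula below is a textbook sketch, not a result quoted from the paper; \hat{K} denotes the Kähler potential and K_{i\bar{i}}, K_{j\bar{j}}, K_{k\bar{k}} the matter-field Kähler metrics:

        \hat{Y}_{ijk} = e^{\hat{K}/2} \, \frac{W_{ijk}}{\sqrt{K_{i\bar{i}} \, K_{j\bar{j}} \, K_{k\bar{k}}}}

    Since the Wilsonian coupling W_{ijk} is holomorphic and perturbatively non-renormalized, all dependence on the Kähler metrics sits in the non-holomorphic prefactor, which is why the abstract treats the two factors separately.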

    Cortical Dynamics of Contextually-Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    How do humans use predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to guide search more efficiently in familiar scenes? For example, a certain combination of objects can define a context for a kitchen and trigger a more efficient search for a typical object, such as a sink, in that context. A neural model, ARTSCENE Search, is developed to illustrate the neural mechanisms of such memory-based contextual learning and guidance, and to explain challenging behavioral data on positive/negative, spatial/object, and local/distant global cueing effects during visual search. The model proposes how global scene layout at a first glance rapidly forms a hypothesis about the target location. This hypothesis is then incrementally refined by enhancing target-like objects in space as the scene is scanned with saccadic eye movements. The model clarifies the functional roles of neuroanatomical, neurophysiological, and neuroimaging data in visual search for a desired goal object. In particular, the model simulates the interactive dynamics of spatial and object contextual cueing in the cortical What and Where streams, starting from early visual areas through the medial temporal lobe to prefrontal cortex. After learning, model dorsolateral prefrontal cortical cells (area 46) prime possible target locations in posterior parietal cortex based on goal-modulated percepts of spatial scene gist represented in parahippocampal cortex, whereas model ventral prefrontal cortical cells (area 47/12) prime possible target object representations in inferior temporal cortex based on the history of viewed objects represented in perirhinal cortex. The model hereby predicts how the cortical What and Where streams cooperate during scene perception, learning, and memory to accumulate evidence over time and drive efficient visual search of familiar scenes.
    Funding: CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR0011-09-3-0001, HR0011-09-C-0011)
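
    The incremental refinement described above can be caricatured as accumulating a location priority map over saccades, starting from a gist-based spatial prior. The Python sketch below is a hypothetical toy under loose assumptions, not the ARTSCENE Search equations; the function name, the Gaussian evidence term, and all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def cued_search(prior, target, n_saccades=8, gain=0.4, noise=0.3):
            """Toy accumulation of a location priority map over saccades.

            prior  : initial priority map from scene gist (spatial context)
            target : index of the true target location
            Each fixation adds noisy evidence that is slightly higher at the
            target, mimicking enhancement of target-like objects during
            scanning. Returns saccades until the target is fixated (capped).
            """
            priority = prior.astype(float).copy()
            for t in range(n_saccades):
                evidence = rng.normal(0.0, noise, size=priority.shape)
                evidence[target] += gain        # target-like object boosts its location
                priority += evidence            # incremental evidence accumulation
                if int(np.argmax(priority)) == target:  # saccade to current best location
                    return t + 1
            return n_saccades

        n_locations, target = 16, 5
        cued = np.zeros(n_locations)
        cued[target] = 1.0                      # gist prior favoring the target's region
        uncued = np.zeros(n_locations)          # no contextual prior

        print("saccades with contextual prior:   ", cued_search(cued, target))
        print("saccades without contextual prior:", cued_search(uncued, target))

    With the contextual prior the first fixation should typically land on the target, whereas without it several noisy samples are needed, echoing the contextual cueing benefit the model explains.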