
    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) Over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) Concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300.
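
    The classification image (reverse correlation) idea mentioned above can be illustrated with a minimal sketch: assuming a Bubbles-style experiment in which each face is revealed through a random sampling mask, the regions that drove categorization are estimated by contrasting masks from correct and incorrect trials. The function and variable names below are hypothetical, and the study's actual pipeline (spatial-frequency-resolved masks plus EEG regressions) is considerably more involved.

```python
import numpy as np

def classification_image(masks, correct):
    """Reverse-correlation sketch: contrast the random sampling masks shown on
    correct vs. incorrect trials to estimate which image regions drove the
    observer's categorization responses.

    masks   : (n_trials, height, width) array of per-trial random masks
    correct : (n_trials,) boolean array, True where the response was correct
    """
    ci = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
    return (ci - ci.mean()) / ci.std()          # z-score for display / thresholding

# Toy usage with simulated data (purely illustrative):
rng = np.random.default_rng(0)
masks = rng.random((1000, 64, 64))              # random per-trial sampling masks
# pretend accuracy depends on how well an "eye region" (rows 20-30) was revealed
correct = masks[:, 20:30, 15:50].mean(axis=(1, 2)) + 0.05 * rng.standard_normal(1000) > 0.5
ci = classification_image(masks, correct)
print(ci.shape, ci[20:30, 15:50].mean(), ci[50:, :].mean())  # diagnostic region stands out
```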

    Context and Crowding in Perceptual Learning on a Peripheral Contrast Discrimination Task: Context-Specificity in Contrast Learning

    Perceptual learning is an improvement in sensitivity due to practice on a sensory task and is generally specific to the trained stimuli and/or tasks. The present study investigated the effect of stimulus configuration and crowding on perceptual learning in contrast discrimination in peripheral vision, and the effect of perceptual training on crowding in this task. Twenty-nine normally sighted observers were trained to discriminate Gabor stimuli presented at 9° eccentricity with flankers either identical or orthogonally oriented with respect to the target (ISO and CROSS, respectively), or on an isolated target (CONTROL). Contrast discrimination thresholds were measured at various eccentricities and target-flanker separations before and after training in order to determine any learning transfer to untrained stimulus parameters. Perceptual learning was observed with all three training stimuli; however, greater improvement was obtained with training on ISO-oriented stimuli than on CROSS-oriented or unflanked stimuli. This learning did not transfer to untrained stimulus configurations, eccentricities or target-flanker separations. A characteristic crowding effect, increasing with viewing eccentricity and decreasing with target-flanker separation, was observed both before and after training in both configurations. The magnitude of crowding was reduced only at the trained eccentricity and target-flanker separation; therefore, learning for contrast discrimination and for crowding in the present study was configuration and location specific. Our findings suggest that stimulus configuration plays an important role in the magnitude of perceptual learning in contrast discrimination and point to context-specificity in learning.
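
    A flanked Gabor display of the kind described above can be sketched with a few lines of numpy. This is only a schematic illustration with arbitrary pixel units and parameters (patch size, wavelength, spacing are assumptions), not the study's exact stimuli or layout.

```python
import numpy as np

def gabor(size, wavelength, orientation_deg, contrast, sigma):
    """Gabor patch: sinusoidal carrier windowed by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(orientation_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)
    carrier = np.cos(2 * np.pi * xr / wavelength)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return contrast * carrier * envelope

def flanked_display(separation_px, flanker_ori=0, target_ori=0, canvas=256):
    """Target Gabor at the centre, one flanker above and one below.
    flanker_ori == target_ori         -> ISO configuration
    |flanker_ori - target_ori| == 90  -> CROSS configuration
    """
    img = np.zeros((canvas, canvas))
    patch = 64
    def paste(p, cy, cx):
        img[cy - patch // 2:cy + patch // 2, cx - patch // 2:cx + patch // 2] += p
    paste(gabor(patch, 16, target_ori, 0.5, 12), canvas // 2, canvas // 2)
    for dy in (-separation_px, separation_px):
        paste(gabor(patch, 16, flanker_ori, 0.5, 12), canvas // 2 + dy, canvas // 2)
    return img

iso   = flanked_display(separation_px=80, flanker_ori=0,  target_ori=0)
cross = flanked_display(separation_px=80, flanker_ori=90, target_ori=0)
```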

    Multisensory Perceptual Learning of Temporal Order: Audiovisual Learning Transfers to Vision but Not Audition

    Background: An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Methodology/Principal Findings: Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning, so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes.
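
    Temporal order discrimination thresholds of the kind reported here are commonly obtained by fitting a cumulative Gaussian to the proportion of "A first" responses as a function of stimulus onset asynchrony (SOA). A minimal sketch with hypothetical data follows; this is a generic fit, not necessarily the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pse, jnd):
    """P('A reported first') as a cumulative Gaussian of SOA (ms).
    pse = point of subjective simultaneity;
    jnd = SOA change from the PSE to the 84% point (the Gaussian SD)."""
    return norm.cdf(soa, loc=pse, scale=jnd)

# Hypothetical TOJ data: SOA in ms (negative = B led), proportion 'A first' responses
soa = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], float)
p_a_first = np.array([0.05, 0.12, 0.25, 0.40, 0.55, 0.68, 0.80, 0.93, 0.98])

(pse, jnd), _ = curve_fit(psychometric, soa, p_a_first, p0=[0.0, 60.0])
print(f"PSE = {pse:.1f} ms, JND (threshold) = {jnd:.1f} ms")
# Learning would show up as a reduction in the fitted JND across sessions.
```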

    Feedback training for facial image comparison

    People are typically poor at matching the identity of unfamiliar faces from photographs. This observation has broad implications for face matching in operational settings (e.g., border control). Here, we report significant improvements in face matching ability following feedback training. In Experiment 1, we show cumulative improvement in performance on a standard test of face matching ability when participants were provided with trial-by-trial feedback. More important, Experiment 2 shows that training benefits can generalize to novel, widely varying, unfamiliar face images for which no feedback is provided. The transfer effect specifically benefited participants who had performed poorly on an initial screening test. These findings are discussed in the context of existing literature on unfamiliar face matching and perceptual training. Given the reliability of the performance enhancement and its generalization to diverse image sets, we suggest that feedback training may be useful for face matching in occupational settings
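
    Face-matching performance of the kind trained here is often summarised as percent correct or as a signal-detection sensitivity measure over match and mismatch trials. The sketch below, with hypothetical response data, computes both accuracy and d'; the paper itself may report a different measure.

```python
import numpy as np
from scipy.stats import norm

def matching_sensitivity(same_trials, diff_trials):
    """same_trials / diff_trials: boolean arrays, True = responded 'same'.
    Returns overall accuracy and d' (with a small correction for 0/1 rates)."""
    hit = same_trials.mean()                      # 'same' responses on match trials
    fa = diff_trials.mean()                       # 'same' responses on mismatch trials
    n_s, n_d = len(same_trials), len(diff_trials)
    hit = np.clip(hit, 0.5 / n_s, 1 - 0.5 / n_s)  # avoid infinite z-scores
    fa = np.clip(fa, 0.5 / n_d, 1 - 0.5 / n_d)
    accuracy = (same_trials.sum() + (~diff_trials).sum()) / (n_s + n_d)
    return accuracy, norm.ppf(hit) - norm.ppf(fa)

# Toy usage (illustrative numbers only):
rng = np.random.default_rng(1)
same = rng.random(40) < 0.85                      # 85% 'same' on match trials
diff = rng.random(40) < 0.30                      # 30% 'same' on mismatch trials
print(matching_sensitivity(same, diff))
```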

    Task and spatial frequency modulations of object processing: an EEG study.

    Visual object processing may follow a coarse-to-fine sequence imposed by fast processing of low spatial frequencies (LSF) and slow processing of high spatial frequencies (HSF). Objects can be categorized at varying levels of specificity: the superordinate (e.g. animal), the basic (e.g. dog), or the subordinate (e.g. Border Collie). We tested whether superordinate and more specific categorization depend on different spatial frequency ranges, and whether any such dependencies might be revealed by or influence signals recorded using EEG. We used event-related potentials (ERPs) and time-frequency (TF) analysis to examine the time course of object processing while participants performed either a grammatical gender-classification task (which generally forces basic-level categorization) or a living/non-living judgement (superordinate categorization) on everyday, real-life objects. Objects were filtered to contain only HSF or LSF. We found a greater positivity and greater negativity for HSF than for LSF pictures in the P1 and N1 respectively, but no effects of task on either component. A later, fronto-central negativity (N350) was more negative in the gender-classification task than the superordinate categorization task, which may indicate that this component relates to semantic or syntactic processing. We found no significant effects of task or spatial frequency on evoked or total gamma band responses. Our results demonstrate early differences in processing of HSF and LSF content that were not modulated by categorization task, with later responses reflecting such higher-level cognitive factors
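
    The LSF/HSF stimuli described above are typically produced by low- or high-pass filtering images in the Fourier domain at a cutoff expressed in cycles per image. A minimal sketch follows; the hard circular mask and the cutoff values are assumptions for brevity (experiments usually use smoother filters and calibrated cutoffs).

```python
import numpy as np

def sf_filter(image, cutoff_cpi, keep="low"):
    """Low- or high-pass an image at `cutoff_cpi` cycles per image
    using a hard circular mask in the 2D Fourier domain."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h    # spatial frequency in cycles/image
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = radius <= cutoff_cpi if keep == "low" else radius > cutoff_cpi
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Toy usage with a random "object" image:
img = np.random.default_rng(2).random((128, 128))
lsf = sf_filter(img, cutoff_cpi=8, keep="low")     # coarse structure only
hsf = sf_filter(img, cutoff_cpi=24, keep="high")   # fine edges / detail only
```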

    Combining S-cone and luminance signals adversely affects discrimination of objects within backgrounds

    The visual system processes objects embedded in complex scenes that vary in both luminance and colour. In such scenes, colour contributes to the segmentation of objects from backgrounds, but does it also affect perceptual organisation of object contours which are already defined by luminance signals, or are these processes unaffected by colour’s presence? We investigated if luminance and chromatic signals comparably sustain processing of objects embedded in backgrounds, by varying contrast along the luminance dimension and along the two cone-opponent colour directions. In the first experiment thresholds for object/non-object discrimination of Gaborised shapes were obtained in the presence and absence of background clutter. Contrast of the component Gabors was modulated along single colour/luminance dimensions or co-modulated along multiple dimensions simultaneously. Background clutter elevated discrimination thresholds only for combined S-(L + M) and L + M signals. The second experiment replicated and extended this finding by demonstrating that the effect was dependent on the presence of relatively high S-(L + M) contrast. These results indicate that S-(L + M) signals impair spatial vision when combined with luminance. Since S-(L + M) signals are characterised by relatively large receptive fields, this is likely to be due to an increase in the size of the integration field over which contour-defining information is summed
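
    The single- versus co-modulated conditions can be pictured as vectors in cone-contrast space, where each cardinal direction (luminance, L-M, S-(L+M)) is a direction and co-modulation is a vector sum. The directions below are nominal textbook axes, not the study's observer-calibrated values, and the contrast numbers are purely illustrative.

```python
import numpy as np

# Nominal cardinal directions in cone-contrast space (dL/L, dM/M, dS/S);
# real experiments calibrate these per observer and display.
LUM  = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # achromatic (luminance) direction
LM   = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)  # L-M cone-opponent direction
S_LM = np.array([0.0, 0.0, 1.0])                # S-(L+M) isolating direction

def stimulus_cone_contrast(c_lum=0.0, c_lm=0.0, c_s=0.0):
    """Cone contrasts of a stimulus modulated along one or several cardinal
    directions simultaneously (co-modulation is a vector sum of the
    single-direction modulations)."""
    return c_lum * LUM + c_lm * LM + c_s * S_LM

single_s       = stimulus_cone_contrast(c_s=0.6)              # S-(L+M) alone
combined_s_lum = stimulus_cone_contrast(c_lum=0.3, c_s=0.6)   # S-(L+M) plus luminance
print(single_s, combined_s_lum)
```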

    Observation of Electroweak Production of a Same-Sign W Boson Pair in Association with Two Jets in pp Collisions at √s = 13 TeV with the ATLAS Detector

    This Letter presents the observation and measurement of electroweak production of a same-sign W boson pair in association with two jets using 36.1 fb⁻¹ of proton-proton collision data recorded at a center-of-mass energy of √s = 13 TeV by the ATLAS detector at the Large Hadron Collider. The analysis is performed in the detector fiducial phase-space region, defined by the presence of two same-sign leptons, electron or muon, and at least two jets with a large invariant mass and rapidity difference. A total of 122 candidate events are observed for a background expectation of 69 ± 7 events, corresponding to an observed signal significance of 6.5 standard deviations. The measured fiducial signal cross section is σ_fid = 2.89 +0.51/−0.48 (stat) +0.29/−0.28 (syst) fb.
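
    As a rough cross-check of the counting numbers quoted in the abstract, the standard asymptotic single-bin formula Z = sqrt(2[n ln(n/b) − (n − b)]) can be evaluated for n = 122 observed events and b = 69 expected background. This crude estimate ignores the ±7 background uncertainty and the multi-bin likelihood fit used in the actual analysis, so it only illustrates the order of magnitude and does not reproduce the published 6.5σ.

```python
import math

def counting_significance(n_obs, b_exp):
    """Asymptotic significance for a single-bin counting experiment with a
    known background: Z = sqrt(2 * (n*ln(n/b) - (n - b)))."""
    return math.sqrt(2.0 * (n_obs * math.log(n_obs / b_exp) - (n_obs - b_exp)))

# Numbers quoted in the abstract: 122 candidates, 69 +- 7 expected background.
print(f"Z ~ {counting_significance(122, 69):.1f} sigma  (single-bin approximation)")
```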

    Measurement of Azimuthal Anisotropy of Muons from Charm and Bottom Hadrons in pp Collisions at √s = 13 TeV with the ATLAS Detector

    The elliptic flow of muons from the decay of charm and bottom hadrons is measured in pp collisions at √s = 13 TeV using a data sample with an integrated luminosity of 150 pb⁻¹ recorded by the ATLAS detector at the LHC. The muons from heavy-flavor decay are separated from light-hadron decay muons using momentum imbalance between the tracking and muon spectrometers. The heavy-flavor decay muons are further separated into those from charm decay and those from bottom decay using the distance-of-closest-approach to the collision vertex. The measurement is performed for muons in the transverse momentum range 4–7 GeV and pseudorapidity range |η| < 2.4. A significant nonzero elliptic anisotropy coefficient v2 is observed for muons from charm decays, while the v2 value for muons from bottom decays is consistent with zero within uncertainties.
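
    The elliptic anisotropy coefficient v2 quantifies the cos(2Δφ) modulation of particle yields relative to a reference plane, dN/dΔφ ∝ 1 + 2 v2 cos(2Δφ), so that v2 = ⟨cos 2Δφ⟩. The toy sketch below generates hypothetical muon angles with a known v2 and recovers it from that average; the ATLAS measurement itself uses a two-particle-correlation template method, which this does not reproduce.

```python
import numpy as np

def v2_from_angles(dphi):
    """Elliptic flow estimate from angles relative to the reference plane:
    dN/d(dphi) ~ 1 + 2*v2*cos(2*dphi)  =>  v2 = <cos(2*dphi)>."""
    return np.mean(np.cos(2.0 * dphi))

# Toy data: accept-reject sampling from a distribution with a known v2 = 0.05
rng = np.random.default_rng(3)
true_v2 = 0.05
phi = rng.uniform(-np.pi, np.pi, 200_000)
keep = rng.random(phi.size) < (1 + 2 * true_v2 * np.cos(2 * phi)) / (1 + 2 * true_v2)
print(f"reconstructed v2 ~ {v2_from_angles(phi[keep]):.3f}")   # ~0.05
```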

    Search for squarks and gluinos in final states with jets and missing transverse momentum using 139 fb⁻¹ of √s = 13 TeV pp collision data with the ATLAS detector

    A search for the supersymmetric partners of quarks and gluons (squarks and gluinos) in final states containing jets and missing transverse momentum, but no electrons or muons, is presented. The data used in this search were recorded by the ATLAS experiment in proton-proton collisions at a centre-of-mass energy of √s = 13 TeV during Run 2 of the Large Hadron Collider, corresponding to an integrated luminosity of 139 fb⁻¹. The results are interpreted in the context of various R-parity-conserving models where squarks and gluinos are produced in pairs or in association and a neutralino is the lightest supersymmetric particle. An exclusion limit at the 95% confidence level on the mass of the gluino is set at 2.30 TeV for a simplified model containing only a gluino and the lightest neutralino, assuming the latter is massless. For a simplified model involving the strong production of mass-degenerate first- and second-generation squarks, squark masses below 1.85 TeV are excluded if the lightest neutralino is massless. These limits extend substantially beyond the region of supersymmetric parameter space excluded previously by similar searches with the ATLAS detector.
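
    The quoted mass limits come from a full profile-likelihood CLs procedure over many signal regions with systematic uncertainties. Purely as an illustration of the 95% CL exclusion logic, a toy single-bin counting experiment can show how a signal hypothesis is excluded once its CLs value drops below 0.05; all yields below are hypothetical.

```python
import numpy as np

def cls_exclusion(n_obs, b, s, n_toys=200_000, seed=4):
    """Toy single-bin CLs: a signal yield s is excluded at 95% CL when
    CLs = P(n <= n_obs | s+b) / P(n <= n_obs | b) < 0.05.
    (The real search combines many bins, systematics and a profile likelihood.)"""
    rng = np.random.default_rng(seed)
    p_sb = np.mean(rng.poisson(s + b, n_toys) <= n_obs)   # CL_{s+b}
    p_b = np.mean(rng.poisson(b, n_toys) <= n_obs)        # CL_b
    return p_sb / p_b

# Hypothetical signal region: 10 observed events, 8 expected background.
for s in (2, 6, 10, 14):
    print(s, round(cls_exclusion(10, 8, s), 3))           # CLs falls as s grows
```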

    Properties of jet fragmentation using charged particles measured with the ATLAS detector in pp collisions at √s = 13 TeV

    This paper presents a measurement of quantities related to the formation of jets from high-energy quarks and gluons (fragmentation). Jets with transverse momentum above 100 GeV are selected, and charged-particle tracks with pT > 500 MeV and |η| < 2.5 are used to probe the detailed structure of the jet. The fragmentation properties of the more forward and the more central of the two leading jets from each event are studied. The data are unfolded to correct for detector resolution and acceptance effects. Comparisons with parton shower Monte Carlo generators indicate that existing models provide a reasonable description of the data across a wide range of phase space, but there are also significant differences. Furthermore, the data are interpreted in the context of quark- and gluon-initiated jets by exploiting the rapidity dependence of the jet flavor fraction. A first measurement of the charged-particle multiplicity using model-independent jet labels (topic modeling) provides a promising alternative to traditional quark and gluon extractions using input from simulation. The simulations provide a reasonable description of the quark-like data across the jet pT range presented in this measurement, but the gluon-like data have systematically fewer charged particles than the simulation.
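
    Two of the basic fragmentation observables in such measurements are the per-jet charged-particle multiplicity and the longitudinal momentum fraction z = (p⃗_track · p⃗_jet)/|p⃗_jet|² carried by each associated track. The sketch below computes both from hypothetical (pT, η, φ) inputs; the actual measurement additionally involves unfolding and the topic-modelling flavour extraction, which are not shown.

```python
import numpy as np

def three_vector(pt, eta, phi):
    """Cartesian momentum from (pT, eta, phi)."""
    return np.stack([pt * np.cos(phi), pt * np.sin(phi), pt * np.sinh(eta)], axis=-1)

def fragmentation_observables(jet, tracks):
    """Per-jet charged multiplicity n_ch and longitudinal momentum fractions
    z = (p_track . p_jet) / |p_jet|^2 for the associated charged tracks."""
    p_jet = three_vector(*jet)                     # jet (pt, eta, phi)
    p_trk = three_vector(*tracks)                  # tracks as arrays of (pt, eta, phi)
    z = p_trk @ p_jet / np.dot(p_jet, p_jet)
    return len(z), z

# Toy example: a 150 GeV jet with a handful of associated charged particles.
jet = (150.0, 0.3, 1.0)
tracks = (np.array([40.0, 25.0, 10.0, 3.0, 0.8]),   # pT [GeV]
          np.array([0.31, 0.28, 0.35, 0.25, 0.4]),  # eta
          np.array([1.02, 0.98, 1.05, 0.9, 1.1]))   # phi
n_ch, z = fragmentation_observables(jet, tracks)
print(n_ch, np.round(z, 3))
```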