26 research outputs found

    Human spontaneous gaze patterns in viewing of faces of different species

    Human studies have reported clear differences in the perceptual and neural processing of faces of different species, implying a contribution of visual experience to face perception. Are these differences manifested in our eye-scanning patterns while extracting salient facial information? Here we systematically compared non-pet-owners’ gaze patterns while they explored human, monkey, dog and cat faces in a passive viewing task. Our analysis revealed that the faces of different species induced similar fixation distributions between the left and right hemi-face, and among key local facial features, with the eyes attracting the highest proportion of fixations and viewing time, followed by the nose and then the mouth. Only the proportion of fixations directed at the mouth region was species-dependent, and this difference emerged at the earliest stage of face viewing. It seems that our spontaneous eye-scanning patterns during face exploration are mainly constrained by the general facial configuration; the species affiliation of the inspected faces had limited impact on gaze allocation, at least under free-viewing conditions.
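The fixation-proportion measure described above can be sketched in a few lines; the region labels and the toy data here are hypothetical illustrations, not values from the study.

```python
from collections import Counter

def fixation_proportions(fixations):
    """Proportion of fixations landing in each labelled facial region.

    `fixations` is a list of region labels, one per fixation
    (e.g. "eyes", "nose", "mouth").
    """
    counts = Counter(fixations)
    total = sum(counts.values())
    return {region: n / total for region, n in counts.items()}

# Toy trial mirroring the reported ordering: eyes > nose > mouth.
trial = ["eyes"] * 5 + ["nose"] * 3 + ["mouth"] * 2
print(fixation_proportions(trial))  # {'eyes': 0.5, 'nose': 0.3, 'mouth': 0.2}
```

The same function applied per species and per time window would give the species-by-region comparison the abstract describes.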

    The Effects of Internal Representations on Performance and Fluidity in a Motor Task

    Individuals differ in the mode in which they experience conscious thought. These differences in visualisation and verbalisation can also be evident during motor control. The Internal Representation Questionnaire (IRQ) was developed to measure the propensity to engage certain types of representation, but its ability to predict motor control, and its links to reinvestment and motor imagery, have not been tested. A total of 159 participants completed the IRQ, the Movement-Specific Reinvestment Scale (MSRS), and a novel online motor task before and after a period of practice. Results showed that the IRQ Verbal and Orthographic factors were significant predictors of scores on the MSRS. The IRQ Manipulational Representations factor predicted motor performance both before and after practice. The fluidity of executed movements was predicted by the IRQ Verbalisation factor, with a higher propensity to verbalise associated with higher levels of jitter, but only after a period of practice. The results suggest there may be informative conceptual overlap between internal verbalisation and reinvestment, and that the propensity to manipulate internal representations may predict motor performance in new tasks. The IRQ has the potential to be a valuable tool for predicting motor performance.
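The prediction analyses above are regressions of an outcome score on questionnaire factor scores. A minimal sketch, assuming standardised factor scores, with simulated (not study) data:

```python
import numpy as np

# Hypothetical data: two IRQ factor scores predicting an MSRS score.
# Coefficients and sample size are illustrative only.
rng = np.random.default_rng(0)
n = 40
irq_verbal = rng.normal(size=n)
irq_orthographic = rng.normal(size=n)
msrs = 0.6 * irq_verbal + 0.4 * irq_orthographic + rng.normal(scale=0.5, size=n)

# Ordinary least squares: MSRS ~ Verbal + Orthographic + intercept.
X = np.column_stack([irq_verbal, irq_orthographic, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, msrs, rcond=None)
print(beta)  # fitted coefficients for Verbal, Orthographic, intercept
```

In practice a package such as statsmodels would also supply the significance tests the abstract reports.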

    I know you are beautiful even without looking at you: discrimination of facial beauty in peripheral vision

    Prior research suggests that facial attractiveness may capture attention at the parafovea. However, little is known about how well facial beauty can be detected in parafoveal and peripheral vision. Participants in this study judged the relative attractiveness of a pair of faces presented simultaneously at several eccentricities from central fixation. The results show that beauty is detectable not only at the parafovea but also in the periphery. Discrimination performance at the parafovea was indistinguishable from performance around the fovea. Moreover, performance was well above chance even in the periphery. The results show that the visual system can use low-spatial-frequency information to appraise attractiveness. These findings not only explain why a beautiful face can capture attention when central vision is already engaged elsewhere, but also reveal a potential means by which a crowd of faces is quickly scanned for attractiveness.

    Social interactions through the eyes of macaques and humans

    Group-living primates frequently interact with each other to maintain social bonds as well as to compete for valuable resources. Observing such social interactions between group members provides individuals with essential information (e.g. on the fighting ability or altruistic attitude of group companions) to guide their social tactics and choice of social partners. This process requires individuals to selectively attend to the most informative content within a social scene. It is unclear how non-human primates allocate attention to social interactions in different contexts, and whether they share similar patterns of social attention with humans. Here we compared the gaze behaviour of rhesus macaques and humans when free-viewing the same set of naturalistic images. The images contained positive or negative social interactions between two conspecifics of different phylogenetic distance from the observer, i.e. affiliation or aggression exchanged by two humans, rhesus macaques, Barbary macaques, baboons or lions. Monkeys directed a variable amount of gaze at the two conspecific individuals in the images according to their roles in the interaction (i.e. giver or receiver of affiliation/aggression). Their gaze distribution to non-conspecific individuals varied systematically according to the viewed species and the nature of the interaction, suggesting a contribution of both prior experience and innate bias in guiding social attention. Furthermore, the monkeys’ gaze behaviour was qualitatively similar to that of humans, especially when viewing negative interactions. Detailed analysis revealed that both species directed more gaze at the face than at the body region when inspecting individuals, and attended more to the body region in negative than in positive social interactions. Our study suggests that monkeys and humans share a similar pattern of role-sensitive, species- and context-dependent social attention, implying a homologous cognitive mechanism of social attention between rhesus macaques and humans.

    Processing time not modality dominates shift costs in the modality-shifting effect

    Shifting attention between visual and auditory targets incurs reaction-time costs, known as the modality-shifting effect. The modality being shifted from (e.g., auditory or visual) is suggested to affect the degree of cost. Studies report greater costs when shifting from visual stimuli, yet notably used visual stimuli that were also identified more slowly than the auditory ones. It is therefore unclear whether the cost is specific to modality, or reflects identification speed independent of modality. Here, in order to determine whether the effects are due to modality or identification time, switch costs were instead compared using auditory stimuli that were identified more slowly than the visual ones (the inverse of previous designs). A second condition used the same auditory stimuli at low intensity, allowing comparison with semantically identical stimuli that were even slower to process. The current findings contradicted suggestions of a general difficulty in shifting from visual stimuli (as previously reported), and instead suggest that the cost is reduced when targets are preceded by a more rapidly processed stimulus. ‘Modality shifting’, as it is often termed, induces shifting costs, but these costs arise not from a change of modality per se but from a change in identification speed, with the degree of cost dependent on the processing time of the surrounding stimuli.
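The switch cost compared above is simply mean reaction time on modality-switch trials minus mean reaction time on modality-repeat trials. A minimal sketch with made-up trial records (the field names and RT values are hypothetical, not from the study):

```python
# Each trial records the previous and current target modality and an RT in ms.
trials = [
    {"prev": "visual", "curr": "visual", "rt": 420},
    {"prev": "visual", "curr": "auditory", "rt": 510},
    {"prev": "auditory", "curr": "auditory", "rt": 430},
    {"prev": "auditory", "curr": "visual", "rt": 480},
    {"prev": "visual", "curr": "auditory", "rt": 530},
    {"prev": "auditory", "curr": "auditory", "rt": 440},
]

def mean_rt(ts):
    return sum(t["rt"] for t in ts) / len(ts)

repeat = [t for t in trials if t["prev"] == t["curr"]]
switch = [t for t in trials if t["prev"] != t["curr"]]

# Switch cost = mean RT on switch trials minus mean RT on repeat trials.
cost = mean_rt(switch) - mean_rt(repeat)
print(round(cost, 1))  # 76.7
```

Splitting the switch trials further by the identification speed of the preceding stimulus would give the comparison the abstract argues for.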

    sounds (no full text)

    data (no full text)

    markdown_manuscript (no full text)

    visual (no full text)

    stimuli (no full text)