60 research outputs found

    EXPRESS: Post-Migration Living Difficulties and Poor Mental Health Associated with Increased Interpretation Bias for Threat

    Previous research has found associations between mental health difficulties and interpretation biases, including heightened interpretation of threat from neutral or ambiguous stimuli. Building on this research, we explored associations between interpretation biases (positive and negative) and three constructs that have been linked to migrant experience: mental health symptoms (GSI), post-migration living difficulties (PMLD), and perceived ethnic discrimination (PEDQ). Two hundred and thirty students who identified as first-generation ethnic minority migrants (n = 94), second-generation ethnic minority migrants (n = 68), or first-generation White migrants (n = 68) completed measures of GSI, PEDQ, and PMLD. They also performed an interpretation bias task using Point-Light Walkers (PLWs), dynamic stimuli with reduced visual input that are easily perceived as humans performing an action. Five categories of PLW were used: four that clearly depicted human forms undertaking positive, neutral, negative, or ambiguous actions, and a fifth that involved scrambled animations with no clear action or form. Participants were asked to imagine interacting with the stimuli and to rate their friendliness (positive interpretation bias) and aggressiveness (interpretation bias for threat). We found that the three groups differed on PEDQ and PMLD, with no significant differences in GSI, and that the three measures were positively correlated. Poorer mental health and greater PMLD were associated with heightened interpretation of threat for the scrambled animations only. These findings have implications for understanding the role of threat biases in mental health and the migrant experience.

    Temporal order judgements of dynamic gaze stimuli reveal a postdictive prioritisation of averted over direct shifts

    We studied temporal order judgements (TOJs) of gaze-shift behaviours and evaluated the impact of gaze direction (direct and averted gaze) and face-context information (both eyes set within a single face, or each eye within two adjacent hemifaces) on TOJ performance measures. Avatar faces initially gazed leftwards or rightwards (Starting Gaze Direction). This was followed by sequential and independent left- and right-eye gaze shifts with various amounts of stimulus onset asynchrony. Gaze shifts could be either Matching (both eyes end up pointing direct or averted) or Mismatching (one eye ends up pointing direct, the other averted). Matching shifts revealed an attentional cueing mechanism, whereby TOJs were biased in favour of the eye lying in the hemispace cued by the avatar's Starting Gaze Direction. For example, the left eye was more likely to be judged as shifting first when the avatar initially gazed toward the left side of the screen. Mismatching shifts showed biased TOJs in favour of the eye performing the averted shift, but only in the context of two separate hemifaces, which does not violate expectations of directional gaze-shift congruency. This suggests a postdictive inferential strategy that prioritises eye movements based on the type of gaze shift, independently of where attention is initially allocated. Averted shifts are prioritised over direct shifts, as these might signal the presence of behaviourally relevant information in the environment.

    Personality Traits Do Not Predict How We Look at Faces

    While personality has typically been considered to influence gaze behaviour, the literature on the topic is mixed. Previously, we found no evidence that self-reported personality traits were related to the preferred duration of gaze between a participant and a person looking at them via a video. In this study, 77 of the original participants answered an in-depth follow-up survey containing a more comprehensive assessment of personality traits (Big Five Inventory) than was initially used, to check whether the earlier findings were caused by the personality measure being too coarse. In addition to preferred mutual gaze duration, we also examined two other factors linked to personality traits: number of blinks and total fixation duration in the eye region of observed faces. No significant correlations were found between any of these measures and participant personality traits. We suggest that effects previously reported in the literature may stem from contextual differences or modulation of arousal.

    In the eye of the beholder? Oxytocin effects on eye movements in schizophrenia

    Background: Individuals with schizophrenia have difficulty extracting salient information from faces. Eye-tracking studies have reported that these individuals demonstrate reduced exploratory viewing behaviour (i.e. reduced number of fixations and shorter scan paths) compared to healthy controls. Oxytocin has previously been demonstrated to exert pro-social effects and to modulate eye gaze during face exploration. In this study, we tested whether oxytocin has an effect on visual attention in patients with schizophrenia. Methods: Nineteen male participants with schizophrenia received intranasal oxytocin (40 IU) or placebo in a double-blind, placebo-controlled, crossover fashion during two visits separated by seven days. They engaged in a free-viewing eye-tracking task, exploring images of Caucasian men displaying angry, happy, and neutral emotional expressions, and control images of animate and inanimate stimuli. Eye-tracking parameters included total number of fixations, mean duration of fixations, dispersion, and saccade amplitudes. Results: We found a main effect of treatment, whereby oxytocin increased the total number of fixations, dispersion, and saccade amplitudes, while decreasing the duration of fixations compared to placebo. This effect, however, was not specific to facial stimuli: when restricting the analysis to facial images only, we found the same effect. In addition, oxytocin modulated fixation rates in the eye and nasion regions. Discussion: This is the first study to explore the effects of oxytocin on eye gaze in schizophrenia. Oxytocin enhanced exploratory viewing behaviour in response to both facial and inanimate control stimuli. We suggest that acute administration of intranasal oxytocin may have the potential to enhance visual attention in schizophrenia.

    Genetic algorithms reveal identity independent representation of emotional expressions

    People readily and automatically process facial emotion and identity, and it has been reported that these cues are processed both dependently and independently. However, the question of identity-independent encoding of emotions has only been examined using posed, often exaggerated expressions of emotion that do not account for the substantial individual differences in emotion recognition. In this study, we ask whether people's unique beliefs about how emotions should be reflected in facial expressions depend on the identity of the face. To do this, we employed a genetic algorithm in which participants created facial expressions to represent different emotions. Participants generated facial expressions of anger, fear, happiness, and sadness on two different identities. Facial features were controlled by manipulating a set of weights, allowing us to probe the exact positions of faces in a high-dimensional expression space. We found that participants created facial expressions belonging to each identity in a similar space that was unique to the participant for angry, fearful, and happy expressions, but not sad ones. However, using a machine learning algorithm that examined the positions of faces in expression space, we also found systematic differences between the two identities' expressions across participants. This suggests that participants' beliefs about how an emotion should be reflected in a facial expression are unique to them and identity independent, although there are also some systematic differences in the facial expressions between the two identities that are common across all individuals.
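
    The weight-space search described in this abstract can be illustrated with a minimal genetic-algorithm sketch: candidate expressions are vectors of feature weights, participant ratings act as the fitness signal, and each generation keeps the best-rated candidates and refills the population with recombined, mutated offspring. The snippet below is only a schematic under assumed settings; the rate_candidates function stands in for a participant's choices, and the dimensionality and parameters are arbitrary rather than taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

N_DIMS = 30        # dimensionality of the expression-weight space (arbitrary)
POP_SIZE = 20      # candidate expressions per generation
N_GENERATIONS = 15
MUTATION_SD = 0.1

def rate_candidates(population, template):
    """Stand-in for participant ratings: candidates closer to the participant's
    internal template of the emotion (a fixed vector here) score higher."""
    return -np.linalg.norm(population - template, axis=1)

def next_generation(population, scores):
    """Keep the top-rated half, then refill with crossover + mutation offspring."""
    order = np.argsort(scores)[::-1]
    parents = population[order[:POP_SIZE // 2]]
    children = []
    while len(children) < POP_SIZE - len(parents):
        a, b = parents[rng.choice(len(parents), size=2, replace=False)]
        mask = rng.random(N_DIMS) < 0.5                  # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0.0, MUTATION_SD, N_DIMS)
        children.append(child)
    return np.vstack([parents, np.array(children)])

template = rng.normal(size=N_DIMS)                       # hypothetical internal "belief"
population = rng.normal(size=(POP_SIZE, N_DIMS))
for _ in range(N_GENERATIONS):
    scores = rate_candidates(population, template)
    population = next_generation(population, scores)

best = population[np.argmax(rate_candidates(population, template))]
print("distance of best candidate to template:", np.linalg.norm(best - template))
```

    In the study itself, the analogue of the final weight vectors (one per emotion and identity) is what the abstract's machine-learning step examines in expression space; the distance printed above is only a convergence check for this toy example.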

    Intact priors for gaze direction in adults with high-functioning autism spectrum conditions

    This research was supported by the UK Medical Research Council under project code MC-A060-5PQ50 (Andrew J. Calder). IM was supported by a Leverhulme Trust Project Grant. CC was supported by an Australian Research Council Future Fellowship. SBC was supported by the MRC, the Wellcome Trust and the Autism Research Trust during the period of this work. The research was also supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care East of England at Cambridgeshire and Peterborough NHS Foundation Trust

    Face exploration dynamics differentiate men and women

    The human face is central to our everyday social interactions. Recent studies have shown that while gazing at faces, each one of us has a particular eye-scanning pattern that is highly stable across time. Although variables such as culture or personality have been shown to modulate gaze behavior, we still don't know what shapes these idiosyncrasies. Moreover, most previous observations rely on static analyses of small-sized eye-position data sets averaged across time. Here, we probe the temporal dynamics of gaze to explore what information can be extracted about the observers and what is being observed. Controlling for any stimulus effect, we demonstrate that among many individual characteristics, the gender of both the participant (gazer) and the person being observed (actor) are the factors that most influence gaze patterns during face exploration. We record and exploit the largest set of eye-tracking data (405 participants, 58 nationalities) from participants watching videos of another person. Using novel data-mining techniques, we show that female gazers follow a much more exploratory scanning strategy than males. Moreover, female gazers watching female actresses look more at the eye on the left side. These results have strong implications in every field using gaze-based models, from computer vision to clinical psychology.

    Introduction
    Our eyes constantly move around to place our high-resolution fovea on the most relevant visual information. Arguably, one of the most important objects of regard is another person's face. Until recently, a majority of face perception studies have been pointing to a "universal" face exploration pattern: humans systematically follow a triangular scanpath (sequence of fixations) over the eyes and the mouth of the presented face.

    Methods and results
    Experiment
    This data set has been described and used in a pupillometry study.

    Participants
    We recorded the gaze of 459 visitors to the Science Museum of London, UK. We removed from the analysis the data of participants under age 18 (n = 8) as well as 46 other participants whose eye data exhibited some irregularities (loss of signal, obviously shifted positions). The analyses are performed on a final group of 405 participants (203 males, 202 females). Mean age of participants was 30.8 years (SD = 11.5; males: M = 32.3, SD = 12.3; females: M = 29.3, SD = 10.5). The experiment was approved by the UCL research ethics committee and by the London Science Museum ethics board, and the methods were carried out in accordance with the approved guidelines. Signed informed consent was obtained from all participants.

    Stimuli
    Stimuli consisted of video clips of eight different actors (four females, four males).

    Apparatus
    The experimental setup consisted of four computers: two for administering the personality questionnaire and two dedicated to the eye-tracking experiment and actor face-rating questionnaire (see Procedure). Each setup consisted of a stimulus presentation PC (Dell Precision T3500 and Dell Precision T3610) hooked up to a 19-in. LCD monitor (both 1280 × 1024 pixels, 49.9° × 39.9° of visual angle) at 60 Hz and an EyeLink 1000 kit (http://www.sr-research.com/). Eye-tracking data were collected at 250 Hz. Participants sat 57 cm from the monitor, their head stabilized with a chin rest, forehead rest, and headband. A protective opaque white screen encased the monitor and part of the participant's head in order to shield the participant from environmental distractions.

    Procedure
    The study took place at the Live Science Pod in the "Who Am I?" exhibition of the London Science Museum. The room had no windows, and the ambient luminance was very stable across the experiment. The experiment consisted of three phases for a total duration of approximately 15 min. Phase 1 was a 10-item personality questionnaire based on the Big Five personality inventory.

    The dispersion is the mean Euclidean distance between the eye positions of the same observers for a given clip. Small dispersion values reflect clustered eye positions.

    Scanpath modeling: hidden Markov models
    To grasp the highly dynamic and individualistic components of gaze behavior, we model participants' scanpaths using hidden Markov models (HMMs).
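
    Two of the analysis steps named above, the dispersion measure and the HMM-based scanpath modelling, can be sketched roughly as follows. This is a minimal illustration rather than the authors' code: the gaze coordinates are simulated, and the use of hmmlearn's GaussianHMM with two hidden states is an assumption made for the example.

```python
import numpy as np
from hmmlearn import hmm  # generic Gaussian HMM, used here only as a stand-in

rng = np.random.default_rng(1)

# Simulated gaze samples (x, y in pixels) for one observer and one clip:
# two clusters, e.g. around an eye region and around the mouth.
gaze_xy = np.vstack([
    rng.normal([500.0, 380.0], 15.0, size=(200, 2)),
    rng.normal([640.0, 600.0], 20.0, size=(150, 2)),
])

def dispersion(positions):
    """Mean pairwise Euclidean distance between eye positions
    (small values = tightly clustered gaze)."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    return dists[np.triu_indices(len(positions), k=1)].mean()

print(f"dispersion: {dispersion(gaze_xy):.1f} px")

# Model the scanpath as transitions between Gaussian regions of interest.
model = hmm.GaussianHMM(n_components=2, covariance_type="full",
                        n_iter=100, random_state=0)
model.fit(gaze_xy)
print("state means (ROI centres):\n", model.means_)
print("transition matrix:\n", model.transmat_)
```

    Per-participant HMM parameters of this kind (state means, covariances, and transition probabilities) are the sort of features that a data-mining analysis like the one described in the abstract could then compare across gazer and actor gender.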

    Time-order errors in duration judgment are independent of spatial positioning

    Time-order errors (TOEs) occur when the discriminability between two stimuli is affected by the order in which they are presented. While TOEs have been studied since the 1860s, it is unknown whether the spatial properties of a stimulus affect this temporal phenomenon. In this experiment, we asked whether perceived duration, or duration discrimination, might be influenced by whether two intervals in a standard two-interval method of constants paradigm were spatially overlapping in visual short-term memory. Two circular sinusoidal gratings (one standard and the other a comparison) were shown sequentially, and participants judged which of the two was presented for the longer duration. The test stimuli were either spatially overlapping (in different spatial frames) or separate. Stimulus order was randomized between trials. The standard stimulus lasted 600 ms, and the test stimulus had one of seven possible values (between 300 and 900 ms). There were no overall significant differences observed between spatially overlapping and separate stimuli. However, in trials where the standard stimulus was presented second, TOEs were greater and participants were significantly less sensitive to differences in duration. TOEs were also greater in conditions involving a saccade. This suggests that there is an intrinsic memory component to two-interval tasks, in that the information from the first interval has to be stored; this is more demanding when the standard is presented in the second interval. Overall, this study suggests that while temporal information may be encoded in some spatial form, it is not dependent on visual short-term memory.
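
    Time-order errors in two-interval duration tasks of this kind are commonly quantified as a shift of the point of subjective equality (PSE) away from the physical standard, with duration sensitivity read off the slope of the psychometric function. The sketch below is illustrative only and does not reproduce the authors' analysis: it fits a cumulative Gaussian to made-up proportions of "comparison judged longer" responses at the seven comparison durations.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Comparison durations (ms) tested against the 600 ms standard.
durations = np.array([300, 400, 500, 600, 700, 800, 900], dtype=float)

# Hypothetical response proportions: "comparison judged longer than standard".
p_longer = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.97])

def psychometric(x, pse, sigma):
    """Cumulative Gaussian: P(comparison judged longer) as a function of duration."""
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, durations, p_longer, p0=[600.0, 100.0])

standard = 600.0
print(f"PSE = {pse:.1f} ms, sigma = {sigma:.1f} ms")
print(f"time-order error (PSE - standard) = {pse - standard:.1f} ms")
```

    On this reading, a larger |PSE - standard| corresponds to a larger TOE, and a shallower slope (larger sigma) to poorer duration sensitivity, which is how the reduced sensitivity reported when the standard came second would show up.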

    Attentional modulation of crowding

    Outside the fovea, the visual system pools features of adjacent stimuli. Left or right of fixation, the tilt of an almost horizontal Gabor pattern becomes difficult to classify when horizontal Gabors appear above and below it. Classification is even harder when flankers are to the left and right of the target. With all four flankers present, observers were required both to classify the target's tilt and to perform a spatial-frequency task on two of the four flankers. This dual task proved significantly more difficult when attention was directed to the horizontally aligned flankers. We suggest that covert attention to stimuli can increase the weights of their pooled features.