
    GlassesValidator: A data quality tool for eye tracking glasses

    According to the proposal for a minimum reporting guideline for eye-tracking studies by Holmqvist et al. (2022), the accuracy (in degrees) of eye-tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye-tracking recordings. To enable quick and easy determination of accuracy, we have produced a simple validation procedure using a printable poster and accompanying Python software. We tested the poster and procedure with 61 participants using one wearable eye tracker. In addition, the software was tested with six different wearable eye trackers. We found that the validation procedure can be administered within a minute per participant and provides measures of accuracy and precision. Calculating the eye-tracking data quality measures can be done offline on a simple computer and requires no advanced computer skills.
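    Accuracy and precision for such a validation are conventionally defined as the mean angular offset from the validation target and the RMS of sample-to-sample angular differences, respectively. A minimal sketch of these conventional definitions, treating gaze and target positions as small-angle (x, y) coordinates in degrees (this is not the GlassesValidator implementation; function and variable names are illustrative):

```python
import numpy as np

def accuracy_precision(gaze_deg, target_deg):
    """Accuracy: mean offset between gaze samples and the target (degrees).
    Precision: RMS of sample-to-sample gaze differences (degrees)."""
    gaze = np.asarray(gaze_deg, dtype=float)
    # Offset of every gaze sample from the fixation target.
    offsets = np.linalg.norm(gaze - np.asarray(target_deg, dtype=float), axis=1)
    accuracy = offsets.mean()
    # Sample-to-sample differences capture dispersion (precision).
    diffs = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    precision_rms = np.sqrt(np.mean(diffs ** 2))
    return accuracy, precision_rms
```

Both measures can be computed offline from the recorded gaze samples alone, which is what makes a poster-based validation cheap to administer.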

    Snacking for a reason: detangling effects of socio-economic position and stress on snacking behaviour

    Background: As snacking can be considered a cornerstone of an unhealthy diet, investigating the psychological drivers of snacking behaviour is urgent, and doing so was the purpose of this study. Socio-economic position (SEP) and stress are known to affect many behaviours and outcomes, and were therefore the focal points of the study. Methods: In a cross-sectional survey study, we examined whether SEP would amplify associations between heightened stress levels and self-reported negative-affect-related reasons for snacking. Next, we investigated whether SEP predicted the frequency of snacking behaviour, and how stress and other reasons for snacking could explain this association. Outcome measures were the reasons people indicated for snacking and the frequency of snacking behaviour. Results: Analyses revealed that people seem to find more reasons to snack when they are stressed, and that this association was more pronounced for people with a high compared to a low socio-economic position. Furthermore, a higher socio-economic position was associated with a higher frequency of snacking, and both snacking to reward oneself and snacking because of the opportunity to do so remained significant mediators. Conclusion: Whereas a low socio-economic position was associated with higher stress levels, this did not translate into increased snacking. Rather, those with a higher socio-economic position may be more prone to using ‘reasons to snack’, which may result in the justification of unhealthy snacking behaviour.

    Multi-target visual search organisation across the lifespan: cancellation task performance in a large and demographically stratified sample of healthy adults

    Accurate tests of cognition are vital in (neuro)psychology. Cancellation tasks are popular tests of attention and executive function, in which participants find and 'cancel' targets among distractors. Despite extensive use in neurological patients, it remains unclear whether demographic variables (which vary among patients) affect cancellation performance. Here, we describe the performance of 523 healthy participants on a web-based cancellation task. Age, sex, and level of education did not affect cancellation performance in this sample. We provide norm scores for indices of spatial bias, perseverations, revisits, processing speed, and search organisation. Furthermore, a cluster analysis identified four cognitive profiles among participants, characterised by many omissions (N=18), many revisits (N=18), relatively poor search organisation (N=125), and relatively good search organisation (N=362). Thus, patient scores pertaining to search organisation should be interpreted cautiously: given the large proportion of healthy individuals with poor search organisation, disorganised search in patients might be pre-existing rather than disorder-related.

    Task-related gaze behaviour in face-to-face dyadic collaboration: Toward an interactive theory?

    Visual routines theory posits that vision is critical for guiding sequential actions in the world. Most studies on the link between vision and sequential action have considered individual agents, while substantial human behaviour is characterized by multi-party interaction, in which the actions of each person may affect what the other can subsequently do. We investigated task execution and gaze allocation of 19 dyads completing a Duplo-model copying task together while wearing the Pupil Invisible eye tracker. We varied whether all blocks were visible to both participants, and whether verbal communication was allowed. For models in which not all blocks were visible, participants seemed to coordinate their gaze: the distance between the participants' gaze positions was smaller, and dyads looked at the model concurrently for longer, than for models in which all blocks were visible. This was most pronounced when verbal communication was allowed. We conclude that the way the collaborative task was executed depended both on whether visual information was available to both persons and on how communication took place. Modelling task structure and gaze allocation for human-human and human-robot collaboration thus requires more than the observable behaviour of either individual. We discuss whether an interactive visual routines theory ought to be pursued.

    “Keep your distance for me”: A field experiment on empathy prompts to promote distancing during the COVID-19 pandemic

    The outbreak of COVID-19 has turned out to be a major challenge to societies all over the globe. Curbing the pandemic requires rapid and extensive behavioural change to limit social interaction, including physical distancing. In this study, we tested the notion that inducing empathy for people vulnerable to the virus may result in actual distancing behaviour beyond the mere motivation to do so. In a large field experiment with a sequential case–control design, we found that (a) empathy prompts may increase distancing as assessed by camera recordings, and (b) the effectiveness of the prompts depends on the dynamics of the pandemic and associated public health policies. In sum, the present study demonstrates the potential of empathy-generating interventions to promote pro-social behaviour and emphasizes the necessity of field experiments to assess the role of context before advising policy makers to implement measures derived from behavioural science. Please refer to the Supplementary Material for this article's Community and Social Impact Statement.

    Some aspects of the activity of the authorized agents of the USSR People's Commissariat (Ministry) of Procurement in the Kirovohrad region in 1944–1946 and their consequences

    Based on an analysis of archival documents, the article examines the activity of the authorized agents of the USSR People's Commissariat (Ministry) of Procurement in the Kirovohrad region in the first post-war years, and describes the consequences of that activity: the total impoverishment of the region's population through the confiscation of almost all foodstuffs.

    A search asymmetry for interocular conflict

    When two different images are presented to the two eyes, the percept will alternate between the images (a phenomenon called binocular rivalry). In the present study, we investigate the degree to which such interocular conflict is conspicuous. Using a visual search task, we show that search for interocular conflict is near-efficient (15 ms/item) and can lead to a search asymmetry, depending on the contrast in the display. We reconcile our findings with those of Wolfe and Franzel (1988), who reported inefficient search for interocular conflict (26 ms/item) and found no evidence for a search asymmetry. In addition, we provide evidence for the suggestion that differences in search for interocular conflict are contingent on the degree of abnormal fusion of the dissimilar images.
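    The per-item search costs quoted above (15 ms/item vs. 26 ms/item) are standardly obtained by regressing mean response time on the number of items in the display. A minimal sketch of that standard computation (not code or data from the study; numbers below are illustrative only):

```python
import numpy as np

def search_slope(set_sizes, mean_rts_ms):
    """Fit RT = slope * set_size + intercept.
    The slope is the per-item search cost in ms/item."""
    slope, intercept = np.polyfit(set_sizes, mean_rts_ms, 1)
    return slope, intercept

# Illustrative numbers only: a 15 ms/item slope across set sizes 4, 8, and 12.
slope, intercept = search_slope([4, 8, 12], [460.0, 520.0, 580.0])
```

Slopes near zero indicate efficient ("pop-out") search; the shallower the slope, the more conspicuous the target.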

    Eye contact avoidance in crowds: A large wearable eye-tracking study

    Eye contact is essential for human interactions. We investigated whether humans are able to avoid eye contact while navigating crowds. At a science festival, we fitted 62 participants with a wearable eye tracker and instructed them to walk a route. Half of the participants were further instructed to avoid eye contact. We report that humans can flexibly allocate their gaze while navigating crowds and avoid eye contact primarily by orienting their head and eyes towards the floor. We discuss implications for crowd navigation and gaze behaviour. In addition, we address a number of issues encountered in such field studies with regard to data quality, control of the environment, and participant adherence to instructions. We stress that methodological innovation and scientific progress are strongly interrelated.

    A Validation of Automatically-Generated Areas-of-Interest in Videos of a Face for Eye-Tracking Research

    When mapping eye-movement behavior to the visual information presented to an observer, Areas of Interest (AOIs) are commonly employed. For static stimuli (screens without moving elements), this requires that one AOI set is constructed for each stimulus, a possibility in most eye-tracker manufacturers' software. For moving stimuli (screens with moving elements), however, it is often a time-consuming process, as AOIs have to be constructed for each video frame. A popular use case for such moving AOIs is to study gaze behavior to moving faces. Although it is technically possible to construct AOIs automatically, the standard in this field is still manual AOI construction. This is likely because automatic AOI-construction methods are (1) technically complex, or (2) not effective enough for empirical research. To aid researchers in this field, we present and validate a method that automatically achieves AOI construction for videos containing a face. The fully automatic method uses an open-source toolbox for facial landmark detection and a Voronoi-based AOI-construction method. We compared the position of AOIs obtained using our new method, and the eye-tracking measures derived from it, to a recently published semi-automatic method. The differences between the two methods were negligible. The presented method is therefore both effective (as effective as previous methods) and efficient: no researcher time is needed for AOI construction. The software is freely available from https://osf.io/zgmch/
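    At its core, a Voronoi-based AOI assignment reduces to labelling each gaze sample with the nearest landmark, since a point lies in the Voronoi cell of its closest seed. A minimal sketch of this idea per video frame (not the toolbox's API; function and variable names are illustrative):

```python
import numpy as np

def assign_aoi(gaze_xy, landmarks_xy):
    """Label each gaze sample with the index of the nearest facial landmark,
    i.e. the Voronoi cell (AOI) the sample falls in."""
    gaze = np.asarray(gaze_xy, dtype=float)[:, None, :]        # (n_samples, 1, 2)
    seeds = np.asarray(landmarks_xy, dtype=float)[None, :, :]  # (1, n_landmarks, 2)
    dists = np.linalg.norm(gaze - seeds, axis=2)               # pairwise distances
    return dists.argmin(axis=1)                                # nearest-seed index
```

Because the landmarks move with the face from frame to frame, re-running this assignment per frame yields moving AOIs with no manual drawing.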

    GazeCode: Open-source software for manual mapping of mobile eye-tracking data

    Purpose: Eye movements recorded with mobile eye trackers generally have to be mapped to the visual stimulus manually. Manufacturer software usually has sub-optimal user interfaces. Here, we compare our in-house developed open-source alternative to the manufacturer software, called GazeCode. Method: 330 seconds of eye movements were recorded with the Tobii Pro Glasses 2. Eight coders subsequently categorized fixations using both Tobii Pro Lab and GazeCode. Results: Average manual mapping speed was more than two times faster when using GazeCode (0.649 events/s) compared with Tobii Pro Lab (0.292 events/s). Inter-rater reliability (Cohen's kappa) was similar and satisfactory: 0.886 for Tobii Pro Lab and 0.871 for GazeCode. Conclusion: GazeCode is a faster alternative to Tobii Pro Lab for mapping eye movements to the visual stimulus. Moreover, it accepts eye-tracking data from the manufacturers SMI, Positive Science, Tobii, and Pupil Labs.
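    Cohen's kappa, used above for inter-rater reliability, corrects observed agreement for the agreement expected by chance given each coder's label frequencies. A minimal sketch for two coders assigning nominal labels (this is the standard formula, not GazeCode's implementation):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(coder_a)
    # Proportion of items on which the two coders agree.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_chance = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)
```

Values above roughly 0.8, such as those reported here, are commonly read as strong agreement.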