
    Social presence and dishonesty in retail

    Self-service checkouts (SCOs) in retail can benefit consumers and retailers, providing control and autonomy to shoppers independent from staff, together with reduced queuing times. Recent research indicates that the absence of staff may provide the opportunity for consumers to behave dishonestly, consistent with a perceived lack of social presence. This study examined whether a social presence in the form of various instantiations of embodied, visual, humanlike SCO interface agents had an effect on opportunistic behaviour. Using a simulated SCO scenario, participants experienced various dilemmas in which they could financially benefit themselves undeservedly. We hypothesised that a humanlike social presence integrated within the checkout screen would receive more attention and result in fewer instances of dishonesty compared to a less humanlike agent. This was partially supported by the results. The findings contribute to the theoretical framework in social presence research. We concluded that companies adopting self-service technology may consider the implementation of social presence in technology applications to support ethical consumer behaviour, but that more research is required to explore the mixed findings in the current study.

    The development of face orienting mechanisms in infants at-risk for autism

    A popular idea related to early brain development in autism is that a lack of attention to, or interest in, social stimuli early in life interferes with the emergence of social brain networks mediating the typical development of socio-communicative skills. Compelling as it is, this developmental account has proved difficult to verify empirically because autism is typically diagnosed in toddlerhood, after this process of brain specialization is well underway. Using a prospective study, we directly tested the integrity of social orienting mechanisms in infants at-risk for autism by virtue of having an older diagnosed sibling. Contrary to previous accounts, infants who later developed autism exhibited a clear orienting response to faces that were embedded within an array of distractors. Nevertheless, infants at-risk for autism as a group, and irrespective of their subsequent outcomes, had a greater tendency to select and sustain attention to faces. This pattern suggests that interactions among multiple social and attentional brain systems over the first two years give rise to variable pathways in infants at-risk.

    Does oculomotor inhibition of return influence fixation probability during scene search?

    Oculomotor inhibition of return (IOR) is believed to facilitate scene scanning by decreasing the probability that gaze will return to a previously fixated location. This “foraging” hypothesis was tested during scene search and in response to sudden-onset probes at the immediately previous (one-back) fixation location. The latencies of saccades landing within 1° of the previous fixation location were elevated, consistent with oculomotor IOR. However, there was no decrease in the likelihood that the previous location would be fixated relative to distance-matched controls or an a priori baseline. Saccades exhibit an overall forward bias, but this is due to a general bias to move in the same direction and for the same distance as the last saccade (saccadic momentum) rather than to a spatially specific tendency to avoid previously fixated locations. We find no evidence that oculomotor IOR has a significant impact on return probability during scene search.
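    As an illustration of the kind of one-back return analysis described above, the sketch below flags saccades that land within 1° of the location fixated two fixations earlier and compares their latencies with those of other saccades. It is a toy reconstruction under stated assumptions, not the study's actual pipeline; the simulated positions, latencies, and array layout are hypothetical.

        import numpy as np

        def return_saccade_mask(fixations, radius_deg=1.0):
            """Mark fixations landing within radius_deg of the one-back location,
            i.e. the location occupied two fixations earlier (before the
            immediately preceding fixation)."""
            mask = np.zeros(len(fixations), dtype=bool)
            dists = np.linalg.norm(fixations[2:] - fixations[:-2], axis=1)
            mask[2:] = dists <= radius_deg
            return mask

        # Toy data: fixation positions (degrees of visual angle, confined to a
        # 6 x 6 deg region) and the latency (ms) of the saccade that brought
        # the eyes to each fixation. Both are hypothetical.
        rng = np.random.default_rng(0)
        fixations = rng.uniform(0, 6, size=(200, 2))
        latencies = rng.normal(250, 40, size=200)

        is_return = return_saccade_mask(fixations)
        print("return rate:", is_return.mean())
        print("mean latency (return vs. other):",
              latencies[is_return].mean(), latencies[~is_return].mean())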

    Scenes, saliency maps and scanpaths

    The aim of this chapter is to review some of the key research investigating how people look at pictures. In particular, my goal is to provide theoretical background for those that are new to the field, while also explaining some of the relevant methods and analyses. I begin by introducing eye movements in the context of natural scene perception. As in other complex tasks, eye movements provide a measure of attention and information processing over time, and they tell us about how the foveated visual system determines what to prioritise. I then describe some of the many measures which have been derived to summarize where people look in complex images. These include global measures, analyses based on regions of interest and comparisons based on heat maps.

    A particularly popular approach for trying to explain fixation locations is the saliency map approach, and the first half of the chapter is mostly devoted to this topic. A large number of papers and models are built on this approach, but it is also worth spending time on this topic because the methods involved have been used across a wide range of applications. The saliency map approach is based on the fact that the visual system has topographic maps of visual features, that contrast within these features seems to be represented and prioritized, and that a central representation can be used to control attention and eye movements. This approach, and the underlying principles, has led to an increase in the number of researchers using complex natural scenes as stimuli. It is therefore important that those new to the field are familiar with saliency maps, their usage, and their pitfalls. I describe the original implementation of this approach (Itti & Koch, 2000), which uses spatial filtering at different levels of coarseness and combines them in an attempt to identify the regions which stand out from their background. Evaluating this model requires comparing fixation locations to model predictions. Several different experimental and comparison methods have been used, but most recent research shows that bottom-up guidance is rather limited in terms of predicting real eye movements.

    The second part of the chapter is largely concerned with measuring eye movement scanpaths. Scanpaths are the sequential patterns of fixations and saccades made when looking at something for a period of time. They show regularities which may reflect top-down attention, and some have attempted to link these to memory and an individual’s mental model of what they are looking at. While not all researchers will be testing hypotheses about scanpaths, an understanding of the underlying methods and theory will be of benefit to all. I describe the theories behind analyzing eye movements in this way, and various methods which have been used to represent and compare them. These methods allow one to quantify the similarity between two viewing patterns, and this similarity is linked to both the image and the observer.

    The last part of the chapter describes some applications of eye movements in image viewing. The methods discussed can be applied to complex images, and therefore these experiments can tell us about perception in art and marketing, as well as about machine vision.
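    To make the scanpath-comparison idea concrete, the sketch below encodes two fixation sequences as strings over a coarse spatial grid and scores their similarity with a normalised string edit (Levenshtein) distance. This is one common family of methods rather than the specific analyses reviewed in the chapter; the grid size, image dimensions, and example fixations are illustrative assumptions.

        from typing import List, Tuple

        def encode_scanpath(fixations: List[Tuple[float, float]],
                            width: int, height: int,
                            n_cols: int = 5, n_rows: int = 5) -> str:
            """Map each (x, y) fixation to a letter naming the grid cell it falls in."""
            letters = []
            for x, y in fixations:
                col = min(int(x / width * n_cols), n_cols - 1)
                row = min(int(y / height * n_rows), n_rows - 1)
                letters.append(chr(ord("A") + row * n_cols + col))
            return "".join(letters)

        def levenshtein(a: str, b: str) -> int:
            """Classic dynamic-programming string edit distance."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                curr = [i]
                for j, cb in enumerate(b, start=1):
                    curr.append(min(prev[j] + 1,                 # deletion
                                    curr[j - 1] + 1,             # insertion
                                    prev[j - 1] + (ca != cb)))   # substitution
                prev = curr
            return prev[-1]

        def scanpath_similarity(a: str, b: str) -> float:
            """1.0 for identical strings, 0.0 for maximally dissimilar ones."""
            if not a and not b:
                return 1.0
            return 1.0 - levenshtein(a, b) / max(len(a), len(b))

        # Two hypothetical scanpaths over an 800 x 600 pixel image.
        path_a = encode_scanpath([(100, 80), (420, 300), (600, 150)], width=800, height=600)
        path_b = encode_scanpath([(110, 90), (430, 310), (200, 500)], width=800, height=600)
        print(path_a, path_b, scanpath_similarity(path_a, path_b))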

    Fixation durations in scene viewing: Modeling the effects of local image features, oculomotor parameters, and task

    Scene perception requires the orchestration of image- and task-related processes with oculomotor constraints. The present study was designed to investigate how these factors influence how long the eyes remain fixated on a given location. Linear mixed models (LMMs) were used to test whether local image statistics (including luminance, luminance contrast, edge density, visual clutter, and the number of homogeneous segments), calculated for 1° circular regions around fixation locations, modulate fixation durations, and how these effects depend on task-related control. Fixation durations and locations were recorded from 72 participants, each viewing 135 scenes under three different viewing instructions (memorization, preference judgment, and search). Along with the image-related predictors, the LMMs simultaneously considered a number of oculomotor and spatiotemporal covariates, including the amplitudes of the previous and next saccades, and viewing time. As a key finding, the local image features around the current fixation predicted this fixation’s duration. For instance, greater luminance was associated with shorter fixation durations. Such immediacy effects were found for all three viewing tasks. Moreover, in the memorization and preference tasks, some evidence for successor effects emerged, such that some image characteristics of the upcoming location influenced how long the eyes stayed at the current location. In contrast, in the search task, scene processing was not distributed across fixation durations within the visual span. The LMM-based framework of analysis, applied to the control of fixation durations in scenes, suggests important constraints for models of scene perception and search, and for visual attention in general.
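    As a rough illustration of the LMM framework described above, the sketch below fits a linear mixed model with a by-participant random intercept and two local image features as fixed effects, using the statsmodels formula API. The variable names, the simulated data, and the reduced predictor set are assumptions for illustration; the reported models include many more image, oculomotor, and task covariates.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_subjects, n_fix = 20, 150

        # Simulated data: one row per fixation, with hypothetical local image
        # features computed around each fixation location.
        df = pd.DataFrame({
            "subject": np.repeat(np.arange(n_subjects), n_fix),
            "luminance": rng.uniform(0, 1, n_subjects * n_fix),
            "edge_density": rng.uniform(0, 1, n_subjects * n_fix),
        })

        # Fixation durations (ms) with a by-subject random intercept and a
        # negative effect of local luminance, mirroring the direction of the
        # effect reported in the abstract.
        subj_intercept = rng.normal(0, 20, n_subjects)[df["subject"].to_numpy()]
        df["fix_dur"] = (260 - 30 * df["luminance"] + 10 * df["edge_density"]
                         + subj_intercept + rng.normal(0, 40, len(df)))

        # Random intercept per participant; fixed effects for the image features.
        model = smf.mixedlm("fix_dur ~ luminance + edge_density",
                            data=df, groups=df["subject"])
        print(model.fit().summary())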