
    Honesty, social presence, and self-service in retail

    Retail self-service checkouts (SCOs) can benefit consumers and retailers, providing shoppers with control and autonomy independent of staff. Recent research indicates that the absence of staff may give consumers the opportunity to behave dishonestly. This study examined whether a social presence in the form of visual, humanlike SCO interface agents had an effect on dishonest user behaviour. Using a simulated SCO scenario, participants faced various dilemmas in which they could undeservedly benefit themselves financially. We hypothesised that a humanlike social presence integrated into the checkout screen would receive more attention and result in fewer instances of dishonesty than a less humanlike agent. Our hypotheses were partially supported by the results. We conclude that companies adopting self-service technology may consider implementing a social presence to support ethical consumer behaviour, but that more research is required to explore the mixed findings in the current study.

    Social relevance drives viewing behavior independent of low-level salience in rhesus macaques

    Quantifying attention to social stimuli during the viewing of complex social scenes with eye tracking has proven to be a sensitive method in the diagnosis of autism spectrum disorders years before the average age of clinical diagnosis. Rhesus macaques provide an ideal model for understanding the mechanisms underlying social viewing behavior, but to date no comparable behavioral task has been developed for use in monkeys. Using a novel scene-viewing task, we monitored the gaze of three rhesus macaques while they freely viewed well-controlled composed social scenes and analyzed the time spent viewing objects and monkeys. In each of six behavioral sessions, monkeys viewed a set of 90 images (540 unique scenes) with each image presented twice. In two-thirds of the repeated scenes, either a monkey or an object was replaced with a novel item (manipulated scenes). When viewing a repeated scene, monkeys made longer fixations and shorter saccades, shifting from a rapid orienting to global scene contents to a more local analysis of fewer items. In addition to this repetition effect, in manipulated scenes, monkeys demonstrated robust memory by spending more time viewing the replaced items. By analyzing attention to specific scene content, we found that monkeys strongly preferred to view conspecifics and that this preference was not related to their salience in terms of low-level image features. A model-free analysis of viewing statistics found that monkeys that were viewed earlier and longer had direct gaze and redder sex skin around the face and rump, two important visual social cues. These data provide a quantification of viewing strategy, memory, and social preferences in rhesus macaques viewing complex social scenes, and they provide an important baseline against which to compare the effects of therapeutics aimed at enhancing social cognition.
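The time-spent-viewing analysis this abstract describes is typically computed by summing gaze samples that fall inside predefined areas of interest (AOIs), such as the monkeys and objects in each scene. The sketch below is illustrative only; the function name, the rectangular AOI representation, and the sample rate are our assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch: dwell-time analysis of gaze samples falling inside
# rectangular areas of interest (AOIs), e.g. "monkey" vs. "object" regions.
# All names and values here are illustrative, not taken from the study.

def dwell_times(samples, aois, sample_rate_hz=1000):
    """Sum the time (in seconds) that gaze samples spend inside each AOI.

    samples: iterable of (x, y) gaze coordinates in pixels
    aois:    dict mapping AOI name -> (x_min, y_min, x_max, y_max)
    """
    dt = 1.0 / sample_rate_hz          # duration represented by one sample
    totals = {name: 0.0 for name in aois}
    for x, y in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dt
    return totals

# Example: 500 samples on the "monkey" AOI, 250 on the "object" AOI
aois = {"monkey": (0, 0, 100, 100), "object": (200, 0, 300, 100)}
samples = [(50, 50)] * 500 + [(250, 50)] * 250
totals = dwell_times(samples, aois, sample_rate_hz=1000)
```

Comparing such dwell times against the dwell predicted by a low-level saliency map is one way to show, as the abstract does, that the preference for conspecifics is not explained by image features alone.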

    Requirements for Robotic Interpretation of Social Signals “in the Wild”: Insights from Diagnostic Criteria of Autism Spectrum Disorder

    The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move “into the wild”. In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in addressing them. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussion has clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour, show that the current state of the art is unable to characterise it, and emphasise that future research should tackle this explicitly in realistic settings.

    Semantic content outweighs low-level saliency in determining children's and adults' fixation of movies

    To make sense of the visual world, we need to move our eyes to focus regions of interest on the high-resolution fovea. Eye movements, therefore, give us a way to infer mechanisms of visual processing and attention allocation. Here, we examined age-related differences in visual processing by recording eye movements from 37 children (aged 6–14 years) and 10 adults while they viewed three 5-min dynamic video clips taken from child-friendly movies. The data were analyzed in two complementary ways: (a) gaze based and (b) content based. First, similarity of scanpaths within and across age groups was examined using three different measures of variance (dispersion, clusters, and distance from center). Second, content-based models of fixation were compared to determine which of these provided the best account of our dynamic data. We found that the variance in eye movements decreased as a function of age, suggesting common attentional orienting. Comparison of the different models revealed that a model that relies on faces generally performed better than the other models tested, even for the youngest age group (<10 years). However, the best predictor of a given participant’s eye movements was the average of all other participants’ eye movements, both within the same age group and in different age groups. These findings have implications for understanding how children attend to visual information and highlight similarities in viewing strategies across development.
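Of the three variance measures named in this abstract, dispersion is the simplest: at each video frame, how spread out are the gaze positions of the different viewers? A minimal sketch, assuming dispersion is defined as mean distance from the group centroid (one common convention; the paper's exact definition may differ):

```python
# Illustrative sketch of a gaze-dispersion measure: the mean Euclidean
# distance of simultaneous gaze points (one per viewer) from their centroid.
# Lower dispersion = more similar viewing across participants.
import math

def gaze_dispersion(points):
    """Mean distance of (x, y) gaze points from their centroid."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)

# Two viewers looking 2 px apart -> each is 1 px from the centroid
spread = gaze_dispersion([(0, 0), (2, 0)])
```

Computing this per frame and averaging over a clip gives a single scalar per age group, which is how a decrease in inter-viewer variance with age can be quantified.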

    Parsing eye-tracking data of variable quality to provide accurate fixation duration estimates in infants and adults

    Researchers studying infants’ spontaneous allocation of attention have traditionally relied on hand-coding infants’ direction of gaze from videos; these techniques have low temporal and spatial resolution and are labor intensive. Eye-tracking technology potentially allows for much more precise measurement of how attention is allocated at the subsecond scale, but a number of technical and methodological issues have given rise to caution about the quality and reliability of high temporal resolution data obtained from infants. We present analyses suggesting that when standard dispersal-based fixation detection algorithms are used to parse eye-tracking data obtained from infants, the results appear to be heavily influenced by interindividual variations in data quality. We discuss the causes of these artifacts, including fragmentary fixations arising from flickery or unreliable contact with the eye tracker and variable degrees of imprecision in the reported position of gaze. We also present new algorithms designed to cope with these problems by including a number of new post hoc verification checks to identify and eliminate fixations that may be artifactual. We assess the results of our algorithms by testing their reliability using a variety of methods and on several data sets. We contend that, with appropriate data analysis methods, fixation duration can be a reliable and stable measure in infants. We conclude by discussing ways in which studying fixation durations during unconstrained orienting may offer insights into the relationship between attention and learning in naturalistic settings.
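The "standard dispersal-based fixation detection" this abstract critiques is usually some variant of the identification-by-dispersion-threshold (I-DT) algorithm: grow a window of consecutive samples while their spatial spread stays under a threshold, and emit a fixation when the window can grow no further. The sketch below uses common default conventions (a max−min dispersion metric and illustrative threshold values), not the authors' exact parser or their post hoc verification checks:

```python
# Minimal sketch of a dispersion-threshold (I-DT) fixation parser.
# The (max-min x) + (max-min y) dispersion metric and the default
# thresholds are common conventions, chosen here for illustration.

def idt_fixations(samples, max_dispersion=30.0, min_samples=5):
    """Return (start_index, end_index) pairs of detected fixations.

    samples:        list of (x, y) gaze positions, in recording order
    max_dispersion: maximum allowed spread within one fixation
    min_samples:    minimum window length counted as a fixation
    """
    def dispersion(win):
        xs = [p[0] for p in win]
        ys = [p[1] for p in win]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i = [], 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations
```

The abstract's point is visible even in this toy version: brief tracker dropouts or position jitter split one true fixation into several short ones, so raw I-DT output from noisy infant data needs exactly the kind of post hoc verification the authors propose.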

    Computational Methods for Measurement of Visual Attention from Videos towards Large-Scale Behavioral Analysis

    Visual attention is one of the most important aspects of human social behavior, visual navigation, and interaction with the world, revealing information about people's social, cognitive, and affective states. Although monitor-based and wearable eye trackers are widely available, they are not sufficient to support the large-scale collection of naturalistic gaze data in face-to-face social interactions or during interactions with 3D environments. Wearable eye trackers are burdensome to participants and bring issues of calibration, compliance, cost, and battery life. The ability to automatically measure attention from ordinary videos would deliver scalable, dense, and objective measurements to use in practice. This thesis investigates several computational methods to measure visual attention from videos using computer vision, and their use for quantifying visual social cues such as eye contact and joint attention. Specifically, three methods are investigated. First, I present methods for the detection of looks to camera in first-person view and their use for eye contact detection. Experimental results show that the presented method achieves the first human expert-level detection of eye contact. Second, I develop a method for tracking heads in 3D space to measure attentional shifts. Lastly, I propose spatiotemporal deep neural networks for detecting time-varying attention targets in video and present their application to the detection of shared attention and joint attention. The method achieves state-of-the-art results on several attention-measurement benchmark datasets, as well as the first empirical result on clinically relevant gaze shift classification. The presented approaches have the benefit of linking gaze estimation to the broader tasks of action recognition and dynamic visual scene understanding, and they bear potential as a useful tool for understanding attention in various contexts such as human social interactions, skill assessments, and human-robot interactions.

    Manipulating Image Luminance to Improve Eye Gaze and Verbal Behavior in Autistic Children

    Autism has been characterized by a tendency to attend to local visual details rather than surveying an image to understand the gist, a phenomenon called local interference. This sensory processing trait has been found to negatively impact social communication. Although much work has been conducted to understand these traits, little to no work has been conducted on interventions that provide support for local interference. Additionally, recent understanding of autism now highlights the core role of sensory processing and its impact on social communication. However, to the best of our knowledge, no interventions have been explored that leverage this relationship. This work builds on the connection between visual attention and semantic representation in autistic children. We ask the following research questions. RQ1: Does manipulating the image characteristics of luminance and spatial frequency increase the likelihood of fixations in hot spots (Areas of Interest) for autistic children? RQ2: Does manipulating the low-level image characteristics of luminance and spatial frequency increase the likelihood of global verbal responses for autistic children? We sought to manipulate visual attention, as measured by eye gaze fixations, and semantic representation, as measured by verbal response to the question “What is this picture about?”. We explore digital strategies to offload low-level, sensory processing of global features via digital filtering. We designed a global filter to reduce image characteristics found to be distracting for autistic people and compared baseline images to filtered images in 11 autistic children. Participants saw counterbalanced images over two sessions. Eye gaze in areas of interest and verbal responses were collected and analyzed. We found that luminance in non-salient areas impacted both eye gaze and verbal responding, but in opposite ways (low versus high levels of luminance). Additionally, the interaction of luminance and spatial frequency in areas of interest was also significant. This is the first empirical study designing an assistive technology aimed at augmenting global processing at both a sensory-processing and a social-communication level. Contributions of this work include empirical findings quantifying local interference in images of natural scenes for autistic children in real-world settings, and digital methods to offload global visual processing and make this information more accessible via insight into the role of luminance and spatial frequency in visual perception of, and semantic representation in, images of natural scenes.
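The "global filter" this abstract describes reduces distracting image characteristics outside the salient regions. A hedged sketch of the general idea, dimming luminance outside an area of interest; the rectangular AOI mask and the 0.5 scaling factor are illustrative assumptions, not the study's actual filter design:

```python
# Illustrative sketch: scale down luminance outside a rectangular area of
# interest (AOI), leaving the AOI untouched. The AOI shape and the dimming
# factor are assumptions for demonstration, not the study's filter.

def dim_outside_aoi(luma, aoi, factor=0.5):
    """Dim luminance values outside an AOI.

    luma: list of rows of luminance values (a 2-D grid)
    aoi:  (row_start, col_start, row_end, col_end), end bounds exclusive
    """
    r0, c0, r1, c1 = aoi
    return [
        [v if r0 <= r < r1 and c0 <= c < c1 else v * factor
         for c, v in enumerate(row)]
        for r, row in enumerate(luma)
    ]

# A uniform 4x4 image with the central 2x2 region kept at full luminance
image = [[100.0] * 4 for _ in range(4)]
filtered = dim_outside_aoi(image, (1, 1, 3, 3))
```

In practice such a manipulation would operate on the luminance channel of a real image (and the study also manipulated spatial frequency, which a simple mask like this does not capture).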