
    A methodological framework for capturing relative eyetracking coordinate data to determine gaze patterns and fixations from two or more observers

    While physical activity during cancer treatment is beneficial for breast cancer patients, findings on the effects of scheduled exercise programs on treatment-related symptoms remain ambiguous. This study investigated the effects of a scheduled home-based exercise intervention on cancer-related fatigue, physical fitness, and activity level in breast cancer patients during adjuvant chemotherapy. Sixty-seven women were randomized to an exercise intervention group (n=33, performed strength training 3x/week and 30 minutes of brisk walking/day) and a control group (n=34, maintained their regular physical activity level). Data collection was performed at baseline, at completion of chemotherapy (Post1), and 6 months postchemotherapy (Post2). Exercise levels were slightly higher in the scheduled exercise group than in the control group. In both groups, cancer-related fatigue increased at Post1 but returned to baseline at Post2. Physical fitness and activity levels decreased at Post1 but were significantly improved at Post2. Significant differences between the intervention and control groups were not found. The findings suggest that generally recommended physical activity levels are enough to relieve cancer-related fatigue and restore physical capacity in breast cancer patients during adjuvant chemotherapy, although one cannot rule out that the results reflect diminishing treatment side effects over time.

    Veterinary student competence in equine lameness recognition and assessment: a mixed methods study

    The development of perceptual skills is an important aspect of veterinary education. The authors investigated veterinary student competence in lameness evaluation at two stages, before (third year) and during (fourth/fifth year) clinical rotations. Students evaluated videos of horses trotting on a straight line and in circles. Eye-tracking data were recorded during assessment on the straight line to follow student gaze. On completing the task, students filled in a structured questionnaire. Results showed that the experienced students outperformed the inexperienced students, although even experienced students may classify one in four horses incorrectly. Mistakes largely arose from classifying an incorrect limb as lame. The correct detection of sound horses was at chance level. While the experienced cohort primarily looked at upper-body movement (head and sacrum) during lameness assessment, the inexperienced cohort focused on limb movement. Student self-assessment of performance was realistic, and task difficulty was most commonly rated between 3 and 4 out of 5. The inexperienced students named a considerably greater number of visual lameness features than the experienced students. Future dedicated training based on the findings presented here may help students to develop more reliable lameness assessment skills.

    Development of a head-mounted, eye-tracking system for dogs

    Growing interest in canine cognition and visual perception has promoted research into the allocation of visual attention during free-viewing tasks in the dog. The techniques currently available to study this (i.e. preferential looking) have, however, lacked spatial accuracy, permitting only gross judgements of the location of the dog's point of gaze, and are limited to a laboratory setting. Here we describe a mobile, head-mounted, video-based eye-tracking system and a procedure for achieving standardised calibration, allowing an output with an accuracy of 2–3°. The setup allows dogs to move freely; in addition, the procedure does not require extensive training and is completely non-invasive. This apparatus has the potential to allow the study of gaze patterns in a variety of research applications and could enhance the study of areas such as canine vision, cognition and social interactions.
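    The abstract does not detail the calibration procedure itself. As a rough illustration of how video-based eye trackers are commonly calibrated, the sketch below fits a least-squares polynomial mapping from pupil coordinates in the eye camera to gaze coordinates in the scene camera; the quadratic model and all names are illustrative assumptions, not the authors' method.

        # Illustrative calibration for a video-based eye tracker (not the authors' code).
        # Fits a quadratic mapping from pupil-centre coordinates (eye camera) to
        # point-of-gaze coordinates (scene camera) from a set of calibration targets.
        import numpy as np

        def design_matrix(pupil_xy):
            """Second-order polynomial features for each (x, y) pupil sample."""
            x, y = pupil_xy[:, 0], pupil_xy[:, 1]
            return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

        def fit_calibration(pupil_xy, gaze_xy):
            """Least-squares fit; returns a (6, 2) coefficient matrix."""
            A = design_matrix(pupil_xy)
            coeffs, *_ = np.linalg.lstsq(A, gaze_xy, rcond=None)
            return coeffs

        def map_gaze(pupil_xy, coeffs):
            """Map new pupil samples to scene-camera coordinates."""
            return design_matrix(pupil_xy) @ coeffs

        # Hypothetical usage with, e.g., nine calibration targets:
        # coeffs = fit_calibration(pupil_samples, target_positions)
        # gaze = map_gaze(new_pupil_samples, coeffs)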

    Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses

    This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces, measuring the efficiency of agent cues by analyzing participant responses by gaze and by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues, and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the fastest responses, although the differences between cue types were smaller. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes such as computer games and digital transactions within public or social interaction contexts by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.

    The kindest cut: Enhancing the user experience of mobile tv through adequate zooming

    The growing market for Mobile TV requires automated adaptation of standard TV footage to small displays. Especially extreme long shots (XLS) depicting distant objects can spoil the user experience, e.g. in soccer content. Automated zooming schemes can improve the visual experience if the resulting footage meets user expectations in terms of visual detail and quality but does not omit valuable context information. Current zooming schemes are ignorant of the beneficial zoom ranges for a given target size when applied to standard-definition TV footage. In two experiments, 84 participants were able to switch between original and zoom-enhanced soccer footage at three sizes, from 320x240 (QVGA) down to 176x144 (QCIF). Eye tracking and subjective ratings showed that zoom factors between 1.14 and 1.33 were preferred for all sizes. Interviews revealed that a zoom factor of 1.6 was too high for QVGA content due to low perceived video quality, but beneficial at QCIF size. The optimal zoom thus depended on the target display size. We include a function to compute the optimal zoom for XLS depending on the target device size. It can be applied in automatic content adaptation schemes and should stimulate further research on the requirements of different shot types in video coding.
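    The paper's actual zoom function is not given in the abstract. The sketch below merely illustrates the idea of deriving an upper zoom bound from display size by interpolating between the two reported anchor points (roughly 1.33 as the ceiling at QVGA, 1.6 acceptable at QCIF); the anchor values and the linear form are assumptions for illustration only.

        # Illustrative only: interpolates a maximum beneficial zoom factor for
        # extreme long shots (XLS) from the target display width. Anchor points
        # are taken loosely from the abstract (zoom ~1.6 acceptable at QCIF
        # 176x144, ~1.33 the ceiling at QVGA 320x240); the real function is in
        # the paper, not here.

        def max_beneficial_zoom(display_width_px: int) -> float:
            """Linear interpolation between the two reported display sizes."""
            qcif_w, qcif_zoom = 176, 1.6
            qvga_w, qvga_zoom = 320, 1.33
            if display_width_px <= qcif_w:
                return qcif_zoom
            if display_width_px >= qvga_w:
                return qvga_zoom
            t = (display_width_px - qcif_w) / (qvga_w - qcif_w)
            return qcif_zoom + t * (qvga_zoom - qcif_zoom)

        # e.g. max_beneficial_zoom(240) -> ~1.48 for a hypothetical 240px-wide display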

    Dynamic Facial Expression of Emotion Made Easy

    Facial emotion expression for virtual characters (VCs) is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use emotion expression in an application or research project. What is needed, then, is an easy-to-use and flexible, but also validated, mechanism for doing so. In this report we present such a mechanism. It enables developers to build virtual characters with dynamic affective facial expressions. The mechanism is based on Facial Action Coding. It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blend emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of VC distance (z-coordinate), the effect of the VC's face morphology (male vs. female), the effect of a lateral versus a frontal presentation of the expression, and the effect of the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subject setup) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Further, the blends and confusion details of basic emotions are compatible with findings in psychology.
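    The report's downloadable code is not reproduced in the abstract. As a minimal sketch of how a Facial Action Coding based mechanism is often structured, the snippet below maps each emotion to a set of Action Unit (AU) intensities, blends two profiles for compound expressions, and applies a temporal ramp to make the expression dynamic; the AU sets follow common EMFACS associations and, like all names here, are illustrative assumptions rather than the report's implementation.

        # Minimal sketch of FACS-driven expression (not the report's released code).
        # Each emotion maps to Action Unit (AU) intensities in [0, 1]; a dynamic
        # expression ramps those intensities over time.

        JOY     = {"AU6": 1.0, "AU12": 1.0}                      # cheek raiser, lip corner puller
        ANGER   = {"AU4": 1.0, "AU5": 0.8, "AU7": 0.8, "AU23": 0.6}
        SADNESS = {"AU1": 1.0, "AU4": 0.6, "AU15": 1.0}

        def blend(a, b, weight=0.5):
            """Blend two AU profiles, e.g. a hypothetical mix for a compound emotion."""
            keys = set(a) | set(b)
            return {k: (1 - weight) * a.get(k, 0.0) + weight * b.get(k, 0.0) for k in keys}

        def ramp(profile, t, onset=0.3):
            """Scale AU intensities by a linear onset ramp; t and onset in seconds."""
            s = min(t / onset, 1.0)
            return {k: s * v for k, v in profile.items()}

        # Per animation frame: aus = ramp(blend(JOY, ANGER), t)
        # then drive the character's facial rig from the AU intensities.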

    The influence of classical-conditioning procedures on subsequent attention to the conditioned brand.

    Three experiments are used to investigate the influence of conditioning procedures on attention to a conditioned stimulus. In experiment 1, scenes presented in a sequence that is consistent with prescribed conditioning procedures are shown to encourage attention to the advertised brands in subsequent product displays. Experiment 2 suggests that differential attention to conditioned brands can be attributed to the signaling properties the brand acquires as a consequence of conditioning. Evidence from a third experiment raises the possibility that semantic conditioning may be responsible for the effects observed in experiments 1 and 2. The findings suggest that current prescriptions on the use of conditioning procedures may need to be updated.
    Keywords: contingency awareness; orienting response; external validity; consumer research; context; stimulus; recall.