
    The focus of visual attention in people with motor disabilities through eye tracking: an experiment in a public built environment

    Achieving a built environment that is accessible to everyone, including people with reduced mobility, and that offers comfort and allows safe movement, is an increasingly important requirement for professionals. In the search for new technologies that help implement the principles of Universal Design, eye tracking stands out as a tool that reveals the user's perception and supports professionals in decision-making processes. Since eye tracking is an assistive technology that objectively identifies visual perception, an experiment was conducted to analyze people's difficulties with visual wayfinding inside buildings. The goal of this article is to identify the focus of visual attention in people with motor disabilities using eye tracking. The experiment used SensoMotoric Instruments (SMI) eye tracking glasses, with data analyzed in the BeGaze software, version 3.6, and involved one wheelchair user and one user of a leg prosthesis. The results indicate that the absence of visual information makes it difficult for people to locate and identify the correct route for moving through a building, and that the use of assistive technologies reduces subjectivity in decisions aimed at making environments accessible. The analyses show that the participants did not fix their gaze on specific points but kept searching the building for visual information, a condition that caused disorientation and difficulty in defining the correct route. The experiment validated an application of the device that can support professionals' decision-making toward accessible environments. In addition, it revealed the particularities of using this assistive technology, the eye tracker glasses, and the possibility of applying it to the analysis of diverse tasks, contributing to design, architectural projects, and engineering.
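    The fixation and gaze analyses above were produced with the vendor's BeGaze software. As a rough illustration of how fixations are typically extracted from raw gaze samples, a minimal dispersion-threshold (I-DT) detector might look as follows; the sampling rate, thresholds, and data layout are assumptions for the sketch, not the SMI/BeGaze implementation:

```python
# Minimal dispersion-threshold (I-DT) fixation detector.
# Assumptions: gaze samples are (x, y) tuples at a fixed sampling rate;
# the thresholds are illustrative, not those used by BeGaze.

def _dispersion(window):
    """Sum of horizontal and vertical spread of a sample window."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, sample_rate_hz=60,
                     dispersion_px=35, min_duration_ms=100):
    """Return (start_idx, end_idx, centroid) for each detected fixation."""
    min_len = int(min_duration_ms * sample_rate_hz / 1000)
    fixations = []
    i = 0
    while i + min_len <= len(samples):
        j = i + min_len
        if _dispersion(samples[i:j]) <= dispersion_px:
            # Grow the window while its dispersion stays under the threshold.
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= dispersion_px:
                j += 1
            window = samples[i:j]
            cx = sum(p[0] for p in window) / len(window)
            cy = sum(p[1] for p in window) / len(window)
            fixations.append((i, j - 1, (cx, cy)))
            i = j
        else:
            i += 1
    return fixations
```

    On two stable gaze clusters separated by a saccade, the detector reports one fixation per cluster with its centroid, which is the kind of event-level output the analysis above works from.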

    Representative Scanpath Identification for Group Viewing Pattern Analysis

    Scanpaths are composed of fixations and saccades. Viewing trends reflected by scanpaths play an important role in scientific studies, such as saccadic model evaluation, and in real-life applications, such as artistic design. Several scanpath synthesis methods have been proposed to obtain a scanpath that is representative of a group's viewing trend, but most of them either target a specific category of viewing material, such as webpages, or leave out useful information, such as gaze duration. Our previous work defined the representative scanpath as the barycenter of a group of scanpaths, which shows the averaged shape of multiple scanpaths. In this paper, we extend our previous framework to take gaze duration into account, obtaining representative scanpaths that describe not only attention distribution and shift but also attention span. The extended framework consists of three steps: eye-gaze data preprocessing, scanpath aggregation, and gaze duration analysis. Experiments demonstrate that the framework serves the purpose of mining viewing patterns well and that "barycenter"-based representative scanpaths better characterize these patterns.
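    The barycenter idea can be sketched in a few lines: resample each scanpath to a common length and average point-wise, carrying gaze durations along as a third coordinate. This is only a minimal illustration of the averaged-shape notion; the framework's actual aggregation step is more elaborate:

```python
import numpy as np

# Minimal sketch of a "barycenter" representative scanpath: resample each
# scanpath (a sequence of (x, y, duration) fixations) to a common length
# and average point-wise. Illustrative only, not the published algorithm.

def resample(scanpath, n):
    """Linearly resample a (k, 3) array of (x, y, duration) to n points."""
    sp = np.asarray(scanpath, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(sp))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, sp[:, c]) for c in range(3)], axis=1)

def barycenter_scanpath(scanpaths, n=5):
    """Average several scanpaths into one representative scanpath."""
    return np.mean([resample(sp, n) for sp in scanpaths], axis=0)
```

    Averaging two scanpaths that traverse the same horizontal sweep at different heights, for example, yields a representative path at the mid-height, with durations averaged at each point.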

    Spatial perception of landmarks assessed by objective tracking of people and space syntax techniques

    This paper focuses on space perception and how visual cues, such as landmarks, may influence the way people move in a given space. Our main goal with this research is to compare people's movement in the real world with their movement in a replicated virtual world and to study how landmarks influence their choices when deciding among different paths. The studied area was a university campus, and three spatial analysis techniques were used: space syntax; an analysis of a Real Environment (RE) experiment; and an analysis of a Virtual Reality (VR) environment replicating the real experiment. The outcome data were compared and analysed to find the similarities and differences between the observed motion flows in both RE and VR and the flows predicted by space syntax analysis. We found a statistically significant positive correlation between the real and virtual experiments, considering the number of passages in each segment line and considering fixations and saccades at the identified landmarks (those with higher visual integration). A statistically significant positive correlation was also found between both RE and VR and syntactic measures. The obtained data enabled us to conclude that: i) the level of visual importance of landmarks, given by visual integration, can be captured by eye tracking data; and ii) our virtual environment setup is able to simulate the real world when performing experiments on spatial perception.
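    The RE/VR comparison reduces to correlating per-segment counts between the two conditions. A minimal sketch with made-up passage counts (not the study's data) might be:

```python
import math

# Sketch of the correlation analysis: compare the number of passages per
# street segment observed in the real environment (RE) with those in the
# virtual replica (VR). The counts below are hypothetical illustrations.

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

re_passages = [12, 30, 7, 22, 15]   # hypothetical counts per segment (RE)
vr_passages = [10, 28, 9, 25, 13]   # hypothetical counts per segment (VR)
r = pearson_r(re_passages, vr_passages)
```

    A value of r close to 1 for such segment-level counts is the kind of evidence the paper uses to argue that the VR replica reproduces real-world movement patterns.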

    Cognitive Restoration in Children Following Exposure to Nature: Evidence From the Attention Network Task and Mobile Eye Tracking

    Exposure to nature improves cognitive performance through a process of cognitive restoration. However, few studies have explored the effect in children, and no studies have explored how eye movements "in the wild", captured with mobile eye tracking technology, contribute to the restoration process. Our results demonstrated that just a 30-min walk in a natural environment was sufficient to produce a faster and more stable pattern of responding on the Attention Network Task, compared with an urban environment. Exposure to the natural environment did not improve executive (directed) attention performance. This pattern of results supports suggestions that children and adults experience unique cognitive benefits from nature. Further, we provide the first evidence of a link between cognitive restoration and the allocation of eye gaze. Participants wearing a mobile eye tracker exhibited higher fixation rates while walking in the natural environment than in the urban environment. These data go some way toward uncovering the mechanisms subserving the restoration effect in children and elaborate how nature may counteract the effects of mental fatigue.

    Computer vision tools for the non-invasive assessment of autism-related behavioral markers

    The early detection of developmental disorders is key to child outcomes, allowing interventions to be initiated that promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests that behavioral markers can be observed late in the first year of life. Many of these studies involved extensive frame-by-frame video observation and analysis of a child's natural behavior. Although non-intrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are impractical for clinical and large-population research purposes. Diagnostic measures for ASD are available for infants but are only accurate when used by specialists experienced in early diagnosis. This work is a first milestone in a long-term multidisciplinary project that aims at helping clinicians and general practitioners accomplish this early detection/measurement task automatically. We focus on providing computer vision tools to measure and identify ASD behavioral markers based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure three critical AOSI activities that assess visual attention. We augment these AOSI activities with an additional test that analyzes asymmetrical patterns in unsupported gait. The first set of algorithms assesses head motion by tracking facial features, while the gait analysis relies on joint foreground segmentation and 2D body pose estimation in video. We show results that provide insightful knowledge to augment the clinician's behavioral observations obtained from real in-clinic assessments.
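    As a toy reduction of the head-motion measure, one could track a single facial landmark (e.g. the nose point produced by any face tracker) across frames and count horizontal excursions as head turns; the threshold and the single-landmark feature are illustrative assumptions, not the paper's AOSI algorithms:

```python
# Illustrative reduction of the head-motion measure: given the per-frame
# x-coordinate of one tracked facial landmark, count left/right head turns
# as excursions beyond a pixel threshold from a resting center position.
# The paper's algorithms are more involved; this is only a sketch.

def count_head_turns(nose_x, center=None, threshold_px=40):
    """Count excursions of the landmark x-coordinate away from center."""
    if center is None:
        center = sum(nose_x) / len(nose_x)
    turns, state = 0, "center"
    for x in nose_x:
        if state == "center" and abs(x - center) > threshold_px:
            turns += 1          # landmark left the central band: one turn
            state = "out"
        elif state == "out" and abs(x - center) <= threshold_px:
            state = "center"    # landmark returned; ready for the next turn
    return turns
```

    Counting such events per trial is one simple way an automatic system could approximate the kind of attention-shift tallies a trained observer records frame by frame.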

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    You Can't Hide Behind Your Headset: User Profiling in Augmented and Virtual Reality

    Virtual and Augmented Reality (VR, AR) are increasingly gaining traction thanks to their technical advancement and the need for remote connections, recently accentuated by the pandemic. Remote surgery, telerobotics, and virtual offices are only some examples of their successes. As users interact with VR/AR, they generate extensive behavioral data, usually leveraged for measuring human behavior. However, little is known about how this data can be used for other purposes. In this work, we demonstrate the feasibility of user profiling in two different use cases of virtual technologies: an everyday AR application (N=34) and VR robot teleoperation (N=35). Specifically, we leverage machine learning to identify users and infer their individual attributes (i.e., age, gender). By monitoring users' head, controller, and eye movements, we investigate the ease of profiling on several tasks (e.g., walking, looking, typing) under different mental loads. Our contribution gives significant insights into user profiling in virtual environments.
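    As a hedged sketch of the profiling idea, one could summarize each session by simple motion statistics and identify users with a nearest-centroid rule; the features, data, and classifier here are made-up stand-ins for the machine-learning pipelines the paper evaluates:

```python
import numpy as np

# Sketch of user identification from movement traces: summarize each
# session by per-frame displacement statistics (mean and std of steps in
# head/controller/eye position) and match new sessions to the nearest
# per-user centroid. Features and classifier are illustrative assumptions.

def session_features(trace):
    """trace: (T, d) array of positions per frame -> feature vector."""
    steps = np.diff(np.asarray(trace, dtype=float), axis=0)
    return np.concatenate([steps.mean(axis=0), steps.std(axis=0)])

def fit_centroids(sessions, labels):
    """Average the feature vectors of each user's training sessions."""
    feats = np.array([session_features(s) for s in sessions])
    labels = np.array(labels)
    return {u: feats[labels == u].mean(axis=0) for u in set(labels)}

def identify(centroids, trace):
    """Return the user whose centroid is closest to the trace's features."""
    f = session_features(trace)
    return min(centroids, key=lambda u: np.linalg.norm(centroids[u] - f))
```

    Even this crude statistic separates users who move at characteristically different speeds, which hints at why richer head/controller/eye features make headset users easy to re-identify.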