
    A Computer Vision and Maps Aided Tool for Campus Navigation

    Current study abroad trips rely on students using GPS directions and digital maps for navigation. While GPS-based navigation may be more straightforward and easier for some to use than traditional paper maps, studies have shown that it can be associated with disengagement from the environment, hindering the development of spatial knowledge and of a mental representation, or cognitive map, of the area. If one of the outcomes of a study abroad trip is not only to navigate to a location but also to learn about important features such as urban configurations and architectural style, then there needs to be a better solution than students simply following GPS directions. This research explores one such solution: a new feature within wayfinding mobile applications that emphasizes engagement with landmarks during navigation. This feature, powered by computer vision, was integrated into a newly developed wayfinding mobile application and allows users to take pictures of various Texas A&M University buildings and retrieve information about them. Following the development of the mobile application, a user study was conducted to determine the effects of the presence or absence of this building recognition feature and of GPS-based navigation on spatial cognition and cognitive mapping performance. Additionally, the study explores the wayfinding accuracy of the building recognition feature and GPS-based navigation compared with traditional paper maps. This paper includes preliminary results showing that groups without GPS-based navigation took longer routes to find destinations than those with GPS-based navigation, and that cognitive mapping performance improved for all participants when identifying destination buildings. Final data collection and analysis are planned for April 2022.
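The abstract does not specify how the building recognition feature works internally; a common pattern for this kind of feature is to match a photo's feature embedding against a database of known buildings. The sketch below is a minimal, hypothetical illustration of that idea: the building names, embeddings, and threshold are invented, and in practice the embeddings would come from a trained vision model rather than hand-written vectors.

```python
import math

# Hypothetical database mapping building names to precomputed feature
# embeddings. In a real system these would be produced by a CNN or
# similar vision model; here they are illustrative 4-D vectors.
BUILDING_DB = {
    "Evans Library": [0.9, 0.1, 0.3, 0.2],
    "Kyle Field":    [0.1, 0.8, 0.5, 0.4],
    "MSC":           [0.2, 0.3, 0.9, 0.1],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recognize_building(query_embedding, db=BUILDING_DB, threshold=0.8):
    """Return the best-matching building name, or None if no match
    reaches the similarity threshold."""
    best_name, best_score = None, -1.0
    for name, emb in db.items():
        score = cosine_similarity(query_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A query embedding close to the "Evans Library" entry matches it.
print(recognize_building([0.88, 0.12, 0.28, 0.25]))
```

Once a building is identified, the application can look up and display the associated information (history, architectural style, and so on), which is the engagement step the study evaluates.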

    Comparing written and photo-based indoor wayfinding instructions through eye fixation measures and user ratings as mental effort assessments

    The use of mobile pedestrian wayfinding applications is gaining importance indoors. However, compared to outdoors, far less research has examined the most effective ways to convey indoor wayfinding information to a user. An exploratory study was conducted to compare two pedestrian indoor wayfinding applications, one text-based (Sole-Way) and one image-based (Eyedog), in terms of mental effort. To do this, eye tracking data and mental effort ratings were collected from 29 participants during two routes in an indoor environment. The results show that both textual instructions and photographs can enable a navigator to find his/her way with little or no cognitive effort or difficulty. However, these instructions must be in line with the user's expectations of the route, which are based on his/her interpretation of the indoor environment at decision points. Here, textual instructions offer the advantage that specific information can be shared with the user explicitly and concisely. Furthermore, the study drew attention to potential usability issues of the wayfinding aids (e.g. the incentive to swipe) and, as such, demonstrated the value of eye tracking and mental effort assessments in usability research.
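The study above uses eye fixation measures as a proxy for mental effort. A standard way to extract fixations from raw gaze samples is the dispersion-threshold (I-DT) algorithm: consecutive samples whose bounding-box dispersion stays small for long enough are grouped into one fixation. The sketch below is a generic illustration of I-DT, not the study's own pipeline; the thresholds and gaze data are invented.

```python
def dispersion(window):
    """Bounding-box dispersion of a window of (x, y) gaze points:
    horizontal extent plus vertical extent."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_duration=3):
    """I-DT: find maximal windows of at least min_duration consecutive
    samples whose dispersion stays under max_dispersion.

    samples: list of (x, y) gaze points at a fixed sampling rate.
    Returns a list of (start_index, end_index_exclusive) windows.
    """
    fixations = []
    start = 0
    while start < len(samples):
        end = start + min_duration
        if end > len(samples):
            break
        if dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while dispersion stays small.
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

# Five tightly clustered samples followed by a saccade to a new area:
gaze = [(10, 10), (10.2, 9.9), (9.9, 10.1), (10.1, 10.0), (10.0, 10.2),
        (40, 40), (60, 20)]
print(detect_fixations(gaze))
```

Fixation counts and durations extracted this way are the kind of measures the study relates to self-reported mental effort ratings.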

    Finding Your Way Back: Comparing Path Odometry Algorithms for Assisted Return.

    We present a comparative analysis of inertial odometry algorithms for the purpose of assisted return. An assisted return system facilitates backtracking of a previously taken path and can be particularly useful for blind pedestrians. We present a new algorithm for path matching and test it in simulated assisted return tasks with data from WeAllWalk, the only existing data set with inertial data recorded from blind walkers. We consider two odometry systems: one based on deep learning (RoNIN) and one based on robust turn detection and step counting. Our results show that the best path matching results are obtained using the turns/steps odometry system.
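The turns/steps odometry mentioned above can be sketched in three pieces: count steps as peaks in accelerometer magnitude, obtain heading by integrating gyroscope yaw rate, and dead-reckon a 2-D path from per-step headings. The toy Python version below illustrates the general technique only; the thresholds, step length, and sensor data are invented, and the paper's actual turn-detection and path-matching algorithms are not reproduced here.

```python
import math

STEP_LENGTH_M = 0.7    # assumed average stride length (illustrative)
PEAK_THRESHOLD = 11.0  # accel magnitude (m/s^2) above which a peak counts as a step

def count_steps(accel_magnitudes):
    """Count steps as local maxima above a fixed threshold."""
    steps = 0
    for prev, cur, nxt in zip(accel_magnitudes,
                              accel_magnitudes[1:],
                              accel_magnitudes[2:]):
        if cur > PEAK_THRESHOLD and cur > prev and cur > nxt:
            steps += 1
    return steps

def integrate_heading(yaw_rates, dt):
    """Integrate gyroscope yaw rate (rad/s) sampled every dt seconds
    into a total heading change (rad)."""
    return sum(r * dt for r in yaw_rates)

def dead_reckon(step_headings):
    """Turn a sequence of per-step headings into a 2-D path, assuming
    a fixed step length."""
    x = y = 0.0
    path = [(x, y)]
    for heading in step_headings:
        x += STEP_LENGTH_M * math.cos(heading)
        y += STEP_LENGTH_M * math.sin(heading)
        path.append((x, y))
    return path

accel = [9.8, 10.1, 12.0, 10.0, 9.9, 12.5, 10.2, 9.8]
print(count_steps(accel))  # two peaks above the threshold
```

For assisted return, a path estimated this way on the outbound trip can be reversed and matched against live odometry on the way back, which is the task the paper evaluates.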

    SLAM for Visually Impaired People: A Survey

    In recent decades, several assistive technologies for visually impaired and blind (VIB) people have been developed to improve their ability to navigate independently and safely. At the same time, simultaneous localization and mapping (SLAM) techniques have become sufficiently robust and efficient to be adopted in the development of assistive technologies. In this paper, we first report the results of an anonymous survey conducted with VIB people to understand their experience and needs; we focus on digital assistive technologies that help them with indoor and outdoor navigation. Then, we present a literature review of assistive technologies based on SLAM. We discuss proposed approaches and indicate their pros and cons. We conclude by presenting future opportunities and challenges in this domain.

    Human spatial navigation in the digital era: Effects of landmark depiction on mobile maps on navigators’ spatial learning and brain activity during assisted navigation

    Navigation was an essential survival skill for our ancestors and is still a fundamental activity in our everyday lives. To stay oriented and assist navigation, our ancestors had a long history of developing and employing physical maps that communicated an enormous amount of spatial and visual information about their surroundings. Today, in the digital era, we are increasingly turning to mobile navigation devices to ease daily navigation tasks, surrendering our spatial and navigational skills to the hand-held device. On the flip side, the conveniences of such devices lead us to pay less attention to our surroundings, make fewer spatial decisions, and remember less about the surroundings we have traversed. As navigational skills and spatial memory are related to adult neurogenesis, healthy aging, education, and survival, scientists and researchers from multidisciplinary fields have made calls to develop a new account of mobile navigation assistance to preserve human navigational abilities and spatial memory. Landmarks have been advocated for special attention in developing cognitively supportive navigation systems, as landmarks are widely accepted as key features to support spatial navigation and spatial learning of an environment. Turn-by-turn direction instructions without reference to surrounding landmarks, such as those provided by most existing navigation systems, can be one of the reasons for navigators’ spatial memory deterioration during assisted navigation. Despite the benefit of landmarks in navigation and spatial learning, long-standing literature on cognitive psychology has pointed out that individuals have only a limited cognitive capacity to process presented information for a task. When the learning items exceed learners’ capacity, the performance may reach a plateau or even drop. 
This leads to an unexamined yet important research question on how to visualize landmarks on a mobile map to optimize navigators’ cognitive resource exertion and thus optimize their spatial learning. To investigate this question, I leveraged neuropsychological and hypothesis-driven approaches and investigated whether and how different numbers of landmarks depicted on a mobile map affected navigators’ spatial learning, cognitive load, and visuospatial encoding. Specifically, I set up a navigation experiment in three virtual urban environments, in which participants were asked to follow a given route to a specific destination with the aid of a mobile map. Three different numbers of landmarks—3, 5, and 7—along the given route were selected based on the cognitive capacity literature and presented to 48 participants during map-assisted navigation. Their brain activity was recorded both during the phase of map consultation and during that of active locomotion. After navigation in each virtual city, their spatial knowledge of the traversed routes was assessed. The statistical results revealed that spatial learning improved when a medium number of landmarks (i.e., five) was depicted on a mobile map compared to the lowest evaluated number (i.e., three), and there was no further improvement when the highest number (i.e., seven) of landmarks was provided on the mobile map. The neural correlates that were interpreted to reflect cognitive load during map consultation increased when participants were processing seven landmarks depicted on a mobile map compared to the other two landmark conditions; by contrast, the neural correlates that indicated visuospatial encoding increased with a higher number of presented landmarks. In line with the cognitive load changes during map consultation, cognitive load during active locomotion also increased when participants were in the seven-landmark condition, compared to the other two landmark conditions. 
This thesis provides an exemplary paradigm to investigate navigators’ behavior and cognitive processing during map-assisted navigation and to utilize neuropsychological approaches to solve cartographic design problems. The findings contribute to a better understanding of the effects of landmark depiction (3, 5, and 7 landmarks) on navigators’ spatial learning outcomes and their cognitive processing (cognitive load and visuospatial encoding) during map-assisted navigation. From these insights, I conclude with two main takeaways for audiences including navigation researchers and navigation system designers. First, the thesis suggests a boundary effect of the proposed benefits of landmarks in spatial learning: providing landmarks on maps benefits users’ spatial learning only to the extent that the number of landmarks does not increase cognitive load. A medium number (i.e., five) of landmarks seems to be the best option in the current experiment, as five landmarks facilitate spatial learning without taxing additional cognitive resources. The second takeaway is that increased cognitive load during map use might also spill over into the locomotion phase in the environment; thus, the locomotion phase should also be carefully considered when designing a mobile map to support navigation and environmental learning.

    The focus of visual attention in people with motor disabilities using eye tracking: an experiment in a public built environment

    Achieving a built environment that is accessible to all, including people with reduced mobility, that offers comfort and allows people to move around safely is an increasingly important need for professionals. In seeking to apply new technologies that implement the principles of Universal Design, eye tracking stands out as a tool that reveals the user's perception and supports professionals in decision-making. Since eye tracking is an assistive technology that objectively identifies visual perception, an experiment was carried out to analyze the difficulties people face in visually identifying their way inside buildings. The goal of this article is to identify the focus of visual attention in people with motor disabilities using eye tracking. The experiment used SensoMotoric Instruments (SMI) eye-tracking glasses, and the data were analyzed with BeGaze software version 3.6, with two participants: one wheelchair user and one user of a leg prosthesis. The results indicate that the absence of visual information makes it difficult for people to locate and identify the correct route for moving around inside a building, and that the use of assistive technologies reduces the subjectivity of the decisions professionals make when designing accessible environments. The analyses show that the participants did not fix their gaze on specific points, as they kept searching for visual information in the building, a condition that produced disorientation and difficulty in defining the right route. This activity validated an application of the equipment that can support professionals' decision-making in making environments accessible. 
    In addition, the study identified the particularities of using this assistive technology, the eye-tracker glasses, and the possibility of using it to analyze a variety of tasks, contributing to design, architectural projects, and engineering.

    A context-sensitive conceptual framework for activity modeling

    Human motion trajectories, however captured, provide a rich spatiotemporal data source for human activity recognition, and the rich literature on motion trajectory analysis provides the tools to bridge the gap between this data and its semantic interpretation. But activity is an ambiguous term across research communities. For example, in urban transport research, activities are generally characterized around certain locations, assuming the opportunities and resources are present at those locations, and travel happens between these locations for activity participation; that is, travel is not an activity but a means to overcome spatial constraints. In contrast, in human-computer interaction (HCI) and computer vision research, activities taking place along the way, such as reading on the bus, are significant for contextualized service provision. Similarly, activities at coarser spatial and temporal granularity, e.g., holidaying in a country, could be recognized in some contexts or domains. Thus the literature does not provide a precise and consistent definition of activity, in particular in differentiation from travel when it comes to motion trajectory analysis. Hence, in this paper a thorough literature review studies activity from different perspectives and develops a common framework to model and reason about human behavior flexibly across contexts. This spatio-temporal framework is conceptualized with a focus on modeling activities hierarchically. Three case studies illustrate how the semantics of the term activity changes based on scale and context; they provide evidence that the framework holds over different domains. In turn, the framework will help develop various applications and services that are aware of the broad spectrum of the term activity across contexts.
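The hierarchical modeling idea described above can be pictured as a tree of activities, where a coarse activity (e.g., holidaying in a country) contains finer ones (e.g., travelling by bus, reading on the bus), each with its own temporal extent and optional location. The sketch below is a hypothetical illustration of that structure; the class name, fields, and example activities are invented, not taken from the paper's framework.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Activity:
    """One node in a hierarchical activity model: a label, a temporal
    extent, an optional place, and finer-grained sub-activities."""
    label: str
    start: float                 # e.g. hours since trip start
    end: float
    place: Optional[str] = None  # None for activities not bound to one location
    children: List["Activity"] = field(default_factory=list)

    def leaves(self):
        """Return the finest-granularity activities under this node."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# A coarse activity containing travel and an en-route activity:
holiday = Activity("holidaying in a country", 0, 72, children=[
    Activity("travelling by bus", 0, 2, children=[
        Activity("reading on the bus", 0.5, 1.5),
    ]),
    Activity("sightseeing", 2, 6, place="old town"),
])

print([a.label for a in holiday.leaves()])
```

Under such a model, whether "travelling by bus" or "reading on the bus" counts as the activity of interest becomes a question of which level of the hierarchy a given application queries, which mirrors the paper's point that the semantics of activity shifts with scale and context.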