
    Neuroadaptive mobile geographic information displays: an emerging cartographic research frontier

    Mobility, including navigation and wayfinding, is a basic human requirement for survival. For thousands of years, maps have played a significant role in human mobility and survival. Increasing reliance on digital GNSS-enabled navigation assistance, however, is taxing human attentional resources and limiting our innate spatial-cognitive abilities. To mitigate this human de-skilling, a neuroadaptive (mobile) cartographic research frontier is proposed, along with first steps towards creating well-designed mobile geographic information displays (mGIDs) that not only respond in real time to navigators’ cognitive load and visuo-spatial attentional resources during navigation, but also scaffold spatial learning while maintaining navigation efficiency. This, in turn, will help humans remain as independent from geoinformation technology as desired.

    Use of Augmented Reality in Human Wayfinding: A Systematic Review

    Augmented reality (AR) technology has emerged as a promising solution to assist with wayfinding difficulties, bridging the gap between obtaining navigational assistance and maintaining an awareness of one's real-world surroundings. This article presents a systematic review of research literature related to AR navigation technologies. An in-depth analysis of 65 salient studies was conducted, addressing four main research topics: 1) the current state of the art of AR navigational assistance technologies, 2) user experiences with these technologies, 3) the effect of AR on human wayfinding performance, and 4) impacts of AR on human navigational cognition. Notably, studies demonstrate that AR can decrease cognitive load and improve cognitive map development, in contrast to traditional guidance modalities. However, findings regarding wayfinding performance and user experience were mixed. Some studies suggest little impact of AR on improving outdoor navigational performance, and certain information modalities may be distracting and ineffective. This article discusses these nuances in detail, supporting the conclusion that AR holds great potential for enhancing wayfinding by providing enriched navigational cues, interactive experiences, and improved situational awareness.

    Pedestrian Models for Autonomous Driving Part II: High-Level Models of Human Behavior

    Autonomous vehicles (AVs) must share space with pedestrians, both in carriageway cases such as cars at pedestrian crossings and off-carriageway cases such as delivery vehicles navigating through crowds on pedestrianized high-streets. Unlike static obstacles, pedestrians are active agents with complex, interactive motions. Planning AV actions in the presence of pedestrians thus requires modelling their probable future behaviour as well as detecting and tracking them. This narrative review article is Part II of a pair that together survey the current technology stack involved in this process, organising recent research into a hierarchical taxonomy ranging from low-level image detection to high-level psychological models, from the perspective of an AV designer. This self-contained Part II covers the higher levels of this stack, consisting of models of pedestrian behaviour, from prediction of individual pedestrians’ likely destinations and paths, to game-theoretic models of interactions between pedestrians and autonomous vehicles. This survey clearly shows that, although there are good models for optimal walking behaviour, high-level psychological and social modelling of pedestrian behaviour still remains an open research question that requires many conceptual issues to be clarified. Early work has been done on descriptive and qualitative models of behaviour, but much work is still needed to translate them into quantitative algorithms for practical AV control.

    Understanding Interactions for Smart Wheelchair Navigation in Crowds


    LandMarkAR: An application to study virtual route instructions and the design of 3D landmarks for indoor pedestrian navigation with a mixed reality head-mounted display

    Mixed Reality (MR) interfaces on head-mounted displays (HMDs) have the potential to replace screen-based interfaces as the primary interface to the digital world. They potentially offer a more immersive and less distracting experience compared to mobile phones, allowing users to stay focused on their environment and main goals while accessing digital information. Due to their ability to gracefully embed virtual information in the environment, MR HMDs could alleviate some of the issues plaguing users of mobile pedestrian navigation systems, such as distraction, diminished route recall, and reduced spatial knowledge acquisition. However, the complexity of MR technology presents significant challenges, particularly for researchers with limited programming knowledge. This thesis presents “LandMarkAR” to address those challenges. “LandMarkAR” is a HoloLens application that allows researchers to create augmented territories to study human navigation with MR interfaces, even if they have little programming knowledge. “LandMarkAR” was designed using different methods from human-centered design (HCD), such as design thinking and think-aloud testing, and was developed with Unity and the Mixed Reality Toolkit (MRTK). With “LandMarkAR”, researchers can place and manipulate 3D objects as holograms in real time, facilitating indoor navigation experiments using 3D objects that serve as turn-by-turn instructions, highlights of physical landmarks, or other cues researchers devise. Researchers with varying technical expertise will be able to use “LandMarkAR” for MR navigation studies. They can opt to utilize the easy-to-use user interface (UI) on the HoloLens or add custom functionality to the application directly in Unity. “LandMarkAR” empowers researchers to explore the full potential of MR interfaces in human navigation and create meaningful insights for their studies.

    The Aalborg Survey / Part 4 - Literature Study:Diverse Urban Spaces (DUS)


    Eyes-Off Physically Grounded Mobile Interaction

    This thesis explores the possibilities, challenges and future scope for eyes-off, physically grounded mobile interaction. We argue that for interactions with digital content in physical spaces, our focus should not be constantly and solely on the device we are using, but fused with an experience of the places themselves, and the people who inhabit them. Through the design, development and evaluation of a series of novel prototypes we show the benefits of a more eyes-off mobile interaction style. Consequently, we are able to outline several important design recommendations for future devices in this area. The four key contributing chapters of this thesis each investigate separate elements within this design space. We begin by evaluating the need for screen-primary feedback during content discovery, showing how a more exploratory experience can be supported via a less-visual interaction style. We then demonstrate how tactile feedback can improve the experience and the accuracy of the approach. In our novel tactile hierarchy design we add a further layer of haptic interaction, and show how people can be supported in finding and filtering content types, eyes-off. We then turn to explore interactions that shape the ways people interact with a physical space. Our novel group and solo navigation prototypes use haptic feedback for a new approach to pedestrian navigation. We demonstrate how variations in this feedback can support exploration, giving users autonomy in their navigation behaviour, but with an underlying reassurance that they will reach the goal. Our final contributing chapter turns to consider how these advanced interactions might be provided for people who do not have the expensive mobile devices that are usually required. We extend an existing telephone-based information service to support remote back-of-device inputs on low-end mobiles. We conclude by establishing the current boundaries of these techniques, and suggesting where their usage could lead in the future.

    Providing and assessing intelligible explanations in autonomous driving

    Intelligent vehicles with automated driving functionalities provide many benefits, but also instigate serious concerns around human safety and trust. While the automotive industry has devoted enormous resources to realising vehicle autonomy, there exist uncertainties as to whether the technology would be widely adopted by society. Autonomous vehicles (AVs) are complex systems, and in challenging driving scenarios, they are likely to make decisions that could be confusing to end-users. As a way to bridge the gap between this technology and end-users, the provision of explanations is generally put forward. While explanations are considered helpful, this thesis argues that explanations must also be intelligible (as obligated by GDPR Article 12) to the intended stakeholders, and should make causal attributions in order to foster confidence and trust in end-users. Moreover, the methods for generating these explanations should be transparent for easy audit. To substantiate this argument, the thesis proceeds in four steps. First, we adopted a mixed-method approach (in a user study, N=101) to elicit passengers' requirements for effective explainability in diverse autonomous driving scenarios. Second, we explored different representations, data structures and driving data annotation schemes to facilitate intelligible explanation generation and general explainability research in autonomous driving. Third, we developed transparent algorithms for post-hoc explanation generation. These algorithms were tested within a collision risk assessment case study and an AV navigation case study, using the Lyft Level5 dataset and our new SAX dataset, a dataset that we have introduced for AV explainability research. Fourth, we deployed these algorithms in an immersive physical simulation environment and assessed (in a lab study, N=39) the impact of the generated explanations on passengers' perceived safety while varying the prediction accuracy of an AV's perception system and the specificity of the explanations. The thesis concludes by providing recommendations needed for the realisation of more effective explainable autonomous driving, and proposes a future research agenda.