
    Testing Two Tools for Multimodal Navigation

    The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth hold potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, non-speech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing users to interact directly with the surrounding environment.
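
    The abstract does not give implementation details, but a minimal sketch of how a "point to search" query might combine GPS position and compass heading is shown below. The POI data, the angular tolerance, and all function names are illustrative assumptions, not taken from the tested applications.

```python
# Hypothetical sketch: filter points of interest (POIs) to those lying
# roughly in the direction the user's compass reports. All names, values,
# and the tolerance are illustrative assumptions.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def pois_in_pointing_direction(user_lat, user_lon, heading_deg, pois, tolerance_deg=15.0):
    """Return names of POIs whose bearing from the user lies within
    `tolerance_deg` of the compass heading the user is pointing along."""
    hits = []
    for name, lat, lon in pois:
        b = bearing_deg(user_lat, user_lon, lat, lon)
        # Smallest signed angular difference between bearing and heading.
        diff = abs((b - heading_deg + 180.0) % 360.0 - 180.0)
        if diff <= tolerance_deg:
            hits.append(name)
    return hits

if __name__ == "__main__":
    # Toy data: user points roughly north-east; only the first POI should match.
    pois = [("Cafe", 59.340, 18.075), ("Museum", 59.325, 18.045)]
    print(pois_in_pointing_direction(59.332, 18.060, heading_deg=45.0, pois=pois))
```

    The same bearing computation could also drive the guidance side (point and sweep gestures, spatial sound) by comparing the target bearing with the current heading.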

    DeepNav: Learning to Navigate Large Cities

    We present DeepNav, a Convolutional Neural Network (CNN) based algorithm for navigating large cities using locally visible street-view images. The DeepNav agent learns to reach its destination quickly by making the correct navigation decisions at intersections. We collect a large-scale dataset of street-view images organized in a graph where nodes are connected by roads. This dataset contains 10 city graphs and more than 1 million street-view images. We propose 3 supervised learning approaches for the navigation task and show how A* search in the city graph can be used to generate supervision for the learning. Our annotation process is fully automated using publicly available mapping services and requires no human input. We evaluate the proposed DeepNav models on 4 held-out cities for navigating to 5 different types of destinations. Our algorithms outperform previous work that uses hand-crafted features and Support Vector Regression (SVR) [19]. Comment: CVPR 2017 camera-ready version.
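
    The abstract mentions generating supervision from A* search in the city graph; a minimal sketch of how such per-intersection labels could be derived is shown below. The graph layout, the "pos" attribute, and the label format are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: derive per-intersection supervision labels from
# shortest paths in a city graph, in the spirit of the A*-based labeling
# the abstract describes. All names here are illustrative assumptions.
import math
import networkx as nx

def euclidean(u, v, pos):
    (x1, y1), (x2, y2) = pos[u], pos[v]
    return math.hypot(x2 - x1, y2 - y1)

def label_intersections(G, destination):
    """For every node, record which neighbor the shortest path to
    `destination` leaves through; that neighbor is the 'correct turn'
    a navigation agent would be trained to pick at that intersection."""
    pos = nx.get_node_attributes(G, "pos")
    labels = {}
    for node in G.nodes:
        if node == destination:
            continue
        try:
            path = nx.astar_path(
                G, node, destination,
                heuristic=lambda u, v: euclidean(u, v, pos),
                weight="length",
            )
            labels[node] = path[1]  # first step toward the destination
        except nx.NetworkXNoPath:
            labels[node] = None
    return labels

if __name__ == "__main__":
    # Toy 4-node city graph: nodes are intersections, edges are roads.
    G = nx.Graph()
    G.add_node("A", pos=(0, 0)); G.add_node("B", pos=(1, 0))
    G.add_node("C", pos=(1, 1)); G.add_node("D", pos=(0, 1))
    G.add_edge("A", "B", length=1.0); G.add_edge("B", "C", length=1.0)
    G.add_edge("C", "D", length=1.0); G.add_edge("D", "A", length=1.0)
    print(label_intersections(G, destination="C"))
```

    Labels produced this way require no human annotation, which matches the fully automated annotation process the abstract describes.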

    Spatial ability, urban wayfinding and location-based services: a review and first results

    Location-Based Services (LBS) are a new industry at the core of which are GIS and spatial databases. With the increasing mobility of individuals, the anticipated availability of broadband communications for mobile devices, and growing volumes of location-specific information available in databases, there will inevitably be an increase in demand for services providing location-related information to people on the move. New Information and Communication Technologies (NICTs) are providing enhanced possibilities for navigating 'smart cities'. Urban environments, meanwhile, have increasing spatial complexity, and navigating them is becoming an important issue. The time is ripe for a re-appraisal of urban wayfinding. This paper critically reviews current LBS applications and raises a series of questions with regard to LBS for urban wayfinding. Research is being carried out to measure individuals' spatial ability/awareness and their degree of preference for using LBS in wayfinding. The methodology includes both the use of questionnaires and a virtual reality CAVE. Presented here are the results of the questionnaire survey, which indicate the relationships between individuals' spatial ability, use of NICTs, and mode preference for receiving wayfinding cues. Also discussed are our future research directions on LBS, particularly on issues of urban wayfinding using NICTs.