
    Mobile 2D and 3D Spatial Query Techniques for the Geospatial Web

    The increasing availability of abundant geographically referenced information on the Geospatial Web provides a variety of opportunities for developing value-added LBS applications. However, the large data volumes of the Geospatial Web and the small displays of mobile devices create a data visualization problem: the amount of searchable information overwhelms the display when too many query results are returned. Excessive returned results clutter the mobile display, making it harder for users to prioritize information and causing confusion and usability problems. Mobile Spatial Interaction (MSI) research into this “information overload” problem is ongoing, where map personalization and other semantics-based filtering mechanisms are essential to de-clutter and adapt the exploration of the real world to the processing/display limitations of mobile devices. In this thesis, we propose that another way to filter this information is to intelligently refine the search space. 3DQ (3-Dimensional Query) is our novel MSI prototype for information discovery on today’s location- and orientation-aware smartphones within 3D Geospatial Web environments. Our application incorporates human interactions (interpreted from embedded sensors) into the geospatial query process by determining the shape of the user’s actual visibility space and using it as a query “window” in a spatial database, e.g. an Isovist in 2D or a Threat Dome in 3D. This effectively applies hidden query removal (HQR) functionality in 360° 3D, taking into account both the horizontal and vertical dimensions when calculating the 3D search space and significantly reducing display clutter and information overload on mobile devices. The result is a more accurate and expected search result for mobile LBS applications, returning information on only those objects visible within a user’s 3D field of view.
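    The idea of using the visibility space as a query window can be illustrated with a minimal sketch. The code below is a hypothetical 2D simplification (not the 3DQ implementation): point-of-interest results are kept only if the straight line from the observer to the POI is not blocked by any obstacle segment, which is the essence of restricting a query to the Isovist rather than a plain radius.

    ```python
    # Hypothetical sketch of visibility-filtered querying in 2D.
    # Obstacles are line segments (e.g. building walls); a POI survives the
    # filter only if no obstacle segment crosses its line of sight.

    def _segments_intersect(p1, p2, p3, p4):
        """True if segment p1-p2 properly crosses segment p3-p4."""
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        d1 = cross(p3, p4, p1)
        d2 = cross(p3, p4, p2)
        d3 = cross(p1, p2, p3)
        d4 = cross(p1, p2, p4)
        return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

    def visible_pois(observer, pois, obstacles):
        """Keep only POIs with an unobstructed line of sight to the observer."""
        return [poi for poi in pois
                if not any(_segments_intersect(observer, poi, a, b)
                           for a, b in obstacles)]

    # Example: one wall occludes the northern POI but not the eastern one.
    observer = (0.0, 0.0)
    pois = [(5.0, 0.0), (0.0, 5.0)]
    walls = [((-1.0, 3.0), (1.0, 3.0))]   # blocks the POI at (0, 5)
    print(visible_pois(observer, pois, walls))  # → [(5.0, 0.0)]
    ```

    In the thesis's terms, extending this from a 2D Isovist to a 3D Threat Dome means performing the occlusion test against building volumes in both the horizontal and vertical dimensions instead of against 2D wall segments.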


    Interacting with virtual reality scenes on mobile devices

    This paper discusses alternative approaches for interacting with virtual reality scenes on mobile devices, based upon work conducted as part of the locus project [4]. Three prototypes are introduced that adopt different interaction paradigms for mobile virtual reality scenes: interaction can be via the screen only, via movement and gestures within the real-world environment, or via a mixture of these two approaches. The paper concludes by suggesting that interaction via movement and gestures within the real world may be the more intuitive approach for mobile virtual reality scenes.