
    EgoViz – a Mobile Based Spatial Interaction System

    This paper describes research carried out in the area of mobile spatial interaction and the development of a mobile (i.e. on-device) version of a simulated web-based 2D directional query processor. The TellMe application integrates location (from GPS, GSM, WiFi) and orientation (from digital compass/tilt sensors) sensing technologies into an enhanced spatial query processing module capable of exploiting a mobile device’s position and orientation for querying real-world 3D spatial datasets. This paper outlines the technique used to combine these technologies and the architecture needed to deploy them on a sensor-enabled smartphone (i.e. the Nokia 6210 Navigator). With all these sensor technologies now available on one device, it is possible to deploy a personal query system that works effectively in any environment, using location and orientation as the primary parameters for directional queries. In doing so, novel approaches for determining a user’s query space in three dimensions based on line-of-sight and 3D visibility (ego-visibility) are also investigated. The result is a mobile application that is location, direction and orientation aware and, using these data, is able to identify objects (e.g. buildings, points-of-interest, etc.) by pointing at them or when they fall within a specified field-of-view.
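    The pointing-style query described above can be sketched as a simple field-of-view test: given the device's position and compass heading, an object is a candidate result when the bearing to it falls within the horizontal field-of-view. The sketch below is illustrative only; the function names and the great-circle bearing formula are my assumptions, not the paper's implementation.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def in_field_of_view(device, heading_deg, target, fov_deg=30.0):
    """True if target lies within the horizontal FOV centred on the heading."""
    b = bearing_deg(device[0], device[1], target[0], target[1])
    diff = (b - heading_deg + 180.0) % 360.0 - 180.0  # signed angular difference
    return abs(diff) <= fov_deg / 2.0
```

A "point-to-identify" query then reduces to filtering a candidate set of objects with `in_field_of_view`; narrowing `fov_deg` approximates pure pointing, widening it approximates a visible-scene query.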

    Multimodal Content Delivery for Geo-services

    This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To deliver these services effectively, research focused on innovative solutions to real-world problems in a number of disciplines, including geo-location, mobile spatial interaction, location-based services, rich media interfaces and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filter data based on field-of-view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery or output modality employed (auditory, visual or tactile). Further contributions are made in the area of multimodal content delivery, where multiple modalities are used to deliver information using graphical user interfaces, tactile interfaces and, more notably, auditory user interfaces.
    It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users, in a responsive way, based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device, and also the location of the device.
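    The beacon-based positioning contribution mentioned above rests on trilateration: with ranges to three or more beacons at known positions, the circle equations can be linearised against one reference beacon and solved in a least-squares sense. A minimal sketch, assuming planar 2D coordinates and noise-free ranges (the thesis's actual hybrid method is not reproduced here):

```python
import numpy as np

def trilaterate(beacons, distances):
    """Estimate a 2D position from >= 3 beacon positions and measured ranges.

    Subtracting the first beacon's circle equation from the others yields a
    linear system A @ [x, y] = b, solved here by least squares.
    """
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = beacons[0]
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With noisy real-world ranges the least-squares solve degrades gracefully, which is one reason this linearised form is a common starting point for beacon positioning.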

    3DQ: Threat Dome Visibility Querying on Mobile Devices

    3DQ (Three Dimensional Query) is our mobile spatial interaction (MSI) prototype for location and orientation aware mobile devices (i.e. today’s sensor-enabled smartphones). The prototype tailors a military-style threat dome query calculation using MSI with hidden query removal functionality for reducing “information overload” on these off-the-shelf devices. The effect is a more accurate and expected query result for Location-Based Services (LBS) applications, returning information on only those objects visible within a user’s 3D field-of-view. Our standardised XML-based request/response design enables any mobile device, regardless of operating system and/or programming language, to access the 3DQ web-service interfaces.
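    The abstract does not reproduce the 3DQ schema, so the element and attribute names below are purely illustrative; the sketch only shows the general shape of an XML request that any client platform could emit, which is what makes the interface language- and OS-agnostic:

```python
import xml.etree.ElementTree as ET

def build_dome_request(lat, lon, alt, radius_m):
    """Build a hypothetical threat-dome query request as an XML string.

    Element names ("query", "position", "radius") are invented for this
    sketch; the real 3DQ web-service schema is not published in the abstract.
    """
    req = ET.Element("query", type="threat-dome")
    pos = ET.SubElement(req, "position")
    ET.SubElement(pos, "lat").text = str(lat)
    ET.SubElement(pos, "lon").text = str(lon)
    ET.SubElement(pos, "alt").text = str(alt)
    ET.SubElement(req, "radius").text = str(radius_m)
    return ET.tostring(req, encoding="unicode")
```

A client on any platform would POST such a document to the web service and parse the XML response, so no shared programming language or runtime is required.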

    A Web and Mobile System for Environmental Decision Support

    Current field data collection methods for many of today’s scientific and other observer/monitor type applications are still entrenched in the “clipboard age”, requiring manual data transcription to a database management system at some (often much) later date, and allowing visualisation and analysis of recently captured field data only “back in the lab”. This chapter is targeted at progressing today’s pen & paper methodology into the spatially enabled mobile computing age of real-time multimedia data input, integration, visualisation, and analysis, simultaneously both in the field and in the lab. The system described is customised to the specific needs of the Canadian Great Lakes Laboratory for Fisheries and Aquatic Sciences Fish Habitat Management Group for fish species at risk assessment, but is ready for adaptation to other environmental agency applications (e.g. forestry, health/pesticide monitoring, agriculture, etc.). The chapter is ideally suited to all agencies responsible for collecting field data of any type that have not yet moved to a state-of-the-art mobile and wireless data collection, visualisation, and analysis work methodology.

    Mobile 2D and 3D Spatial Query Techniques for the Geospatial Web

    The increasing availability of abundant geographically referenced information in the Geospatial Web provides a variety of opportunities for developing value-added LBS applications. However, the large data volumes of the Geospatial Web and small mobile device displays impose a data visualization problem, as the amount of searchable information overwhelms the display when too many query results are returned. Excessive returned results clutter the mobile display, making it harder for users to prioritize information and causing confusion and usability problems. Mobile Spatial Interaction (MSI) research into this “information overload” problem is ongoing, where map personalization and other semantic-based filtering mechanisms are essential to de-clutter and adapt the exploration of the real world to the processing/display limitations of mobile devices. In this thesis, we propose that another way to filter this information is to intelligently refine the search space. 3DQ (3-Dimensional Query) is our novel MSI prototype for information discovery on today’s location and orientation-aware smartphones within 3D Geospatial Web environments. Our application incorporates human interactions (interpreted from embedded sensors) in the geospatial query process by determining the shape of the user’s actual visibility space as a query “window” in a spatial database, e.g. an Isovist in 2D and a Threat Dome in 3D. This effectively applies hidden query removal (HQR) functionality in 360º 3D, taking into account both the horizontal and vertical dimensions when calculating the 3D search space and significantly reducing display clutter and information overload on mobile devices. The effect is a more accurate and expected search result for mobile LBS applications, returning information on only those objects visible within a user’s 3D field-of-view.
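    The 2D case of hidden query removal can be illustrated with a line-of-sight filter: an object is kept only if the segment from the viewer to the object crosses no occluding edge. This toy sketch uses flat 2D coordinates and treats building footprints as wall segments; the thesis's actual Isovist/Threat Dome computation in a spatial database is considerably richer.

```python
def _segments_intersect(p, q, a, b):
    """True if segment p-q properly crosses segment a-b."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2 = cross(p, q, a), cross(p, q, b)
    d3, d4 = cross(a, b, p), cross(a, b, q)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visible_objects(viewer, objects, walls):
    """Keep only objects whose line of sight from the viewer hits no wall.

    walls is a list of ((x1, y1), (x2, y2)) occluding segments; anything
    behind a wall is dropped from the result set (2D hidden query removal).
    """
    return [o for o in objects
            if not any(_segments_intersect(viewer, o, w[0], w[1])
                       for w in walls)]
```

Running this filter before rendering is what de-clutters the display: occluded points of interest never reach the result list, so only objects the user can actually see are returned.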

    Integrating Haptic Feedback into Mobile Location Based Services

    Haptics is a feedback technology that takes advantage of the human sense of touch by applying forces, vibrations, and/or motions to a haptic-enabled device such as a mobile phone. Historically, human-computer interaction has been visual: text and images on the screen. Haptic feedback can be an important additional method, especially in Mobile Location Based Services such as knowledge discovery, pedestrian navigation and notification systems. A knowledge discovery system called the Haptic GeoWand is a low-interaction system that allows users to query geo-tagged data around them by using a point-and-scan technique with their mobile device. Haptic Pedestrian is a navigation system for walkers. Four prototypes have been developed, classified according to the user’s guidance requirements, the user type (based on spatial skills), and overall system complexity. Haptic Transit is a notification system that provides spatial information to the users of public transport. In all these systems, haptic feedback is used to convey information about location, orientation, density and distance by using the vibration alarm with varying frequencies and patterns to help users understand the physical environment. Trials elicited positive responses from users, who see benefit in being provided with a “heads up” approach to mobile navigation. Results from a memory recall test show that users of haptic feedback for navigation had better memory recall of the region traversed than users of landmark images. Haptics integrated into a multi-modal navigation system provides more usable, less distracting and more effective interaction than conventional systems. Enhancements to the current work could include the integration of contextual information, detailed large-scale user trials, and the exploration of using haptics within confined indoor spaces.
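    The distance-to-vibration encoding described above can be sketched as a mapping from range to pulse rhythm, with closer targets producing faster pulsing. The durations and ranges below are invented for illustration and are not the values used in the trials:

```python
def vibration_pattern(distance_m, max_range_m=200.0):
    """Map distance-to-target onto a vibration pulse pattern.

    Returns a list of (on_ms, off_ms) pairs for a phone vibration motor.
    The gap between pulses shrinks as the user closes in on the target,
    encoding distance as rhythm; all constants here are illustrative.
    """
    if distance_m >= max_range_m:
        return []  # out of range: no feedback
    proximity = 1.0 - distance_m / max_range_m  # 0.0 (far) .. 1.0 (at target)
    off_ms = int(1000 - 900 * proximity)        # gap shrinks with proximity
    return [(100, off_ms)] * 3                  # three short pulses per cycle
```

Orientation can be encoded the same way on a second channel, e.g. pulsing only when the device points within a bearing tolerance of the target, so the user sweeps the phone to "feel" the direction.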

    User-centred design of smartphone augmented reality in urban tourism context.

    Exposure to new and unfamiliar environments is a necessary part of nearly everyone’s life. Effective communication of location-based information through various location-based service interfaces (LBSIs) has become a key concern for cartographers, geographers, human-computer interaction (HCI) researchers and professional designers alike. Much attention is directed towards Augmented Reality (AR) interfaces. Smartphone AR browsers deliver information about physical objects through spatially registered virtual annotations and can function as an interface to (geo)spatial and attribute data. Such applications have considerable potential for tourism. Recently, the number of studies discussing the optimal placement and layout of AR content has increased. The results, however, do not scale well to the domain of urban tourism, because: 1) in any urban destination, many objects can be augmented with information; 2) each object can be a source of a substantial amount of information; 3) the incoming video feed is visually heterogeneous and complex; 4) the target user group is in an unfamiliar environment; 5) tourists have different information needs from urban residents. Adopting a User-Centred Design (UCD) approach, the main aim of this research project was to make a theoretical contribution to design knowledge relevant to effective support for (geo)spatial knowledge acquisition in unfamiliar urban environments. The research activities were divided into four (iterative) stages: (1) theoretical, (2) requirements analysis, (3) design and (4) evaluation. After a critical analysis of the existing literature on the design of AR, the theoretical stage involved the development of a theoretical user-centred design framework capturing current knowledge in several relevant disciplines. In the second stage, user requirements gathering was carried out through a field quasi-experiment where tourists were asked to use AR browsers in an environment unfamiliar to them.
    Qualitative and quantitative data were used to identify key relationships, extend the user-centred design framework and generate hypotheses about effective and efficient design. In the third stage, several design alternatives were developed and used to test the hypotheses through a laboratory-based quantitative study with 90 users. The results indicate that information acquisition through AR browsers is more effective and efficient if at least one element within the AR annotation matches the perceived visual characteristics or inferred non-visual attributes of target physical objects. Finally, in order to ensure that all major constructs and relationships were identified, a qualitative evaluation of AR annotations was carried out by HCI and GIS domain-expert users in an unfamiliar urban tourism context. The results show that effective information acquisition in an urban tourism context depends on both the visual design of, and the content delivered through, AR annotations for both visible and non-visible points of interest. All results were later positioned within existing theory in order to develop a final conceptual user-centred design framework that shifts the perspective towards a more thorough understanding of the overall design space for mobile AR interfaces. The dissertation has theoretical, methodological and practical implications. The main theoretical contribution of this thesis is to Information Systems Design Theory. The developed framework provides knowledge regarding the design of mobile AR. It can be used for hypothesis generation and further empirical evaluations of AR interfaces that facilitate knowledge acquisition in different types of environments and for different user groups. From a methodological point of view, the described user-based studies showcase how a UCD approach can be applied to the design and evaluation of novel smartphone interfaces within the travel and tourism domain.
    Within industry, the proposed framework could be used as a frame of reference by designers and developers who are not familiar with knowledge acquisition in urban environments and/or mobile AR interfaces.
