3 research outputs found

    EgoViz – a Mobile Based Spatial Interaction System

    This paper describes research carried out in the area of mobile spatial interaction and the development of a mobile (i.e. on-device) version of a simulated web-based 2D directional query processor. The TellMe application integrates location (from GPS, GSM, WiFi) and orientation (from digital compass/tilt sensors) sensing technologies into an enhanced spatial query processing module capable of exploiting a mobile device’s position and orientation for querying real-world 3D spatial datasets. This paper outlines the technique used to combine these technologies and the architecture needed to deploy them on a sensor-enabled smartphone (i.e. the Nokia 6210 Navigator). With all of these sensor technologies now available on one device, it is possible to deploy a personal query system that works effectively in any environment, using location and orientation as the primary parameters for directional queries. In doing so, novel approaches for determining a user’s query space in three dimensions based on line-of-sight and 3D visibility (ego-visibility) are also investigated. The result is a mobile application that is location, direction and orientation aware and, using these data, is able to identify objects (e.g. buildings, points-of-interest, etc.) when the user points at them or when they fall within a specified field-of-view.
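    The pointing-based query idea described above can be illustrated with a minimal 2D sketch: given the device's position and compass heading, a point-of-interest is "visible" if its bearing from the device falls within half the field-of-view angle of the heading. All names and coordinates below are illustrative assumptions, not the paper's actual implementation.

```python
import math

def bearing_to(lat, lon, plat, plon):
    """Approximate bearing in degrees from the device (lat, lon) to a point.
    Uses a flat-earth approximation, adequate over short urban distances."""
    dx = (plon - lon) * math.cos(math.radians(lat))
    dy = plat - lat
    return math.degrees(math.atan2(dx, dy)) % 360

def in_field_of_view(device, poi, fov_deg=30.0):
    """True if the point-of-interest lies within the device's horizontal FOV."""
    lat, lon, heading = device
    b = bearing_to(lat, lon, poi[0], poi[1])
    diff = abs((b - heading + 180) % 360 - 180)  # smallest angular difference
    return diff <= fov_deg / 2

# Example: device in Dublin city centre, heading due north, 30-degree FOV
device = (53.3498, -6.2603, 0.0)
north_poi = (53.3598, -6.2603)  # directly ahead
east_poi = (53.3498, -6.2403)   # 90 degrees off heading
print(in_field_of_view(device, north_poi))  # True
print(in_field_of_view(device, east_poi))   # False
```

    A full ego-visibility system would additionally intersect this query space with building footprints to account for line-of-sight occlusion; the sketch covers only the directional filter.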

    Combining Mobile Technologies For Accurate, Open Source, Privacy Sensitive, Zero Cost, Location Determination

    Determining the location of an object or individual using a mobile device (e.g. a cell phone) is an important aspect of modern information gathering. Various solutions have been proposed, each with its own strengths and weaknesses. To date, no solution has been devised for a mobile device that works effectively in multiple environments and without assistance from network-provider connections. To address this, it is argued that the current state of the art can be advanced using a hybrid approach that combines a number of sensor technologies to provide more reliable and accurate mobile location determination that functions in multiple environments (indoors and outdoors). This thesis examines in detail the current relevant available technology, calculation techniques for location determination, the Global Navigation Satellite System (GNSS) and other noteworthy location-determination research. It then introduces our solution: a hybrid positioning system that is an open-source, provider-network-independent, privacy-sensitive, zero-cost and accurate software component. The overall system design is described first, followed by each module in detail, including a full description of an algorithm that intelligently combines signals from various technologies, applies weights to these signals and leverages past signal readings to enhance current calculations. Next, the evaluation section discusses how and why the test bed was chosen and deployed, presents the individual test results and then analyses, discusses and summarises the overall tests. Finally, detailed conclusions are drawn, the three questions raised in the introduction are answered and discussed, and the contributions to the body of knowledge are reaffirmed. The thesis finishes with future work, examining several research paths that can be pursued from this research.
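    The weighted-combination idea in the abstract can be sketched as follows: each technology reports a fix with an accuracy estimate, fixes are weighted by inverse accuracy, and the result is blended with the previous fix to leverage past readings. This is a minimal sketch under assumed inputs; the thesis's actual weighting and smoothing scheme may differ.

```python
def fuse_positions(readings, previous=None, smoothing=0.3):
    """Combine (lat, lon, accuracy_m) readings from several technologies.
    Each reading is weighted by the inverse of its reported accuracy, then
    the result is blended with the previous fix (exponential smoothing)."""
    total_w = sum(1.0 / acc for _, _, acc in readings)
    lat = sum(la / acc for la, _, acc in readings) / total_w
    lon = sum(lo / acc for _, lo, acc in readings) / total_w
    if previous is not None:
        plat, plon = previous
        lat = smoothing * plat + (1 - smoothing) * lat
        lon = smoothing * plon + (1 - smoothing) * lon
    return lat, lon

# Hypothetical simultaneous readings from three technologies
readings = [
    (53.34980, -6.26030, 5.0),    # GPS: accurate outdoors
    (53.34990, -6.26100, 50.0),   # WiFi: coarser
    (53.35100, -6.26500, 500.0),  # GSM cell: coarsest
]
lat, lon = fuse_positions(readings)
```

    The inverse-accuracy weighting pulls the fused fix towards the most precise source (here GPS) while still letting coarser sources contribute when GPS degrades indoors.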

    Multimodal Content Delivery for Geo-services

    This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To deliver these services effectively, the research focused on innovative solutions to real-world problems in a number of disciplines, including geo-location, mobile spatial interaction, location-based services, rich media interfaces and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filter data based on field-of-view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery (output) modality employed: auditory, visual or tactile. Further contributions are made in the area of multimodal content delivery, where multiple modalities are used to deliver information through graphical user interfaces, tactile interfaces and, most notably, auditory user interfaces. It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users in a responsive way, based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device and also the location of the device.
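    The trilateration from terrestrial beacons mentioned above can be sketched in the plane: three range measurements define three circles, and subtracting the first circle equation from the other two yields a 2x2 linear system for the position. The beacon layout below is hypothetical, and coordinates are assumed to be already projected to metres; this is not the thesis's implementation.

```python
def trilaterate(beacons):
    """Estimate 2D position from three (x, y, distance) beacon measurements.
    Subtracting the first circle equation from the others linearises the
    system, which is then solved by Cramer's rule."""
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = beacons
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Three beacons at known positions; measured ranges correspond to a
# receiver at roughly (3, 4) metres
beacons = [(0, 0, 5.0), (10, 0, 8.0622), (0, 10, 6.7082)]
x, y = trilaterate(beacons)
```

    In practice range measurements are noisy, so more than three beacons and a least-squares solve are typical; the exact three-beacon case keeps the geometry visible.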