
    Multimodal Content Delivery for Geo-services

    This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To deliver these services effectively, the research focused on innovative solutions to real-world problems across a number of disciplines, including geo-location, mobile spatial interaction, location-based services, rich media interfaces and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filters data based on field-of-view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery (output) modality employed: auditory, visual or tactile. Further contributions are made in the area of multimodal content delivery, where multiple modalities are used to deliver information through graphical user interfaces, tactile interfaces and, most notably, auditory user interfaces. It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users, in a responsive way, based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device, and also the location of the device.
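    The hybrid positioning contribution above trilaterates a device's position from range measurements to terrestrial beacons. As an illustration of the underlying geometry only (not the thesis's actual implementation), a minimal 2D least-squares trilateration, assuming known beacon coordinates and range estimates, might look like this:

```python
import numpy as np

def trilaterate(beacons, distances):
    """Estimate a 2D position from >= 3 beacon positions and range
    measurements by linearising the circle equations (least squares)."""
    beacons = np.asarray(beacons, dtype=float)
    distances = np.asarray(distances, dtype=float)
    d0 = distances[0]
    # Subtracting the first circle equation from the others cancels the
    # quadratic terms, leaving a linear system A @ [x, y] = b.
    A = 2 * (beacons[1:] - beacons[0])
    b = (d0**2 - distances[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(beacons[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Beacons at known coordinates; ranges measured to an unknown point (3, 4).
beacons = [(0, 0), (10, 0), (0, 10)]
true_pos = np.array([3.0, 4.0])
dists = [np.hypot(*(true_pos - np.array(bc))) for bc in beacons]
print(trilaterate(beacons, dists))  # → approximately [3. 4.]
```

    With noisy real-world ranges the same least-squares form simply returns the best-fit position rather than an exact intersection.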

    EgoViz – a Mobile Based Spatial Interaction System

    This paper describes research carried out in the area of mobile spatial interaction and the development of a mobile (i.e. on-device) version of a simulated web-based 2D directional query processor. The TellMe application integrates location (from GPS, GSM, WiFi) and orientation (from digital compass/tilt sensors) sensing technologies into an enhanced spatial query processing module capable of exploiting a mobile device’s position and orientation for querying real-world 3D spatial datasets. This paper outlines the technique used to combine these technologies and the architecture needed to deploy them on a sensor-enabled smartphone (i.e. the Nokia 6210 Navigator). With all these sensor technologies now available on one device, it is possible to deploy a personal query system that can work effectively in any environment, using location and orientation as primary parameters for directional queries. In doing so, novel approaches for determining a user’s query space in three dimensions based on line-of-sight and 3D visibility (ego-visibility) are also investigated. The result is a mobile application that is location, direction and orientation aware and that, using these data, is able to identify objects (e.g. buildings, points-of-interest, etc.) by pointing at them or when they are in a specified field-of-view.
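    The directional query idea above filters objects by whether they fall within the device's field-of-view at its current position and heading. A much-simplified planar sketch of such a filter, assuming a local frame (x east, y north) and a compass heading in degrees (hypothetical names, not the TellMe code):

```python
import math

def bearing(origin, target):
    """Compass bearing in degrees (0 = north, clockwise), in a local
    planar frame where x points east and y points north."""
    dx, dy = target[0] - origin[0], target[1] - origin[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def in_field_of_view(origin, heading, target, fov=60.0, max_range=500.0):
    """True if target is within max_range and within +/- fov/2 of heading."""
    if math.dist(origin, target) > max_range:
        return False
    # Signed angular difference folded into [-180, 180).
    diff = (bearing(origin, target) - heading + 180) % 360 - 180
    return abs(diff) <= fov / 2

# Observer at the origin, facing just east of north (heading 10 degrees).
pois = {"cafe": (30, 100), "museum": (-80, 50), "station": (5, 400)}
visible = [n for n, p in pois.items() if in_field_of_view((0, 0), 10.0, p)]
print(visible)  # → ['cafe', 'station']
```

    A full 3D ego-visibility query would add elevation/tilt and test occlusion against the spatial dataset, but the bearing-window test is the core of the 2D case.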

    A Web and Mobile System for Environmental Decision Support

    Current field data collection methods for many of today’s scientific and other observer/monitor type applications are still entrenched in the “clipboard age”: they require manual data transcription to a database management system at some (often much) later date, and allow visualisation and analysis of recently captured field data only “back in the lab”. This chapter is targeted at progressing today’s pen & paper methodology into the spatially enabled mobile computing age of real-time multimedia data input, integration, visualisation, and analysis, carried out simultaneously in the field and in the lab. The system described is customised to the specific needs of the Canadian Great Lakes Laboratory for Fisheries and Aquatic Sciences Fish Habitat Management Group for fish species-at-risk assessment, but is ready for adaptation to other environmental agency applications (e.g. forestry, health/pesticide monitoring, agriculture, etc.). The chapter is ideally suited to all agencies responsible for collecting field data of any type that have not yet moved to a state-of-the-art mobile and wireless data collection, visualisation, and analysis work methodology.

    Three-dimensional interactive maps: theory and practice


    Mobile capture of remote points of interest using line of sight modelling

    Recording points of interest using GPS whilst working in the field is an established technique in geographical fieldwork, where the user’s current position is used as the spatial reference to be captured; this is known as geo-tagging. We outline the development and evaluation of a smartphone application called Zapp that enables geo-tagging of any distant point on the visible landscape. The ability of users to log or retrieve information relating to what they can see, rather than where they are standing, allows them to record observations of points in the broader landscape scene, or to access descriptions of landscape features from any viewpoint. The application uses the compass orientation and tilt of the phone to provide data for a line-of-sight algorithm that intersects with a Digital Surface Model stored on the mobile device. We describe the development process and design decisions for Zapp, present the results of a controlled study of the accuracy of the application, and report on the use of Zapp in a student field exercise. The studies indicate the feasibility of the approach, but also how the appropriate use of such techniques will be constrained by current levels of precision in mobile sensor technology. The broader implications for interactive query of the distant landscape and for remote data logging are discussed.
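    The core of the approach above is casting a ray from the phone's position along its compass bearing and tilt until it intersects the Digital Surface Model. A much-simplified sketch of that idea, using fixed-step ray marching over a raster DSM (illustrative only, not the Zapp implementation):

```python
import numpy as np

def line_of_sight_hit(dsm, cell_size, observer_rc, observer_height,
                      azimuth_deg, tilt_deg, max_dist=1000.0, step=1.0):
    """March a ray from the observer across a DSM grid and return the
    first (row, col) cell whose surface height the ray falls below.
    dsm: 2D elevation array; observer_rc: (row, col) of the observer.
    Fixed-step marching with nearest-cell sampling, for brevity."""
    az, tilt = np.radians(azimuth_deg), np.radians(tilt_deg)
    r0, c0 = observer_rc
    z0 = dsm[r0, c0] + observer_height
    for d in np.arange(step, max_dist, step):
        # North (azimuth 0) decreases the row index; east increases column.
        r = int(round(r0 - d * np.cos(az) / cell_size))
        c = int(round(c0 + d * np.sin(az) / cell_size))
        if not (0 <= r < dsm.shape[0] and 0 <= c < dsm.shape[1]):
            return None  # ray left the DSM without hitting anything
        ray_z = z0 + d * np.tan(tilt)
        if ray_z <= dsm[r, c]:
            return (r, c)  # first surface intersection: the targeted point
    return None

# A flat 20x20 DSM with one tall feature ten cells north of the observer.
dsm = np.zeros((20, 20))
dsm[5, 10] = 50.0
hit = line_of_sight_hit(dsm, cell_size=1.0, observer_rc=(15, 10),
                        observer_height=1.7, azimuth_deg=0.0, tilt_deg=0.0)
print(hit)  # → (5, 10)
```

    The sensor-precision constraint the study reports follows directly from this geometry: a small compass error sweeps the ray across many cells at distance, so the intersected cell can be far from the intended target.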

    Precision Agriculture Workflow, from Data Collection to Data Management Using FOSS Tools: An Application in Northern Italy Vineyard

    In the past decades, technology-based agriculture, also known as Precision Agriculture (PA) or smart farming, has grown, developing new technologies and innovative tools to manage data across the whole agricultural process. In this framework, geographic information and spatial data, together with tools such as UAVs (Unmanned Aerial Vehicles) and multispectral optical sensors, play a crucial role, with geomatics providing the supporting techniques. PA needs software to store and process spatial data, and the Free and Open Source Software (FOSS) community has kept pace with PA’s needs: several FOSS tools have been developed for data gathering, analysis, and restitution. The adoption of FOSS solutions, WebGIS platforms, open databases, and spatial data infrastructures to process and store acquired spatial and non-spatial data helps to share information among different actors with user-friendly solutions. Nevertheless, a comprehensive open-source platform that, besides processing UAV data, allows directly storing, visualising, sharing, and querying the final results and the related information does not yet exist. Indeed, today, PA data elaboration and management with a FOSS approach still require several different software tools. Moreover, although some commercial solutions offer platforms to support management in PA activities, none of these presents a complete workflow covering data from the acquisition phase through to processed and stored information. In this scenario, the paper aims to provide UAV and PA users with a FOSS-replicable methodology that can fit farming activities’ operational and management needs. Therefore, this work focuses on developing a totally FOSS workflow to visualise, process, analyse, and manage PA data. In detail, a multidisciplinary approach is adopted to create an operative web-sharing tool able to manage Very High Resolution (VHR) agricultural multispectral-derived information gathered by UAV systems. A vineyard in Northern Italy is used as an example to show the workflow of data generation and the data structure of the web tool. A UAV survey was carried out using a six-band multispectral camera, and the data were elaborated through the Structure from Motion (SfM) technique, resulting in a 3 cm resolution orthophoto. A supervised classifier identified the phenological stage of under-row weeds and the vine rows with 95% overall accuracy. Then, a set of GIS-developed algorithms enabled Individual Tree Detection (ITD) and the computation of spectral indices for monitoring plant phytosanitary conditions. A spatial data structure was implemented to gather the data at canopy scale. The last step of the workflow concerned publishing the data in an interactive 3D webGIS, allowing users to update the spatial database. The webGIS can be operated from web browsers and desktop GIS. The final result is a shared open platform, obtained with non-proprietary software, that can store data of different sources and scales.
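    Spectral indices such as NDVI, a standard choice for plant-condition monitoring, are computed band-wise from co-registered layers of the multispectral orthophoto. A minimal example with illustrative reflectance values (not the paper's data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from NIR and red bands.
    Values near 1 indicate dense healthy vegetation; near 0, bare soil."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    # Guard against division by zero where both bands are dark.
    return (nir - red) / np.where(denom == 0, np.nan, denom)

# Toy 2x2 reflectance patches: vine canopy (top row) vs inter-row soil.
nir = np.array([[0.55, 0.52], [0.30, 0.28]])
red = np.array([[0.08, 0.10], [0.22, 0.25]])
print(np.round(ndvi(nir, red), 2))  # canopy pixels score high, soil near zero
```

    In a raster workflow the same arithmetic is applied per pixel to entire band rasters, and the resulting index layer is what gets published to the webGIS alongside the orthophoto.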

    Mobile Visibility Querying for LBS
