
    Geomatics and Forensic: Progress and Challenges

    Because graphics capture qualitative and quantitative information about complex crime scenes, they are a key resource for developing hypotheses in police investigations and for proving those hypotheses in court. Forensic analysis involves mining scene information as well as reconstructing the scene in order to extract elements for explanatory police tests or to present forensic evidence in legal proceedings. Currently, the combination of sensors and technologies allows the integration of spatial data and the generation of highly attractive virtual infographic products (orthoimages, solid images, point clouds, cross-sections, etc.). These products, which retain accurate 3D metric information, are revolutionizing the dimensional reconstruction of objects and crime scenes. Thus, the reconstruction and 3D visualization of complex scenes can be considered one of the main challenges facing the international scientific community. To meet this challenge, techniques from computer vision, computer graphics, and geomatics work closely together. This chapter reviews a set of geomatic techniques applied to improve infographic forensic products, and their evolution. The integration of data from different sensors, with accurate 3D modelling as the final goal, is also described. As this is a highly active research area in which many uncertainties remain, the final section addresses these challenges and outlines future perspectives.

    A 360 VR and Wi-Fi Tracking Based Autonomous Telepresence Robot for Virtual Tour

    This study proposes a novel mobile robot teleoperation interface that demonstrates the applicability of a robot-aided remote telepresence system with a virtual reality (VR) device to a virtual tour scenario. To improve realism and provide an intuitive replica of the remote environment through the user interface, the implemented system automatically moves a mobile robot (the viewpoint) while displaying 360-degree live video streamed from the robot to a VR device (Oculus Rift). When the user chooses a destination from a given set of options, the robot generates a route based on a shortest-path graph and travels along that route using a wireless signal tracking method based on measuring the direction of arrival (DOA) of radio signals. This paper presents an overview of the system and its architecture, and discusses its implementation. Experimental results show that the proposed system can move to the destination stably using the signal tracking method and that, at the same time, the user can remotely control the robot through the VR interface.
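The abstract does not detail the route planner beyond "a shortest path graph". As a minimal sketch of how route generation over a waypoint graph could work (the node names and edge weights below are hypothetical, not from the paper), Dijkstra's algorithm suffices:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted adjacency dict.
    Returns (total_cost, [node, ...]), or (inf, []) if goal is unreachable."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:  # reconstruct the route by walking predecessors back
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []

# Hypothetical waypoint graph; edge weights are corridor distances in metres.
waypoints = {
    "lobby":   {"hall_a": 5.0, "hall_b": 9.0},
    "hall_a":  {"lobby": 5.0, "gallery": 4.0},
    "hall_b":  {"lobby": 9.0, "gallery": 3.0},
    "gallery": {"hall_a": 4.0, "hall_b": 3.0},
}
cost, route = shortest_path(waypoints, "lobby", "gallery")
print(cost, route)  # 9.0 ['lobby', 'hall_a', 'gallery']
```

In the paper's setting, the robot would then follow each edge of the returned route using the DOA-based signal tracking rather than metric odometry.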

    Geographic Information Systems and Science

    Geographic information science (GISc) has established itself as a collaborative information-processing discipline that is growing in popularity. Yet this interdisciplinary and transdisciplinary field is still somewhat misunderstood. This book covers several GISc domains of interest to students, researchers, and everyday users. Chapters focus on important aspects of GISc, keeping in mind the processing capability of GIS along with the mathematics and formulae involved in reaching each solution. The book has one introductory and eight main chapters divided into five sections. The first section is general and focuses on what GISc is and its relation to GIS and Geography; the second is about location analytics and modeling; the third covers remote sensing data analysis; the fourth addresses big data and augmented reality; and the fifth looks at volunteered geographic information.

    Map-Based Localization for Unmanned Aerial Vehicle Navigation

    Unmanned Aerial Vehicles (UAVs) require precise pose estimation when navigating indoors and in GNSS-denied or GNSS-degraded outdoor environments. The possibility of crashing in these environments is high, as spaces are confined and contain many moving obstacles. There are many solutions for localization in GNSS-denied environments, using many different technologies. Common solutions involve setting up or using existing infrastructure, such as beacons, Wi-Fi, or surveyed targets. These solutions were avoided because the cost should be proportional to the number of users, not the coverage area. Heavy and expensive sensors, such as a high-end IMU, were also avoided. Given these requirements, a camera-based solution was selected for sensor pose estimation. Several camera-based localization approaches were investigated. Map-based localization methods were shown to be the most efficient because they close loops using a pre-existing map; the amount of data and the time spent collecting it are therefore reduced, as there is no need to re-observe the same areas multiple times. This dissertation proposes a solution for fully localizing a monocular camera onboard a UAV with respect to a known environment (i.e., it is assumed that a 3D model of the environment is available) for the purpose of UAV navigation in structured environments. Incremental map-based localization involves tracking a map through an image sequence. When the map is a 3D model, this task is referred to as model-based tracking. A by-product of the tracker is the relative 3D pose (position and orientation) between the camera and the object being tracked. State-of-the-art solutions advocate that tracking geometry is more robust than tracking image texture because edges are more invariant to changes in object appearance and lighting. However, model-based trackers have been limited to tracking small, simple objects in small environments.
An assessment was performed of tracking larger, more complex building models in larger environments. A state-of-the-art model-based tracker called ViSP (Visual Servoing Platform) was applied to tracking outdoor and indoor buildings using a UAV's low-cost camera. The assessment revealed weaknesses at large scales. Specifically, ViSP failed when tracking was lost and needed to be manually re-initialized. Failure occurred when there was a lack of model features in the camera's field of view, and because of rapid camera motion. Experiments revealed that ViSP achieved positional accuracies similar to single point positioning solutions obtained from single-frequency (L1) GPS observations, with standard deviations of around 10 metres. These errors were considered large, given that the geometric accuracy of the 3D model used in the experiments was 10 to 40 cm. The first contribution of this dissertation proposes to increase the performance of the localization system by combining ViSP with map-building incremental localization, also referred to as simultaneous localization and mapping (SLAM). Experimental results in both indoor and outdoor environments show that sub-metre positional accuracies were achieved while reducing the number of tracking losses throughout the image sequence. It is shown that by integrating model-based tracking with SLAM, not only does SLAM improve model tracking performance, but the model-based tracker alleviates the computational expense of SLAM's loop-closing procedure, improving runtime performance. Experiments also revealed that ViSP was unable to handle occlusions when a complete 3D building model was used, resulting in large errors in its pose estimates. The second contribution of this dissertation is a novel map-based incremental localization algorithm that improves tracking performance and increases pose estimation accuracy over ViSP.
The novelty of this algorithm is an efficient matching process that identifies corresponding linear features between the UAV's RGB image data and a large, complex, and untextured 3D model. The proposed model-based tracker improved positional accuracies from 10 m (obtained with ViSP) to 46 cm in outdoor environments, and from an unattainable result with ViSP to 2 cm in large indoor environments. The main disadvantage of any incremental algorithm is that it requires the camera pose of the first frame, and initialization is often a manual process. The third contribution of this dissertation is a map-based absolute localization algorithm that automatically estimates the camera pose when no prior pose information is available. The method uses vertical line matching to register the reference model views against a set of initial input images via geometric hashing. Results demonstrate that sub-metre positional accuracies were achieved, and a proposed enhancement of conventional geometric hashing produced more correct matches: 75% of the correct matches were identified, compared to 11%. Further, the number of incorrect matches was reduced by 80%.
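As background for the registration step, conventional geometric hashing (the baseline the dissertation enhances; the enhancement itself is not reproduced here) can be sketched in 2D with point features. The model points, scene points, basis choice, and quantization step below are all illustrative assumptions:

```python
from collections import defaultdict

def basis_coords(p, o, b):
    """Coordinates of point p in the similarity-invariant frame defined by
    basis points o (origin) and b: translation, rotation, and scale are
    factored out via the complex division (p - o) / (b - o)."""
    bx, by = b[0] - o[0], b[1] - o[1]
    px, py = p[0] - o[0], p[1] - o[1]
    n = bx * bx + by * by
    return ((px * bx + py * by) / n, (py * bx - px * by) / n)

def quantize(c, step=0.1):
    return (round(c[0] / step), round(c[1] / step))

def build_table(model):
    """Offline stage: for every ordered basis pair, hash the invariant
    coordinates of all remaining model points."""
    table = defaultdict(set)
    for i, o in enumerate(model):
        for j, b in enumerate(model):
            if i != j:
                for k, p in enumerate(model):
                    if k not in (i, j):
                        table[quantize(basis_coords(p, o, b))].add((i, j))
    return table

def vote(table, scene, o_idx, b_idx):
    """Online stage: pick a scene basis pair and let every other scene
    point vote for the model bases stored in its hash bin."""
    votes = defaultdict(int)
    o, b = scene[o_idx], scene[b_idx]
    for k, p in enumerate(scene):
        if k not in (o_idx, b_idx):
            for basis in table.get(quantize(basis_coords(p, o, b)), ()):
                votes[basis] += 1
    return dict(votes)

# Toy example: the scene is the model scaled by 2 and translated by (1, 1).
model = [(0, 0), (3, 0), (2, 2), (0, 1)]
scene = [(1, 1), (7, 1), (5, 5), (1, 3)]
votes = vote(build_table(model), scene, 0, 1)
print(votes)  # the correct basis correspondence (0, 1) collects the votes
```

The dissertation applies this idea to vertical line features matched between model views and images rather than raw 2D points, but the hash-and-vote structure is the same.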

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing the visual representation of locations for use in VEs is usually a tedious process that requires either manual modelling of environments or the use of specific hardware. Capturing environment dynamics is not straightforward either, and is usually performed with dedicated tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render, and stream data coming from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can be used to convey spatial and temporal information about a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type has an impact on reasoning about events within videos in panoramic context.
These research questions were investigated in three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video-plus-context systems were developed. The first, telecommunication, experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface for three display types. The overall conclusion is that videos in panoramic context offer a valid solution for the spatio-temporal exploration of remote locations. Our approach presents a richer visual representation, in terms of space and time, than standard tools, showing that providing panoramic context for video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often expensive, solutions. These findings are relevant to many applications, including teleconferencing, virtual tourism, and remote assistance.

    Sky View Factors from Synthetic Fisheye Photos for Thermal Comfort Routing—A Case Study in Phoenix, Arizona

    The Sky View Factor (SVF) is a dimension-reduced representation of urban form and one of the major variables in radiation models that estimate outdoor thermal comfort. Common ways of retrieving SVFs in urban environments include capturing fisheye photographs or creating a digital 3D city or elevation model of the environment. Such techniques have previously been limited by a lack of imagery or of full-scale, detailed models of urban areas. We developed a web-based tool that automatically generates synthetic hemispherical fisheye views from Google Earth at arbitrary spatial resolution and calculates the corresponding SVFs through equiangular projection. SVF results were validated using Google Maps Street View and compared to results from other SVF calculation tools. We generated 5-meter-resolution SVF maps for two neighborhoods in Phoenix, Arizona to illustrate fine-scale variations in intra-urban horizon limitation due to urban form and vegetation. To demonstrate the utility of our synthetic fisheye approach for heat stress applications, we automated a radiation model to generate outdoor thermal comfort maps for Arizona State University's Tempe campus for a hot summer day, using synthetic fisheye photos and on-site meteorological data. Model output was tested against mobile transect measurements of the six-directional radiant flux density. Based on the thermal comfort maps, we implemented a pedestrian routing algorithm that is optimized for distance and thermal comfort preferences. Our synthetic fisheye approach can help planners assess urban design and tree-planting strategies to maximize thermal comfort outcomes, and can support heat hazard mitigation in urban areas.
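The SVF computation from a fisheye view can be sketched for a binary sky mask under the equiangular projection the abstract mentions. The mask, resolution, and exact weighting below are illustrative assumptions, not the authors' implementation: each pixel is weighted by sin(θ)cos(θ), i.e. the solid angle of its annulus times the cosine of its zenith angle.

```python
import math

def sky_view_factor(mask):
    """SVF from a square binary fisheye sky mask (1 = sky, 0 = obstructed),
    assuming an equiangular projection: the zenith angle grows linearly
    with radius from the image centre, reaching 90 degrees at the rim."""
    n = len(mask)
    cx = cy = (n - 1) / 2.0
    R = n / 2.0
    sky_w = total_w = 0.0
    for y in range(n):
        for x in range(n):
            r = math.hypot(x - cx, y - cy)
            if r > R:
                continue  # outside the fisheye image circle
            theta = (r / R) * (math.pi / 2)  # zenith angle of this pixel
            w = math.sin(theta) * math.cos(theta)  # annulus solid angle x cosine
            total_w += w
            sky_w += w * mask[y][x]
    return sky_w / total_w

# Hypothetical 64x64 mask: left half obstructed by a long wall, right half open.
n = 64
mask = [[1 if x >= n // 2 else 0 for x in range(n)] for y in range(n)]
print(round(sky_view_factor(mask), 2))  # 0.5 by symmetry
```

A real pipeline would first classify the synthetic fisheye image into sky and non-sky pixels before applying such a weighting.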

    Training of Crisis Mappers and Map Production from Multi-sensor Data: Vernazza Case Study (Cinque Terre National Park, Italy)

    The aim of this paper is to present the development of a multidisciplinary project carried out in cooperation between Politecnico di Torino and ITHACA (Information Technology for Humanitarian Assistance, Cooperation and Action). The goal of the project was to train students attending Architecture and Engineering courses in geospatial data acquisition and processing, in order to start up a team of "volunteer mappers". The project aims to document environmental and built heritage subject to disaster; the purpose is to improve the capabilities of the actors involved in geospatial data collection, integration, and sharing. The proposed area for testing the training activities is the Cinque Terre National Park, registered on the World Heritage List since 1997, which was affected by a flood on 25 October 2011. In line with other international experiences, the group is expected to be active after emergencies in order to update maps, using data acquired by typical geomatic methods and techniques such as terrestrial and aerial LiDAR, close-range and aerial photogrammetry, and topographic and GNSS instruments, or by non-conventional systems and instruments such as UAVs and mobile mapping. The ultimate goal is to implement a WebGIS platform to share all the collected data with local authorities and the Civil Protection.

    Scalable Methods to Collect and Visualize Sidewalk Accessibility Data for People with Mobility Impairments

    Poorly maintained sidewalks pose considerable accessibility challenges for people with mobility impairments. Despite comprehensive civil rights legislation such as the Americans with Disabilities Act, many city streets and sidewalks in the U.S. remain inaccessible. The problem is not just that sidewalk accessibility fundamentally affects where and how people travel in cities, but also that there are few, if any, mechanisms to determine the accessible areas of a city a priori. To address this problem, my Ph.D. dissertation introduces and evaluates new scalable methods for collecting data about street-level accessibility using a combination of crowdsourcing, automated methods, and Google Street View (GSV). My dissertation has four research threads. First, we conduct a formative interview study to establish a better understanding of how people with mobility impairments currently assess accessibility in the built environment and the role of emerging location-based technologies therein. The study uncovers the existing methods for assessing the accessibility of the physical environment and identifies useful features for future assistive technologies. Second, we develop and evaluate scalable crowdsourced accessibility data collection methods. We show that paid crowd workers recruited from an online labor marketplace can find and label accessibility attributes in GSV with an accuracy of 81%. This accuracy improves to 93% with quality control mechanisms such as majority vote. Third, we design a system that combines crowdsourcing and automated methods to increase data collection efficiency. Our work shows that by combining the two, we can increase data collection efficiency by 13% without sacrificing accuracy. Fourth, we develop and deploy a web tool that lets volunteers help us collect street-level accessibility data for Washington, D.C. As of the writing of this dissertation, we have collected accessibility data for 20% of the streets in D.C.
We conduct a preliminary evaluation of how this web tool is used. Finally, we implement proof-of-concept accessibility-aware applications using the accessibility data collected with the help of volunteers. My dissertation contributes to the accessibility, computer science, and HCI communities by: (i) extending our knowledge of how people with mobility impairments interact with technology to navigate cities; (ii) presenting the first work to demonstrate that GSV is a viable source for learning about the accessibility of the physical world; (iii) introducing the first method that combines crowdsourcing and automated methods to remotely collect accessibility information; (iv) deploying interactive web tools that allow volunteers to help populate the largest dataset on street-level accessibility in the world; and (v) demonstrating accessibility-aware applications that empower people with mobility impairments.
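The majority-vote quality control mechanism the dissertation credits for raising label accuracy from 81% to 93% can be sketched as follows. The label names, locations, and data layout are hypothetical; the aggregation rule itself is the standard one: each location keeps the label most workers assigned to it.

```python
from collections import Counter

def majority_vote(labels_by_location):
    """Aggregate crowd labels per location by majority vote.
    labels_by_location: dict mapping location id -> list of worker labels."""
    aggregated = {}
    for loc, labels in labels_by_location.items():
        # most_common(1) returns the single (label, count) pair with most votes
        (label, _count), = Counter(labels).most_common(1)
        aggregated[loc] = label
    return aggregated

# Hypothetical labels from three workers for two street locations.
raw = {
    "loc_001": ["curb_ramp_missing", "curb_ramp_missing", "no_problem"],
    "loc_002": ["surface_problem", "surface_problem", "surface_problem"],
}
print(majority_vote(raw))
# {'loc_001': 'curb_ramp_missing', 'loc_002': 'surface_problem'}
```

In practice each location would be shown to an odd number of workers so that a strict majority exists for binary labels; ties among multi-class labels would need an explicit tie-breaking rule.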