    Probabilistic Temporal Inference on Reconstructed 3D Scenes

    ©2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Presented at the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 13-18 June 2010, San Francisco, CA. DOI: 10.1109/CVPR.2010.5539803

    Modern structure from motion techniques are capable of building city-scale 3D reconstructions from large image collections, but have mostly ignored the problem of large-scale structural changes over time. We present a general framework for estimating temporal variables in structure from motion problems, including an unknown date for each camera and an unknown time interval for each structural element. Given a collection of images with mostly unknown or uncertain dates, we use this framework to automatically recover the dates of all images by reasoning probabilistically about the visibility and existence of objects in the scene. We present results on a collection of over 100 historical images of a city taken over several decades.
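    The abstract's central idea, dating an image by checking which scene objects it does or does not show against each object's existence interval, can be illustrated with a small Bayesian sketch. Everything below (object names, intervals, detection rates) is made up for illustration and is not the paper's actual model.

```python
import numpy as np

# Hypothetical scene objects with (start_year, end_year) existence intervals.
objects = {
    "old_bridge":  (1900, 1955),
    "clock_tower": (1900, 2000),
    "new_stadium": (1962, 2000),
}

# Assumed observation noise rates: probability of seeing an object that
# exists in that year, and of falsely "seeing" one that does not.
P_SEE_IF_EXISTS = 0.8
P_SEE_IF_ABSENT = 0.05

def date_posterior(observed, years=range(1900, 2001)):
    """Posterior over capture years given which objects were matched in the photo."""
    years = np.array(list(years))
    log_post = np.zeros(len(years), dtype=float)   # uniform prior over years
    for name, (start, end) in objects.items():
        exists = (years >= start) & (years <= end)
        p_see = np.where(exists, P_SEE_IF_EXISTS, P_SEE_IF_ABSENT)
        log_post += np.log(p_see if observed.get(name, False) else 1.0 - p_see)
    post = np.exp(log_post - log_post.max())
    return years, post / post.sum()

# Photo shows the old bridge but not the new stadium -> early capture date.
years, post = date_posterior({"old_bridge": True, "new_stadium": False})
print("most likely year:", years[np.argmax(post)])
```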

    Weakly Supervised Silhouette-based Semantic Scene Change Detection

    This paper presents a novel semantic scene change detection scheme that requires only weak supervision. A straightforward approach to this task is to train a semantic change detection network end-to-end on a large-scale dataset; however, building a dataset specific to this task is labor-intensive and time-consuming. To avoid this problem, we propose to train such a network from existing datasets by dividing the task into change detection and semantic extraction. A further challenge is the difference in camera viewpoint, for example between images of the same scene captured from a vehicle-mounted camera at different times. To address this challenge, we propose a new siamese network structure that introduces a correlation layer. In addition, we create a publicly available dataset for semantic change detection to evaluate the proposed method. The experimental results verify both the robustness of the change detection to viewpoint differences and the effectiveness of the proposed networks for semantic change detection. Our code and dataset are available at https://github.com/xdspacelab/sscdnet.

    Comment: Accepted at the 2020 IEEE International Conference on Robotics and Automation (ICRA). Code and dataset are available at https://github.com/xdspacelab/sscdnet
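    As a rough illustration of a siamese change-detection network with a correlation layer, the PyTorch sketch below encodes both time points with shared weights, correlates the two feature maps over a small displacement window, and predicts a per-pixel change map. The layer sizes and displacement range are assumptions made for the sketch; this is not the released sscdnet code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CorrelationLayer(nn.Module):
    """Dot-product correlation between two feature maps over a
    (2*max_disp+1)^2 window of spatial displacements."""
    def __init__(self, max_disp=3):
        super().__init__()
        self.max_disp = max_disp

    def forward(self, f1, f2):
        b, c, h, w = f1.shape
        pad = self.max_disp
        f2p = F.pad(f2, (pad, pad, pad, pad))
        corrs = []
        for dy in range(2 * pad + 1):
            for dx in range(2 * pad + 1):
                shifted = f2p[:, :, dy:dy + h, dx:dx + w]
                corrs.append((f1 * shifted).sum(dim=1, keepdim=True) / c)
        return torch.cat(corrs, dim=1)          # (B, (2*pad+1)^2, H, W)

class SiameseChangeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared (siamese) encoder applied to both time points.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.corr = CorrelationLayer(max_disp=3)
        self.head = nn.Sequential(
            nn.Conv2d(49, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),                # change logit per pixel
        )

    def forward(self, img_t0, img_t1):
        f0, f1 = self.encoder(img_t0), self.encoder(img_t1)
        return self.head(self.corr(f0, f1))

net = SiameseChangeNet()
logits = net(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128))
print(logits.shape)   # change map at 1/4 resolution: (1, 1, 32, 32)
```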

    Browsing and Experiencing Repositories of Spatially Oriented Historic Photographic Images

    Many institutions archive historical images of architecture in urban areas and make them available to scholars and the general public through online platforms. Users can explore these often huge repositories by faceted browsing or keyword-based searching. The metadata that enable these kinds of investigation, however, are often incomplete, imprecise, or even wrong. Retrieving images of interest can therefore be a cumbersome task for users such as art and architectural historians trying to answer their research questions. Many of these images, often showing historic buildings and landscapes, can be oriented spatially using automatic methods such as “structure from motion” (SfM). Providing spatially and temporally oriented images of urban architecture, in combination with advanced searching and exploration techniques, offers new potential for supporting historians in their research. We are developing a 3D web environment for historians that enables them to search for and access historic photographic images in a spatial context. Related projects use 2D maps, showing only a planar view of the current urban situation. In this paper, we present an approach to creating interactive views of 4D city models, i.e., 3D spatial models that show changes over time, to provide a better understanding of the urban building situation with regard to the photographer’s position and surroundings. A major feature of the application is the ability to spatially align 3D reconstruction models with photogrammetrically digitized models based on historical photographs. At the same time, this mixed-methods approach is used to validate the 3D reconstructions.
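    Spatially aligning a reconstructed 3D model with a photogrammetrically digitized model is, at its core, the estimation of a similarity transform from corresponding 3D points. The sketch below shows one standard way to do this (an Umeyama-style least-squares fit on hypothetical control points); it illustrates the alignment step only and is not the project's actual pipeline.

```python
import numpy as np

def align_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src points onto dst points: dst ≈ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # avoid reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known transform from matched control points.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_R) < 0:
    true_R[:, 0] *= -1
dst = 2.5 * src @ true_R.T + np.array([10.0, -3.0, 7.0])
s, R, t = align_similarity(src, dst)
print(round(s, 3), np.allclose(R, true_R, atol=1e-6))
```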

    Dating Historical Color Images


    Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera

    This paper proposes a method for detecting temporal changes in the three-dimensional structure of an outdoor scene from multi-view images captured at two separate times. We consider images captured by a camera mounted on a vehicle driving along a city street. The method estimates scene structures probabilistically, not deterministically, and based on these estimates it evaluates the probability of structural changes in the scene, where the inputs are the similarities of local image patches across the multi-view images. The aim of the probabilistic treatment is to maximize the accuracy of change detection, behind which lies our conjecture that although it is difficult to estimate the scene structures deterministically, it should be easier to detect their changes. The proposed method is compared with methods that use multi-view stereo (MVS) to reconstruct the scene structure at the two time points and then difference the reconstructions to detect changes. The experimental results show that the proposed method outperforms such MVS-based methods.
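    A toy sketch of the probabilistic treatment described above, under simplified assumptions that are not the paper's formulation: keep a distribution over discrete depth hypotheses per viewing ray (derived here from made-up patch-similarity scores via a softmax), and score change as the probability that the two time points place the surface in different depth bins.

```python
import numpy as np

def depth_distribution(similarity_scores, temperature=0.1):
    """Turn per-hypothesis patch-similarity scores (higher = better
    photo-consistency) into a distribution over depth hypotheses."""
    s = np.asarray(similarity_scores, dtype=float) / temperature
    s -= s.max()
    p = np.exp(s)
    return p / p.sum()

def change_probability(scores_t0, scores_t1):
    """Probability that the surface sits at different depth bins at t0 and t1,
    treating the two depth distributions as independent."""
    p0 = depth_distribution(scores_t0)
    p1 = depth_distribution(scores_t1)
    p_same = float(np.sum(p0 * p1))      # prob. both pick the same bin
    return 1.0 - p_same

# Example: similarity peak stays at bin 2 vs. a peak that moved to bin 5.
unchanged = change_probability([0.1, 0.2, 0.9, 0.2, 0.1, 0.1],
                               [0.1, 0.3, 0.8, 0.2, 0.1, 0.1])
changed   = change_probability([0.1, 0.2, 0.9, 0.2, 0.1, 0.1],
                               [0.1, 0.1, 0.2, 0.2, 0.1, 0.9])
print(f"unchanged ray: {unchanged:.2f}, changed ray: {changed:.2f}")
```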

    Temporal Tracking Urban Areas using Google Street View

    Tracking the evolution of built environments is a challenging problem in computer vision due to the intrinsic complexity of urban scenes as well as the dearth of temporal visual information about urban areas. Emerging technologies such as street view cars provide massive amounts of high-quality imagery of urban environments at street level (e.g., sidewalks, buildings, and the aesthetics of streets). Such datasets are consistent with respect to space and time; hence, they are a potential source for exploring the temporal changes transpiring in built environments. However, using street view images to detect temporal changes in urban scenes introduces new challenges, such as variation in illumination, camera pose, and the appearance/disappearance of objects. In this thesis, we leverage Google Street View’s “time machine” feature to track and label the temporal changes of built environments, specifically accessibility features (e.g., the existence of curb ramps, the condition of sidewalks). The main contributions of this thesis are: (i) an initial proof-of-concept automated method for tracking accessibility features through panorama images across time, (ii) a framework for processing and analyzing time-series panoramas at scale, and (iii) a geo-temporal dataset including different types of accessibility features for the task of detection.
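    The tracking part of such a framework reduces to time-series bookkeeping over repeated snapshots of the same location. The sketch below shows that bookkeeping with hypothetical detection records (the detector itself is treated as a black box, and the data layout is assumed rather than taken from the thesis).

```python
from collections import defaultdict
from datetime import date

# Hypothetical records: (panorama location id, capture date, curb ramp detected?)
observations = [
    ("loc_17", date(2011, 6, 1), False),
    ("loc_17", date(2014, 8, 1), True),
    ("loc_17", date(2017, 5, 1), True),
    ("loc_42", date(2012, 3, 1), True),
    ("loc_42", date(2016, 9, 1), False),
]

def feature_events(observations):
    """Yield (location, date, event) whenever a feature appears or disappears
    between consecutive snapshots of the same location."""
    by_loc = defaultdict(list)
    for loc, when, present in observations:
        by_loc[loc].append((when, present))
    for loc, series in by_loc.items():
        series.sort()
        for (_, prev), (when, curr) in zip(series, series[1:]):
            if curr != prev:
                yield loc, when, "appeared" if curr else "removed"

for loc, when, event in feature_events(observations):
    print(f"{loc}: curb ramp {event} by {when.isoformat()}")
```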

    Vision based localization: from humanoid robots to visually impaired people

    3D applications have become an increasingly popular topic in robotics, computer vision, and augmented reality. By means of cameras and computer vision techniques, it is possible to obtain accurate 3D models of large-scale environments such as cities. In addition, cameras are low-cost, non-intrusive sensors compared to other sensors such as laser scanners, and they offer rich information about the environment. One application of great interest is vision-based localization in a prior 3D map. Robots need to perform tasks in the environment autonomously, and for this purpose it is very important to know the location of the robot in the map precisely. In the same way, providing accurate information about the location and spatial orientation of the user in a large-scale environment can benefit people with visual impairments. Safe and autonomous navigation in known or unknown environments is a great challenge for those who are blind or visually impaired. Most commercial solutions for localization and navigation assistance for the visually impaired are based on the satellite Global Positioning System (GPS). However, these solutions are not suitable enough for the visually impaired community in urban environments: the errors are on the order of several meters, and there are other problems such as GPS signal loss or line-of-sight restrictions. In addition, GPS does not work if an insufficient number of satellites are directly visible, and it therefore cannot be used in indoor environments. Thus, it is important to do further research on new, more robust and accurate localization systems.

    In this thesis we propose several algorithms to obtain accurate real-time vision-based localization from a prior 3D map. For that purpose, it is necessary to compute a 3D map of the environment beforehand. For computing that 3D map, we employ well-known techniques such as Simultaneous Localization and Mapping (SLAM) or Structure from Motion (SfM). In this thesis, we implement a visual SLAM system using a stereo camera as the only sensor, which allows us to obtain accurate 3D reconstructions of the environment. The proposed SLAM system is also capable of detecting moving objects, especially in a close range to the camera of up to approximately 5 meters, thanks to a moving-objects detection module. This is possible thanks to a dense scene flow representation of the environment, which allows us to obtain the 3D motion of the world points. This moving-objects detection module seems to be very effective in highly crowded and dynamic environments, where there are large numbers of dynamic objects such as pedestrians. By means of the moving-objects detection module we avoid adding erroneous 3D points into the SLAM process, yielding much better and more consistent 3D reconstruction results. To the best of our knowledge, this is the first time that dense scene flow and the derived detection of moving objects have been applied in the context of visual SLAM for challenging crowded and dynamic environments, such as the ones presented in this thesis.

    In SLAM and vision-based localization approaches, 3D map points are usually described by means of appearance descriptors, which enable the data association between 3D map elements and perceived 2D image features. In this thesis we have investigated a novel family of appearance descriptors known as Gauge-Speeded Up Robust Features (G-SURF).
    These descriptors are based on the use of gauge coordinates, by means of which every pixel in the image is fixed separately in its own local coordinate frame, defined by the local structure itself and consisting of the gradient vector and its perpendicular direction. We have carried out an extensive experimental evaluation on different applications such as image matching, visual object categorization, and 3D SfM, showing the usefulness and improved results of G-SURF descriptors compared to other state-of-the-art descriptors such as the Scale Invariant Feature Transform (SIFT) or SURF.

    In vision-based localization applications, one of the most computationally expensive steps is the data association between a large map of 3D points and the perceived 2D features in the image. Traditional approaches often rely on purely appearance-based information to solve the data association step. These algorithms can have a high computational demand, and in environments with highly repetitive textures, such as cities, this data association can lead to erroneous results due to the ambiguities introduced by visually similar features. In this thesis we have developed an algorithm for predicting the visibility of 3D points by means of a memory-based learning approach applied to a prior 3D reconstruction. Thanks to this learning approach, we can speed up the data association step by predicting the visible 3D points given a prior camera pose.

    We have implemented and evaluated visual SLAM and vision-based localization algorithms for two applications of great interest: humanoid robots and visually impaired people. Regarding humanoid robots, a monocular vision-based localization algorithm with visibility prediction has been evaluated under different scenarios and different types of sequences, such as square and circular trajectories, sequences with moving objects, changes in lighting, etc. The localization and mapping error has been compared against a precise motion capture system, yielding errors on the order of a few centimeters. Furthermore, we also compared our vision-based localization system with the Parallel Tracking and Mapping (PTAM) approach, obtaining much better results with our localization algorithm. With respect to the vision-based localization approach for the visually impaired, we have evaluated the system in indoor and cluttered office-like environments. In addition, we have evaluated the visual SLAM algorithm with moving-objects detection in tests with real visually impaired users in very dynamic environments, such as the Atocha railway station (Madrid, Spain) and the city center of Alcalá de Henares (Madrid, Spain). The obtained results highlight the potential benefits of our approach for the localization of the visually impaired in large and cluttered environments.
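    As one concrete illustration of the visibility-prediction idea, a memory-based predictor can store, for each keyframe of the prior reconstruction, the set of map points it observed, and predict the visible points for a new camera pose by voting over its nearest stored keyframes. The sketch below uses only camera positions and made-up keyframes; it is an illustrative reading of "memory-based learning" and not the thesis implementation.

```python
import numpy as np

class VisibilityPredictor:
    """Memory-based visibility prediction: store (keyframe position -> observed
    map point ids) and, for a query pose, vote over the k nearest keyframes."""

    def __init__(self, k=5, min_votes=2):
        self.k, self.min_votes = k, min_votes
        self.positions, self.observed = [], []   # keyframe centres, point-id sets

    def add_keyframe(self, camera_position, visible_point_ids):
        self.positions.append(np.asarray(camera_position, dtype=float))
        self.observed.append(set(visible_point_ids))

    def predict(self, camera_position):
        pos = np.stack(self.positions)
        d = np.linalg.norm(pos - np.asarray(camera_position, dtype=float), axis=1)
        votes = {}
        for i in np.argsort(d)[: self.k]:
            for pid in self.observed[i]:
                votes[pid] = votes.get(pid, 0) + 1
        return {pid for pid, v in votes.items() if v >= self.min_votes}

# Tiny example with made-up keyframes along a corridor.
pred = VisibilityPredictor(k=2, min_votes=2)
pred.add_keyframe([0.0, 0.0, 0.0], {1, 2, 3})
pred.add_keyframe([1.0, 0.0, 0.0], {2, 3, 4})
pred.add_keyframe([9.0, 0.0, 0.0], {7, 8})
print(pred.predict([0.5, 0.0, 0.0]))   # {2, 3}: seen by both nearby keyframes
```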