
    Semantic Cross-View Matching

    Matching cross-view images is challenging because their appearance and viewpoints differ significantly. While low-level features based on gradient orientations or filter responses can vary drastically with such changes in viewpoint, the semantic content of an image remains largely invariant. Consequently, semantically labeled regions can be used for cross-view matching. In this paper, we explore this idea and propose an automatic method for detecting and representing the semantic information of an RGB image, with the goal of matching it against a (non-RGB) geographic information system (GIS). A segmented image forms the input to our system, with segments assigned to semantic concepts such as traffic signs, lakes, roads, foliage, etc. We design a descriptor that robustly captures both the presence of semantic concepts and the spatial layout of those segments. Pairwise distances between the descriptors extracted from the GIS map and from the query image are then used to generate a shortlist of the most promising locations with similar semantic concepts in a consistent spatial layout. An experimental evaluation with challenging query images and a large urban area shows promising results.
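    To make the idea concrete, below is a minimal sketch (not the authors' exact descriptor) that represents a semantically labelled image by per-class occupancy over a coarse spatial grid and shortlists GIS locations by descriptor distance; the class count, grid size, and toy data are illustrative assumptions.

```python
# Minimal sketch: semantic-layout descriptor + distance-based shortlisting.
# Class IDs, grid size and the random label maps below are illustrative only.
import numpy as np

NUM_CLASSES = 5          # e.g. road, building, water, foliage, traffic sign
GRID = 4                 # 4x4 spatial layout grid

def semantic_descriptor(label_map: np.ndarray) -> np.ndarray:
    """Per-class occupancy over a GRID x GRID layout, flattened to one vector."""
    h, w = label_map.shape
    desc = np.zeros((NUM_CLASSES, GRID, GRID))
    for gy in range(GRID):
        for gx in range(GRID):
            cell = label_map[gy * h // GRID:(gy + 1) * h // GRID,
                             gx * w // GRID:(gx + 1) * w // GRID]
            for c in range(NUM_CLASSES):
                desc[c, gy, gx] = np.mean(cell == c)
    return desc.ravel()

def shortlist(query_desc: np.ndarray, gis_descs: np.ndarray, k: int = 10) -> np.ndarray:
    """Indices of the k GIS locations whose descriptors are closest to the query."""
    dists = np.linalg.norm(gis_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]

# Toy usage: random label maps stand in for the query image and GIS tiles.
rng = np.random.default_rng(0)
query = semantic_descriptor(rng.integers(0, NUM_CLASSES, size=(240, 320)))
gis = np.stack([semantic_descriptor(rng.integers(0, NUM_CLASSES, size=(240, 320)))
                for _ in range(100)])
print(shortlist(query, gis, k=5))
```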

    Placing objects in context via inpainting for out-of-distribution segmentation

    When deploying a semantic segmentation model in the real world, it will inevitably be confronted with semantic classes unseen during training. Thus, to safely deploy such systems, it is crucial to accurately evaluate and improve their anomaly segmentation capabilities. However, acquiring and labelling semantic segmentation data is expensive, and unanticipated conditions are long-tail and potentially hazardous. Indeed, existing anomaly segmentation datasets capture a limited number of anomalies, lack realism, or have strong domain shifts. In this paper, we propose the Placing Objects in Context (POC) pipeline to realistically add any object into any image via diffusion models. POC can be used to easily extend any dataset with an arbitrary number of objects. In our experiments, we present different anomaly segmentation datasets based on POC-generated data and show that POC can improve the performance of recent state-of-the-art anomaly fine-tuning methods on several standardized benchmarks. POC is also effective for learning new classes. For example, we use it to edit Cityscapes samples by adding a subset of Pascal classes and show that models trained on such data achieve performance comparable to the Pascal-trained baseline. This corroborates the low sim-to-real gap of models trained on POC-generated images.
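    As an illustration of the kind of diffusion-based editing POC builds on, the sketch below uses the Hugging Face diffusers inpainting pipeline to paint an object into a masked region of a street scene; the model checkpoint, prompt, and mask placement are illustrative assumptions rather than the paper's exact pipeline.

```python
# Minimal sketch of diffusion-based object insertion via inpainting.
# The checkpoint, input image, prompt and mask region are assumptions,
# not the POC authors' exact configuration.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

scene = Image.open("street_scene.png").convert("RGB").resize((512, 512))

# White pixels mark where the new object should be painted in.
mask = np.zeros((512, 512), dtype=np.uint8)
mask[300:480, 180:330] = 255          # hypothetical region on the road surface
mask = Image.fromarray(mask)

edited = pipe(
    prompt="a lost suitcase lying on the road",   # the anomaly class to insert
    image=scene,
    mask_image=mask,
).images[0]
edited.save("street_scene_with_anomaly.png")

# The mask region also provides a pixel-level label for the inserted object,
# so the edited image can extend an anomaly-segmentation training or eval set.
```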

    Uses and Challenges of Collecting LiDAR Data from a Growing Autonomous Vehicle Fleet: Implications for Infrastructure Planning and Inspection Practices

    Autonomous vehicles (AVs) that utilize LiDAR (Light Detection and Ranging) and other sensing technologies are becoming an inevitable part of the transportation industry. Concurrently, transportation agencies are increasingly challenged with managing and tracking large-scale highway asset inventories. LiDAR has become popular among transportation agencies for highway asset management given its advantages over traditional surveying methods, and the technology is becoming more affordable every day. Consequently, the big data resulting from the growth of AVs equipped with LiDAR will present substantial challenges and opportunities. A proper understanding of the data sizes generated by this technology will help agencies make decisions regarding storage, management, and transmission of the data. The original raw data generated by the sensor shrink considerably after filtering and processing following the Cache County Road Manual and storage in the ASPRS-recommended (.las) file format. This pilot study finds that, when the road centerline is used as the vehicle trajectory, a larger portion of the data falls into the right-of-way section than when the actual vehicle trajectory in Cache County, UT is used. It also finds a positive relation between data size and vehicle speed for the travel-lanes section, given the nature of the selected highway environment.
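    As a rough illustration of the storage question raised above, the sketch below estimates the raw volume a single LiDAR sensor might produce in a day and what remains after filtering; the sensor rate, point-record size, operating hours, and retained fraction are illustrative assumptions, not figures from the study.

```python
# Back-of-the-envelope estimate of LiDAR storage needs per vehicle-day.
# All parameters below are assumed for illustration.
POINTS_PER_SECOND = 600_000     # assumed scanner throughput
BYTES_PER_POINT = 34            # typical uncompressed LAS point record size
HOURS_PER_DAY = 8               # assumed daily operation
RETAINED_FRACTION = 0.15        # assumed share kept after filtering/processing

raw_bytes = POINTS_PER_SECOND * BYTES_PER_POINT * HOURS_PER_DAY * 3600
kept_bytes = raw_bytes * RETAINED_FRACTION
print(f"Raw capture per vehicle-day : {raw_bytes / 1e9:7.1f} GB")
print(f"Retained after processing   : {kept_bytes / 1e9:7.1f} GB")
```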

    Exposure modelling of transmission towers using street-level imagery and a deep learning object detection model

    Exposure modelling is a vital component of disaster risk assessments, providing geospatial information on assets at risk and their characteristics. Detailed exposure information benefits the spatial representation of a rapidly changing environment and allows decision makers to establish better policies aimed at reducing disaster risk. This work proposes and demonstrates a methodology that links volunteered geographic information from OpenStreetMap (OSM), street-level imagery from Google Street View (GSV), and deep learning object detection models to automate the creation of exposure datasets of power grid transmission towers, an asset particularly vulnerable to strong wind, among other perils. The methodology is implemented as an end-to-end pipeline that, starting from the locations of transmission towers derived from the power grid layer of OSM's world infrastructure, assigns relevant features to each tower based on the identification and classification returned by an object detection model applied to street-level imagery of the tower obtained from GSV. The initial outcomes yielded promising results towards establishing the exposure dataset. For the identification task, the YOLOv5 model returned a mean average precision (mAP) of 83.57% at an intersection over union (IoU) of 50%. For the classification problem, although predictive performance varies significantly among tower types, we show that high mAP values can be achieved when there is a sufficiently large number of good-quality images with which to train the model. (c) 2022, National Technical University of Athens. All rights reserved.
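    The sketch below outlines such a pipeline in simplified form: tower coordinates are pulled from OSM, a street-level image is requested for each location, and a detector is run over it. The Overpass query, the Street View Static API request, the bounding box, and the use of a stock YOLOv5 checkpoint are illustrative assumptions; the authors trained their own tower detection and classification model.

```python
# Simplified OSM -> Street View -> detector pipeline (illustrative only).
import requests
import torch

OVERPASS = "https://overpass-api.de/api/interpreter"
GSV = "https://maps.googleapis.com/maps/api/streetview"
GSV_KEY = "YOUR_API_KEY"   # placeholder

# 1) Tower locations from OpenStreetMap (power=tower nodes in a bounding box).
query = '[out:json];node["power"="tower"](41.0,14.0,41.1,14.1);out;'
towers = requests.post(OVERPASS, data={"data": query}).json()["elements"]

# 2) Pretrained YOLOv5 model (stand-in for the fine-tuned tower detector).
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

for node in towers[:5]:
    lat, lon = node["lat"], node["lon"]
    # 3) Street-level image near the tower from the Street View Static API.
    img = requests.get(GSV, params={"size": "640x640",
                                    "location": f"{lat},{lon}",
                                    "key": GSV_KEY}).content
    with open("tower.jpg", "wb") as f:
        f.write(img)
    # 4) Detect objects in the image; detections would then be mapped to
    #    tower presence and tower-type attributes in the exposure dataset.
    detections = model("tower.jpg").pandas().xyxy[0]
    print(lat, lon, detections[["name", "confidence"]].head())
```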

    The Young and Bright Type Ia Supernova ASASSN-14lp: Discovery, Early-Time Observations, First-Light Time, Distance to NGC 4666, and Progenitor Constraints

    On 2014 Dec. 9.61, the All-Sky Automated Survey for SuperNovae (ASAS-SN or "Assassin") discovered ASASSN-14lp just $\sim 2$ days after first light using a global array of 14-cm diameter telescopes. ASASSN-14lp went on to become a bright supernova ($V = 11.94$ mag), second only to SN 2014J for the year. We present prediscovery photometry (with a detection less than a day after first light) and ultraviolet through near-infrared photometric and spectroscopic data covering the rise and fall of ASASSN-14lp for more than 100 days. We find that ASASSN-14lp had a broad light curve ($\Delta m_{15}(B) = 0.80 \pm 0.05$), a $B$-band maximum at $2457015.82 \pm 0.03$, a rise time of $16.94^{+0.11}_{-0.10}$ days, and moderate host-galaxy extinction ($E(B-V)_{\textrm{host}} = 0.33 \pm 0.06$). Using ASASSN-14lp we derive a distance modulus for NGC 4666 of $\mu = 30.8 \pm 0.2$, corresponding to a distance of $14.7 \pm 1.5$ Mpc. However, adding ASASSN-14lp to the calibrating sample of Type Ia supernovae still requires an independent distance to the host galaxy. Finally, using our early-time photometric and spectroscopic observations, we rule out red giant secondaries and, assuming a favorable viewing angle and explosion time, any non-degenerate companion larger than $0.34\,R_{\textrm{sun}}$.
    Comment: 12 pages, 9 figures, 4 tables. Accepted to ApJ. Photometric data presented in this submission are included as an ancillary file. For a brief video explaining this paper, see https://www.youtube.com/watch?v=1bOV-Cqs-a
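    For reference, the quoted distance follows from the distance modulus via the standard relation; a worked conversion using the values in the abstract:

```latex
% Distance modulus: \mu = 5\log_{10}(d/\mathrm{pc}) - 5, so
\[
  d = 10^{(\mu + 5)/5}\ \mathrm{pc}
    = 10^{(30.8 + 5)/5}\ \mathrm{pc}
    \approx 1.45 \times 10^{7}\ \mathrm{pc}
    \approx 14.5\ \mathrm{Mpc},
\]
% consistent with the quoted $14.7 \pm 1.5$ Mpc once the $\pm 0.2$ mag
% uncertainty on $\mu$ (a factor of about $10^{0.04} \approx 1.1$ in distance)
% is propagated.
```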