1,853 research outputs found

    Assessment of a photogrammetric approach for urban DSM extraction from tri-stereoscopic satellite imagery

    Built-up environments are extremely complex for 3D surface modelling purposes. The main distortions that hamper 3D reconstruction from 2D imagery are image dissimilarities, concealed areas, shadows, height discontinuities and discrepancies between smooth terrain and man-made features. A methodology is proposed to improve automatic photogrammetric extraction of an urban surface model from high-resolution satellite imagery, with the emphasis on strategies to reduce the effects of the cited distortions and to make image matching more robust. Instead of a standard stereoscopic approach, a digital surface model is derived from tri-stereoscopic satellite imagery. This is based on an extensive multi-image matching strategy that fully benefits from the geometric and radiometric information contained in the three images. The bundled triplet consists of an IKONOS along-track pair and an additional near-nadir IKONOS image. For the tri-stereoscopic study a densely built-up area, extending from the centre of Istanbul to the urban fringe, is selected. The accuracy of the model extracted from the IKONOS triplet, as well as that of the model extracted from only the along-track stereopair, is assessed by comparison with 3D check points and 3D building vector data.
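At the core of any multi-image matching strategy like the one described in this abstract is a similarity measure between image patches. Below is a minimal 1-D sketch using normalized cross-correlation (NCC), a measure commonly used for this purpose; this is not the paper's implementation, and all function names and sample intensities are illustrative assumptions.

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity patches."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(template, scanline):
    """Slide a 1-D template along a scanline; return offset of the best NCC score."""
    w = len(template)
    scores = [ncc(template, scanline[i:i + w])
              for i in range(len(scanline) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

patch = [10, 50, 200, 50, 10]            # feature in the reference image
line  = [12, 11, 10, 52, 201, 49, 9, 8]  # same feature, shifted, in a second image
print(best_match(patch, line))           # → 2
```

In a true multi-image setup, a candidate match would be accepted only if it scores consistently across all image pairs, which is what makes the triplet more robust than a single stereopair.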

    Procedural City Generation with Combined Architectures for Real-time Visualization

    The work and research of this paper sought to build upon traditional city generation and simulation by creating a tool that both realistically simulates cities and their prominent features and creates aesthetically and artistically rich cities using assets that combine several contemporary or near-contemporary architectural styles. The major city features simulated are the surrounding terrain, road networks, individual buildings, and building placement. The tools used to create and integrate these features were built in Houdini, with Unreal Engine 5 as the intended final destination. This research was influenced by the city, town, and road networking of Ghost Recon: Wildlands, a game that exhibits successful creation and integration of cities in a real-time open world, creating a holistic and visually compelling experience. The software used in the development of this project included Houdini, Maya, Unreal Engine 5, and ZBrush, as well as Adobe Substance Designer, Substance Painter, and Photoshop. The city generation tool was built with the intent that it would be flexible. In this context, flexibility refers to the capability to create many different kinds of city regions based on user specifications. Region size, road density and connectivity, and building types are examples of qualities of the city that can be directly controlled. The tool currently uses one set of city assets created with an overall design cohesion and the intent that they be used together, but it is also built flexibly enough that new building assets could be included, requiring only the addition of building generators for the new set. Alternatively, assets developed with the current generation methods in mind could be used to change the visual style of the city. Buildings were both generated and placed based on a district classification. Buildings were established as small residential, large residential, religious, and government/commercial before being placed in appropriate locations in the city based on user district specifications.
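The district-classification placement scheme described above can be sketched as a simple mapping from district type to allowed building categories. This is an illustrative stand-in for the Houdini tool's logic; the district names, and the idea of seeding the choice for reproducibility, are assumptions, while the four building categories come from the abstract.

```python
import random

# Assumed district taxonomy (illustrative); building categories are from the abstract.
DISTRICT_BUILDINGS = {
    "residential": ["small_residential", "large_residential"],
    "civic":       ["government_commercial", "religious"],
    "mixed":       ["small_residential", "large_residential",
                    "government_commercial"],
}

def place_buildings(district, lots, seed=0):
    """Assign one allowed building category to each lot in a district."""
    rng = random.Random(seed)  # seeded so the same inputs give the same city
    allowed = DISTRICT_BUILDINGS[district]
    return {lot: rng.choice(allowed) for lot in lots}

plan = place_buildings("residential", ["lot_a", "lot_b", "lot_c"])
```

User district specifications would drive which entry of the mapping each city region uses, which is one way the tool's "direct control" over building types could be exposed.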

    Tracking icebergs with time-lapse photography and sparse optical flow, LeConte Bay, Alaska, 2016–2017

    We present a workflow to track icebergs in proglacial fjords using oblique time-lapse photos and the Lucas-Kanade optical flow algorithm. We employ the workflow at LeConte Bay, Alaska, where we ran five time-lapse cameras between April 2016 and September 2017, capturing more than 400 000 photos at frame rates of 0.5–4.0 min⁻¹. Hourly to daily average velocity fields in map coordinates illustrate dynamic currents in the bay, with dominant downfjord velocities (exceeding 0.5 m s⁻¹ intermittently) and several eddies. Comparisons with simultaneous Acoustic Doppler Current Profiler (ADCP) measurements yield the best agreement for the uppermost ADCP levels (∼12 m and above), in line with prevalent small icebergs that trace near-surface currents. Tracking results from multiple cameras compare favorably, although cameras with lower frame rates (0.5 min⁻¹) tend to underestimate high flow speeds. Tests to determine the requisite temporal and spatial image resolution confirm the importance of high image frame rates, while spatial resolution is of secondary importance. Application of our procedure to other fjords will be successful if iceberg concentrations are high enough and if the camera frame rates are sufficiently rapid (at least 1 min⁻¹ for conditions similar to LeConte Bay). This work was funded by the U.S. National Science Foundation (OPP-1503910, OPP-1504288, OPP-1504521 and OPP-1504191).
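The Lucas-Kanade method named above estimates displacement by solving a least-squares problem over image gradients (practical trackers use a 2-D pyramidal implementation such as OpenCV's calcOpticalFlowPyrLK). The following is a minimal 1-D sketch of the core least-squares step, with illustrative intensity profiles; it is not the authors' code.

```python
def lucas_kanade_1d(frame0, frame1):
    """Least-squares estimate of a uniform 1-D displacement (in pixels)."""
    # Central-difference spatial gradient and temporal difference.
    ix = [(frame0[i + 1] - frame0[i - 1]) / 2.0 for i in range(1, len(frame0) - 1)]
    it = [frame1[i] - frame0[i] for i in range(1, len(frame0) - 1)]
    # Brightness constancy: Ix * v + It = 0, solved in the least-squares sense:
    # v = -sum(Ix * It) / sum(Ix * Ix)
    num = -sum(gx * gt for gx, gt in zip(ix, it))
    den = sum(gx * gx for gx in ix)
    return num / den if den else 0.0

a = [0, 1, 4, 9, 16, 25, 36]   # intensity profile at time t
b = [1, 4, 9, 16, 25, 36, 49]  # same profile shifted one pixel toward lower indices
print(lucas_kanade_1d(a, b))   # ≈ -1.14 (true displacement is -1 pixel)
```

The slight overestimate illustrates why the method assumes small displacements; the pyramidal variant handles large motions (such as fast icebergs between low-frame-rate photos) by coarse-to-fine refinement.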

    Image fusion techniques for remote sensing applications

    Image fusion refers to the acquisition, processing and synergistic combination of information provided by various sensors or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first case study considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor to fuse multifrequency, multipolarization and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter. Each case study also presents results achieved by applying the proposed techniques to real data.
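The wavelet-based fusion idea in the third case study can be illustrated with a toy 1-D Haar transform: fuse signals in the coefficient domain, then invert. Note the paper's processor additionally uses a multiscale Kalman filter; this sketch swaps that in for much simpler fusion rules (average the approximations, keep the larger-magnitude detail), so it only shows the transform-fuse-invert structure, not the paper's method.

```python
def haar(x):
    """One-level Haar transform: (approximation, detail) coefficient lists."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def ihaar(approx, detail):
    """Inverse one-level Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(sig1, sig2):
    """Fuse two signals: average approximations, keep the stronger detail."""
    a1, d1 = haar(sig1)
    a2, d2 = haar(sig2)
    a = [(x + y) / 2 for x, y in zip(a1, a2)]
    d = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]
    return ihaar(a, d)

print(fuse([0, 0, 4, 4], [2, 2, 2, 2]))  # → [1.0, 1.0, 3.0, 3.0]
```

Keeping the larger detail coefficient preserves edges from whichever input resolves them best, which is the usual motivation for fusing in a wavelet domain rather than averaging pixels directly.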

    GFF: Gated Fully Fusion for Semantic Segmentation

    Semantic segmentation generates a comprehensive understanding of scenes by densely predicting the category of each pixel. High-level features from deep convolutional neural networks have already demonstrated their effectiveness in semantic segmentation tasks; however, the coarse resolution of high-level features often leads to inferior results for small or thin objects, where detailed information is important. It is natural to consider importing low-level features to compensate for the detailed information lost in high-level features. Unfortunately, simply combining multi-level features suffers from the semantic gap among them. In this paper, we propose a new architecture, named Gated Fully Fusion (GFF), to selectively fuse features from multiple levels using gates in a fully connected way. Specifically, features at each level are enhanced by higher-level features with stronger semantics and lower-level features with more details, and gates are used to control the propagation of useful information, which significantly reduces noise during fusion. We achieve state-of-the-art results on four challenging scene parsing datasets, including Cityscapes, Pascal Context, COCO-Stuff and ADE20K. Comment: accepted by AAAI-2020 (oral).
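The gating arithmetic behind such selective fusion can be shown with a toy numeric sketch. In GFF the gates are learned per-pixel maps inside a CNN; here the gate logits are just hand-supplied numbers, so this illustrates only how a sigmoid gate modulates how much low-level detail flows into a high-level feature, not the paper's architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fuse(high, low, gate_logits):
    """Per-position gate g in (0, 1) controls how much low-level detail
    is mixed into the high-level feature: f_i = high_i + g_i * low_i."""
    return [h + sigmoid(z) * l for h, l, z in zip(high, low, gate_logits)]

# Gate wide open (large positive logit): low-level detail passes through.
print(gated_fuse([1.0], [2.0], [100.0]))   # → [3.0]
# Gate closed (large negative logit): high-level feature left unchanged.
print(gated_fuse([1.0], [2.0], [-100.0]))  # → [1.0]
```

Because the gate is a continuous function of its logit, the network can learn intermediate values, passing detail only where (for example) thin objects make it useful and suppressing it where it would add noise.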