12 research outputs found

    Evaluation of Skybox Video and Still Image products


    Object Tracking Based on Satellite Videos: A Literature Review

    Video satellites have recently become an attractive method of Earth observation, providing consecutive images of the Earth’s surface for continuous monitoring of specific events. The development of on-board optical and communication systems has enabled the various applications of satellite image sequences. However, satellite video-based target tracking is a challenging research topic in remote sensing due to the relatively low spatial and temporal resolution of satellite video. This survey therefore systematically investigates current satellite video-based tracking approaches and benchmark datasets, focusing on five typical tracking applications: traffic target tracking, ship tracking, typhoon tracking, fire tracking, and ice motion tracking. The essential aspects of each tracking application are summarized, such as the tracking architecture, fundamental characteristics, primary motivations, and contributions. Furthermore, popular visual tracking benchmarks and their respective properties are discussed. Finally, a revised multi-level dataset based on WPAFB videos is generated and quantitatively evaluated for future development in the satellite video-based tracking area. In addition, the 54.3% of tracklets with the lowest Difficulty Scores (DS) form the Easy group, while 27.2% and 18.5% of the tracklets fall into the Medium-DS and Hard-DS groups, respectively.
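
    As a rough illustration of how such a multi-level split can be applied, the Python sketch below assigns tracklet IDs to Easy, Medium, and Hard groups from a per-tracklet Difficulty Score; the threshold values and tracklet IDs are hypothetical and are not taken from the survey or the revised WPAFB dataset.

        # Minimal sketch: split tracklets into difficulty groups by their DS value.
        # The DS thresholds below are illustrative assumptions, not the paper's.
        from typing import Dict, List

        def group_by_difficulty(ds_scores: Dict[str, float],
                                easy_max: float = 0.3,
                                medium_max: float = 0.6) -> Dict[str, List[str]]:
            """Assign each tracklet ID to the Easy, Medium, or Hard group by its DS."""
            groups: Dict[str, List[str]] = {"Easy": [], "Medium": [], "Hard": []}
            for tracklet_id, ds in ds_scores.items():
                if ds <= easy_max:
                    groups["Easy"].append(tracklet_id)
                elif ds <= medium_max:
                    groups["Medium"].append(tracklet_id)
                else:
                    groups["Hard"].append(tracklet_id)
            return groups

        if __name__ == "__main__":
            scores = {"t001": 0.12, "t002": 0.45, "t003": 0.81}  # hypothetical tracklets
            print(group_by_difficulty(scores))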

    Enhancing Multi-View 3D-Reconstruction Using Multi-Frame Super Resolution

    Multi-view stereo is a popular method for 3D-reconstruction. Super resolution is a technique used to produce high resolution output from low resolution input. Since the quality of 3D-reconstruction is directly dependent on the input, a simple path is to improve the resolution of the input. In this dissertation, we explore the idea of using super resolution to improve 3D-reconstruction at the input stage of the multi-view stereo framework. In particular, we show that multi-view stereo, when combined with multi-frame super resolution, produces a more accurate 3D-reconstruction. The proposed method utilizes images with sub-pixel camera movements to produce high resolution output. This enhanced output is fed through the multi-view stereo pipeline to produce an improved 3D-model. As a performance test, the improved 3D-model is compared to similarly generated 3D-reconstructions using bicubic and single image super resolution at the input stage of the multi-view stereo framework. This is done by comparing the point clouds of the generated models to a reference model using three metrics: mean, median, and max distance. The model whose point cloud lies closest to the reference model under these metrics is considered the better model. The overall experimental results show that the models generated using our technique have point clouds whose mean, median, and max distances are, on average, 4.3%, 8.8%, and 6% closer to the reference model, respectively. This indicates an improvement in 3D-reconstruction using our technique. In addition, our technique has a significant speed advantage over the single image super resolution analogs, being at least 6.8x faster. The use of multi-frame super resolution in conjunction with the multi-view stereo framework is a practical solution for enhancing the quality of 3D-reconstruction and shows promising results over single image up-sampling techniques.
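
    The point-cloud comparison described above can be approximated with a simple nearest-neighbour check, sketched below in Python with NumPy and SciPy; the function and variable names are illustrative, and the dissertation's actual evaluation tooling is not specified here.

        # Minimal sketch: compare a generated point cloud to a reference point cloud
        # by reporting the mean, median, and max nearest-neighbour distances.
        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_to_reference_distances(generated: np.ndarray, reference: np.ndarray):
            """Both inputs are (N, 3) arrays of XYZ points; returns (mean, median, max)."""
            tree = cKDTree(reference)
            dists, _ = tree.query(generated, k=1)  # nearest reference point per generated point
            return float(dists.mean()), float(np.median(dists)), float(dists.max())

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            ref = rng.random((1000, 3))
            gen = ref + rng.normal(scale=0.005, size=ref.shape)  # slightly perturbed copy
            print(cloud_to_reference_distances(gen, ref))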

    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which are then used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
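
    As a small illustration of how surface velocities and topography combine into a discharge estimate, the Python sketch below applies the standard velocity-area method with a velocity index to convert surface velocity to depth-averaged velocity; the value of alpha and the sample numbers are assumptions for illustration, not values from the thesis.

        # Minimal sketch: velocity-area discharge estimation from image-velocimetry
        # surface velocities. alpha converts surface to depth-averaged velocity.
        import numpy as np

        def estimate_discharge(surface_velocities, depths, widths, alpha=0.85):
            """Sum per-vertical discharge increments across a cross-section (m^3/s)."""
            v = np.asarray(surface_velocities, dtype=float)  # m/s, e.g. from LSPIV
            d = np.asarray(depths, dtype=float)              # m, from bathymetry/topography
            w = np.asarray(widths, dtype=float)              # m, width represented by each vertical
            return float(np.sum(alpha * v * d * w))

        if __name__ == "__main__":
            # Three verticals across a small channel (illustrative values only).
            print(estimate_discharge([0.9, 1.2, 0.8], [1.0, 1.6, 0.9], [4.0, 5.0, 4.0]))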

    On Martian Surface Exploration: Development of Automated 3D Reconstruction and Super-Resolution Restoration Techniques for Mars Orbital Images

    Very high spatial resolution imaging and topographic (3D) data play an important role in modern Mars science research and engineering applications. This work describes a set of image processing and machine learning methods to produce the “best possible” high-resolution and high-quality 3D and imaging products from existing Mars orbital imaging datasets. The research work is described in nine chapters, of which seven are based on separate published journal papers. These include a) a hybrid photogrammetric processing chain that combines the advantages of different stereo matching algorithms to compute stereo disparity with optimal completeness, fine-scale details, and minimised matching artefacts; b) image and 3D co-registration methods that correct a target image and/or 3D data to a reference image and/or 3D data to achieve robust cross-instrument multi-resolution 3D and image co-alignment; c) a deep learning network and processing chain to estimate pixel-scale surface topography from single-view imagery that outperforms traditional photogrammetric methods in terms of product quality and processing speed; d) a deep learning-based single-image super-resolution restoration (SRR) method to enhance the quality and effective resolution of Mars orbital imagery; e) a subpixel-scale 3D processing system using a combination of photogrammetric 3D reconstruction, SRR, and photoclinometric 3D refinement; and f) an optimised subpixel-scale 3D processing system using coupled deep learning based single-view SRR and deep learning based 3D estimation to derive the best possible (in terms of visual quality, effective resolution, and accuracy) 3D products from present epoch Mars orbital images. The resultant 3D imaging products from the above-listed developments are qualitatively and quantitatively evaluated in comparison with products from the official NASA Planetary Data System (PDS) and/or ESA Planetary Science Archive (PSA) releases, and/or with products generated with different open-source systems. Examples of the scientific application of these novel 3D imaging products are discussed.
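
    As a toy illustration of co-registration in the sense of item b), the Python sketch below estimates a sub-pixel translation between a target and a reference image by phase correlation and resamples the target onto the reference grid; the thesis's co-registration chain is considerably more general than a single global shift, and the library calls shown are just one off-the-shelf way to do this.

        # Minimal sketch: translational image co-registration with phase correlation.
        import numpy as np
        from scipy.ndimage import shift as nd_shift
        from skimage.registration import phase_cross_correlation

        def coregister_translation(reference: np.ndarray, target: np.ndarray) -> np.ndarray:
            """Return the target image resampled onto the reference image's pixel grid."""
            offset, _, _ = phase_cross_correlation(reference, target, upsample_factor=10)
            return nd_shift(target, shift=offset, order=3, mode="nearest")

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            ref = rng.random((128, 128))
            tgt = nd_shift(ref, shift=(2.3, -1.7), order=3, mode="nearest")  # synthetic misalignment
            aligned = coregister_translation(ref, tgt)
            # Mean absolute residual on the interior of the frame after alignment.
            print(float(np.abs(aligned[8:-8, 8:-8] - ref[8:-8, 8:-8]).mean()))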

    Deep Vision in Optical Imagery: From Perception to Reasoning

    Deep learning has achieved extraordinary success in a wide range of computer vision tasks over the past years. Remote sensing data have different properties from natural images/videos due to their unique imaging technique, shooting angle, etc. For instance, hyperspectral images usually have hundreds of spectral bands, offering additional information, and the size of objects (e.g., vehicles) in remote sensing images is quite limited, which brings challenges for detection or segmentation tasks. This thesis focuses on two kinds of remote sensing data, namely hyper/multi-spectral and high-resolution images, and explores several methods that seek answers to the following questions:
    - In comparison with natural images or videos in computer vision, the unique asset of hyper/multi-spectral data is their rich spectral information. But what does this “additional” information bring for learning a network? And how do we take full advantage of these spectral bands?
    - Remote sensing images at high resolution have quite different characteristics, bringing challenges for several tasks, for example, small object segmentation. Can we devise networks tailored to such tasks?
    - Deep networks have produced stunning results in a variety of perception tasks, e.g., image classification, object detection, and semantic segmentation, while the capacity to reason about relations over space is vital for intelligent species. Can a network/module with the capacity of reasoning benefit the parsing of remote sensing data?
    To this end, a couple of networks are devised to figure out what a network learns from hyperspectral images and how to efficiently use spectral bands. In addition, a multi-task learning network is investigated for the instance segmentation of vehicles from aerial images and videos. Finally, relational reasoning modules are designed to improve the semantic segmentation of aerial images.
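
    A minimal sketch of a spatial relational reasoning module of the kind mentioned in the final sentence is given below in PyTorch: a self-attention block in which every position of a feature map attends to every other position. The layer sizes and names are illustrative and do not reproduce the thesis's exact modules.

        # Minimal sketch: a non-local (self-attention) block modelling pairwise spatial relations.
        import torch
        import torch.nn as nn

        class SpatialRelationModule(nn.Module):
            def __init__(self, channels: int, reduced: int = 64):
                super().__init__()
                self.query = nn.Conv2d(channels, reduced, kernel_size=1)
                self.key = nn.Conv2d(channels, reduced, kernel_size=1)
                self.value = nn.Conv2d(channels, channels, kernel_size=1)
                self.softmax = nn.Softmax(dim=-1)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                b, c, h, w = x.shape
                q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, reduced)
                k = self.key(x).flatten(2)                     # (B, reduced, HW)
                v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)
                attention = self.softmax(q @ k)                # pairwise relations over positions
                out = (attention @ v).transpose(1, 2).reshape(b, c, h, w)
                return x + out                                 # residual connection

        if __name__ == "__main__":
            features = torch.randn(2, 256, 32, 32)             # e.g. backbone output for an aerial image
            print(SpatialRelationModule(256)(features).shape)  # torch.Size([2, 256, 32, 32])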

    On Multi-Resolution 3D Orbital Imagery and Visualisation for Mars Geological Analysis

    Mars Science Laboratory (MSL) has revealed a dynamic history of water as the rover has ascended the mysterious Mount Sharp in Gale crater. Because rovers only “see” their local environment, planetary scientists rely on satellite-based orbital imagery to understand the regional geology of Gale crater. However, orbital imagery is map-view (viewed from above, lacking perspective), which presents challenges to the interpretation of stratigraphy. 3D visualisation is an emerging opportunity to study orbital images in more intuitive, field-like environments, but has had limited application to Mars. In this work, I formulate and analyse 3D orbital imagery over Gale crater, Mars, to investigate the stratigraphy of Mount Sharp 700 m above and 40 km away from MSL. First, I process orbital imagery from the HRSC, CTX, and HiRISE cameras into 3D digital terrain models (DTMs). I then co-register and evaluate these DTMs using statistical tools and existing products to build a new, validated, multi-resolution basemap tied down to MOLA. Sakarya Vallis, a 400-m deep canyon on Mount Sharp, was then analysed in a 3D environment at 1 m/px. From measurements of exposed rock layers, I construct cross-sections, stratigraphic logs, and a geological unit map to capture this geology. Seven geological units are interpreted across 1 km of exposure, varying in thickness (10–174 m) and dip (3–12°). These units may reveal a cyclic depositional environment; a progradational sequence and channel; and unconformities. This work therefore suggests two periods of sub-aqueous deposition in this region during the Late Noachian to Early Hesperian. These results further provide geological context for Gale crater as MSL ascends Mount Sharp, and future inputs for palaeoenvironmental models of Gale crater.
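
    Dip measurements like those reported above can be illustrated with the classic three-point problem: fit a plane through three points picked on one exposed layer in a DTM and take its angle from horizontal. The Python sketch below does this; the coordinates are invented for illustration, and the thesis's measurements were made interactively in a 3D environment rather than with this helper.

        # Minimal sketch: dip angle of a bedding plane from three (x, y, z) points.
        import numpy as np

        def dip_from_three_points(p1, p2, p3):
            """Return the dip angle (degrees from horizontal) of the plane through three points."""
            p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
            normal = np.cross(p2 - p1, p3 - p1)
            normal /= np.linalg.norm(normal)
            # Dip is the angle between the plane and the horizontal, i.e. the angle
            # between the plane's normal and the vertical axis.
            return float(np.degrees(np.arccos(abs(normal[2]))))

        if __name__ == "__main__":
            # Three points on the same layer: x, y in metres, z = elevation in metres.
            print(round(dip_from_three_points((0, 0, 100), (100, 0, 95), (0, 100, 100)), 1))  # ~2.9 degrees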

    ESARDA 39th Annual Meeting: 2017 Symposium

    The 39th ESARDA Symposium on Safeguards and Nuclear Non-Proliferation was held in Düsseldorf, Germany, from 16 to 18 May 2017 and was preceded by meetings of the ESARDA Working Groups on 15 May 2017. The event was once again an opportunity for research organisations, safeguards authorities and nuclear plant operators to exchange information on new aspects of international safeguards and non-proliferation, as well as recent developments in nuclear safeguards and non-proliferation related research activities and their implications for the safeguards community.

    Urban Informatics

    This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient with a greater concern for environment and equity.
