
    Self-supervised monocular depth estimation from oblique UAV videos

    UAVs have become an essential photogrammetric measurement platform, as they are affordable, easily accessible and versatile. Aerial images captured from UAVs have applications in small- and large-scale texture mapping, 3D modelling, object detection tasks, DTM and DSM generation, etc. Photogrammetric techniques are routinely used for 3D reconstruction from UAV images, where multiple images of the same scene are acquired. Developments in computer vision and deep learning have made Single Image Depth Estimation (SIDE) a field of intense research. Using SIDE techniques on UAV images can remove the need for multiple images for 3D reconstruction. This paper aims to estimate depth from a single UAV aerial image using deep learning. We follow a self-supervised learning approach, Self-Supervised Monocular Depth Estimation (SMDE), which needs no ground-truth depth or any information beyond the images themselves to learn to estimate depth. Monocular video frames are used to train the deep learning model, which learns depth and pose jointly through two networks, one for depth and one for pose. The predicted depth and pose are used to reconstruct one image from the viewpoint of another, exploiting the temporal information in videos. We propose a novel architecture with two 2D CNN encoders and a 3D CNN decoder for extracting information from consecutive temporal frames, and introduce a contrastive loss term to improve the quality of image generation. Our experiments are carried out on the public UAVid video dataset. The experimental results demonstrate that our model outperforms state-of-the-art methods in estimating depth. Comment: Submitted to ISPRS Journal of Photogrammetry and Remote Sensing.
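    The view-synthesis supervision described in this abstract can be sketched as follows: predicted depth and relative pose reproject each target pixel into the source frame, and the photometric error between the target frame and the warped source frame becomes the training signal. This is a minimal numpy sketch under assumed toy intrinsics; all names are illustrative and none of it is the paper's actual code.

```python
# Minimal numpy sketch of the view-synthesis (photometric) loss behind
# self-supervised monocular depth training. Toy intrinsics and all names
# are illustrative assumptions, not the paper's implementation.
import numpy as np

def reproject(depth, K, T):
    """Map each target pixel to source-view pixel coordinates using the
    predicted depth map and the relative camera pose T (target -> source)."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)   # back-project
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    src = K @ (T @ cam_h)[:3]                               # transform + project
    return (src[0] / src[2]).reshape(H, W), (src[1] / src[2]).reshape(H, W)

def photometric_loss(target, source, depth, K, T):
    """L1 error between the target frame and the source frame warped into the
    target viewpoint. Nearest-neighbour sampling is kept for brevity; real
    training would use differentiable bilinear sampling plus an SSIM term."""
    H, W = target.shape
    us, vs = reproject(depth, K, T)
    ui = np.clip(np.round(us).astype(int), 0, W - 1)
    vi = np.clip(np.round(vs).astype(int), 0, H - 1)
    return float(np.mean(np.abs(target - source[vi, ui])))

# Sanity check: identity pose and constant depth warp a frame onto itself.
H, W = 8, 8
K = np.array([[10.0, 0, W / 2], [0, 10.0, H / 2], [0, 0, 1.0]])
frame = np.random.rand(H, W)
loss = photometric_loss(frame, frame, np.ones((H, W)), K, np.eye(4))
```

    In training, this loss would be minimized jointly over the depth and pose networks; the contrastive term the paper adds would be a separate component on top.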

    Assessment of the CORONA series of satellite imagery for landscape archaeology: a case study from the Orontes valley, Syria

    In 1995, a large database of satellite imagery with worldwide coverage, taken from 1960 until 1972, was declassified. The main advantages of this imagery, known as CORONA, that made it attractive for archaeology were its moderate cost and its historical value. The main disadvantages were its unknown quality, format and geometry, and the limited base of known applications. This thesis has sought to explore the properties and potential of CORONA imagery and thus enhance its value for applications in landscape archaeology. In order to ground these investigations in a real dataset, the properties and characteristics of CORONA imagery were explored through the case study of a landscape archaeology project working in the Orontes Valley, Syria. Present-day high-resolution IKONOS imagery was integrated within the study and assessed alongside CORONA imagery. The combination of these two image datasets was shown to provide a powerful set of tools for investigating past archaeological landscapes in the Middle East. The imagery was assessed qualitatively through photointerpretation for its ability to detect archaeological remains, and quantitatively through the extraction of height information after the creation of stereomodels. The imagery was also assessed spectrally through fieldwork and spectroradiometric analysis, and for its Multiple View Angle (MVA) capability through visual and statistical analysis. Landscape archaeology requires a variety of data to be gathered from a large area in an effective and inexpensive way. This study demonstrates an effective methodology for the deployment of CORONA and IKONOS imagery and raises a number of technical points of which the archaeological research community needs to be aware. At the same time, it identifies certain limitations of the data and suggests solutions for the more effective exploitation of the strengths of CORONA imagery.

    Quality Assessment of Stereoscopic 360-degree Images from Multi-viewports

    Objective quality assessment of stereoscopic panoramic images has become a challenging problem owing to the rapid growth of 360-degree content. Unlike traditional 2D image quality assessment (IQA), 3D omnidirectional IQA involves more complex aspects, especially the unlimited field of view (FoV) and the extra depth perception, which make it difficult to evaluate the quality of experience (QoE) of 3D omnidirectional images. In this paper, we propose a multi-viewport based full-reference stereo 360 IQA model. Because viewports change freely while browsing in a head-mounted display (HMD), our proposed approach processes the image inside the FoV rather than a projected one such as the equirectangular projection (ERP). In addition, since overall QoE depends on both image quality and depth perception, we utilize features estimated from the difference map between the left and right views, which reflects disparity. The depth perception features, along with binocular image qualities, are employed to predict the overall QoE of 3D 360 images. The experimental results on our public Stereoscopic OmnidirectionaL Image quality assessment Database (SOLID) show that the proposed method achieves a significant improvement over some well-known IQA metrics and can accurately reflect the overall QoE of perceived images.
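    Processing viewports rather than the projected image, as this abstract describes, requires extracting a rectilinear viewport from the equirectangular frame. A minimal gnomonic-projection sketch with nearest-neighbour sampling; the function name and all parameters are illustrative, not the paper's code.

```python
import numpy as np

def erp_viewport(erp, lon0, lat0, fov, n):
    """Extract an n x n rectilinear viewport centred at (lon0, lat0) radians
    from an equirectangular image via gnomonic projection,
    nearest-neighbour sampling."""
    H, W = erp.shape[:2]
    t = np.tan(fov / 2)
    x, y = np.meshgrid(np.linspace(-t, t, n), np.linspace(-t, t, n))
    norm = np.sqrt(x ** 2 + y ** 2 + 1)          # viewport rays on unit sphere
    vx, vy, vz = x / norm, y / norm, 1.0 / norm
    # Rotate the rays: first by lat0 about the x-axis, then lon0 about the y-axis.
    vy2 = vy * np.cos(lat0) - vz * np.sin(lat0)
    vz2 = vy * np.sin(lat0) + vz * np.cos(lat0)
    vx3 = vx * np.cos(lon0) + vz2 * np.sin(lon0)
    vz3 = -vx * np.sin(lon0) + vz2 * np.cos(lon0)
    lon = np.arctan2(vx3, vz3)                   # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(vy2, -1, 1))         # latitude in [-pi/2, pi/2]
    u = np.clip(((lon / (2 * np.pi) + 0.5) * W).astype(int), 0, W - 1)
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return erp[v, u]

# Front-facing 90-degree viewport from a toy ERP image.
H, W = 100, 200
erp = np.arange(H * W, dtype=float).reshape(H, W)
vp = erp_viewport(erp, 0.0, 0.0, np.pi / 2, 9)
```

    In a full-reference setting, the same viewports would be cut from the reference and distorted left/right views, scored by a 2D metric, and then aggregated together with the disparity-based depth features.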

    Stereoscopic Omnidirectional Image Quality Assessment Based on Predictive Coding Theory

    Objective quality assessment of stereoscopic omnidirectional images is a challenging problem since it is influenced by multiple aspects such as projection deformation, field of view (FoV) range, binocular vision and visual comfort. Existing studies show that classic 2D or 3D image quality assessment (IQA) metrics do not perform well for stereoscopic omnidirectional images. However, very few research works have focused on evaluating the perceptual visual quality of omnidirectional images, especially stereoscopic ones. In this paper, based on the predictive coding theory of the human visual system (HVS), we propose a stereoscopic omnidirectional image quality evaluator (SOIQE) to cope with the characteristics of 3D 360-degree images. Two modules are involved in SOIQE: a predictive-coding-based binocular rivalry module and a multi-view fusion module. In the binocular rivalry module, we introduce predictive coding theory to simulate the competition between high-level patterns and calculate the similarity and rivalry dominance to obtain the quality scores of viewport images. Moreover, we develop the multi-view fusion module to aggregate the quality scores of viewport images with the help of both content weight and location weight. The proposed SOIQE is a parametric model that requires no regression learning, which ensures its interpretability and generalization performance. Experimental results on our published stereoscopic omnidirectional image quality assessment database (SOLID) demonstrate that our proposed SOIQE method outperforms state-of-the-art metrics. Furthermore, we also verify the effectiveness of each proposed module on both public stereoscopic image datasets and panoramic image datasets.
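    The multi-view fusion step described above can be sketched as a weighted average of per-viewport scores. Here viewport variance stands in for the content weight and a Gaussian latitude falloff for the location weight; both are illustrative proxies, not the paper's actual weighting functions.

```python
import numpy as np

def fuse_viewport_scores(scores, viewports, lats, sigma=0.5):
    """Aggregate per-viewport quality scores into a single score.
    content weight  : viewport variance (crude saliency proxy)
    location weight : Gaussian falloff with latitude, favouring the equator
    """
    content_w = np.array([vp.var() + 1e-8 for vp in viewports])
    location_w = np.exp(-np.asarray(lats, float) ** 2 / (2 * sigma ** 2))
    w = content_w * location_w
    return float(np.sum(w * np.asarray(scores, float)) / np.sum(w))

# Two identical viewports at the equator: fusion reduces to a plain mean.
vp = np.array([[0.0, 1.0], [2.0, 3.0]])
fused = fuse_viewport_scores([1.0, 3.0], [vp, vp], [0.0, 0.0])
```

    The design point is that a viewport near a pole or with little content should pull the final score less than a salient, equator-centred one, which a plain average cannot express.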

    Guidance for benthic habitat mapping: an aerial photographic approach

    This document, Guidance for Benthic Habitat Mapping: An Aerial Photographic Approach, describes proven technology that can be applied in an operational manner by state-level scientists and resource managers. This information is based on the experience gained by NOAA Coastal Services Center staff and state-level cooperators in the production of a series of benthic habitat data sets in Delaware, Florida, Maine, Massachusetts, New York, Rhode Island, the Virgin Islands, and Washington, as well as during Center-sponsored workshops on coral remote sensing and seagrass and aquatic habitat assessment. (PDF contains 39 pages) The original benthic habitat document, NOAA Coastal Change Analysis Program (C-CAP): Guidance for Regional Implementation (Dobson et al.), was published by the Department of Commerce in 1995. That document summarized procedures that were to be used by scientists throughout the United States to develop consistent and reliable coastal land cover and benthic habitat information. Advances in technology and new methodologies for generating these data created the need for this updated report, which builds upon the foundation of its predecessor

    Perceptual Quality-of-Experience of Stereoscopic 3D Images and Videos

    With the fast development of 3D acquisition, communication, processing and display technologies, automatic quality assessment of 3D images and videos has become ever more important. Nevertheless, recent progress on 3D image quality assessment (IQA) and video quality assessment (VQA) remains limited. The purpose of this research is to investigate various aspects of human visual quality-of-experience (QoE) when viewing stereoscopic 3D images/videos and to develop objective quality assessment models that automatically predict the visual QoE of 3D images/videos. First, we create a new subjective 3D-IQA database with two features lacking in the literature: the inclusion of both 2D and 3D images, and the inclusion of mixed distortion types. We observe a strong distortion-type-dependent bias when using the direct average of 2D image quality to predict 3D image quality. We propose a binocular rivalry inspired multi-scale model to predict the quality of stereoscopic images, and the results show that the proposed model eliminates the prediction bias, leading to significantly improved quality predictions. Second, we carry out two subjective studies on depth perception of stereoscopic 3D images. The first follows a traditional framework where subjects are asked to rate depth quality directly on distorted stereopairs. The second uses a novel approach, where the stimuli are synthesized independently of the background image content and the subjects are asked to identify depth changes and label the polarities of depth. Our analysis shows that the second approach is much more effective at singling out the contributions of stereo cues in depth perception. We introduce the notion of a depth perception difficulty index (DPDI) and propose a novel computational model for DPDI prediction. The results show that the proposed model leads to highly promising DPDI prediction performance.
Third, we carry out subjective 3D-VQA experiments on two databases that contain various asymmetrically compressed stereoscopic 3D videos. We then compare different mixed-distortion asymmetric stereoscopic video coding schemes with symmetric coding methods and verify their potential coding gains. We propose a model to account for the prediction bias from using the direct average of 2D video quality to predict 3D video quality. The results show that the proposed model leads to significantly improved quality predictions and can help us predict the coding gain of mixed-distortion asymmetric video compression. Fourth, we investigate the problem of objective quality assessment of multiview-plus-depth (MVD) images, with a main focus on the pre-depth-image-based-rendering (pre-DIBR) case. We find that existing IQA methods are difficult to employ as a guiding criterion in the optimization of MVD video coding and transmission systems when applied post-DIBR. We propose a novel pre-DIBR method based on information content weighting of both texture and depth images, which demonstrates competitive performance against state-of-the-art IQA models applied post-DIBR.
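    The bias correction for direct averaging of per-view 2D qualities can be sketched as a rivalry-weighted combination. A common single-scale proxy weights each view's quality by its local contrast energy, so the dominant (higher-energy) view contributes more than a plain average would allow; the energy measure and all names here are illustrative assumptions, not the thesis's actual multi-scale model.

```python
import numpy as np

def rivalry_weighted_quality(q_left, q_right, left, right):
    """Combine per-view 2D quality scores into one stereoscopic score,
    weighting each view by its local contrast energy (mean squared gradient)
    as a crude single-scale proxy for binocular-rivalry dominance."""
    def energy(img):
        gy, gx = np.gradient(np.asarray(img, float))
        return float(np.mean(gx ** 2 + gy ** 2)) + 1e-12
    el, er = energy(left), energy(right)
    return (el * q_left + er * q_right) / (el + er)

# With identical views the weights match and the model reduces to the direct
# average; asymmetric distortion (e.g. one blurred view) shifts weight to
# the sharper, rivalry-dominant view.
img = np.tile(np.arange(8.0), (8, 1))
q3d = rivalry_weighted_quality(2.0, 4.0, img, img)
```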

    Compression and Subjective Quality Assessment of 3D Video

    In recent years, three-dimensional television (3D TV) has been broadly considered the successor to existing traditional two-dimensional television (2D TV) sets. With its capability of offering a dynamic and immersive experience, 3D video (3DV) is expected to expand beyond conventional video in several applications in the near future. However, 3D content requires more than a single view to deliver the depth sensation to the viewer and this, inevitably, increases the bitrate compared to the corresponding 2D content. This need drives the research trend in the video compression field towards more advanced and more efficient algorithms. Currently, Advanced Video Coding (H.264/AVC) is the state-of-the-art video coding standard, developed by the Joint Video Team of ISO/IEC MPEG and ITU-T VCEG. This codec has been widely adopted in various applications and products such as TV broadcasting, video conferencing, mobile TV and Blu-ray discs. One important extension of H.264/AVC, Multiview Video Coding (MVC), addresses the compression of multiple views by taking into consideration the inter-view dependency between different views of the same scene. This codec, H.264/AVC with its MVC extension (H.264/MVC), can be used for encoding either conventional stereoscopic video, including only two views, or multiview video, including more than two views. In spite of the high performance of H.264/MVC, a typical multiview video sequence requires a huge amount of storage space, which is proportional to the number of offered views. The available views are still limited, and research has been devoted to synthesizing an arbitrary number of views from multiview video and depth maps (MVD). This process is mandatory for auto-stereoscopic displays (ASDs), where many views are required at the viewer side and there is no way to transmit such a relatively large number of views with currently available broadcasting technology.
Therefore, to satisfy the growing demand for 3D-related applications, it is necessary to further decrease the bitrate by introducing new and more efficient algorithms for compressing multiview video and depth maps. This thesis tackles 3D content compression targeting different formats, i.e. stereoscopic video and depth-enhanced multiview video. The stereoscopic video compression algorithms introduced in this thesis mostly focus on proposing different types of asymmetry between the left and right views. This means reducing the quality of one view relative to the other, aiming to achieve better subjective quality than the symmetric case (the reference) under the same bitrate constraint. The proposed algorithms for optimizing depth-enhanced multiview video compression include both texture compression schemes and depth map coding tools. Some of the coding schemes proposed for this format include asymmetric quality between the views. Since objective metrics are not able to accurately estimate the subjective quality of stereoscopic content, subjective quality assessment is performed to evaluate the different codecs. Moreover, when the concept of asymmetry is introduced, the Human Visual System (HVS) performs a fusion process which is not completely understood. Therefore, another important aspect of this thesis is conducting several subjective tests and reporting the subjective ratings to evaluate the perceived quality of the proposed coded content against the references. Statistical analysis is carried out in the thesis to assess the validity of the subjective ratings and to determine the best performing test cases.
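    The statistical analysis of subjective ratings mentioned above typically starts from per-test-case mean opinion scores with confidence intervals. A minimal sketch, assuming a normal approximation and omitting BT.500-style observer screening; the ratings below are hypothetical.

```python
import numpy as np

def mos_with_ci(ratings, z=1.96):
    """Mean opinion score and 95% confidence interval for one test case,
    using the normal approximation (BT.500-style observer screening omitted).
    ratings: scores given by different viewers to the same coded sequence."""
    r = np.asarray(ratings, float)
    mos = float(r.mean())
    half = z * r.std(ddof=1) / np.sqrt(len(r))
    return mos, (mos - half, mos + half)

# Six hypothetical viewer ratings for one asymmetrically coded sequence.
mos, ci = mos_with_ci([4, 5, 4, 5, 4, 5])
```

    Two coding schemes are then typically declared significantly different only when their confidence intervals do not overlap, which is how "best performing test cases" can be determined from the ratings.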