
    Enhanced change detection index for disaster response, recovery assessment and monitoring of buildings and critical facilities - a case study for Muzaffarabad, Pakistan

    The availability of Very High Resolution (VHR) optical sensors and a growing, frequently updated image archive allows change detection to be used for robust and rapid results in post-disaster recovery and monitoring. The proposed semi-automated, GIS object-based method uses readily available pre-disaster GIS data and incorporates existing knowledge into the processing to enhance change detection. It also allows targeting specific types of change pertaining to similar man-made objects such as buildings and critical facilities. The change detection method is based on a pre/post normalized index, gradient of intensity, texture, and edge similarity filters within each object, together with a set of training data. More emphasis is put on building edges to capture structural damage when quantifying post-disaster change. Once change has been quantified on the basis of the training data, the method can be applied automatically to detect change and observe recovery over time in potentially large areas. Analysis over time can also contribute to a full picture of recovery and development after a disaster, giving managers a better understanding of productive management and recovery practices. Recovery and monitoring can be analyzed using the index in zones extending from the epicentre of the disaster, or within administrative boundaries, over time.
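    The pre/post index described above can be sketched per object as follows; the 0.4/0.6 weights and the specific combination of an intensity term and an edge term are hypothetical stand-ins for the paper's trained index:

    ```python
    import numpy as np

    def normalized_change_index(pre, post, mask):
        """Toy per-object change index between pre- and post-event images,
        restricted to a building-footprint mask. Combines a normalized
        intensity difference with a gradient (edge) dissimilarity term,
        with extra weight on edges, echoing the paper's emphasis on
        building edges for structural damage. Weights are assumptions."""
        eps = 1e-9
        pre_v = pre[mask].astype(float)
        post_v = post[mask].astype(float)
        # Normalized difference of mean intensity inside the object
        nd = abs(pre_v.mean() - post_v.mean()) / (pre_v.mean() + post_v.mean() + eps)
        # Edge dissimilarity: compare mean gradient magnitudes inside the object
        gp = np.hypot(*np.gradient(pre.astype(float)))
        gq = np.hypot(*np.gradient(post.astype(float)))
        edge = abs(gp[mask].mean() - gq[mask].mean()) / (gp[mask].mean() + gq[mask].mean() + eps)
        # Weighted combination; extra weight on the edge term (hypothetical weights)
        return 0.4 * nd + 0.6 * edge
    ```

    An unchanged building yields an index near zero, while a collapsed roof changes both the mean intensity and the edge content, driving the index up; a threshold learned from training data would then flag it as changed.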

    Image resolution enhancement using dual-tree complex wavelet transform

    In this letter, a complex wavelet-domain image resolution enhancement algorithm based on the estimation of wavelet coefficients is proposed. The method uses a forward and inverse dual-tree complex wavelet transform (DT-CWT) to construct a high-resolution (HR) image from a given low-resolution (LR) image. The HR image is reconstructed from the LR image, together with a set of wavelet coefficients, using the inverse DT-CWT. The set of wavelet coefficients is estimated from the DT-CWT decomposition of a rough estimate of the HR image. Results are presented and discussed on very high resolution QuickBird data, through comparisons with state-of-the-art resolution enhancement methods.
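    The reconstruction idea can be illustrated with a single-level real Haar synthesis standing in for the inverse DT-CWT (a deliberate simplification), treating the LR image as the lowpass band of an unknown 2x-size HR image and estimating the detail subbands; the gradient-based detail estimate here is likewise a hypothetical stand-in for the paper's coefficient estimation:

    ```python
    import numpy as np

    def enhance_haar(lr, detail_scale=0.0):
        """Wavelet-domain resolution enhancement sketch. The LR image is
        taken as the lowpass band; detail bands are estimated (here from
        LR gradients, an assumption) and a single-level inverse 2-D Haar
        synthesis reconstructs an image of twice the size."""
        LL = lr.astype(float) * 2.0  # undo the Haar analysis scaling
        gy, gx = np.gradient(lr.astype(float))
        LH, HL = detail_scale * gy, detail_scale * gx  # estimated details
        HH = np.zeros_like(LL)
        hr = np.zeros((2 * lr.shape[0], 2 * lr.shape[1]))
        # Inverse orthonormal Haar: four subbands -> four pixel phases
        hr[0::2, 0::2] = (LL + HL + LH + HH) / 2
        hr[0::2, 1::2] = (LL - HL + LH - HH) / 2
        hr[1::2, 0::2] = (LL + HL - LH - HH) / 2
        hr[1::2, 1::2] = (LL - HL - LH + HH) / 2
        return hr
    ```

    With zero details this reduces to nearest-neighbour upsampling; a better detail estimate (in the paper, from the DT-CWT of a rough HR estimate) is what sharpens edges beyond plain interpolation.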

    Unlocking Masked Autoencoders as Loss Function for Image and Video Restoration

    Image and video restoration has achieved a remarkable leap with the advent of deep learning. The success of the deep learning paradigm rests on three key components: data, model, and loss. To date, many efforts have been devoted to the first two, while few studies focus on the loss function. Starting from the question "are the de facto optimization functions, e.g., L1, L2, and perceptual losses, optimal?", we explore the potential of the loss and advance the belief that a learned loss function empowers the learning capability of neural networks for image and video restoration. Concretely, we stand on the shoulders of the masked autoencoder (MAE) and formulate it as a learned loss function, owing to the fact that the pre-trained MAE innately inherits a prior of image reasoning. We investigate the efficacy of this belief from three perspectives: 1) from task-customized MAE to native MAE, 2) from image tasks to video tasks, and 3) from transformer structures to convolutional neural network structures. Extensive experiments across multiple image and video tasks, including image denoising, image super-resolution, image enhancement, guided image super-resolution, video denoising, and video enhancement, demonstrate the consistent performance improvements introduced by the learned loss function. In addition, the learned loss function is preferable because it can be plugged directly into existing networks during training without adding any computation at inference. Code will be publicly available.
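    The "learned loss" recipe can be sketched with a frozen feature extractor in place of the pre-trained MAE; the fixed random projection below is a hypothetical stand-in for MAE features, and the 0.1 pixel-term weight is likewise an assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Frozen "encoder": a fixed random projection standing in for a
    # pre-trained MAE (hypothetical; the paper uses real MAE features).
    W = rng.standard_normal((64, 256)) / 16.0

    def encode(img):
        """Flatten a 16x16 patch and project it into a 64-d feature space."""
        return W @ img.reshape(-1)

    def learned_loss(pred, target):
        """L1 distance between frozen-encoder features of the restored
        output and the ground truth, plus a small pixel-space L1 term.
        The encoder is used only during training, so inference cost of
        the restoration network is unchanged."""
        feat = np.abs(encode(pred) - encode(target)).mean()
        pix = np.abs(pred - target).mean()
        return feat + 0.1 * pix
    ```

    During training, `learned_loss` simply replaces (or augments) the usual L1/L2 objective; at test time the encoder is discarded, which is why the approach adds no inference-stage computation.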

    A Non-Reference Evaluation of Underwater Image Enhancement Methods Using a New Underwater Image Dataset

    The rise of vision-based environmental, marine, and oceanic exploration research highlights the need for supporting underwater image enhancement techniques that help mitigate water effects on images such as blurriness, low color contrast, and poor quality. This paper presents an evaluation of common underwater image enhancement techniques using a new underwater image dataset. The collected dataset comprises 100 images of aquatic plants taken at shallow depths of up to three meters at three different locations in Lake Superior, USA, via a Remotely Operated Vehicle (ROV) equipped with a high-definition RGB camera. In particular, we use our dataset to benchmark nine state-of-the-art image enhancement models at three different depths using a set of common non-reference image quality evaluation metrics. We then provide a comparative analysis of the performance of the selected models at different depths and highlight the most prevalent ones. The obtained results show that the selected image enhancement models are capable of producing considerably better-quality images, with some models performing better than others at certain depths.
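    Non-reference evaluation means scoring an enhanced image without a clean ground-truth counterpart. A minimal example of one such cue is grey-level histogram entropy (illustrative only; not claimed to be among the paper's chosen metrics):

    ```python
    import numpy as np

    def shannon_entropy(img, bins=256):
        """Shannon entropy (in bits) of the intensity histogram, a simple
        no-reference quality cue: low-contrast, murky underwater frames
        concentrate their histogram and score low, while enhanced frames
        with richer contrast spread it out and score higher."""
        hist, _ = np.histogram(img, bins=bins, range=(0, 255))
        p = hist / hist.sum()
        p = p[p > 0]  # drop empty bins before taking logs
        return float(-(p * np.log2(p)).sum())
    ```

    Comparing such scores before and after enhancement, per model and per depth, is the pattern the benchmark follows, just with a fuller set of established non-reference metrics.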