
    Diffusion-based inpainting for coding remote-sensing data

    Inpainting techniques based on partial differential equations (PDEs) such as diffusion processes are gaining growing importance as a novel family of image compression methods. Nevertheless, the application of inpainting to hyperspectral imagery has mainly focused on filling in missing information or dead pixels caused by sensor failures. In this paper we propose a novel PDE-based inpainting algorithm to compress hyperspectral images. The method inpaints the known data separately in the spatial and in the spectral dimensions. It then applies a prediction model to the final inpainting solution to obtain a representation much closer to the original image. Experimental results over a set of hyperspectral images indicate that the proposed algorithm can perform better than a recently proposed extension to the prediction-based standard CCSDS-123.0 at low bit rates, better than JPEG 2000 Part 2 with the DWT 9/7 as a spectral transform at all bit rates, and competitive with JPEG 2000 with principal component analysis (PCA), the optimal spectral decorrelation transform for Gaussian sources.
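The core PDE mechanism the abstract refers to can be illustrated with homogeneous (heat-equation) diffusion inpainting: known pixels are clamped to their stored values while unknown pixels are iteratively averaged from their neighbours. This is a minimal sketch, not the paper's actual codec; the function name, the periodic boundary handling via `np.roll`, and the step size are assumptions for illustration.

```python
import numpy as np

def diffusion_inpaint(image, known_mask, iters=500, dt=0.2):
    """Homogeneous diffusion inpainting (sketch): known pixels stay fixed,
    unknown pixels are filled by iterating the discrete heat equation."""
    # initialize unknown pixels with the mean of the known data
    u = np.where(known_mask, image, image[known_mask].mean())
    for _ in range(iters):
        # 5-point Laplacian; np.roll gives periodic boundaries (a simplification)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + dt * lap
        u[known_mask] = image[known_mask]   # re-impose the known data
    return u

# toy example: keep a sparse grid of pixels, inpaint the rest
rng = np.random.default_rng(0)
img = rng.random((32, 32))
mask = np.zeros_like(img, dtype=bool)
mask[::4, ::4] = True
rec = diffusion_inpaint(img, mask)
```

With `dt <= 0.25` each update is a convex combination of neighbouring values, so the reconstruction obeys a discrete maximum principle: inpainted values stay within the range of the known data.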

    Remote sensing data retouching based on image inpainting algorithms in the forgery generation problem

    We investigate image retouching algorithms for generating forged Earth remote sensing data. We provide an overview of existing neural network solutions in the field of generation and inpainting of remote sensing images. To retouch Earth remote sensing data, we use image inpainting algorithms based on convolutional neural networks and generative adversarial neural networks. We pay special attention to a generative neural network with a separate contour prediction block that includes two series-connected generative adversarial subnets. The first subnet inpaints the contours of the image within the retouched area. The second subnet uses the inpainted contours to generate the resulting retouched area. As a baseline for comparison, we use an exemplar-based image inpainting algorithm. We carry out computational experiments to study the effectiveness of these algorithms when retouching real remote sensing data of various types. We perform a comparative analysis of the quality of the algorithms considered, depending on the type, shape, and size of the retouched objects and areas. We give qualitative and quantitative characteristics of the efficiency of the studied image inpainting algorithms when retouching Earth remote sensing data. We experimentally demonstrate the advantage of generative adversarial neural networks in the creation of forged remote sensing data. This research was funded by RFBR within scientific projects No. 20-37-70053 (Sections 2.2 and 3.1), No. 19-07-00138 (Section 3.2 and the Introduction), and No. 18-01-00667 (Section 2.1), and by the Ministry of Science and Higher Education of the Russian Federation within the State assignment of the FSRC "Crystallography and Photonics" RAS (Section 1).
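Whatever the generator architecture, inpainting-based retouching ultimately reduces to compositing the generated content into the masked region while keeping the original pixels elsewhere. The sketch below shows only that final compositing step; the function name and array layout are assumptions, and the generated patch would in practice come from the trained network.

```python
import numpy as np

def retouch(image, mask, generated):
    """Composite a generated patch into the retouched (masked) region;
    pixels outside the mask are kept from the original image.
    image, generated: (H, W, 3) arrays; mask: (H, W), 1 inside the hole."""
    mask = mask.astype(image.dtype)[..., None]  # broadcast over channels
    return mask * generated + (1.0 - mask) * image

# toy example: replace a 2x2 region of a white image with black content
img = np.ones((4, 4, 3))
gen = np.zeros((4, 4, 3))
m = np.zeros((4, 4))
m[1:3, 1:3] = 1
out = retouch(img, m, gen)
```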

    A Comprehensive Review of Image Restoration and Noise Reduction Techniques

    Images play a crucial role in modern life and find applications in diverse fields, ranging from preserving memories to conducting scientific research. However, images often suffer from various forms of degradation such as blur, noise, and contrast loss. These degradations make images difficult to interpret, reduce their visual quality, and limit their practical applications. To overcome these challenges, image restoration and noise reduction techniques have been developed to recover degraded images and enhance their quality. These techniques have gained significant importance in recent years, especially with the increasing use of digital imaging in fields such as medical imaging, surveillance, and satellite imaging. This paper presents a comprehensive review of image restoration and noise reduction techniques, encompassing spatial- and frequency-domain methods as well as deep-learning-based techniques. The paper also discusses the evaluation metrics used to assess the effectiveness of these techniques and explores future research directions in this field. The primary objective of this paper is to offer a comprehensive understanding of the concepts and methods involved in image restoration and noise reduction.
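Two of the building blocks such a review covers can be sketched briefly: a spatial-domain median filter, a classic edge-preserving denoiser for impulse noise, and PSNR, one of the standard evaluation metrics the abstract mentions. This is an illustrative sketch only; the function names and the brute-force windowing are choices made here, not from the paper.

```python
import numpy as np

def median_filter(img, k=3):
    """Spatial-domain median filter (sketch): each output pixel is the
    median of its k-by-k neighbourhood, with edge-replicated padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = np.mean((ref - test) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# toy example: a single salt pixel is removed entirely by the median
noisy = np.zeros((8, 8))
noisy[4, 4] = 1.0
clean = median_filter(noisy)
```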

    Predictive World Models from Real-World Partial Observations

    Cognitive scientists believe adaptable intelligent agents like humans perform reasoning through learned causal mental simulations of agents and environments. The problem of learning such simulations is called predictive world modeling. Recently, reinforcement learning (RL) agents leveraging world models have achieved SOTA performance in game environments. However, understanding how to apply the world modeling approach in complex real-world environments relevant to mobile robots remains an open question. In this paper, we present a framework for learning a probabilistic predictive world model for real-world road environments. We implement the model using a hierarchical VAE (HVAE) capable of predicting a diverse set of fully observed plausible worlds from accumulated sensor observations. While prior HVAE methods require complete states as ground truth for learning, we present a novel sequential training method that allows HVAEs to learn to predict complete states from partially observed states only. We experimentally demonstrate accurate spatial structure prediction of deterministic regions, achieving 96.21 IoU, and close the gap to perfect prediction by 62% for stochastic regions using the best prediction. By extending HVAEs to cases where complete ground truth states do not exist, we facilitate continual learning of spatial prediction as a step towards realizing explainable and comprehensive predictive world models for real-world mobile robotics applications. Code is available at https://github.com/robin-karlsson0/predictive-world-models.
    Comment: Accepted for IEEE MOST 202
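The key training idea, learning to predict complete states when only partial observations exist as supervision, can be illustrated with a masked reconstruction loss: the error is computed only over observed entries, so unobserved regions contribute no gradient. This sketch is an assumption about the general pattern, not the paper's actual HVAE objective.

```python
import numpy as np

def partial_observation_loss(pred, target, observed):
    """Mean squared reconstruction error over observed entries only
    (sketch): unobserved regions are excluded, so a model can be trained
    without complete ground-truth states."""
    diff = (pred - target) ** 2
    return diff[observed].mean()

# toy example: the error at the unobserved cell (value 5.0) is ignored
pred = np.zeros((2, 2))
target = np.array([[0.0, 5.0],
                   [0.0, 0.0]])
obs = np.array([[True, False],
                [True, True]])
loss = partial_observation_loss(pred, target, obs)
```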

    Color Sparse Representations for Image Processing: Review, Models, and Prospects

    Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is made here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this framework shows that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced, using unconstrained filters. In this model, the spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, which gives an efficient color representation.
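The separation the abstract describes, spatial morphology in the atoms and color in per-atom filters, can be sketched as a reconstruction where each grayscale atom is paired with a 3x3 color matrix applied to a 3-vector coefficient. The function name, shapes, and the outer-product form are assumptions made here to illustrate the idea, not the paper's exact formulation.

```python
import numpy as np

def reconstruct_patch(atoms, filters, coeffs):
    """Color-filtering sparse model (sketch): atoms[k] (n pixels, grayscale)
    carries spatial morphology; the 3x3 color filter filters[k] maps the
    3-vector coefficient coeffs[k] to an RGB color. The reconstructed patch
    is the sum of atom/color outer products, shape (n, 3)."""
    patch = np.zeros((atoms.shape[1], 3))
    for a_k, F_k, c_k in zip(atoms, filters, coeffs):
        patch += np.outer(a_k, F_k @ c_k)   # (n,) x (3,) -> (n, 3)
    return patch

# toy example: one flat atom, identity color filter, pure-red coefficient
atoms = np.ones((1, 4))
filters = np.eye(3)[None]
coeffs = np.array([[1.0, 0.0, 0.0]])
patch = reconstruct_patch(atoms, filters, coeffs)
```

Because color lives in the filters rather than in extra atoms, the same spatial dictionary covers all color variations without growing.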

    Fusing spatial and temporal components for real-time depth data enhancement of dynamic scenes

    The depth images from consumer depth cameras (e.g., structured-light/ToF devices) exhibit a substantial amount of artifacts (e.g., holes, flickering, ghosting) that need to be removed for real-world applications. Existing methods cannot entirely remove them and run slowly. This thesis proposes a new real-time spatio-temporal depth image enhancement filter that completely removes flickering and ghosting and significantly reduces holes. This thesis also presents a novel depth-data capture setup and two data reduction methods to optimize the performance of the proposed enhancement method.
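The temporal side of such spatio-temporal filtering can be illustrated with a simple per-pixel fusion: an exponential moving average suppresses frame-to-frame flickering, and holes (zero-depth pixels) in the current frame are filled from the previous fused estimate. This is a minimal sketch under those assumptions, not the thesis's actual filter; the function name, the zero-as-hole convention, and the blending weight are choices made here.

```python
import numpy as np

def fuse_depth(prev_fused, curr, alpha=0.3):
    """Temporal depth fusion (sketch): blend the current frame into the
    running estimate to suppress flicker; pixels reported as 0 (holes)
    are filled from the previous fused estimate instead."""
    hole = curr == 0
    fused = (1.0 - alpha) * prev_fused + alpha * curr
    fused[hole] = prev_fused[hole]   # temporal hole filling
    return fused

# toy example: one hole in the current frame, rest blended toward curr
prev = np.ones((4, 4))
curr = 2.0 * np.ones((4, 4))
curr[0, 0] = 0.0
fused = fuse_depth(prev, curr)
```

A full pipeline would follow this with a spatial pass (e.g., an edge-preserving filter) for holes that persist across frames.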