19 research outputs found

    Machine Learning for Earth Systems Modeling, Analysis and Predictability

    Artificial intelligence (AI) and machine learning (ML) methods and applications have been continuously explored in many areas of scientific research. While these methods have led to many advances in climate science, there remains room for growth, especially in Earth system modeling, analysis, and predictability. Because of their high computational expense and the large volumes of complex data they produce, Earth system models (ESMs) offer abundant opportunities both to enhance our understanding of the climate system and to improve the performance of ESMs themselves using ML techniques. Here I demonstrate three specific areas of development using ML: statistical downscaling, predictability using non-linear latent spaces, and emulation of complex parameterizations. These three areas of research illustrate the ability of innovative ML methods to advance our understanding of climate systems through ESMs. In Aim 1, I present a first application of a fast super-resolution convolutional neural network (FSRCNN) based approach for downscaling ESM simulations. We adapt the FSRCNN to improve reconstruction on ESM data, a variant we term FSRCNN-ESM. We find that FSRCNN-ESM outperforms FSRCNN and other super-resolution methods in reconstructing high-resolution images, producing finer spatial-scale features with better accuracy for surface temperature, surface radiative fluxes, and precipitation. In Aim 2, I construct a novel Multi-Input Multi-Output Autoencoder-decoder (MIMO-AE) in an application of multi-task learning (MTL) to capture the non-linear relationship between Southern California precipitation (SC-PRECIP) and tropical Pacific Ocean sea surface temperature (TP-SST) on monthly time-scales. I find that the MIMO-AE index provides enhanced predictability of SC-PRECIP for lead times of up to four months compared to the Niño 3.4 index and the El Niño Southern Oscillation Longitudinal Index. I also use an MTL method to extend a convolutional long short-term memory (conv-LSTM) network to predict the Niño 3.4 index by including multiple input variables known to be associated with ENSO, namely sea level pressure (SLP), outgoing longwave radiation (OLR), and surface-level zonal winds (U). In Aim 3, I demonstrate the capability of deep neural networks (DNNs) for learning computationally expensive parameterizations in ESMs. This study develops a DNN to replace the full radiation model in the E3SM.
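    FSRCNN-style networks operate in low-resolution feature space and upsample only at the end, via a learned transposed convolution. A minimal single-channel sketch of that upsampling mechanism, assuming an arbitrary illustrative kernel rather than any trained weights from the work above:

    ```python
    import numpy as np

    def transposed_conv2d(x, kernel, stride):
        """Naive 2-D transposed convolution (single channel, no padding).
        Each input pixel scatters a scaled copy of the kernel into the
        output, enlarging the spatial grid by roughly `stride`."""
        H, W = x.shape
        kH, kW = kernel.shape
        out = np.zeros((H * stride + kH - stride, W * stride + kW - stride))
        for i in range(H):
            for j in range(W):
                out[i * stride:i * stride + kH,
                    j * stride:j * stride + kW] += x[i, j] * kernel
        return out

    # A 2x2 "image" upsampled 2x with a 2x2 kernel yields a 4x4 output.
    hr = transposed_conv2d(np.ones((2, 2)), np.ones((2, 2)), stride=2)
    ```

    In a trained network the kernel weights are learned, so the layer jointly interpolates and sharpens rather than merely enlarging the grid.
    
    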

    Target-oriented Domain Adaptation for Infrared Image Super-Resolution

    Recent efforts have explored leveraging visible light images to enrich texture details in infrared (IR) super-resolution. However, this direct adaptation approach often becomes a double-edged sword, as it improves texture at the cost of introducing noise and blurring artifacts. To address these challenges, we propose the Target-oriented Domain Adaptation SRGAN (DASRGAN), an innovative framework specifically engineered for robust IR super-resolution model adaptation. DASRGAN operates on the synergy of two key components: 1) Texture-Oriented Adaptation (TOA) to refine texture details meticulously, and 2) Noise-Oriented Adaptation (NOA), dedicated to minimizing noise transfer. Specifically, TOA uniquely integrates a specialized discriminator, incorporating a prior extraction branch, and employs a Sobel-guided adversarial loss to align texture distributions effectively. Concurrently, NOA utilizes a noise adversarial loss to distinctly separate the generative and Gaussian noise pattern distributions during adversarial training. Our extensive experiments confirm DASRGAN's superiority. Comparative analyses against leading methods across multiple benchmarks and upsampling factors reveal that DASRGAN sets new state-of-the-art performance standards. Code is available at https://github.com/yongsongH/DASRGAN.
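    The core idea behind a Sobel-guided texture objective is to compare edge statistics rather than raw pixels. A minimal sketch, assuming a plain L1 distance between Sobel gradient magnitudes (DASRGAN embeds the Sobel prior in an adversarial loss, which this simplification does not reproduce):

    ```python
    import numpy as np

    SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    SOBEL_Y = SOBEL_X.T

    def conv2d_valid(img, k):
        """Direct 'valid' 2-D correlation of a single-channel image."""
        H, W = img.shape
        kH, kW = k.shape
        out = np.zeros((H - kH + 1, W - kW + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kH, j:j + kW] * k)
        return out

    def sobel_texture_loss(sr, hr):
        """L1 distance between Sobel gradient magnitudes of a
        super-resolved image and a high-resolution reference."""
        def grad_mag(img):
            return np.hypot(conv2d_valid(img, SOBEL_X),
                            conv2d_valid(img, SOBEL_Y))
        return float(np.mean(np.abs(grad_mag(sr) - grad_mag(hr))))
    ```

    Matching gradient magnitudes penalizes missing or spurious edges while being insensitive to low-frequency intensity shifts, which is why edge-domain losses pair well with noise-suppression terms.
    
    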

    Multimodal image super-resolution via joint sparse representations induced by coupled dictionaries

    Real-world data processing problems often involve various image modalities associated with a certain scene, including RGB images, infrared images, or multispectral images. The fact that different image modalities often share certain attributes, such as edges, textures, and other structure primitives, represents an opportunity to enhance various image processing tasks. This paper proposes a new approach to construct a high-resolution (HR) version of a low-resolution (LR) image, given another HR image modality as guidance, based on joint sparse representations induced by coupled dictionaries. The proposed approach captures complex dependency correlations, including similarities and disparities, between different image modalities in a learned sparse feature domain in lieu of the original image domain. It consists of two phases: a coupled dictionary learning phase and a coupled super-resolution phase. The learning phase learns a set of dictionaries from the training dataset to couple different image modalities together in the sparse feature domain. In turn, the super-resolution phase leverages such dictionaries to construct an HR version of the LR target image with another related image modality for guidance. In the advanced version of our approach, a multistage strategy and a neighbourhood regression concept are introduced to further improve the model capacity and performance. Extensive guided image super-resolution experiments on real multimodal images demonstrate that the proposed approach admits distinctive advantages with respect to the state-of-the-art approaches, for example, overcoming the texture copying artifacts commonly resulting from inconsistency between the guidance and target images. Of particular relevance, the proposed model demonstrates much better robustness than competing deep models in a range of noisy scenarios.
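    The coupling idea can be sketched in a few lines: assume a pair of dictionaries that share one sparse code, recover that code from the LR observation, and synthesize the HR patch with it. The greedy OMP solver and toy dictionaries below are illustrative assumptions, not the paper's coupled learning algorithm:

    ```python
    import numpy as np

    def omp(D, y, k):
        """Orthogonal Matching Pursuit: greedily build a k-sparse code of
        y over dictionary D by repeatedly picking the atom most correlated
        with the residual and re-fitting via least squares."""
        residual = y.copy()
        support = []
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ residual)))
            support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        alpha = np.zeros(D.shape[1])
        alpha[support] = coef
        return alpha

    def coupled_sr(D_lr, D_hr, y_lr, k=2):
        """Sparse-code the LR patch over D_lr, then synthesize the HR
        patch from D_hr with the *same* code -- the coupling assumption."""
        alpha = omp(D_lr, y_lr, k)
        return D_hr @ alpha
    ```

    Because both dictionaries are trained to explain paired patches with one shared code, estimating the code from the cheap modality is enough to reconstruct the expensive one.
    
    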

    A Novel Domain Transfer-Based Approach for Unsupervised Thermal Image Super-Resolution

    This paper presents a domain transfer strategy to tackle the limitations of low-resolution thermal sensors and generate higher-resolution images of reasonable quality. The proposed technique employs a CycleGAN architecture and uses a ResNet as an encoder in the generator, along with an attention module and a novel loss function. The network is trained on a multi-resolution thermal image dataset acquired with three different thermal sensors. Benchmark results on the 2nd CVPR PBVS 2021 thermal image super-resolution challenge show better performance than state-of-the-art methods. The code of this work is available online.
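    CycleGAN-style training is unsupervised because it never compares a translated image to a paired target; instead it demands that mapping to the other domain and back reproduces the input. A minimal sketch of that cycle-consistency term, with `G` and `F` as placeholder callables standing in for the paper's generators:

    ```python
    import numpy as np

    def cycle_consistency_loss(x, G, F):
        """L1 cycle loss ||F(G(x)) - x||_1: G maps the source domain to
        the target domain (e.g. low-res to high-res thermal) and F maps
        back; the round trip should recover the original image."""
        return float(np.mean(np.abs(F(G(x)) - x)))
    ```

    In the full objective this term is added to the adversarial losses of both generators; it is what lets the model learn from unpaired multi-sensor data.
    
    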