Holistic Attention-Fusion Adversarial Network for Single Image Defogging
Adversarial learning-based image defogging methods have been extensively
studied in computer vision due to their remarkable performance. However, most
existing methods have limited defogging capability in real cases because they
are trained on paired clear and synthesized foggy images of the same scenes.
In addition, they have difficulty preserving vivid colors and rich texture
details during defogging. To address these issues, we develop a novel
generative adversarial network, called the holistic attention-fusion
adversarial network (HAAN), for single image defogging. HAAN consists of a
Fog2Fogfree block and a Fogfree2Fog block. Each block contains three
learning-based modules, namely fog removal, color-texture recovery, and fog
synthesis, which constrain each other to generate high-quality images. HAAN is
designed to exploit the self-similarity of texture and structure information
by learning the holistic channel-spatial feature correlations between a foggy
image and its several derived images. Moreover, in the fog synthesis module,
we utilize the atmospheric scattering model to improve the generative quality
by focusing on atmospheric light optimization with a novel sky segmentation
network. Extensive experiments on both synthetic and real-world datasets show
that HAAN outperforms state-of-the-art defogging methods in terms of
quantitative accuracy and subjective visual quality.
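The atmospheric scattering model that guides the fog synthesis module is standard in the defogging literature; as an illustration, here is a minimal NumPy sketch of fog synthesis under that model (function and variable names are ours, not HAAN's):

```python
import numpy as np

def synthesize_fog(clear, transmission, airlight):
    """Atmospheric scattering model: I(x) = J(x) * t(x) + A * (1 - t(x)).

    clear:        J, clear scene radiance, H x W x 3 floats in [0, 1]
    transmission: t, per-pixel medium transmission, H x W in (0, 1]
    airlight:     A, global atmospheric light, 3-vector in [0, 1]
    """
    t = transmission[..., None]          # broadcast over color channels
    return clear * t + airlight * (1.0 - t)

# Toy example: a black scene with uniform transmission 0.5 and white
# airlight comes out mid-gray everywhere.
J = np.zeros((4, 4, 3))
t = np.full((4, 4), 0.5)
A = np.ones(3)
I = synthesize_fog(J, t, A)
```

Estimating the airlight A well is what HAAN's sky segmentation network targets, since the sky region largely determines A in outdoor scenes.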
Accelerating Large Batch Training via Gradient Signal to Noise Ratio (GSNR)
As models for natural language processing (NLP), computer vision (CV) and
recommendation systems (RS) require surging computation, large numbers of
GPUs/TPUs are run in parallel as a large batch (LB) to improve training
throughput. However, training such LB tasks often suffers from a large
generalization gap and degraded final accuracy, which limits further enlarging
the batch size. In this work, we develop the variance-reduced gradient descent
technique (VRGD) based on the gradient signal-to-noise ratio (GSNR) and apply
it to popular optimizers such as SGD/Adam/LARS/LAMB. We carry out a
theoretical analysis of the convergence rate to explain its fast training
dynamics, and a generalization analysis to demonstrate its smaller
generalization gap on LB training. Comprehensive experiments demonstrate that
VRGD can accelerate training, narrow the generalization gap and improve final
accuracy. We push the batch size limit of BERT pretraining up to 128k/64k and
DLRM to 512k without noticeable accuracy loss, and improve ImageNet Top-1
accuracy at a 96k batch size over LARS. The generalization gap of BERT and
ImageNet training is significantly reduced.
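As an illustration of the quantity VRGD builds on, the GSNR of a gradient component is its squared mean divided by its variance across training samples; a minimal NumPy sketch (our own toy code, not the paper's implementation):

```python
import numpy as np

def gsnr(per_sample_grads, eps=1e-12):
    """Gradient signal-to-noise ratio per parameter.

    per_sample_grads: array of shape (n_samples, n_params), one gradient
                      row per training sample.
    GSNR_j = mean_j**2 / (var_j + eps): large when the samples agree on
    the gradient direction, near zero when the gradient is mostly noise.
    """
    mean = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0)
    return mean ** 2 / (var + eps)

# Two parameters: the first gets a consistent gradient across samples
# (high GSNR), the second gets zero-mean noise (GSNR of zero).
g = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
scores = gsnr(g)
```

Intuitively, components with low GSNR contribute mostly noise to the batch gradient, which is why a variance-reduction scheme keyed on GSNR can help large-batch training generalize.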
ReFusion: Learning Image Fusion from Reconstruction with Learnable Loss via Meta-Learning
Image fusion aims to combine information from multiple source images into a
single one with more comprehensive informational content. The significant
challenges for deep learning-based image fusion algorithms are the lack of a
definitive ground truth as well as a corresponding distance measure; current
manually designed loss functions constrain model flexibility and
generalizability across unified fusion tasks. To overcome these limitations, we
introduce a unified image fusion framework based on meta-learning, named
ReFusion, which provides a learning paradigm that obtains the optimal fusion
loss for various fusion tasks based on reconstructing the source images.
Compared to existing methods, ReFusion employs a parameterized loss function,
dynamically adjusted by the training framework according to the specific
scenario and task. ReFusion consists of three components: a fusion
module, a loss proposal module, and a source reconstruction module. To ensure
the fusion module maximally preserves the information from the source images,
enabling the reconstruction of the source images from the fused image, we adopt
a meta-learning strategy to train the loss proposal module using reconstruction
loss. The update of the fusion module relies on the fusion loss proposed by the
loss proposal module. The alternating updates of the three modules mutually
facilitate each other, aiming to propose an appropriate fusion loss for
different tasks and yield satisfactory fusion results. Extensive experiments
demonstrate that ReFusion is capable of adapting to various tasks, including
infrared-visible, medical, multi-focus, and multi-exposure image fusion. The
code will be released
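To illustrate the idea of a fusion loss with learnable parameters that a meta-learner could adjust, here is a deliberately simplified sketch (a softmax-weighted per-source MSE; the paper's actual loss parameterization is more elaborate and is not reproduced here):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def parameterized_fusion_loss(fused, sources, logits):
    """Weighted pixel loss whose per-source weights are learnable.

    fused:   fused image, H x W
    sources: list of source images, each H x W
    logits:  learnable parameters; softmax(logits) gives source weights.
    A meta-learner would update `logits` (here, by hand-waving: via the
    reconstruction loss) so that minimizing this loss yields a fused
    image from which the sources can be reconstructed.
    """
    w = softmax(logits)
    return sum(wi * np.mean((fused - s) ** 2) for wi, s in zip(w, sources))

a = np.zeros((2, 2))
b = np.ones((2, 2))
# Equal logits -> equal weights -> the loss is the average of both MSEs.
loss = parameterized_fusion_loss(np.full((2, 2), 0.5), [a, b], np.zeros(2))
```

In ReFusion the analogous parameters are produced by the loss proposal module and updated via meta-learning on the reconstruction loss, rather than fixed by hand as in this toy.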
Modelling underground coal gasification: What to start with
Underground coal gasification (UCG) is widely regarded as a clean coal technology that holds enormous potential to decarbonize the world's coal industry. It converts coal underground into combustible syngas through a set of complex physicochemical events. Experimental and numerical efforts over the past century have contributed to the development of UCG around the world; however, tapping the world's deep-situated coal resources with UCG requires substantial contributions from numerous high-quality researchers. To facilitate effective engagement, this paper provides a background on where to start if one wishes to undertake UCG modelling. First, a brief description of the fundamental phenomena involved in UCG is given. Then, a succinct introduction to the widely used modelling software is provided, followed by a description of UCG studies to offer insight into how to tune the various software packages for modelling UCG and where their strengths lie. This paper shall serve as guidance to new UCG modellers
Overexpression of the FBA and TPI genes promotes high production of HDMF in Zygosaccharomyces rouxii
4-Hydroxy-2,5-dimethyl-3(2H)-furanone (HDMF) is widely used in the food industry as a spice and flavoring agent with high market demand. In this study, fructose-1,6-bisphosphate aldolase (FBA) and triose phosphate isomerase (TPI) were overexpressed in Zygosaccharomyces rouxii, singly and in combination, via electroporation. High-yield HDMF-engineered yeast strains were constructed by combining the analysis of gene expression levels obtained by quantitative real-time PCR with HDMF production measured by HPLC. The results showed a significant positive correlation between the production of HDMF and the expression levels of the FBA and TPI genes in yeast; the expression levels of the FBA and TPI genes were also positively correlated (p < 0.05). Compared with the wild type (WT), the engineered strains F10-D, T17-D, and TF15-A showed marked increases in HDMF production and FBA and TPI gene expression (p < 0.05) and exhibited strong genetic stability with no obvious differences in biomass or colony morphology. In addition, the exogenous addition of d-fructose promoted the growth of Z. rouxii. Among the engineered strains, when fermented in YPD media supplemented with d-fructose for 5 days, TF15-A (overexpressing the FBA and TPI genes) generated the highest HDMF production of 13.39 mg/L, 1.91 times that of the wild-type strain. These results indicate that FBA and TPI, key enzymes involved in the process of HDMF biosynthesis by Z. rouxii, positively regulate the synthesis of HDMF at the transcriptional level. d-Fructose can be used as a precursor for the biosynthesis of HDMF by engineered yeast in industrial production
The IPIN 2019 Indoor Localisation Competition—Description and Results
The IPIN 2019 Competition, the sixth in a series of IPIN competitions, was held at the CNR Research Area of Pisa (IT), integrated into the program of the IPIN 2019 Conference. It included two on-site real-time Tracks and three off-site Tracks. The four Tracks presented in this paper were set in the same environment, made of two buildings close together, with a total usable area of 1,000 m2 outdoors and 6,000 m2 indoors over three floors, and a total path length exceeding 500 m. IPIN competitions, based on the EvAAL framework, have aimed at comparing the accuracy performance of personal positioning systems in fair and realistic conditions: past editions of the competition were carried out in big conference settings, university campuses and a shopping mall. Positioning accuracy is computed while the person carrying the system under test walks at normal walking speed, uses lifts and goes up and down stairs or briefly stops at given points. Results presented here are a showcase of state-of-the-art systems tested side by side in real-world settings as part of the on-site real-time competition Tracks. Results for off-site Tracks allow a detailed and reproducible comparison of the most recent positioning and tracking algorithms in the same environment as the on-site Tracks
Effects of exogenous salicylic acid on alleviation of arsenic-induced oxidative damages in rice
Salicylic acid (SA) is a phenolic phytohormone that plays a vital role in plant development and mediates plant responses to many adverse conditions, including arsenic (As) stress. The effects of exogenous addition of SA on As tolerance and As accumulation were assessed in two cultivars of rice (Oryza sativa L.), Nipponbare and Zhongzao 39, hydroponically grown with Kimura B nutrient solution under arsenite [As(III)] and dimethylarsinic acid (DMA) exposure. In the second experiment, the influence of soaking seeds with SA on As uptake and As damage was investigated in rice (cv. Nipponbare) exposed to As(III) and DMA. The results showed that exogenous addition of SA significantly decreased the concentrations of hydrogen peroxide (H2O2) and malondialdehyde (MDA) in both As(III)- and DMA-stressed rice, indicating that SA alleviates As-induced oxidative damage in rice. SA increased the activity of antioxidant enzymes and, moreover, increased the relative amounts of glutathione (GSH) and ascorbate (ASA) by accelerating the GSH-ASA cycle. Exogenous addition of SA significantly decreased the As concentration in both roots and shoots of rice under As(III) stress by influencing the expression of genes encoding As transporters, viz. OsLsi1 and OsLsi2. The addition of SA significantly decreased the As content in shoots under DMA stress, which may be related to the expression of OsPTR7, involved in shoot xylem unloading. This finding may foster a novel perspective for reducing As accumulation in rice grains
SKDBERT: Compressing BERT via Stochastic Knowledge Distillation
In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model, dubbed SKDBERT. In each distillation iteration, SKD samples a teacher model from a pre-defined teacher team, which consists of multiple teacher models with multi-level capacities, to transfer knowledge into the student model in a one-to-one manner. The sampling distribution plays an important role in SKD. We heuristically present three types of sampling distributions to assign appropriate probabilities to multi-level teacher models. SKD has two advantages: 1) it preserves the diversity of the multi-level teacher models by stochastically sampling a single teacher model in each distillation iteration, and 2) it improves the efficacy of knowledge distillation via multi-level teacher models when a large capacity gap exists between the teacher model and the student model. Experimental results on the GLUE benchmark show that SKDBERT reduces the size of a BERT model by 40% while retaining 99.5% of its language-understanding performance and being 100% faster
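The per-iteration teacher sampling at the core of SKD can be illustrated with a small sketch (the toy teacher names and distribution are ours; any of the paper's three sampling distributions could be plugged in as `probs`):

```python
import random

def sample_teacher(teachers, probs, rng=random):
    """Sample one teacher per distillation iteration from a multi-level
    teacher team, according to the given sampling distribution."""
    return rng.choices(teachers, weights=probs, k=1)[0]

# Toy team of three capacity levels; this hypothetical distribution
# favors teachers closer to the student's capacity.
team = ["teacher-small", "teacher-base", "teacher-large"]
dist = [0.5, 0.3, 0.2]

rng = random.Random(0)  # fixed seed for reproducibility
picks = [sample_teacher(team, dist, rng) for _ in range(1000)]
# Over many iterations the empirical frequencies roughly track `dist`,
# so every capacity level still contributes to the student.
```

At each distillation step the student would then be trained against the single sampled teacher's outputs, rather than an ensemble of all teachers at once.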
An Unsupervised Attentive-Adversarial Learning Framework for Single Image Deraining
Single image deraining has been an important topic in low-level computer
vision tasks. The atmospheric veiling effect (which is generated by rain
accumulation, similar to fog) usually appears with the rain. Most deep
learning-based single image deraining methods mainly focus on rain streak
removal while disregarding this effect, which leads to low-quality deraining
performance. In addition, these methods are trained only on synthetic data,
hence they do not take into account real-world rainy images. To address the
above issues, we propose a novel unsupervised attentive-adversarial learning
framework (UALF) for single image deraining that trains on both synthetic and
real rainy images while simultaneously capturing both rain streaks and rain
accumulation features. UALF consists of a Rain-fog2Clean (R2C) transformation
block and a Clean2Rain-fog (C2R) transformation block. In R2C, to better
characterize the rain-fog fusion feature and to achieve high-quality deraining
performance, we employ an attention rain-fog feature extraction network (ARFE)
to exploit the self-similarity of global and local rain-fog information by
learning the spatial feature correlations. Moreover, to improve the
transformation ability of C2R, we design a rain-fog feature decoupling and
reorganization network (RFDR) by embedding a rainy image degradation model and
a mixed discriminator to preserve richer texture details. Extensive experiments
on benchmark rain-fog and rain datasets show that UALF outperforms
state-of-the-art deraining methods. We also conduct defogging performance
evaluation experiments to further demonstrate the effectiveness of UALF
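The rainy-image degradation model embedded in RFDR is not specified in this abstract; one commonly used form that combines rain streaks with an atmospheric veil can be sketched as follows (an assumption for illustration, not necessarily UALF's exact model):

```python
import numpy as np

def synthesize_rain_fog(clean, streaks, transmission, airlight):
    """One common rainy-image degradation model with an atmospheric veil:

        O(x) = t(x) * (J(x) + S(x)) + A * (1 - t(x))

    clean:        J, clean image, H x W x 3 in [0, 1]
    streaks:      S, additive rain-streak layer, H x W x 3
    transmission: t, per-pixel transmission of the rain-fog medium, H x W
    airlight:     A, global atmospheric light, 3-vector
    """
    t = transmission[..., None]          # broadcast over color channels
    return np.clip((clean + streaks) * t + airlight * (1.0 - t), 0.0, 1.0)

# Toy example: black scene, faint streaks, mild veil.
J = np.zeros((2, 2, 3))
S = np.full((2, 2, 3), 0.1)
t = np.full((2, 2), 0.8)
A = np.ones(3)
O = synthesize_rain_fog(J, S, t, A)
```

A Clean2Rain-fog block like C2R is, in effect, learning the forward direction of such a model, which is why embedding a degradation model can regularize the unsupervised training cycle.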