79 research outputs found
Unsupervised Hierarchical Domain Adaptation for Adverse Weather Optical Flow
Optical flow estimation has made great progress, but it usually degrades
under adverse weather. Although semi- and fully-supervised methods have made
good attempts, the domain shift between synthetic and real adverse weather
images deteriorates their performance. To alleviate this issue, our starting
point is to transfer knowledge from the clean source domain to the degraded
target domain in an unsupervised manner. Our key insight is that adverse
weather does not change the intrinsic optical flow of the scene, but causes a
significant difference in the warp error between clean and degraded images. In this work,
we propose the first unsupervised framework for adverse weather optical flow
via hierarchical motion-boundary adaptation. Specifically, we first employ
image translation to construct the transformation relationship between clean
and degraded domains. In motion adaptation, we utilize the flow consistency
knowledge to align the cross-domain optical flows into a motion-invariance
common space, where the optical flow from clean weather is used as the
guidance-knowledge to obtain a preliminary optical flow for adverse weather.
Furthermore, we leverage the warp error inconsistency, which measures the motion
misalignment of the boundary between the clean and degraded domains, and
propose a joint intra- and inter-scene boundary contrastive adaptation to
refine the motion boundary. The hierarchical motion and boundary adaptations
jointly promote optical flow estimation in a unified framework. Extensive
quantitative and qualitative experiments verify the superiority of the
proposed method.
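The warp-error cue behind this insight can be illustrated with a minimal NumPy sketch. The names `warp` and `warp_error` are illustrative, and nearest-neighbour backward warping stands in for the bilinear sampling a real optical-flow pipeline would use:

```python
import numpy as np

def warp(img, flow):
    """Backward-warp an image with a dense flow field (nearest-neighbour for brevity)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # sample source coordinates displaced by the flow, clipped to the image
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[ys2, xs2]

def warp_error(frame1, frame2, flow):
    """Mean absolute photometric error after warping frame2 back onto frame1.
    A correct flow yields a low error on clean images; degradation inflates it."""
    return np.abs(frame1 - warp(frame2, flow)).mean()
```

With a correct flow, the warped second frame matches the first and the error is near zero; a wrong flow (or weather-induced appearance change) makes the error large, which is exactly the inconsistency the boundary adaptation exploits.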
Jointly Optimizing Image Compression with Low-light Image Enhancement
Learning-based image compression methods have made great progress, but most
are designed for generic natural images. In practice, low-light images
frequently occur due to unavoidable environmental influences or technical
limitations, such as insufficient lighting or limited exposure time. Once
low-light images are compressed by existing general image compression
approaches, useful information (e.g., texture details) is lost, resulting in a
dramatic performance decrease in low-light image enhancement. To
simultaneously achieve a higher compression rate and better enhancement
performance for low-light images, we propose a novel image compression
framework with joint optimization of low-light image enhancement. We design an
end-to-end trainable two-branch architecture with lower computational cost,
consisting of a main enhancement branch and a signal-to-noise ratio (SNR)
aware branch. Experimental results show that our proposed joint optimization
framework achieves a significant improvement over existing "Compress before
Enhance" or "Enhance before Compress" sequential solutions for low-light
images. Source codes are included in the supplementary material.
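The SNR-aware idea can be sketched simply: estimate a per-pixel SNR map by treating a local mean as "signal" and the residual as "noise". This is only an illustration; `snr_map` is a hypothetical name, and a plain box filter stands in for whatever denoiser the actual branch uses:

```python
import numpy as np

def snr_map(img, k=3, eps=1e-6):
    """Rough per-pixel SNR estimate: local mean as 'signal', residual as 'noise'.
    Dark, noisy regions score low; bright, smooth regions score high."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    mean = np.zeros_like(img)
    for i in range(h):                      # dense loop for clarity, not speed
        for j in range(w):
            mean[i, j] = padded[i:i + k, j:j + k].mean()
    noise = np.abs(img - mean)
    return mean / (noise + eps)
```

Such a map lets the network allocate bits and enhancement capacity according to local signal quality rather than uniformly across the frame.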
From Sky to the Ground: A Large-scale Benchmark and Simple Baseline Towards Real Rain Removal
Learning-based image deraining methods have made great progress. However, the
lack of large-scale, high-quality paired training samples is the main bottleneck
hampering real image deraining (RID). To address this dilemma and advance
RID, we construct a Large-scale High-quality Paired real rain benchmark
(LHP-Rain), including 3000 video sequences with 1 million high-resolution
(1920*1080) frame pairs. The advantages of the proposed dataset over existing
ones are three-fold: higher-diversity, larger-scale rain; higher-resolution
images; and higher-quality ground truth. Specifically, the
real rains in LHP-Rain not only contain the classical rain
streak/veiling/occlusion in the sky, but also the \textbf{splashing on the
ground} overlooked by the deraining community. Moreover, we propose a novel
robust low-rank tensor recovery model to generate the ground truth by better
separating the static background from the dynamic rain. In addition, we design
a simple transformer-based single image deraining baseline, which
simultaneously utilizes self-attention and cross-layer attention within the
image and rain layers with discriminative feature representation. Extensive
experiments verify the superiority of the proposed dataset and deraining
method over the state-of-the-art.
Comment: Accepted by ICCV 202
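The low-rank intuition behind the ground-truth generation can be sketched with a truncated SVD: across frames, the static background is (approximately) low-rank, while dynamic rain is the residual. This is a simplified stand-in for the paper's robust tensor model, and `lowrank_background` is an illustrative name:

```python
import numpy as np

def lowrank_background(frames, rank=1):
    """Approximate the static background of a frame stack by a rank-r fit;
    the residual captures dynamic content such as rain."""
    n, h, w = frames.shape
    M = frames.reshape(n, h * w).T                # pixels x frames
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # low-rank part = background
    bg = L.T.reshape(n, h, w)
    return bg, frames - bg                        # residual = rain layer
```

A robust formulation would additionally penalize the residual with a sparsity prior so large rain streaks cannot leak into the background estimate.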
Unsupervised Deraining: Where Contrastive Learning Meets Self-similarity
Image deraining is a typical low-level image restoration task, which aims at
decomposing the rainy image into two distinguishable layers: the clean image
layer and the rain layer. Most existing learning-based deraining methods are
trained in a supervised manner on synthetic rainy-clean pairs. The domain gap
between synthetic and real rain limits their generalization to different real
rainy scenes. Moreover, existing methods mainly exploit the properties of the
two layers independently, while few have considered the mutually exclusive
relationship between them. In this work, we propose a novel
non-local contrastive learning (NLCL) method for unsupervised image deraining.
Consequently, we utilize not only the intrinsic self-similarity property within
samples but also the mutually exclusive property between the two layers, so as
to better distinguish the rain layer from the clean image. Specifically,
non-local self-similar image-layer patches are pulled together as positives,
while similar rain-layer patches are pushed away as negatives. Thus, similar
positive/negative samples that are close in the original space help us learn a
more discriminative representation. Apart from the self-similarity sampling
strategy, we analyze how to choose an appropriate feature encoder in NLCL.
Extensive experiments on different real rainy datasets demonstrate that the
proposed method achieves state-of-the-art performance in real deraining.
Comment: 10 pages, 10 figures, accepted to 2022CVP
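The pull/push mechanism is an InfoNCE-style contrastive loss on patch features. The sketch below, with illustrative names and L2-normalised feature vectors, is not the paper's exact formulation:

```python
import numpy as np

def info_nce(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style loss: pull the anchor toward positives (similar clean
    patches) and push it away from negatives (rain patches)."""
    def norm(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)
    a, p, n = norm(anchor), norm(positives), norm(negatives)
    pos = np.exp(p @ a / tau)            # similarity to each positive
    neg = np.exp(n @ a / tau).sum()      # total similarity to negatives
    return -np.log(pos / (pos + neg)).mean()
```

Minimizing this loss makes features of self-similar image patches cluster while rain-patch features are repelled, which is what lets the two layers be told apart without paired supervision.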
Unsupervised Deraining: Where Asymmetric Contrastive Learning Meets Self-similarity
Most existing learning-based deraining methods are trained in a supervised
manner on synthetic rainy-clean pairs. The domain gap between synthetic
and real rain limits their generalization to complex real rainy scenes.
Moreover, existing methods mainly exploit the properties of the image or rain
layers independently, while few have considered their mutually exclusive
relationship. To solve the above dilemma, we explore the intrinsic
intra-similarity within each layer and inter-exclusiveness between two layers
and propose an unsupervised non-local contrastive learning (NLCL) deraining
method. The non-local self-similar image patches are tightly pulled together
as positives, while rain patches are remarkably pushed away as negatives, and
vice versa. On the one hand, the intrinsic self-similarity knowledge within
the positive/negative samples of each layer helps us discover a more compact
representation; on the other hand, the mutually exclusive property between the
two layers enriches the discriminative decomposition. Thus, the internal
self-similarity within each layer (similarity) and the external exclusive
relationship between the two layers (dissimilarity), serving as a generic
image prior, jointly facilitate unsupervised differentiation of the rain from
the clean image. We further discover that the intrinsic dimension of the
non-local image patches is generally higher than that of the rain patches.
This motivates us to design an asymmetric contrastive loss that precisely
models the compactness discrepancy of the two layers for better discriminative
decomposition. In addition, considering that existing real rain datasets are
of low quality, either small in scale or downloaded from the internet, we
collect a real large-scale dataset under various kinds of rainy weather that
contains high-resolution rainy images.
Comment: 16 pages, 15 figures.
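The intrinsic-dimension observation can be made concrete with a crude proxy: count the principal components needed to explain most of a patch set's variance. The paper's actual estimator may differ; `intrinsic_dim` is an illustrative name:

```python
import numpy as np

def intrinsic_dim(patches, var_ratio=0.95):
    """Crude intrinsic-dimension proxy: number of principal components needed
    to explain `var_ratio` of the variance of a (num_patches, dim) patch set."""
    X = patches - patches.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(energy, var_ratio) + 1)
```

Rain streaks are highly repetitive (roughly one stripe pattern at varying intensity), so their patches concentrate in few directions, while image patches spread across many; the asymmetric loss exploits exactly this compactness gap.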
Global and local exploitation for saliency using bag-of-words
The guidance of attention helps the human visual system detect objects rapidly. In this study, the authors present a new saliency detection algorithm using a bag-of-words (BOW) representation. Salient regions are regarded as arising from globally rare features and from regions that locally differ from their surroundings. The approach consists of three stages. First, the global rarity of visual words is calculated: a vocabulary, i.e. a group of visual words, is generated from the given image, and a rarity factor is introduced for each visual word according to its occurrence. Second, local contrast is calculated: local patches are represented by histograms of words, and local contrast is computed as the difference between the BOW histograms of a patch and its surroundings. Finally, saliency is measured by combining global rarity and local patch contrast. The model is compared with previous methods on natural images, and experimental results demonstrate its good performance and fair consistency with human eye fixations.
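The two cues can be sketched directly. Below, rarity is taken as negative log frequency of each visual word and local contrast as a chi-square-style histogram distance; both are common choices, not necessarily the authors' exact definitions, and the function names are illustrative:

```python
import numpy as np

def word_rarity(word_ids, vocab_size):
    """Global rarity factor per visual word: rarer words score higher."""
    counts = np.bincount(word_ids.ravel(), minlength=vocab_size).astype(float)
    freq = counts / counts.sum()
    return -np.log(freq + 1e-12)

def local_contrast(hist_patch, hist_surround):
    """Chi-square-like distance between the normalised BOW histograms of a
    patch and its surroundings; larger means the patch stands out locally."""
    d = (hist_patch - hist_surround) ** 2 / (hist_patch + hist_surround + 1e-12)
    return 0.5 * d.sum()
```

Per-pixel saliency then follows by combining the rarity of the word at each location with the contrast of its enclosing patch, e.g. as a weighted sum or product.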
Nonlinear Deblurring for Low-Light Saturated Image
Single image deblurring has achieved significant progress for natural daytime images. Saturation is a common phenomenon in blurry images, due to low-light conditions and long exposure times. Conventional linear deblurring methods usually handle natural blurry images well but produce severe ringing artifacts when recovering low-light saturated blurry images. To solve this problem, we formulate saturated deblurring as a nonlinear model in which all saturated and unsaturated pixels are modeled adaptively. Specifically, we introduce a nonlinear function into the convolution operator to account for saturation in the presence of blurring. The proposed method has two advantages over previous methods. On the one hand, it restores natural images with the same high quality as conventional deblurring methods, while reducing estimation errors in saturated areas and suppressing ringing artifacts. On the other hand, compared with recent saturation-based deblurring methods, it captures the formation of unsaturated and saturated degradations directly, rather than through cumbersome and error-prone detection steps. Note that this nonlinear degradation model can be naturally formulated within a maximum a posteriori (MAP) framework and efficiently decoupled into several solvable sub-problems via the alternating direction method of multipliers (ADMM). Experimental results on both synthetic and real-world images demonstrate that the proposed deblurring algorithm outperforms state-of-the-art low-light saturation-based deblurring methods.
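The forward model of this kind, y = R(k * x) with a clipping nonlinearity R, can be sketched as follows. This is only an illustration of why a linear model fails around light sources, with an illustrative function name and a simple edge-padded correlation standing in for the actual blur operator:

```python
import numpy as np

def saturated_blur(x, kernel, clip_max=1.0):
    """Forward model sketch: blur, then apply the saturation nonlinearity
    R(.) that clips intensities at the sensor maximum. A purely linear model
    would wrongly let blurred values exceed clip_max near light sources."""
    h, w = x.shape
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(x, ((pad_h, pad_h), (pad_w, pad_w)), mode='edge')
    blurred = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return np.clip(blurred, 0.0, clip_max)   # saturation nonlinearity R(.)
```

Inverting a model that ignores R(.) attributes the clipped energy to the wrong pixels, which is the source of the ringing artifacts described above.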
Study on Flexural Fatigue Properties of POM Fiber Airport Pavement Concrete
Polyoxymethylene (POM) fiber is a new polymer fiber with the potential to improve the performance of airport pavement concrete. The effect of POM fiber on the flexural fatigue properties of concrete is an important issue for its application in airport pavement concrete. In this study, four-point flexural fatigue experiments were conducted at four stress levels using ordinary performance concrete (OPC) and POM fiber airport pavement concrete (PFAPC) with fiber volume contents of 0.6% and 1.2%, to examine the flexural fatigue characteristics of these materials. After examining the change in flexural fatigue deformation using the cycle ratio (n/N), a two-parameter Weibull distribution test of flexural fatigue life was performed, and a flexural fatigue life equation was then constructed considering various failure probabilities (survival rates). The results show that POM fiber had no discernible impact on the static load strength of airport pavement concrete: the difference between PFAPC and OPC in static load strength was less than 5%. POM fiber can substantially increase the flexural fatigue deformation capacity of airport pavement concrete, by almost 100%, but it had a detrimental impact, to varying degrees, on the fatigue life of airport pavement concrete compared to OPC, with a maximum decrease of 85%. The fatigue lives of OPC and PFAPC adhered to the two-parameter Weibull distribution; the single- and double-log fatigue equations considering various failure probabilities, based on the two-parameter Weibull distribution, achieved a high fitting degree, with R² essentially over 0.90. The ultimate fatigue strength of PFAPC was roughly 4% lower than that of OPC. This study of the flexural fatigue properties of POM fiber airport pavement concrete has clear value for extending POM fiber to the construction of long-life airport pavements.
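The two-parameter Weibull fit used in such fatigue studies can be sketched with the standard linearisation and median-rank plotting positions. The function name is illustrative and the exact plotting-position formula in the study may differ:

```python
import numpy as np

def weibull_fit(lives):
    """Fit a two-parameter Weibull distribution to fatigue lives via the usual
    linearisation ln(ln(1/(1-F))) = m*ln(n) - m*ln(eta), using median ranks
    F_i = (i - 0.3)/(N + 0.4). Returns shape m and scale eta."""
    n = np.sort(np.asarray(lives, dtype=float))
    N = len(n)
    F = (np.arange(1, N + 1) - 0.3) / (N + 0.4)   # median-rank estimate
    y = np.log(np.log(1.0 / (1.0 - F)))
    x = np.log(n)
    m, c = np.polyfit(x, y, 1)                    # slope = m, intercept = -m*ln(eta)
    return m, np.exp(-c / m)
```

Repeating the fit at each stress level, and reading off the life quantile for a chosen survival rate, yields the failure-probability-dependent S-N equations described above.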
- …