59 research outputs found
The Chinese Liberal Camp in Post-June 4th China
This paper assesses Chinese liberal intellectuals in the two decades following June 4th. It analyses their intellectual development; their attitudes toward the party-state, economic reform, and globalisation; their political endeavours; and their contributions to the project of constitutional democracy in China.
Les libéraux chinois dans la Chine post-Tiananmen
This article examines Chinese liberal intellectuals during the two decades following the events of June 4, 1989. It analyses the evolution of their ideas, as well as their attitudes toward the Party-state, economic reform, and globalisation. It recounts their efforts to make their voices heard in the political sphere and their contribution to promoting a project of constitutional democracy in China.
Decomposition Ascribed Synergistic Learning for Unified Image Restoration
Learning to restore multiple image degradations within a single model is
quite beneficial for real-world applications. Nevertheless, existing works
typically treat each degradation independently, and the relationships among
degradations remain underexploited for synergistic learning. To
this end, we revisit the diverse degradations through the lens of singular
value decomposition, with the observation that the decomposed singular vectors
and singular values naturally carry different types of degradation
information, dividing the restoration tasks into two groups, i.e., singular
vector dominated and singular value dominated. This analysis offers a more
unified perspective on the diverse degradations than
previous task-level independent learning. The dedicated optimization of
degraded singular vectors and singular values inherently exploits the latent
relationships among restoration tasks, giving rise to Decomposition Ascribed
Synergistic Learning (DASL). Specifically, DASL comprises two
effective operators, namely, Singular VEctor Operator (SVEO) and Singular VAlue
Operator (SVAO), to facilitate the decomposed optimization; both can be
lightly integrated into existing convolutional image restoration backbones.
Moreover, a congruous decomposition loss is devised as an auxiliary objective.
Extensive experiments on a blend of five image restoration tasks (image
deraining, dehazing, denoising, deblurring, and low-light image enhancement)
demonstrate the effectiveness of our method.
Comment: 13 pages
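The vector/value split the abstract describes can be illustrated with plain SVD (a minimal NumPy sketch, not the paper's DASL implementation; the toy image, its size, and the 0.3 darkening factor are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))  # toy grayscale image

# Decompose the image: U and Vt hold the singular vectors, s the singular values.
U, s, Vt = np.linalg.svd(img, full_matrices=False)
recon = (U * s) @ Vt  # exact reconstruction from the decomposition

# A global intensity drop (as in low-light images) rescales only the singular
# values and leaves the singular vectors untouched -- a "singular value
# dominated" degradation in the abstract's terminology.
dark = (U * (0.3 * s)) @ Vt
```

Degradations that perturb spatial structure (e.g. rain streaks) would instead mainly alter the singular vectors, which is the other group in the taxonomy.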
Iterative Prompt Learning for Unsupervised Backlit Image Enhancement
We propose a novel unsupervised backlit image enhancement method, abbreviated
as CLIP-LIT, by exploring the potential of Contrastive Language-Image
Pre-Training (CLIP) for pixel-level image enhancement. We show that the
open-world CLIP prior not only aids in distinguishing between backlit and
well-lit images, but also in perceiving heterogeneous regions with different
luminance, facilitating the optimization of the enhancement network. Unlike
high-level vision and image manipulation tasks, directly applying CLIP to
enhancement tasks is non-trivial owing to the difficulty of finding accurate prompts. To
solve this issue, we devise a prompt learning framework that first learns an
initial prompt pair by constraining the text-image similarity between the
prompt (negative/positive sample) and the corresponding image (backlit
image/well-lit image) in the CLIP latent space. Then, we train the enhancement
network based on the text-image similarity between the enhanced result and the
initial prompt pair. To further improve the accuracy of the initial prompt
pair, we iteratively fine-tune the prompt learning framework to reduce the
distribution gaps between the backlit images, enhanced results, and well-lit
images via rank learning, boosting the enhancement performance. Our method
alternates between updating the prompt learning framework and enhancement
network until visually pleasing results are achieved. Extensive experiments
demonstrate that our method outperforms state-of-the-art methods in terms of
visual quality and generalization ability, without requiring any paired data.
Comment: Accepted to ICCV 2023 as Oral. Project page:
https://zhexinliang.github.io/CLIP_LIT_page
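The core mechanism, scoring an image against a negative/positive prompt pair by similarity in a shared latent space, can be mimicked with placeholder embeddings (a toy NumPy sketch: the real CLIP text and image encoders are replaced by random vectors, so nothing here reflects the actual CLIP-LIT code):

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
dim = 8  # toy embedding dimension (CLIP embeddings are much larger)

# Stand-ins for the learned negative ("backlit") and positive ("well-lit")
# prompt embeddings; in CLIP-LIT these come from the text encoder.
neg_prompt = rng.normal(size=dim)
pos_prompt = rng.normal(size=dim)

# Toy image embeddings placed near their corresponding prompt.
backlit_img = neg_prompt + 0.1 * rng.normal(size=dim)
well_lit_img = pos_prompt + 0.1 * rng.normal(size=dim)

def looks_well_lit(img_emb):
    """An image counts as well-lit if it is closer to the positive prompt."""
    return cosine_sim(img_emb, pos_prompt) > cosine_sim(img_emb, neg_prompt)
```

In the method itself, the same similarity also serves as a training signal: the enhancement network is optimized to pull its output toward the positive prompt.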
Adaptive Window Pruning for Efficient Local Motion Deblurring
Local motion blur commonly occurs in real-world photography due to the mixing
between moving objects and stationary backgrounds during exposure. Existing
image deblurring methods predominantly focus on global deblurring,
inadvertently affecting the sharpness of backgrounds in locally blurred images
and wasting unnecessary computation on sharp pixels, especially for
high-resolution images. This paper aims to adaptively and efficiently restore
high-resolution locally blurred images. We propose a local motion deblurring
vision Transformer (LMD-ViT) built on adaptive window pruning Transformer
blocks (AdaWPT). To focus deblurring on local regions and reduce computation,
AdaWPT prunes unnecessary windows, only allowing the active windows to be
involved in the deblurring processes. The pruning operation relies on the
blurriness confidence predicted by a confidence predictor that is trained
end-to-end using a reconstruction loss with Gumbel-Softmax re-parameterization
and a pruning loss guided by annotated blur masks. Our method removes local
motion blur effectively without distorting sharp regions, demonstrated by its
exceptional perceptual and quantitative improvements compared to
state-of-the-art methods. In addition, our approach substantially reduces FLOPs
by 66% and achieves more than a twofold increase in inference speed compared to
Transformer-based deblurring methods. We will make our code and annotated blur
masks publicly available.
Comment: 17 pages
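The keep/prune decision trained with Gumbel-Softmax re-parameterization can be sketched as follows (a minimal NumPy version of the general technique; the per-window logits and temperature are invented, and this is not the LMD-ViT code):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of drawing a one-hot sample from the logits."""
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.random(logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + gumbel) / tau
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Per-window logits over [prune, keep]: one confidently "keep" window and one
# confidently "prune" window (hypothetical confidence-predictor outputs).
logits = np.array([[0.0, 5.0],
                   [5.0, 0.0]])
soft = gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0))
keep_mask = soft.argmax(axis=-1)  # hard keep/prune decision at inference
```

The soft samples let gradients flow through the discrete decision during training, while inference uses the hard mask so pruned windows cost no computation.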
Empowering Low-Light Image Enhancer through Customized Learnable Priors
Deep neural networks have achieved remarkable progress in enhancing low-light
images by improving their brightness and eliminating noise. However, most
existing methods construct end-to-end mapping networks heuristically,
neglecting the intrinsic prior of image enhancement task and lacking
transparency and interpretability. Although some unfolding solutions have been
proposed to relieve these issues, they rely on proximal operator networks that
deliver ambiguous and implicit priors. In this work, we propose a paradigm for
low-light image enhancement that explores the potential of customized learnable
priors to improve the transparency of the deep unfolding paradigm. Motivated by
the powerful feature representation capability of Masked Autoencoder (MAE), we
customize MAE-based illumination and noise priors and redevelop them from two
perspectives: 1) \textbf{structure flow}: we train the MAE from a normal-light
image to its illumination properties and then embed it into the proximal
operator design of the unfolding architecture; and 2) \textbf{optimization
flow}: we train MAE from a normal-light image to its gradient representation
and then employ it as a regularization term to constrain noise in the model
output. These designs improve the interpretability and representation
capability of the model. Extensive experiments on multiple low-light image
enhancement datasets demonstrate the superiority of our proposed paradigm over
state-of-the-art methods. Code is available at
https://github.com/zheng980629/CUE.
Comment: Accepted by ICCV 202
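The role of a gradient-based regularization term that constrains noise can be illustrated with a hand-written gradient penalty (a toy NumPy sketch; CUE's actual prior is a trained MAE, and the ramp image and noise level below are invented):

```python
import numpy as np

def gradient_penalty(img):
    """Toy stand-in for a learned gradient prior: sums squared finite
    differences, so noisy (high-gradient) outputs are penalized."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return float((dx ** 2).sum() + (dy ** 2).sum())

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))  # smooth ramp image
noisy = clean + 0.2 * rng.normal(size=clean.shape)

# Adding this penalty to the training loss pushes the model output toward the
# smooth image, because the noisy version incurs a much larger cost.
```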
MIPI 2022 Challenge on RGBW Sensor Re-mosaic: Dataset and Report
Developing and integrating advanced image sensors with novel algorithms in
camera systems is increasingly prevalent with the growing demand for
computational photography and imaging on mobile platforms. However, the lack of high-quality
data for research and the rare opportunity for in-depth exchange of views from
industry and academia constrain the development of mobile intelligent
photography and imaging (MIPI). To bridge the gap, we introduce the first MIPI
challenge including five tracks focusing on novel image sensors and imaging
algorithms. This paper introduces RGBW Joint Remosaic and Denoise, one of the
five tracks, which addresses the interpolation of an RGBW color filter array
(CFA) to a Bayer pattern at full resolution. The participants were provided with a new dataset including 70
(training) and 15 (validation) scenes of high-quality RGBW and Bayer pairs. In
addition, for each scene, RGBW images at three noise levels (0 dB, 24 dB, and
42 dB) were provided. All the data were captured using an RGBW sensor in both outdoor
and indoor conditions. The final results are evaluated using objective metrics
including PSNR, SSIM, LPIPS, and KLD. A detailed description of all models
developed in this challenge is provided in this paper. More details of this
challenge and the link to the dataset can be found at
https://github.com/mipi-challenge/MIPI2022.
Comment: ECCV 2022 Mobile Intelligent Photography and Imaging (MIPI)
Workshop--RGBW Sensor Re-mosaic Challenge Report. MIPI workshop website:
http://mipi-challenge.org/. arXiv admin note: substantial text overlap with
arXiv:2209.07060, arXiv:2209.07530, arXiv:2209.0705
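Of the metrics listed above, PSNR has a simple closed form (a minimal NumPy sketch of the standard definition, with made-up images; SSIM, LPIPS, and KLD require fuller implementations):

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                      # toy reference
noisy = np.clip(ref + 0.01 * rng.normal(size=ref.shape), 0, 1)  # mild noise
# psnr(ref, noisy) lands around 40 dB here; heavier noise drives it down,
# which is the sense in which the challenge's 0/24/42 dB noise levels differ
```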
MIPI 2023 Challenge on RGBW Remosaic: Methods and Results
Developing and integrating advanced image sensors with novel algorithms in
camera systems is increasingly prevalent with the growing demand for
computational photography and imaging on mobile platforms. However, the lack of high-quality
data for research and the rare opportunity for an in-depth exchange of views
from industry and academia constrain the development of mobile intelligent
photography and imaging (MIPI). With the success of the 1st MIPI Workshop@ECCV
2022, we introduce the second MIPI challenge, including four tracks focusing on
novel image sensors and imaging algorithms. This paper summarizes and reviews
the RGBW Joint Remosaic and Denoise track on MIPI 2023. In total, 81
participants were successfully registered, and 4 teams submitted results in the
final testing phase. The final results are evaluated using objective metrics,
including PSNR, SSIM, LPIPS, and KLD. A detailed description of the top three
models developed in this challenge is provided in this paper. More details of
this challenge and the link to the dataset can be found at
https://mipi-challenge.org/MIPI2023/.
Comment: CVPR 2023 Mobile Intelligent Photography and Imaging (MIPI)
Workshop--RGBW Sensor Remosaic Challenge Report. Website:
https://mipi-challenge.org/MIPI2023/. arXiv admin note: substantial text
overlap with arXiv:2209.08471, arXiv:2209.07060, arXiv:2209.07530,
arXiv:2304.1008
Gay mobile apps and the evolving virtual risk environment: a cross-sectional online survey among men who have sex with men in China
The expansion of gay sex-seeking application (gay app) use among men who have sex with men (MSM) may create new virtual risk environments that are associated with STI transmission. The goals of this study were to compare sexual behaviors between gay app users and non-users, and to describe sexual behaviors among gay app users in China.