Progressive Conservative Adaptation for Evolving Target Domains
Conventional domain adaptation typically transfers knowledge from a source
domain to a stationary target domain. However, in many real-world cases, target
data usually emerge sequentially and have continuously evolving distributions.
data usually emerge sequentially and have continuously evolving distributions.
Storing and adapting to all such target data results in escalating computational
and resource consumption over time. Hence, it is vital to devise algorithms to
address the evolving domain adaptation (EDA) problem, i.e., adapting
models to evolving target domains without access to historic target domains. To
achieve this goal, we propose a simple yet effective approach, termed
progressive conservative adaptation (PCAda). To manage new target data that
diverges from previous distributions, we fine-tune the classifier head based on
the progressively updated class prototypes. Moreover, as adjusting to the most
recent target domain can interfere with the features learned from previous
target domains, we develop a conservative sparse attention mechanism. This
mechanism restricts feature adaptation to essential dimensions, thus easing
interference with historical knowledge. The proposed PCAda is
implemented with a meta-learning framework, which achieves the fast adaptation
of the classifier with the help of the progressively updated class prototypes
in the inner loop and learns a generalized feature without severely interfering
with the historic knowledge via the conservative sparse attention in the outer
loop. Experiments on Rotated MNIST, Caltran, and Portraits datasets demonstrate
the effectiveness of our method.
Comment: 7 pages, 5 figures
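The abstract's progressively updated class prototypes can be illustrated with a minimal sketch. The exponential-moving-average update rule, the momentum value, and nearest-prototype classification are assumptions for illustration; the paper's actual classifier fine-tuning inside the meta-learning inner loop may differ.

```python
import numpy as np

def update_prototypes(prototypes, features, labels, momentum=0.9):
    """Progressively update per-class prototypes with an exponential
    moving average over features from the newest target batch.
    (Hypothetical update rule; the momentum value is an assumption.)"""
    for c in np.unique(labels):
        batch_mean = features[labels == c].mean(axis=0)
        prototypes[c] = momentum * prototypes[c] + (1 - momentum) * batch_mean
    return prototypes

def classify(prototypes, features):
    """Assign each feature to its nearest class prototype."""
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

In this sketch, only the prototypes change as new target batches arrive, so no historic target data needs to be retained, matching the EDA constraint described above.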
Electrochemical nitrate removal by magnetically immobilized nZVI anode on ammonia-oxidizing plate of RuO2–IrO2/Ti
Ammonium, the major reduction intermediate, has long limited nitrate reduction by cathodic reduction or nano zero-valent iron (nZVI). In this work, we report electrochemical nitrate removal by a magnetically immobilized nZVI anode on an ammonia-oxidizing RuO2–IrO2/Ti plate. This system achieves a maximum nitrate removal efficiency of 94.6% and nitrogen selectivity up to 72.8% at pH 3.0, and also maintains high nitrate removal efficiency (90.2%) and nitrogen selectivity (70.6%) in a near-neutral medium (pH = 6). As the applied anodic potential increases, both nitrate removal efficiency (from 27.2% to 94.6%) and nitrogen selectivity (70.4%–72.8%) increase. Incorporating the ammonia-oxidizing RuO2–IrO2/Ti plate with the nZVI anode enhances nitrate reduction. The dosage of nZVI on the RuO2–IrO2/Ti plate (from 0.2 g to 0.6 g) has only a slight effect (variance no more than 10.0%) on removal performance. Cyclic voltammetry, Tafel analysis, and electrochemical impedance spectroscopy (EIS) were further used to investigate the reaction mechanisms on the nZVI surfaces in terms of CV curve area, corrosion voltage, corrosion current density, and charge-transfer resistance. In conclusion, the high nitrate removal performance of the magnetically immobilized nZVI anode coupled with the RuO2–IrO2/Ti plate may guide the design of improved nZVI-based anodes for practical nitrate remediation.
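The removal-efficiency and nitrogen-selectivity figures quoted above follow from standard concentration-balance definitions. The formulas below are the conventional ones (all concentrations expressed as nitrogen), not taken from the paper itself, so treat them as an assumed reading of its metrics.

```python
def nitrate_removal_efficiency(c0_no3, ct_no3):
    """Percentage of the initial nitrate-N removed after treatment.
    c0_no3, ct_no3: initial and final NO3(-)-N concentrations (e.g. mg N/L)."""
    return 100.0 * (c0_no3 - ct_no3) / c0_no3

def nitrogen_selectivity(c0_no3, ct_no3, ct_no2, ct_nh4):
    """Percentage of removed nitrate-N converted to N2 rather than
    remaining as nitrite-N or ammonium-N (conventional definition;
    assumed here, not quoted from the paper)."""
    removed = c0_no3 - ct_no3
    return 100.0 * (removed - ct_no2 - ct_nh4) / removed
```

For example, an initial 100 mg N/L reduced to 5.4 mg N/L corresponds to the reported 94.6% removal efficiency.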
Identity-Preserving Talking Face Generation with Landmark and Appearance Priors
Generating talking face videos from audio has attracted considerable research interest.
A few person-specific methods can generate vivid videos but require the target
speaker's videos for training or fine-tuning. Existing person-generic methods
have difficulty in generating realistic and lip-synced videos while preserving
identity information. To tackle this problem, we propose a two-stage framework
consisting of audio-to-landmark generation and landmark-to-video rendering
procedures. First, we devise a novel Transformer-based landmark generator to
infer lip and jaw landmarks from the audio. Prior landmark characteristics of
the speaker's face are employed to make the generated landmarks coincide with
the facial outline of the speaker. Then, a video rendering model is built to
translate the generated landmarks into face images. During this stage, prior
appearance information is extracted from the lower-half occluded target face
and static reference images, which helps generate realistic and
identity-preserving visual content. For effectively exploring the prior
information of static reference images, we align static reference images with
the target face's pose and expression based on motion fields. Moreover,
auditory features are reused to guarantee that the generated face images are
well synchronized with the audio. Extensive experiments demonstrate that our
method can produce more realistic, lip-synced, and identity-preserving videos
than existing person-generic talking face generation methods.
Comment: CVPR2023, Code: https://github.com/Weizhi-Zhong/IP_LA
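The two-stage pipeline described above can be sketched as a simple composition. The functions below are placeholders with dummy arithmetic, not the paper's Transformer landmark generator or rendering network; the array shapes (80-dim audio features, 68-point landmarks, 96x96 frames) are illustrative assumptions.

```python
import numpy as np

def audio_to_landmarks(audio_feats, face_prior):
    """Stage 1 (placeholder for the Transformer generator): predict
    per-frame lip/jaw landmarks as offsets added to the speaker's
    prior facial-outline landmarks, so outputs track the face shape."""
    offsets = 0.01 * audio_feats[:, :1, None]      # (T, 1, 1) dummy offsets
    return face_prior[None] + offsets              # (T, n_points, 2)

def render_frames(landmarks, occluded_face, ref_imgs):
    """Stage 2 (placeholder for the rendering model): synthesize one
    frame per landmark set from the lower-half-occluded target face
    and pose-aligned reference images."""
    return np.repeat(occluded_face[None], landmarks.shape[0], axis=0)

# Usage sketch with dummy inputs.
audio = np.random.rand(5, 80)        # 5 frames of audio features
prior = np.random.rand(68, 2)        # 68-point facial landmark prior
face = np.random.rand(96, 96, 3)     # lower-half-occluded target face
video = render_frames(audio_to_landmarks(audio, prior), face, None)
```

The point of the sketch is the interface: landmarks act as the sole bridge between the audio and visual stages, which is what lets each stage be trained and evaluated separately.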