92 research outputs found
catena-Poly[[diaquacopper(II)]-μ-hydroxido-κ2 O:O-μ-[4-(4H-1,2,4-triazol-4-yl)benzoato]-κ2 N 1:N 2]
The title compound, [Cu(C9H6N3O2)(OH)(H2O)2]n, adopts a chain motif along [010] in which the CuII atoms are bridged by hydroxy groups and 4-(1,2,4-triazol-4-yl)benzoate (tab) ligands. The CuII atom lies on an inversion center and is six-coordinated by two N atoms from two tab ligands, two hydroxy groups and two water molecules, giving a distorted octahedral geometry. The hydroxy group and the tab ligand are located on a mirror plane. One of the water H atoms is disordered over two positions with equal occupancy factors. Intermolecular O—H⋯O hydrogen bonds extend the chains into a layer parallel to (100), and C—H⋯O hydrogen bonds connect the layers into a three-dimensional network.
PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer
Remote photoplethysmography (rPPG), which aims at measuring heart activities
and physiological signals from facial video without any contact, has great
potential in many applications (e.g., remote healthcare and affective
computing). Recent deep learning approaches focus on mining subtle rPPG clues
using convolutional neural networks with limited spatio-temporal receptive
fields, which neglect the long-range spatio-temporal perception and interaction
for rPPG modeling. In this paper, we propose the PhysFormer, an end-to-end
video transformer-based architecture that adaptively aggregates both local and
global spatio-temporal features for rPPG representation enhancement. As key
modules in PhysFormer, the temporal difference transformers first enhance the
quasi-periodic rPPG features with temporal difference guided global attention,
and then refine the local spatio-temporal representation against interference.
Furthermore, we also propose label distribution learning and a
curriculum-learning-inspired dynamic constraint in the frequency domain, which
provide elaborate supervision for PhysFormer and alleviate overfitting. Comprehensive
experiments are performed on four benchmark datasets to show our superior
performance on both intra- and cross-dataset testings. One highlight is that,
unlike most transformer networks, which need pretraining on large-scale
datasets, the proposed PhysFormer can be easily trained from scratch on rPPG
datasets, which makes it promising as a novel transformer baseline for the rPPG
community. The code will be released at
https://github.com/ZitongYu/PhysFormer.
Comment: Accepted by CVPR 2022
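The temporal-difference guidance described above can be pictured with a minimal sketch. This is not PhysFormer's actual implementation; the blending weight `theta` and the simple mixing scheme are illustrative assumptions:

```python
import numpy as np

def temporal_difference(frames, theta=0.7):
    """Blend raw features with their frame-to-frame differences.

    frames: array of shape (T, H, W), a toy stand-in for the
    spatio-temporal feature maps the abstract describes.
    theta is a hypothetical mixing weight, not a value from the paper.
    """
    # Temporal differences; prepending the first frame keeps the shape
    # and makes the difference of frame 0 exactly zero.
    diff = np.diff(frames, axis=0, prepend=frames[:1])
    return (1 - theta) * frames + theta * diff

frames = np.random.rand(8, 4, 4)   # 8-frame toy clip
out = temporal_difference(frames)
```

The idea the sketch conveys is that quasi-periodic rPPG variation lives in the frame-to-frame changes, so emphasizing differences highlights the pulse signal over static appearance.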
Benchmarking Joint Face Spoofing and Forgery Detection with Visual and Physiological Cues
Face anti-spoofing (FAS) and face forgery detection play vital roles in
securing face biometric systems from presentation attacks (PAs) and vicious
digital manipulation (e.g., deepfakes). Despite promising performance upon
large-scale data and powerful deep models, the generalization problem of
existing approaches is still an open issue. Most recent approaches focus on
1) unimodal visual appearance or physiological (i.e., remote
photoplethysmography (rPPG)) cues; and 2) separated feature representation for
FAS or face forgery detection. On one side, unimodal appearance and rPPG
features are respectively vulnerable to high-fidelity face 3D mask and video
replay attacks, inspiring us to design reliable multi-modal fusion mechanisms
for generalized face attack detection. On the other side, there are rich common
features across FAS and face forgery detection tasks (e.g., periodic rPPG
rhythms and vanilla appearance for bonafides), providing solid evidence to
design a joint FAS and face forgery detection system in a multi-task learning
fashion. In this paper, we establish the first joint face spoofing and forgery
detection benchmark using both visual appearance and physiological rPPG cues.
To enhance the rPPG periodicity discrimination, we design a two-branch
physiological network using both facial spatio-temporal rPPG signal map and its
continuous wavelet transformed counterpart as inputs. To mitigate the modality
bias and improve the fusion efficacy, we conduct a weighted batch and layer
normalization for both appearance and rPPG features before multi-modal fusion.
We find that the generalization capacities of both unimodal (appearance or
rPPG) and multi-modal (appearance+rPPG) models can be clearly improved via
joint training on these two tasks. We hope this new benchmark will facilitate
the future research of both FAS and deepfake detection communities.
Comment: Accepted by IEEE Transactions on Dependable and Secure Computing
(TDSC). Corresponding authors: Zitong Yu and Wenhan Yan
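The rPPG periodicity that the physiological branch exploits boils down to a dominant frequency in the physiological band. A minimal sketch of recovering heart rate from a 1-D rPPG trace (a toy FFT-based estimator, not the paper's two-branch network; the 0.7-4 Hz band is a common physiological assumption):

```python
import numpy as np

def estimate_hr_bpm(signal, fs):
    """Estimate heart rate from a 1-D rPPG trace as the dominant
    frequency within the physiological band 0.7-4 Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs = 30.0                          # 30 fps video
t = np.arange(0, 10, 1 / fs)       # 10-second clip
rppg = np.sin(2 * np.pi * 1.2 * t) # synthetic 1.2 Hz pulse -> 72 bpm
print(estimate_hr_bpm(rppg, fs))   # → 72.0
```

A genuine pulse produces one strong, stable peak in this band; replayed or synthesized faces tend not to, which is why periodicity discrimination is a useful spoofing cue.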
A Physics-informed Machine Learning-based Control Method for Nonlinear Dynamic Systems with Highly Noisy Measurements
This study presents a physics-informed machine learning-based control method
for nonlinear dynamic systems with highly noisy measurements. Existing
data-driven control methods that use machine learning for system identification
cannot effectively cope with highly noisy measurements, resulting in unstable
control performance. To address this challenge, the present study extends
current physics-informed machine learning capabilities for modeling nonlinear
dynamics with control and integrates them into a model predictive control
framework. To demonstrate the capability of the proposed method, we test and
validate it on two noisy nonlinear dynamic systems: the chaotic Lorenz 3 system
and a turning machine tool. Analysis of the results illustrates that the
proposed method outperforms state-of-the-art benchmarks as measured by both
modeling accuracy and control performance for nonlinear dynamic systems under
high-noise conditions.
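The chaotic Lorenz test case with heavy measurement noise can be simulated in a few lines. Forward-Euler integration, the standard Lorenz parameters, and the noise level are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the chaotic Lorenz system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Simulate a trajectory and corrupt it with Gaussian noise, mimicking
# the "highly noisy measurements" setting the abstract describes.
rng = np.random.default_rng(0)
state = np.array([1.0, 1.0, 1.0])
traj = []
for _ in range(1000):
    state = lorenz_step(state)
    traj.append(state)
traj = np.array(traj)
noisy = traj + rng.normal(scale=2.0, size=traj.shape)  # illustrative noise
```

A data-driven identifier fit directly to `noisy` will chase the noise; a physics-informed loss that also penalizes violations of the governing equations is the kind of regularization the abstract argues for.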
Multi-scale Promoted Self-adjusting Correlation Learning for Facial Action Unit Detection
Facial Action Unit (AU) detection is a crucial task in affective computing
and social robotics as it helps to identify emotions expressed through facial
expressions. Anatomically, there are innumerable correlations between AUs,
which contain rich information and are vital for AU detection. Previous methods
used fixed AU correlations based on expert experience or statistical rules on
specific benchmarks, but it is challenging to comprehensively reflect complex
correlations between AUs via hand-crafted settings. There are alternative
methods that employ a fully connected graph to learn these dependencies
exhaustively. However, these approaches can result in a computational explosion
and high dependency with a large dataset. To address these challenges, this
paper proposes a novel self-adjusting AU-correlation learning (SACL) method
with less computation for AU detection. This method adaptively learns and
updates AU correlation graphs by efficiently leveraging the characteristics of
different levels of AU motion and emotion representation information extracted
in different stages of the network. Moreover, this paper explores the role of
multi-scale learning in correlation information extraction, and designs a simple
yet effective multi-scale feature learning (MSFL) method to promote better
performance in AU detection. By integrating AU correlation information with
multi-scale features, the proposed method obtains a more robust feature
representation for the final AU detection. Extensive experiments show that the
proposed method outperforms the state-of-the-art methods on widely used AU
detection benchmark datasets, with only 28.7% and 12.0% of the parameters and
FLOPs of the best method, respectively. The code for this method is available
at https://github.com/linuxsino/Self-adjusting-AU.
Comment: 13 pages, 7 figures
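One simple way to picture a self-adjusting AU-correlation graph is to rebuild a sparse adjacency from the similarity of per-AU features at each stage. This is a rough stand-in, not SACL itself; cosine similarity and the `top_k` sparsification are assumptions:

```python
import numpy as np

def au_correlation_graph(features, top_k=3):
    """Build a sparse AU-correlation adjacency from cosine similarity
    of per-AU feature vectors, keeping only each AU's top-k neighbours.
    A hypothetical sketch of an adaptively updated correlation graph."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-loops
    adj = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nbrs = np.argsort(sim[i])[-top_k:]  # strongest correlations
        adj[i, nbrs] = sim[i, nbrs]
    return adj

feats = np.random.rand(12, 64)  # 12 AUs, 64-dim features per AU
A = au_correlation_graph(feats)
```

Keeping only the top-k edges is one way to avoid the "fully connected graph" cost the abstract criticizes, while still letting the graph change as the features change across network stages.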
Preliminary Static Analysis of CFETR Central Solenoid Magnet System
The conceptual design of the China Fusion Engineering Test Reactor (CFETR) Central Solenoid (CS) coil has been started at the Institute of Plasma Physics, Chinese Academy of Sciences. The peak field of the CS coil is 17.2 T at an operating current of 60 kA. The CS magnet system mainly consists of 8 Nb3Sn coils compressed by 8 sets of preload structures. The functions of the preload structure are to apply sufficient axial compression to the CS coils and to provide mechanical rigidity against the repulsive forces between the 8 Nb3Sn coils. This paper describes the structural design of the CFETR CS magnet system. A global finite element model is created from the design geometry data to investigate the mechanical behavior of the CFETR CS preload structure and support structure under different operating conditions. A 2D finite element model under electromagnetic load is created to calculate the stresses on the conductor jacket and turn insulation.
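An order-of-magnitude check on why such a preload structure is needed: the equivalent magnetic pressure B²/(2μ₀) at the quoted 17.2 T peak field is roughly 118 MPa. This is a back-of-the-envelope bound, not the paper's finite element analysis:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def magnetic_pressure(B):
    """Equivalent magnetic pressure P = B^2 / (2*mu0), a rough measure
    of the electromagnetic load a solenoid structure must react."""
    return B ** 2 / (2 * MU0)

# At the 17.2 T peak field quoted for the CFETR CS coil:
print(magnetic_pressure(17.2) / 1e6)  # ≈ 118 MPa
```

Loads of this magnitude, comparable to the yield strength of many structural alloys, are what make the detailed 2D/3D stress analysis of the jacket and insulation necessary.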
Improving the accuracy of cotton seedling emergence rate estimation by fusing UAV-based multispectral vegetation indices
Timely and accurate estimation of the cotton seedling emergence rate is of great significance to cotton production. This study explored the feasibility of drone-based remote sensing for monitoring cotton seedling emergence. Visible and multispectral images of cotton seedlings with 2-4 leaves in 30 plots were synchronously obtained by drones. The acquired images included cotton seedlings, bare soil, mulching films, and PE drip tapes. After constructing 17 visible vegetation indices (VIs) and 14 multispectral VIs, three strategies were used to separate cotton seedlings from the images: (1) Otsu's thresholding was performed on each VI; (2) key VIs were extracted based on the results of (1), and the Otsu-intersection method and three machine learning methods were used to classify cotton seedlings, bare soil, mulching films, and PE drip tapes in the images; (3) machine learning models were constructed using all VIs and validated. Finally, the models constructed based on two modeling strategies [Otsu-intersection (OI) and machine learning (Support Vector Machine (SVM), Random Forest (RF), and K-nearest neighbor (KNN))] showed higher accuracy. These models were therefore selected to estimate the cotton seedling emergence rate, and the estimates were compared with the manually measured emergence rate. The results showed that multispectral VIs, especially NDVI, RVI, SAVI, EVI2, OSAVI, and MCARI, had higher crop seedling extraction accuracy than visible VIs. After fusing all VIs, or the key VIs extracted based on Otsu's thresholding, the binary image purity was greatly improved. Among the fusion methods, the Key VIs-OI and All VIs-KNN methods yielded less noise and smaller errors, with an RMSE (root mean squared error) as low as 2.69% and an MAE (mean absolute error) as low as 2.15%. Therefore, fusing multiple VIs can increase crop image segmentation accuracy.
This study provides a new method for rapidly monitoring crop seedling emergence rate in the field, which is of great significance for the development of modern agriculture.
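Strategy (1), Otsu's thresholding on a vegetation index, can be sketched as follows. The NDVI formula is standard; the synthetic reflectance values and the generic histogram-based Otsu implementation are illustrative, not the study's exact pipeline:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the value histogram."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = hist[:k].sum(), hist[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:k] * centers[:k]).sum() / w0
        m1 = (hist[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Synthetic pixels: bare soil (NIR ~ red) vs. seedlings (NIR >> red)
rng = np.random.default_rng(1)
red = np.concatenate([rng.uniform(0.20, 0.30, 500), rng.uniform(0.04, 0.08, 100)])
nir = np.concatenate([rng.uniform(0.25, 0.35, 500), rng.uniform(0.40, 0.60, 100)])
vi = ndvi(nir, red)
t = otsu_threshold(vi)
seedling_mask = vi > t
```

The per-VI binary masks produced this way are what the study's OI fusion step then intersects and combines to suppress noise from films and drip tapes.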
Robust estimation of bacterial cell count from optical density
Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses instrument effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
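The recommended calibration reduces to fitting OD readings against the known particle counts of a serial dilution and using the fitted slope to convert sample OD into an estimated count. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

# Hypothetical serial dilution of silica microspheres: known particle
# counts and the blank-corrected OD600 each dilution produced.
counts = np.array([1e9, 5e8, 2.5e8, 1.25e8])  # particles per mL
od = np.array([0.80, 0.40, 0.20, 0.10])       # blank-corrected OD600

# Least-squares slope through the origin (OD = 0 for a blank).
slope = np.sum(od * counts) / np.sum(od * od)

def od_to_count(od_reading):
    """Convert a blank-corrected OD reading to an estimated count."""
    return slope * od_reading

print(f"{od_to_count(0.40):.2e}")  # ≈ 5e8 particles/mL equivalent
```

Inspecting the residuals of this fit is also how one checks the instrument's effective linear range: dilutions that fall off the fitted line mark where the OD response stops being proportional to particle count.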