Plasma Clusterin and the CLU Gene rs11136000 Variant Are Associated with Mild Cognitive Impairment in Type 2 Diabetic Patients
Objective: Type 2 diabetes mellitus (T2DM) is associated with an elevated risk of mild cognitive impairment (MCI). Plasma clusterin has been reported to be associated with the early pathology of Alzheimer's disease (AD) and with longitudinal brain atrophy in subjects with MCI. The rs11136000 single nucleotide polymorphism within the clusterin (CLU) gene is also associated with the risk of AD. We aimed to investigate the associations among plasma clusterin, rs11136000 genotype, and T2DM-associated MCI. Methods: A total of 231 T2DM patients, including 126 with MCI and 105 cognitively healthy controls, were enrolled in this study. Demographic parameters were collected and neuropsychological tests were conducted. Plasma clusterin levels and CLU rs11136000 genotype were examined. Results: Plasma clusterin was significantly higher in MCI patients than in controls (p=0.007). In subjects with MCI, plasma clusterin level was negatively correlated with Montreal Cognitive Assessment and Auditory Verbal Learning Test delayed-recall scores (p=0.027 and p=0.020, respectively). After adjustment for age, educational attainment, and gender, carriers of the rs11136000 TT genotype showed a reduced risk of MCI compared with CC genotype carriers (OR=0.158, χ2=4.113, p=0.043). A multivariable regression model showed that educational attainment, duration of diabetes, HDL-c, and plasma clusterin levels are associated with MCI in T2DM patients. Conclusions: Plasma clusterin was associated with MCI and may reflect a protective response in T2DM patients. The TT genotype exhibited a reduced risk of MCI compared to the CC genotype. Further investigations should be conducted to determine the role of clusterin in cognitive decline.
PixelFolder: An Efficient Progressive Pixel Synthesis Network for Image Generation
Pixel synthesis is a promising research paradigm for image generation, which
can well exploit pixel-wise prior knowledge for generation. However, existing
methods still suffer from excessive memory footprint and computation overhead.
In this paper, we propose a progressive pixel synthesis network towards
efficient image generation, coined as PixelFolder. Specifically, PixelFolder
formulates image generation as a progressive pixel regression problem and
synthesizes images by a multi-stage paradigm, which can greatly reduce the
overhead caused by large tensor transformations. In addition, we introduce
novel pixel folding operations to further improve model efficiency while
maintaining pixel-wise prior knowledge for end-to-end regression. With these
innovative designs, we greatly reduce the expenditure of pixel synthesis, e.g.,
cutting computation by 90% and parameters by 57% compared to the latest pixel
synthesis method, CIPS. To validate our approach, we conduct extensive
experiments on two benchmark datasets, namely FFHQ and LSUN Church. The
experimental results show that, with much less expenditure, PixelFolder obtains
new state-of-the-art (SOTA) performance on both benchmark datasets, i.e., 3.77
FID on FFHQ and 2.45 FID on LSUN Church. Meanwhile, PixelFolder is also more
efficient than SOTA methods like StyleGAN2, reducing computation by about 74%
and parameters by 36%. These results strongly validate the effectiveness of the
proposed PixelFolder. Comment: 11 pages, 7 figures
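The folding operation at the heart of the abstract trades spatial resolution for channels. A minimal NumPy sketch of such a fold/unfold pair (our reading of the idea as a space-to-depth reshape; function names and shapes are our assumptions, not the official implementation):

```python
import numpy as np

def pixel_fold(x, r=2):
    """Fold each r x r spatial block into channels: (H, W, C) -> (H/r, W/r, r*r*C)."""
    H, W, C = x.shape
    assert H % r == 0 and W % r == 0
    x = x.reshape(H // r, r, W // r, r, C)
    x = x.transpose(0, 2, 1, 3, 4)           # (H/r, W/r, r, r, C)
    return x.reshape(H // r, W // r, r * r * C)

def pixel_unfold(x, r=2):
    """Inverse operation: (H/r, W/r, r*r*C) -> (H, W, C)."""
    h, w, c = x.shape
    C = c // (r * r)
    x = x.reshape(h, w, r, r, C).transpose(0, 2, 1, 3, 4)   # (h, r, w, r, C)
    return x.reshape(h * r, w * r, C)
```

The round trip `pixel_unfold(pixel_fold(x))` recovers `x` exactly, which is why such a fold can shrink the large tensors of pixel regression without discarding pixel-wise information.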
Shadow-Aware Dynamic Convolution for Shadow Removal
With a wide range of shadows in many collected images, shadow removal has
aroused increasing attention since uncontaminated images are of vital
importance for many downstream multimedia tasks. Current methods consider the
same convolution operations for both shadow and non-shadow regions while
ignoring the large gap between the color mappings for the shadow region and the
non-shadow region, leading to poor quality of reconstructed images and a heavy
computation burden. To solve this problem, this paper introduces a novel
plug-and-play Shadow-Aware Dynamic Convolution (SADC) module to decouple the
interdependence between the shadow region and the non-shadow region. Inspired
by the fact that the color mapping of the non-shadow region is easier to learn,
our SADC processes the non-shadow region with a lightweight convolution module
in a computationally cheap manner and recovers the shadow region with a more
complicated convolution module to ensure the quality of image reconstruction.
Given that the non-shadow region often contains more background color
information, we further develop a novel intra-convolution distillation loss to
strengthen the information flow from the non-shadow region to the shadow
region. Extensive experiments on the ISTD and SRD datasets show that our
method achieves better shadow-removal performance than many state-of-the-art
approaches. Our code is available at https://github.com/xuyimin0926/SADC.
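The routing idea described above can be sketched in a few lines: pixels are sent through a cheap branch or a heavier branch according to the shadow mask. This is an illustrative stand-in only (1x1 convolutions as matrix multiplies, with hypothetical names; the actual SADC module is more elaborate):

```python
import numpy as np

def sadc_forward(x, mask, w_light, w_heavy):
    """Mask-routed dynamic convolution sketch.
    x: (H, W, Cin) image features; mask: (H, W) with 1 = shadow pixel;
    w_light, w_heavy: (Cin, Cout) 1x1-conv weights standing in for the
    lightweight non-shadow branch and the heavier shadow branch."""
    light = x @ w_light               # cheap linear branch for non-shadow regions
    heavy = np.tanh(x @ w_heavy)      # heavier nonlinear branch for shadow regions
    m = mask[..., None]               # broadcast mask over channels
    return m * heavy + (1.0 - m) * light
```

Because the two branches are decoupled by the mask, the expensive branch only has to model the harder shadow-region color mapping.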
Towards General Visual-Linguistic Face Forgery Detection
Deepfakes are realistic face manipulations that can pose serious threats to
security, privacy, and trust. Existing methods mostly treat this task as binary
classification, which uses digital labels or mask signals to train the
detection model. We argue that such supervision lacks semantic information and
interpretability. To address this issue, in this paper we propose a novel
paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses
fine-grained sentence-level prompts as the annotation. Since text annotations
are not available in current deepfakes datasets, VLFFD first generates the
mixed forgery image with corresponding fine-grained prompts via Prompt Forgery
Image Generator (PFIG). Then, the fine-grained mixed data and the coarse-grained
original data are jointly trained with the Coarse-and-Fine Co-training
framework (C2F), enabling the model to gain more generalization and
interpretability. The experiments show the proposed method improves the
existing detection models on several challenging benchmarks. Furthermore, we
have integrated our method with multimodal large models, achieving noteworthy
results that demonstrate the potential of our approach. This integration not
only enhances the performance of our VLFFD paradigm but also underscores the
versatility and adaptability of our method when combined with advanced
multimodal technologies, highlighting its potential in tackling the evolving
challenges of deepfake detection.
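The PFIG step, as described, pairs a spliced forgery image with a fine-grained sentence prompt. A toy stand-in for that pairing (mask-based splicing and a templated prompt are our assumptions; the paper's generator is not public in this abstract):

```python
import numpy as np

def make_mixed_forgery(real, fake, region_mask, region_name):
    """Splice a manipulated region into a real face and emit a
    sentence-level prompt describing the manipulation.
    real, fake: (H, W, 3) images; region_mask: (H, W), 1 inside the region."""
    m = region_mask[..., None].astype(real.dtype)   # broadcast over channels
    mixed = m * fake + (1.0 - m) * real             # copy fake pixels inside the mask
    prompt = f"This face is manipulated in the {region_name} region."
    return mixed, prompt
```

Such image-prompt pairs give the detector sentence-level supervision instead of a bare binary label, which is the semantic signal the abstract argues for.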
AQUEOUS LIQUID SOLUTIONS FOR LI-LIQUID BATTERY
Poster abstract. The evolution of the lithium-ion battery industry has begun to usher in a new revolution. Consequently, high demand for high-energy-density batteries has emerged in many electronic and electrical appliances, especially in the energy-storage industry. New types of batteries, such as the lithium-water battery, are under extensive research.
The lithium-water battery is a newly developed battery with lithium as the anode and water as the cathode. Lithium is known as one of the most reactive metals in the periodic table. Therefore, a vigorous reaction is observed when lithium reacts with water, potentially providing an extremely high energy density. This reaction can be converted into electrical energy and stored in a cell. The lithium-water battery is novel, and hence there is no standardized design.
In this presentation, the lithium anode is separated from water by a liquid electrolyte and a ceramic solid electrolyte. The glass-ceramic solid electrolyte, with composition Li1.3Ti1.7Al0.3(PO4)3, plays an important role in the design of this lithium-water battery. The main purpose of the solid electrolyte is to separate water from lithium, avoiding a dangerous exothermic reaction. In addition, the super-ionic-conductor ceramic provides very high lithium-ion conductivity.
Solid electrolytes of different sizes were used in designing the Li-liquid battery cell. The effect of electrolyte size on cell voltage was studied to optimize the cell design. Then, aqueous solutions containing different chemicals were tested as liquid cathodes, and their electrochemical performance was compared to that of pure DI water. Further results will be presented in the poster presentation.
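The high energy density claimed for lithium anodes can be checked with standard Faraday's-law arithmetic (textbook constants, not figures from the poster):

```python
# Theoretical specific capacity of a lithium metal anode:
# Q = n * F / (3.6 * M) in mAh/g, with n electrons per atom.
F = 96485.0           # Faraday constant, C/mol
M_LI = 6.94           # molar mass of lithium, g/mol
n = 1                 # Li -> Li+ + e-, one electron per atom
q_mAh_per_g = n * F / (3.6 * M_LI)   # ~3.86 Ah/g, the highest of any metal anode
```

This roughly tenfold advantage over graphite (~372 mAh/g) is what makes the lithium-water chemistry attractive despite its handling hazards.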
Fine-grained Data Distribution Alignment for Post-Training Quantization
While post-training quantization is popular mostly because it avoids access to
the original complete training dataset, its poor performance also stems from
the scarcity of images. To alleviate this limitation, in
this paper, we leverage the synthetic data introduced by zero-shot quantization
with calibration dataset and propose a fine-grained data distribution alignment
(FDDA) method to boost the performance of post-training quantization. The
method is based on two important properties of batch normalization statistics
(BNS) we observed in deep layers of the trained network, i.e., inter-class
separation and intra-class incohesion. To preserve this fine-grained
distribution information: 1) We calculate the per-class BNS of the calibration
dataset as the BNS centers of each class and propose a BNS-centralized loss to
force the synthetic data distributions of different classes to be close to
their own centers. 2) We add Gaussian noise into the centers to imitate the
incohesion and propose a BNS-distorted loss to force the synthetic data
distribution of the same class to be close to the distorted centers. By
utilizing these two fine-grained losses, our method manifests the
state-of-the-art performance on ImageNet, especially when both the first and
last layers are quantized to the low-bit. Code is at
\url{https://github.com/zysxmu/FDDA}. Comment: ECCV202
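The two losses enumerated above can be sketched directly from the description: match per-class batch statistics to calibration-set centers, and to noise-distorted copies of those centers. This is our NumPy reading of the idea (function names and shapes are assumptions, not the released code):

```python
import numpy as np

def bns_centralized_loss(feats, labels, mu_c, var_c):
    """Pull the per-class batch statistics of synthetic features toward the
    per-class BNS centers computed from the calibration set.
    feats: (N, C) features at a BN layer; mu_c, var_c: dicts class -> (C,)."""
    loss = 0.0
    for c in np.unique(labels):
        f = feats[labels == c]
        loss += np.sum((f.mean(0) - mu_c[c]) ** 2)   # match class mean center
        loss += np.sum((f.var(0) - var_c[c]) ** 2)   # match class variance center
    return loss

def bns_distorted_centers(mu_c, var_c, sigma=0.1, seed=0):
    """Add Gaussian noise to the centers to imitate intra-class incohesion;
    feeding these to the same loss gives the BNS-distorted term."""
    rng = np.random.default_rng(seed)
    noisy_mu = {c: m + sigma * rng.standard_normal(m.shape) for c, m in mu_c.items()}
    noisy_var = {c: v + sigma * rng.standard_normal(v.shape) for c, v in var_c.items()}
    return noisy_mu, noisy_var
```

Optimizing synthetic images under both terms keeps the generated calibration data close to each class's statistics without collapsing every sample onto the exact center.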