Self-Paced Learning: an Implicit Regularization Perspective
Self-paced learning (SPL) mimics the cognitive mechanism by which humans and
animals gradually learn from easy to hard samples. One key issue in SPL
is to obtain a better weighting strategy, which is determined by the minimizer
function. Existing methods usually pursue this by artificially designing the
explicit form of the SPL regularizer. In this paper, we focus on the minimizer
function and study a group of new regularizers, named self-paced implicit
regularizers, which are deduced from robust loss functions. Based on convex
conjugacy theory, the minimizer function for a self-paced implicit regularizer
can be learned directly from the latent loss function, even when the analytic
form of the regularizer is unknown. A general framework (named SPL-IR) for
SPL is developed accordingly. We demonstrate that the learning procedure of
SPL-IR is associated with latent robust loss functions and can thus provide
some theoretical insight into its working mechanism. We further analyze the
relation between SPL-IR and half-quadratic optimization. Finally, we apply
SPL-IR to both supervised and unsupervised tasks, and the experimental results
corroborate our ideas and demonstrate the correctness and effectiveness of
implicit regularizers.
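To make the easy-to-hard alternation concrete, below is a minimal sketch of a classical hard-weighting SPL loop on a linear least-squares model. It uses the standard hard-threshold minimizer rather than the implicit regularizers proposed in SPL-IR, and the function names and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hard_spl_minimizer(losses, lam):
    """Classical hard-weighting SPL minimizer: v_i = 1 if loss_i < lambda, else 0."""
    return (losses < lam).astype(float)

def spl_train(X, y, n_rounds=10, lam=0.1, pace=1.5, lr=0.01, epochs=50):
    """Alternate between selecting currently 'easy' samples and weighted gradient updates."""
    w = np.zeros(X.shape[1])
    for _ in range(n_rounds):
        losses = (X @ w - y) ** 2                # per-sample squared loss
        v = hard_spl_minimizer(losses, lam)      # sample weights from the minimizer function
        for _ in range(epochs):                  # weighted gradient steps on selected samples
            grad = X.T @ (v * (X @ w - y)) / max(v.sum(), 1.0)
            w -= lr * grad
        lam *= pace                              # grow the pace: admit harder samples next round
    return w
```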
UCF: Uncovering Common Features for Generalizable Deepfake Detection
Deepfake detection remains a challenging task due to the difficulty of
generalizing to new types of forgeries. This problem primarily stems from the
overfitting of existing detection methods to forgery-irrelevant features and
method-specific patterns. The latter has rarely been studied and is not well
addressed by previous works. This paper presents a novel approach to address
the two types of overfitting issues by uncovering common forgery features.
Specifically, we first propose a disentanglement framework that decomposes
image information into three distinct components: forgery-irrelevant,
method-specific forgery, and common forgery features. To ensure the decoupling
of method-specific and common forgery features, a multi-task learning strategy
is employed, including a multi-class classification that predicts the category
of the forgery method and a binary classification that distinguishes the real
from the fake. Additionally, a conditional decoder is designed to utilize
forgery features as a condition along with forgery-irrelevant features to
generate reconstructed images. Furthermore, a contrastive regularization
technique is proposed to encourage the disentanglement of the common and
specific forgery features. Ultimately, we only utilize the common forgery
features for the purpose of generalizable deepfake detection. Extensive
evaluations demonstrate that our framework achieves superior generalization
compared to current state-of-the-art methods.
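As a rough illustration of how the described multi-task objective could be assembled, the sketch below combines a forgery-method classification loss, a real/fake loss, a reconstruction loss from the conditional decoder, and a simple margin-based contrastive term that pushes common and method-specific features apart. The exact loss forms, branch wiring, and weights are assumptions, not the paper's implementation.

```python
import torch.nn.functional as F

def ucf_style_loss(method_logits, realfake_logits, recon, image,
                   method_labels, realfake_labels,
                   common_feat, specific_feat,
                   w_cls=1.0, w_bin=1.0, w_rec=1.0, w_con=0.1, margin=1.0):
    """Illustrative combination of the four loss terms described in the abstract."""
    loss_cls = F.cross_entropy(method_logits, method_labels)      # multi-class: which forgery method
    loss_bin = F.cross_entropy(realfake_logits, realfake_labels)  # binary: real vs. fake
    loss_rec = F.l1_loss(recon, image)                            # conditional-decoder reconstruction
    # margin-based contrastive regularizer encouraging disentanglement of the two feature branches
    dist = (common_feat - specific_feat).pow(2).sum(dim=1).sqrt()
    loss_con = F.relu(margin - dist).pow(2).mean()
    return w_cls * loss_cls + w_bin * loss_bin + w_rec * loss_rec + w_con * loss_con
```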
Adversarial Rademacher Complexity of Deep Neural Networks
Deep neural networks are vulnerable to adversarial attacks. Ideally, a robust
model should perform well on both the perturbed training data and the unseen
perturbed test data. It is found empirically that fitting perturbed training
data is not hard, but generalizing to perturbed test data is quite difficult.
To better understand adversarial generalization, it is of great interest to
study the adversarial Rademacher complexity (ARC) of deep neural networks.
However, how to bound ARC in the multi-layer case is largely unclear due to the
difficulty of analyzing the adversarial loss in the definition of ARC. There have
been two types of attempts at bounding ARC. One is to provide an upper bound on ARC in
the linear and one-hidden-layer cases. However, these approaches seem hard to
extend to multi-layer cases. Another is to modify the adversarial loss and
provide upper bounds of Rademacher complexity on such surrogate loss in
multi-layer cases. However, such variants of Rademacher complexity are not
guaranteed to bound the meaningful robust generalization gap (RGG). In
this paper, we provide a solution to this unsolved problem. Specifically, we
provide the first bound of adversarial Rademacher complexity of deep neural
networks. Our approach is based on covering numbers. We provide a method to
handle the robustified function classes of DNNs so that we can calculate the
covering numbers. Finally, we provide experiments to study the empirical
implications of our bounds and an analysis of poor adversarial
generalization.
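For reference, a commonly used formulation of adversarial Rademacher complexity is sketched below; the notation (loss ell, l_p perturbation ball of radius epsilon) follows the usual convention and may differ from the paper's exact statement.

```latex
% Empirical Rademacher complexity of a class F on a sample (x_1,y_1),...,(x_n,y_n):
%   R_n(F) = E_sigma [ sup_{f in F} (1/n) sum_i sigma_i * ell(f(x_i), y_i) ].
% The adversarial version replaces each loss by its worst case over an l_p ball:
\[
  \widetilde{\mathcal{R}}_n(\mathcal{F})
  \;=\; \mathbb{E}_{\sigma}\!\left[\,\sup_{f\in\mathcal{F}}
        \frac{1}{n}\sum_{i=1}^{n}\sigma_i
        \max_{\|x_i'-x_i\|_p\le\epsilon}\ell\bigl(f(x_i'),\,y_i\bigr)\right],
\]
% where the sigma_i are i.i.d. Rademacher signs. Bounding this quantity controls the
% robust generalization gap between perturbed training and perturbed test losses.
```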
Sampling-based Fast Gradient Rescaling Method for Highly Transferable Adversarial Attacks
Deep neural networks are known to be vulnerable to adversarial examples
crafted by adding human-imperceptible perturbations to the benign input. After
achieving nearly 100% attack success rates in the white-box setting, more focus has
shifted to black-box attacks, where the transferability of adversarial
examples has gained significant attention. In either case, common
gradient-based methods generally use the sign function to generate
perturbations from the gradient update, which offers a roughly correct direction
and has achieved great success. However, little work has paid attention to its possible
limitations. In this work, we observe that the deviation between the original
gradient and the generated noise may lead to inaccurate gradient update
estimation and suboptimal solutions for adversarial transferability. To this
end, we propose a Sampling-based Fast Gradient Rescaling Method (S-FGRM).
Specifically, we use data rescaling to replace the sign function without
extra computational cost. We further propose a Depth First Sampling method to
eliminate the fluctuation of rescaling and stabilize the gradient update. Our
method can be used in any gradient-based attack and can be
integrated with various input transformation or ensemble methods to further
improve adversarial transferability. Extensive experiments on the standard
ImageNet dataset show that our method could significantly boost the
transferability of gradient-based attacks and outperform the state-of-the-art
baselines.
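The core substitution described above can be sketched as an I-FGSM-style loop in which sign(grad) is replaced by a rescaled gradient averaged over sampled neighbors. The sampling here is plain Gaussian sampling and the rescaling is a per-image mean-magnitude normalization, both simplifications of (and assumptions about) S-FGRM and its Depth First Sampling; function names and default parameters are illustrative.

```python
import torch

def sampled_rescaled_grad(model, loss_fn, x, y, sigma=0.01, n_samples=4):
    """Average gradients over sampled neighbors of x, then rescale instead of taking sign()."""
    grads = []
    for _ in range(n_samples):
        x_s = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        loss = loss_fn(model(x_s), y)
        grads.append(torch.autograd.grad(loss, x_s)[0])
    g = torch.stack(grads).mean(dim=0)
    # rescale per image (assumes 4-D NCHW input) so the step has sign-like magnitude
    # while preserving the relative structure of the gradient
    return g / g.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)

def ifgsm_with_rescaling(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    """I-FGSM loop where the usual sign(grad) step is replaced by the rescaled, sampled gradient."""
    x_adv = x.clone()
    for _ in range(steps):
        step_dir = sampled_rescaled_grad(model, loss_fn, x_adv, y)
        x_adv = x_adv + alpha * step_dir
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv
```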
Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization
Backdoor defense, which aims to detect or mitigate the effect of malicious
triggers introduced by attackers, is becoming increasingly critical for machine
learning security and integrity. Fine-tuning based on benign data is a natural
defense to erase the backdoor effect in a backdoored model. However, recent
studies show that, given limited benign data, vanilla fine-tuning has poor
defense performance. In this work, we provide an in-depth study of fine-tuning the
backdoored model from the neuron perspective and find that backdoor-related
neurons fail to escape the local minimum during fine-tuning. Inspired
by the observation that backdoor-related neurons often have larger norms, we
propose FTSAM, a novel backdoor defense paradigm that aims to shrink the norms
of backdoor-related neurons by incorporating sharpness-aware minimization with
fine-tuning. We demonstrate the effectiveness of our method on several
benchmark datasets and network architectures, where it achieves
state-of-the-art defense performance. Overall, our work provides a promising
avenue for improving the robustness of machine learning models against backdoor
attacks.
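For intuition on the sharpness-aware component, here is a minimal sketch of one generic SAM update applied during fine-tuning on a benign batch, following the usual two-pass scheme (perturb weights toward the sharpest direction, then update from the perturbed point). It is not the paper's FTSAM implementation; the perturbation radius rho and helper name are illustrative.

```python
import torch

def sam_finetune_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One sharpness-aware minimization step on a benign fine-tuning batch (x, y)."""
    optimizer.zero_grad()
    # First pass: gradient at the current weights.
    loss_fn(model(x), y).backward()
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                               for p in model.parameters() if p.grad is not None))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)  # ascend toward the locally sharpest direction
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    # Second pass: gradient at the perturbed weights drives the actual update.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)  # restore the original weights before stepping
    optimizer.step()
```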
Comparison of long-term radial artery occlusion following trans-radial coronary intervention using 6-French versus 7-French sheaths
Background: The aim of this study was to explore the impact of 6-Fr and 7-Fr sheaths on the incidence of long-term radial artery occlusion (RAO) after trans-radial coronary intervention (TRI).
Methods: From September 2013 to January 2016, patients with ischemic heart disease, including acute myocardial infarction and true bifurcation lesions, were randomly assigned to a 6-Fr group and a 7-Fr group in a 1:1 ratio immediately after coronary angiography. The radial artery diameters were observed by ultrasound examination one day prior to TRI as well as at 30 days and 1 year after TRI. The primary endpoint was the incidence of RAO at 1 year after TRI. The secondary endpoints were the incidence of local vascular complications during hospitalization and changes in radial artery diameters within 1 year after TRI between the two groups. Additionally, multivariate logistic regression analysis was used to explore potential factors related to the incidence of long-term RAO after TRI.
Results: A total of 214 patients were enrolled and randomly assigned to the 6-Fr group (n = 105) or the 7-Fr group (n = 109). There was no significant difference in the incidence of RAO at 1 year after TRI (8.57% vs. 12.84%, p = 0.313). Moreover, no significant difference was observed in the incidence of local vascular complications during hospitalization (20% vs. 24.77%, p = 0.403). After 1-year follow-up, no significant difference was found in radial artery diameters (2.63 ± 0.31 mm vs. 2.64 ± 0.27 mm, p = 0.802). Multivariate logistic analysis revealed that repeated TRI was an independent risk factor for long-term RAO at 1 year after TRI (OR = 10.316, 95% CI 2.928–36.351, p = 0.001).
Conclusions: Compared to the 6-Fr sheath, the 7-Fr sheath did not increase the short-term or long-term incidence of RAO after TRI.
PPARs and the Cardiovascular System
Peroxisome proliferator-activated receptors (PPARs) belong to the nuclear hormone-receptor superfamily. Originally cloned in 1990, PPARs were found to be mediators of pharmacologic agents that induce hepatocyte peroxisome proliferation. PPARs also are expressed in cells of the cardiovascular system. PPARγ appears to be highly expressed during atherosclerotic lesion formation, suggesting that increased PPARγ expression may be a vascular compensatory response. Also, ligand-activated PPARγ decreases the inflammatory response in cardiovascular cells, particularly in endothelial cells. PPARα, similar to PPARγ, also has pleiotropic effects in the cardiovascular system, including anti-inflammatory and antiatherosclerotic properties. PPARα activation inhibits vascular smooth muscle proinflammatory responses, attenuating the development of atherosclerosis. However, PPARδ overexpression may lead to elevated macrophage inflammation and atherosclerosis. Conversely, PPARδ ligands are shown to attenuate the pathogenesis of atherosclerosis by improving endothelial cell proliferation and survival while decreasing endothelial cell inflammation and vascular smooth muscle cell proliferation. Furthermore, the administration of PPAR ligands in the form of TZDs and fibrates has been disappointing in terms of markedly reducing cardiovascular events in the clinical setting. Therefore, a better understanding of PPAR-dependent and -independent signaling will provide the foundation for future research on the role of PPARs in human cardiovascular biology. Antioxid. Redox Signal. 11, 1415-1452.