Consistent Prompting for Rehearsal-Free Continual Learning
Continual learning empowers models to adapt autonomously to the ever-changing
environment or data streams without forgetting old knowledge. Prompt-based
approaches are built on frozen pre-trained models to learn the task-specific
prompts and classifiers efficiently. Existing prompt-based methods, however,
are inconsistent between training and testing, which limits their
effectiveness. Two types of inconsistency are revealed. Test predictions are
made from all classifiers, while training focuses only on the current-task
classifier without holistic alignment, leading to classifier inconsistency.
Prompt inconsistency means that the prompt selected at test time may not
correspond to the one associated with the sample's task during training. In
this paper, we propose a novel
prompt-based method, Consistent Prompting (CPrompt), for more aligned training
and testing. Specifically, all existing classifiers are exposed to prompt
training, resulting in classifier consistency learning. In addition, prompt
consistency learning is proposed to enhance prediction robustness and boost
prompt selection accuracy. Our Consistent Prompting surpasses its prompt-based
counterparts and achieves state-of-the-art performance on multiple continual
learning benchmarks. Detailed analysis shows that improvements come from more
consistent training and testing.
Comment: Accepted by CVPR202
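The classifier-consistency idea above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the shapes, the linear per-task heads, and all variable names are illustrative assumptions. It contrasts task-local training (loss over only the current head's classes) with classifier-consistent training (loss over the concatenated logits of every head seen so far, matching how test-time predictions are made).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 tasks seen so far, 10 classes each, feature dim 32.
num_tasks, classes_per_task, dim = 3, 10, 32
# One linear classifier (weight matrix) per task, as in prompt-based methods.
heads = [rng.normal(size=(dim, classes_per_task)) for _ in range(num_tasks)]

def global_logits(feature):
    """Classifier-consistent scoring: concatenate logits from ALL task heads,
    matching how test-time predictions are made over every class seen so far."""
    return np.concatenate([feature @ W for W in heads])

def cross_entropy(logits, label):
    """Softmax cross-entropy over whatever label space the logits cover."""
    z = logits - logits.max()            # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

feature = rng.normal(size=dim)           # frozen-backbone feature, one sample
task_id, local_label = 2, 4              # sample drawn from current task 2
global_label = task_id * classes_per_task + local_label

# Task-local training (the inconsistent baseline): loss sees only head 2.
local_loss = cross_entropy(feature @ heads[task_id], local_label)
# Classifier-consistent training: loss sees all 30 classes, as at test time.
consistent_loss = cross_entropy(global_logits(feature), global_label)
```

Because the global softmax normalizes over a superset of the task-local classes while the target logit is unchanged, the consistent loss is always at least the local one; the extra margin is exactly the pressure that aligns the current task against all previous classifiers.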
On the Trustworthiness Landscape of State-of-the-art Generative Models: A Comprehensive Survey
Diffusion models and large language models have emerged as leading-edge
generative models and have sparked a revolutionary impact on various aspects of
human life. However, the practical implementation of these models has also
exposed inherent risks, highlighting their dual nature and raising concerns
regarding their trustworthiness. Despite the abundance of literature on this
subject, a comprehensive survey specifically delving into the intersection of
large-scale generative models and their trustworthiness remains largely absent.
To bridge this gap, this paper investigates both the long-standing and emerging
threats associated with these models across four fundamental dimensions:
privacy, security, fairness, and responsibility. In this way, we construct an
extensive map outlining the trustworthiness of these models, while also
providing practical recommendations and identifying future directions. These
efforts are crucial for promoting the trustworthy deployment of these models,
ultimately benefiting society as a whole.
Comment: draft version
Microwave-assisted synthesis of trans-4-nitrostilbene derivatives under solvent-free conditions
A general method for the synthesis of trans-4-nitrostilbenes has been developed. The trans-4-nitrostilbenes could be synthesized in good yields under microwave irradiation within 10 min through the Perkin reaction, using 4-nitrophenylacetic acid, benzaldehydes and pyrrolidine.
Cross-sectional optimization of cold-formed steel channels to Eurocode 3
Cold-formed steel structural systems are widely used in modern construction. However, identifying optimal cross-section geometries for cold-formed steel elements is a complex problem, since the strength of these members is controlled by combinations of local, distortional, and global buckling. This paper presents a procedure to obtain optimized steel channel cross-sections for use in compression or bending. A simple lipped C-shape is taken as a starting point, but the optimization process allows for the addition of double-fold (return) lips, inclined lips and triangular web stiffeners. The cross-sections are optimized with respect to their structural capacity, determined according to the relevant Eurocode (EN1993-1-3), using genetic algorithms. All plate slenderness limit values and all limits on the relative dimensions of the cross-sectional components, set by the Eurocode, are thereby taken into account as constraints on the optimization problem. The optimization for compression is carried out for different column lengths and includes the effects of the shift of the effective centroid induced by local buckling. Detailed finite element models are used to confirm the relative gains in capacity obtained through the optimization process.
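The constrained genetic-algorithm setup described above can be sketched in a few lines of Python. This is a toy illustration, not the paper's procedure: the capacity function is a placeholder proxy (the real study uses the EN1993-1-3 effective-width method), the slenderness and ratio limits are simplified stand-ins for the Eurocode clauses, and the fixed coil width, thickness, and bounds are assumptions.

```python
import random

random.seed(0)

# Hypothetical design variables for a lipped channel: web h, flange b, lip c (mm).
# The total developed length (coil width) is held fixed, as is common practice.
COIL, THICK = 400.0, 1.5                    # assumed coil width and thickness (mm)
BOUNDS = [(80, 250), (40, 120), (5, 40)]    # (h, b, c) bounds, illustrative only

def feasible(h, b, c):
    """Stand-ins for Eurocode-style geometric limits (EN1993-1-3 caps ratios
    such as b/t, c/t, h/t and c/b; the exact clause values are simplified)."""
    t = THICK
    return (b / t <= 60 and c / t <= 50 and h / t <= 500
            and 0.2 <= c / b <= 0.6
            and abs(h + 2 * b + 2 * c - COIL) < 1e-6)

def capacity(h, b, c):
    """Placeholder capacity proxy (NOT the effective-width calculation):
    rewards a deep web and stiffened edges, penalising slender flat parts."""
    return h * b * c / (1.0 + (b / THICK) ** 2 / 1000.0)

def random_design():
    # Sample flange and lip, then set the web so the coil width is used up.
    while True:
        b = random.uniform(*BOUNDS[1]); c = random.uniform(*BOUNDS[2])
        h = COIL - 2 * b - 2 * c
        if BOUNDS[0][0] <= h <= BOUNDS[0][1] and feasible(h, b, c):
            return (h, b, c)

def mutate(d):
    h, b, c = d
    b = min(max(b + random.gauss(0, 3), BOUNDS[1][0]), BOUNDS[1][1])
    c = min(max(c + random.gauss(0, 1), BOUNDS[2][0]), BOUNDS[2][1])
    h = COIL - 2 * b - 2 * c
    ok = BOUNDS[0][0] <= h <= BOUNDS[0][1] and feasible(h, b, c)
    return (h, b, c) if ok else d           # reject infeasible mutations

# A bare-bones genetic loop: truncation selection plus Gaussian mutation.
pop = [random_design() for _ in range(40)]
for _ in range(100):
    pop.sort(key=lambda d: capacity(*d), reverse=True)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]

best = max(pop, key=lambda d: capacity(*d))
```

Treating the Eurocode limits as hard feasibility checks (rejection) rather than penalty terms mirrors how the paper describes them: as constraints on the optimization problem rather than soft objectives.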
2-Hydroxy-1-methoxyxanthen-9-one monohydrate
In the title compound, C14H10O4·H2O, isolated from the roots of Calophyllum membranaceum, the xanthene ring system is almost planar (r.m.s. deviation = 0.008 Å). In the crystal structure, intermolecular O—H⋯O and O—H⋯(O,O) hydrogen bonds connect the molecules.
On the Robustness of Split Learning against Adversarial Attacks
Split learning enables collaborative deep learning model training while
preserving data privacy and model security by avoiding direct sharing of raw
data and model details (i.e., server and clients only hold partial sub-networks
and exchange intermediate computations). However, existing research has mainly
focused on examining its reliability for privacy protection, with little
investigation into model security. Specifically, by exploring full models,
attackers can launch adversarial attacks, and split learning can mitigate this
severe threat by disclosing only part of the model to untrusted servers. This
paper aims to evaluate the robustness of split learning against adversarial
attacks,
particularly in the most challenging setting where untrusted servers only have
access to the intermediate layers of the model. Existing adversarial attacks
mostly focus on the centralized setting rather than the collaborative setting;
thus, to better evaluate the robustness of split learning, we develop a
tailored attack called SPADV, which comprises two stages: 1) shadow model
training that addresses the issue of lacking part of the model and 2) local
adversarial attack that produces adversarial examples for evaluation. The
first stage requires only a small amount of unlabeled non-IID data, and, in
the second stage,
SPADV perturbs the intermediate output of natural samples to craft the
adversarial ones. The overall cost of the proposed attack process is relatively
low, yet the empirical attack effectiveness is significantly high,
demonstrating the surprising vulnerability of split learning to adversarial
attacks.
Comment: Accepted by ECAI 2023, camera-ready version
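The second stage described above, perturbing the intermediate output of a natural sample, can be illustrated with a toy NumPy sketch. This is a simplified analogue of the idea, not SPADV itself: the "networks" are single random linear layers, the attacker's shadow head stands in for the stage-1 shadow model, and a single FGSM-style step is taken in activation space. All dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy split: the client sub-network (first layer) produces the intermediate
# activation z; the attacker lacks the rest of the model and instead uses a
# shadow head trained in stage 1 (here simply a random stand-in).
D_IN, D_MID, N_CLASSES = 8, 6, 3
W1 = rng.normal(size=(D_IN, D_MID))             # client part: z = x @ W1
W_shadow = rng.normal(size=(D_MID, N_CLASSES))  # attacker's shadow of the rest

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def loss_and_grad_wrt_z(z, label):
    """Cross-entropy through the shadow head, and its gradient with respect to
    the intermediate activation z (the only quantity the attacker perturbs)."""
    p = softmax(z @ W_shadow)
    loss = -np.log(p[label])
    dlogits = p.copy()
    dlogits[label] -= 1.0                       # d(loss)/d(logits) for CE
    return loss, dlogits @ W_shadow.T           # chain rule back to z

# One FGSM-style step in activation space: perturb the intermediate output of
# a natural sample to raise the shadow model's loss on the true label.
x = rng.normal(size=D_IN)
z = x @ W1
label, eps = 0, 0.5
loss0, g = loss_and_grad_wrt_z(z, label)
z_adv = z + eps * np.sign(g)
loss1, _ = loss_and_grad_wrt_z(z_adv, label)
```

Because the attacker never needs the client's sub-network to take this step, only the intermediate activations and a shadow of the missing part, the sketch shows why exposing intermediate layers alone can still leave split learning open to adversarial manipulation.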