209 research outputs found
Knowledge Restore and Transfer for Multi-label Class-Incremental Learning
Current class-incremental learning research mainly focuses on single-label
classification tasks while multi-label class-incremental learning (MLCIL) with
more practical application scenarios is rarely studied. Although there have
been many anti-forgetting methods to solve the problem of catastrophic
forgetting in class-incremental learning, these methods have difficulty in
solving the MLCIL problem due to label absence and information dilution. In
this paper, we propose a knowledge restore and transfer (KRT) framework for
MLCIL, which includes a dynamic pseudo-label (DPL) module to restore the old
class knowledge and an incremental cross-attention(ICA) module to save
session-specific knowledge and transfer old class knowledge to the new model
sufficiently. Besides, we propose a token loss to jointly optimize the
incremental cross-attention module. Experimental results on MS-COCO and PASCAL
VOC datasets demonstrate the effectiveness of our method for improving
recognition performance and mitigating forgetting on multi-label
class-incremental learning tasks
Assessing Prompt Injection Risks in 200+ Custom GPTs
In the rapidly evolving landscape of artificial intelligence, ChatGPT has
been widely used in various applications. The new feature: customization of
ChatGPT models by users to cater to specific needs has opened new frontiers in
AI utility. However, this study reveals a significant security vulnerability
inherent in these user-customized GPTs: prompt injection attacks. Through
comprehensive testing of over 200 user-designed GPT models via adversarial
prompts, we demonstrate that these systems are susceptible to prompt
injections. Through prompt injection, an adversary can not only extract the
customized system prompts but also access the uploaded files. This paper
provides a first-hand analysis of the prompt injection, alongside the
evaluation of the possible mitigation of such attacks. Our findings underscore
the urgent need for robust security frameworks in the design and deployment of
customizable GPT models. The intent of this paper is to raise awareness and
prompt action in the AI community, ensuring that the benefits of GPT
customization do not come at the cost of compromised security and privacy
Topology-Preserving Automatic Labeling of Coronary Arteries via Anatomy-aware Connection Classifier
Automatic labeling of coronary arteries is an essential task in the practical
diagnosis process of cardiovascular diseases. For experienced radiologists, the
anatomically predetermined connections are important for labeling the artery
segments accurately, while this prior knowledge is barely explored in previous
studies. In this paper, we present a new framework called TopoLab which
incorporates the anatomical connections into the network design explicitly.
Specifically, the strategies of intra-segment feature aggregation and
inter-segment feature interaction are introduced for hierarchical segment
feature extraction. Moreover, we propose the anatomy-aware connection
classifier to enable classification for each connected segment pair, which
effectively exploits the prior topology among the arteries with different
categories. To validate the effectiveness of our method, we contribute
high-quality annotations of artery labeling to the public orCaScore dataset.
The experimental results on both the orCaScore dataset and an in-house dataset
show that our TopoLab has achieved state-of-the-art performance.Comment: Accepted by MICCAI 202
- …