RL-MD: A Novel Reinforcement Learning Approach for DNA Motif Discovery
The extraction of sequence patterns from a collection of functionally linked,
unlabeled DNA sequences is known as DNA motif discovery, a key task in
computational biology. Several deep learning-based techniques have recently
been introduced to address this problem, but these algorithms cannot be used in
real-world situations because they require labeled data. Here, we present
RL-MD, a novel reinforcement learning-based approach to the DNA motif discovery
task. RL-MD takes unlabeled data as input, employs a relative information-based
method to evaluate each proposed motif, and uses these continuous evaluation
results as the reward. Experiments show that RL-MD can identify high-quality
motifs in real-world data.
Comment: This paper is accepted by DSAA 2022, the 9th IEEE International
Conference on Data Science and Advanced Analytics.
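As a rough illustration of the kind of relative information-based evaluation the abstract describes, the sketch below scores a candidate motif by the relative entropy of its position weight matrix against background nucleotide frequencies and treats that score as the reward. The function names, the pseudocount, and the uniform background are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a relative-information reward for candidate DNA motifs.
# Assumption: the reward is the relative entropy (information content) of the
# position weight matrix built from the proposed motif occurrences, measured
# against background nucleotide frequencies. Names are illustrative only.
import numpy as np

BASES = "ACGT"

def pwm_from_occurrences(occurrences, pseudocount=0.5):
    """Build a position weight matrix (rows sum to 1) from aligned motif hits."""
    length = len(occurrences[0])
    counts = np.full((length, 4), pseudocount)
    for seq in occurrences:
        for i, base in enumerate(seq):
            counts[i, BASES.index(base)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def relative_information_reward(occurrences, background=(0.25, 0.25, 0.25, 0.25)):
    """Reward = sum over positions of KL(pwm_column || background), in bits."""
    pwm = pwm_from_occurrences(occurrences)
    bg = np.asarray(background)
    return float(np.sum(pwm * np.log2(pwm / bg)))

# Example: three aligned occurrences of a hypothetical 6-mer motif.
print(relative_information_reward(["TATAAT", "TATTAT", "TACAAT"]))
```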
FedET: A Communication-Efficient Federated Class-Incremental Learning Framework Based on Enhanced Transformer
Federated Learning (FL) has attracted wide attention because it enables
decentralized learning while preserving data privacy. However, most existing
methods unrealistically assume that the classes encountered by local clients
are fixed over time; once new classes are learned, this assumption makes
catastrophic forgetting of old classes significantly more severe. Moreover,
limited communication budgets make it challenging to use large-scale models in
FL, which hurts prediction accuracy. To address these challenges, we propose a
novel framework, Federated Enhanced Transformer (FedET), which simultaneously
achieves high accuracy and low communication cost. Specifically, FedET uses the
Enhancer, a tiny module, to absorb and communicate new knowledge, and applies
pre-trained Transformers combined with different Enhancers to ensure high
precision on various tasks. To address local forgetting caused by the new
classes of new tasks and global forgetting brought by non-i.i.d.
(non-independent and identically distributed) class imbalance across local
clients, we propose an Enhancer distillation method to rebalance old and new
knowledge and alleviate the non-i.i.d. problem. Experimental results
demonstrate that FedET's average accuracy on representative benchmark datasets
is 14.1% higher than that of the state-of-the-art method, while FedET saves 90%
of the communication cost compared with the previous method.
Comment: Accepted by the 2023 International Joint Conference on Artificial
Intelligence (IJCAI 2023).
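To make the Enhancer idea concrete, here is a minimal sketch assuming the Enhancer is a small bottleneck adapter attached to a frozen pre-trained Transformer block, with only the adapter parameters exchanged with the server. The module structure, dimensions, and helper names are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of an adapter-style "Enhancer" on top of a frozen Transformer
# layer. Assumption: the Enhancer is a small bottleneck MLP whose parameters are
# the only ones trained locally and communicated to the server; this mirrors the
# idea of a tiny module absorbing new knowledge, but the exact architecture
# here is illustrative, not the published design.
import torch
import torch.nn as nn

class Enhancer(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adapter: pass-through plus a low-dimensional correction.
        return x + self.up(self.act(self.down(x)))

class EnhancedBlock(nn.Module):
    def __init__(self, frozen_block: nn.Module, dim: int):
        super().__init__()
        self.block = frozen_block
        for p in self.block.parameters():
            p.requires_grad = False          # pre-trained weights stay fixed
        self.enhancer = Enhancer(dim)        # only these weights are trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.enhancer(self.block(x))

def communicated_state(model: nn.Module) -> dict:
    """Only Enhancer parameters leave the client, keeping communication small."""
    return {k: v for k, v in model.state_dict().items() if "enhancer" in k}

# Example: wrap one frozen Transformer encoder layer (dim = 256 here).
layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
wrapped = EnhancedBlock(layer, dim=256)
out = wrapped(torch.randn(2, 10, 256))
```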
INCPrompt: Task-Aware Incremental Prompting for Rehearsal-Free Class-Incremental Learning
This paper introduces INCPrompt, an innovative continual learning solution
that effectively addresses catastrophic forgetting. INCPrompt's key innovation
lies in its use of adaptive key-learner and task-aware prompts that capture
task-relevant information. This unique combination encapsulates general
knowledge across tasks and encodes task-specific knowledge. Our comprehensive
evaluation across multiple continual learning benchmarks demonstrates
INCPrompt's superiority over existing algorithms, showing its effectiveness in
mitigating catastrophic forgetting while maintaining high performance. These
results highlight the significant impact of task-aware incremental prompting on
continual learning performance.
Comment: Accepted by the 49th IEEE International Conference on Acoustics,
Speech, and Signal Processing (ICASSP 2024).
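The sketch below illustrates one plausible form of the key-learner and task-aware prompts mentioned in the abstract: each task owns a learned key and a set of prompt tokens, and the key closest (by cosine similarity) to a pooled query feature from the frozen backbone decides which prompts are prepended to the token sequence. The shapes, names, and selection rule are assumptions for illustration, not the published method.

```python
# Minimal sketch of key-learner / task-aware prompt selection in the spirit of
# prompt-based rehearsal-free continual learning. Assumption: each task owns a
# learned key and a set of prompt tokens; the key closest (by cosine similarity)
# to the frozen backbone's query feature decides which prompts are prepended.
# Shapes, names, and the selection rule are illustrative, not the exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskPromptPool(nn.Module):
    def __init__(self, num_tasks: int, prompt_len: int, dim: int):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_tasks, dim))              # key-learner
        self.prompts = nn.Parameter(torch.randn(num_tasks, prompt_len, dim))

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (batch, dim) pooled feature from the frozen backbone.
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
        task_idx = sim.argmax(dim=1)                    # pick the best-matching task key
        return self.prompts[task_idx]                   # (batch, prompt_len, dim)

# Usage: prepend the selected prompts to the patch/token embeddings before
# running them through the (frozen) Transformer encoder.
pool = TaskPromptPool(num_tasks=5, prompt_len=8, dim=768)
tokens = torch.randn(4, 196, 768)                       # hypothetical ViT patch tokens
query = tokens.mean(dim=1)                              # stand-in for a pooled query feature
prompted = torch.cat([pool(query), tokens], dim=1)      # (4, 8 + 196, 768)
```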