A Comparative Analysis of Fine-Tuned Llama 3-8B and DistilBERT for News Classification
This thesis presents a comparative analysis of the Llama 3-8B and DistilBERT language models for news classification across 26 classes. Using a balanced dataset, we employed Low-Rank Adaptation (LoRA) for fine-tuning Llama 3-8B and traditional fine-tuning for DistilBERT. The study evaluates the performance, efficiency, and practical applicability of these models in categorizing news articles. Our experiments reveal that Llama 3-8B consistently outperforms DistilBERT in overall accuracy, achieving around 70% compared to DistilBERT's 60%. However, both models demonstrate competitive capabilities and exhibit distinct strengths across different news categories. The analysis uncovers significant variability in category-specific performance across multiple experimental runs, underscoring the importance of robust evaluation procedures in model assessment.
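The low-rank adaptation the abstract mentions can be sketched in a few lines; the dimensions, rank, and scaling convention below are illustrative assumptions, not the thesis's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; real LoRA targets attention projections in the LLM.
d_in, d_out, r, alpha = 8, 8, 2, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero init: the adapter delta starts at 0

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapted layer reproduces the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Because only `A` and `B` are trained, the number of trainable parameters is `r * (d_in + d_out)` per layer rather than `d_in * d_out`, which is what makes fine-tuning an 8B-parameter model tractable.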
Direct syngas-to-fuel: integration of Fischer-Tropsch synthesis and hydrocracking in micro-structured reactors
Vehicle-Vehicle Energy Interaction Converter of Electric Vehicles: A Disturbance Observer Based Sliding Mode Control Algorithm.
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning
Parameter-efficient fine-tuning (PEFT) has emerged as an effective method for
adapting pre-trained language models to various tasks efficiently. Recently,
there has been a growing interest in transferring knowledge from one or
multiple tasks to the downstream target task to achieve performance
improvements. However, current approaches typically either train adapters on
individual tasks or distill shared knowledge from source tasks, failing to
fully exploit task-specific knowledge and the correlation between source and
target tasks. To overcome these limitations, we propose PEMT, a novel
parameter-efficient fine-tuning framework based on multi-task transfer
learning. PEMT extends the mixture-of-experts (MoE) framework to capture the
transferable knowledge as a weighted combination of adapters trained on source
tasks. These weights are determined by a gated unit, measuring the correlation
between the target and each source task using task description prompt vectors.
To fully exploit the task-specific knowledge, we also propose the Task Sparsity
Loss to improve the sparsity of the gated unit. We conduct experiments on a
broad range of tasks over 17 datasets. The experimental results demonstrate our
PEMT yields stable improvements over full fine-tuning, and state-of-the-art
PEFT and knowledge transferring methods on various tasks. The results highlight
the effectiveness of our method which is capable of sufficiently exploiting the
knowledge and correlation features across multiple tasks.
Comment: Accepted to Findings of the ACL 202
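The gating mechanism described above (prompt-based task correlation weighting a mixture of source-task adapters) can be sketched as follows. The exact form of the Task Sparsity Loss is not given here, so an entropy penalty on the gate distribution stands in as one plausible sparsity-encouraging choice; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_src = 4, 3  # hidden size, number of source-task adapters (hypothetical)

adapters = [rng.standard_normal((d, d)) * 0.1 for _ in range(n_src)]  # one adapter per source task
task_prompts = rng.standard_normal((n_src, d))   # source-task description prompt vectors
target_prompt = rng.standard_normal(d)           # target-task description prompt vector

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def gate_weights():
    # Correlation between the target and each source task, via prompt dot products.
    return softmax(task_prompts @ target_prompt)

def moe_adapter(x):
    # Transferable knowledge as a weighted combination of source-task adapters.
    w = gate_weights()
    return sum(wi * (Ai @ x) for wi, Ai in zip(w, adapters))

def sparsity_loss(w, eps=1e-9):
    # Illustrative stand-in for the Task Sparsity Loss: minimizing gate entropy
    # pushes the gate toward a few strongly-weighted source tasks.
    return -np.sum(w * np.log(w + eps))

x = rng.standard_normal(d)
print(gate_weights(), moe_adapter(x))
```

Minimizing the entropy term alongside the task loss concentrates the gate on the most correlated source tasks, which matches the stated goal of exploiting task-specific rather than uniformly shared knowledge.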
Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies
In light of the burgeoning success of reinforcement learning (RL) in diverse
real-world applications, considerable focus has been directed towards ensuring
RL policies are robust to adversarial attacks during test time. Current
approaches largely revolve around solving a minimax problem to prepare for
potential worst-case scenarios. While effective against strong attacks, these
methods often compromise performance in the absence of attacks or the presence
of only weak attacks. To address this, we study policy robustness under the
well-accepted state-adversarial attack model, extending our focus beyond only
worst-case attacks. We first formalize this task at test time as a regret
minimization problem and establish its intrinsic hardness in achieving
sublinear regret when the baseline policy is from a general continuous policy
class. This finding prompts us to \textit{refine} the baseline policy
class prior to test time, aiming for efficient adaptation within a finite
policy class $\tilde{\Pi}$, which can resort to an adversarial bandit
subroutine. In light of the importance of a small, finite $\tilde{\Pi}$, we
propose a novel training-time algorithm to iteratively discover
\textit{non-dominated policies}, forming a near-optimal and minimal
$\tilde{\Pi}$, thereby ensuring both robustness and test-time efficiency.
Empirical validation on MuJoCo corroborates the superiority of our approach
in terms of natural and robust performance, as well as adaptability to various
attack scenarios.
Comment: International Conference on Learning Representations (ICLR) 2024, spotlight
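Test-time adaptation over a finite policy class via an adversarial bandit subroutine, as described above, can be sketched with a standard EXP3-style learner; the class size, rewards, and rates below are toy assumptions, not the paper's setup:

```python
import math
import random

random.seed(0)

K = 3          # size of the refined finite policy class (hypothetical)
gamma = 0.1    # exploration rate
weights = [1.0] * K

def probs():
    # EXP3 mixing: exponential weights plus uniform exploration.
    total = sum(weights)
    return [(1 - gamma) * w / total + gamma / K for w in weights]

def select():
    p = probs()
    r, acc = random.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if r <= acc:
            return i, p
    return K - 1, p

def update(i, reward, p):
    # Importance-weighted reward estimate; rewards assumed to lie in [0, 1].
    weights[i] *= math.exp(gamma * (reward / p[i]) / K)

# Toy interaction: policy 2 happens to be best against the (unknown) attack.
for t in range(2000):
    i, p = select()
    reward = 0.9 if i == 2 else 0.4
    update(i, reward, p)
```

After enough rounds the selection distribution concentrates on the policy that performs best against whatever attack is actually present, which is the regret-minimization behavior the abstract targets, and it is only feasible because $\tilde{\Pi}$ is small and finite.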
