A LoRA-Based Method for Efficient Model Parameter Reduction

Abstract

This paper presents an efficient model parameter reduction technique applicable across a wide range of neural network architectures, including convolutional neural networks (CNNs) and transformer-based models. Motivated by LoRA (low-rank adaptation), originally proposed for efficiently fine-tuning transformer architectures, the technique improves parameter efficiency through a generalized approach that can be integrated into virtually any network architecture. Extensive experiments with state-of-the-art CNN and transformer models demonstrate the robustness and versatility of the proposed technique: it achieves the same level of accuracy while using almost half as many parameters. These results highlight the potential of the method as a universal optimization strategy for modern deep learning frameworks, offering a valuable tool for practitioners and researchers seeking to lower the computational load and memory usage of deep models during inference.
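
The abstract gives no implementation details, but the core idea it builds on, LoRA-style low-rank factorization of weight matrices, can be illustrated concretely. The sketch below (in PyTorch; the class name LowRankLinear and the rank argument r are illustrative choices, not taken from the paper) replaces a dense d_in × d_out weight matrix W with the product of two rank-r factors B and A, cutting the parameter count from d_in * d_out to r * (d_in + d_out). This is a minimal sketch of the general principle, not the authors' actual method.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Dense layer whose weight is factorized as W ~ B @ A with rank r.

    Parameter count drops from d_out * d_in to r * (d_in + d_out),
    a large saving whenever r << min(d_in, d_out).
    NOTE: illustrative sketch of LoRA-style factorization, not the
    paper's actual architecture or training procedure.
    """
    def __init__(self, d_in: int, d_out: int, r: int, bias: bool = True):
        super().__init__()
        self.A = nn.Linear(d_in, r, bias=False)  # rank-r projection, r x d_in
        self.B = nn.Linear(r, d_out, bias=bias)  # expansion back, d_out x r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.B(self.A(x))

# Example: a 1024 -> 1024 layer (1,048,576 weights) replaced by a
# rank-64 factorization (64*1024 + 1024*64 = 131,072 weights, ~8x fewer).
layer = LowRankLinear(1024, 1024, r=64)
x = torch.randn(8, 1024)
print(layer(x).shape)  # torch.Size([8, 1024])
```

The same principle extends to convolutions, for example by mapping to r intermediate channels and back with a 1×1 convolution; the rank r then becomes the knob that trades accuracy against the roughly 2× parameter reduction the abstract reports.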

This paper was published in TOBB ETU GCRIS Database.
