27,965 research outputs found

    Meta-model Pruning

    Large and complex meta-models such as those of UML and its profiles are growing due to the modelling and interoperability needs of numerous stakeholders. The complexity of such meta-models has led to coining of the term meta-muddle. Individual users often exercise only a small view of a meta-muddle for tasks ranging from model creation to construction of model transformations. What is the effective meta-model that represents this view? We present a flexible meta-model pruning algorithm and tool to extract effective meta-models from a meta-muddle. We use the notion of model typing for meta-models to verify that the algorithm generates a super-type of the large meta-model representing the meta-muddle. This implies that all programs written using the effective meta-model will work for the meta-muddle, hence preserving backward compatibility. All instances of the effective meta-model are also instances of the meta-muddle. We illustrate how pruning the original UML meta-model produces different effective meta-models.
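    The pruning idea in this abstract can be illustrated with a small sketch: starting from a set of required classes, keep every class transitively reachable through supertypes and property types, and drop the rest, so that instances of the effective meta-model remain instances of the original. This is only an assumed, simplified reading of the approach, not the authors' tool; the class and property names are hypothetical.

```python
# Illustrative sketch of meta-model pruning (assumption, not the authors' implementation).
from dataclasses import dataclass, field

@dataclass
class MetaClass:
    name: str
    supertypes: list = field(default_factory=list)      # names of parent meta-classes
    property_types: list = field(default_factory=list)  # names of referenced meta-classes

def prune(metamodel: dict, required: set) -> dict:
    """Keep only the classes reachable from `required`; the result is the effective meta-model."""
    keep, worklist = set(), list(required)
    while worklist:
        name = worklist.pop()
        if name in keep or name not in metamodel:
            continue
        keep.add(name)
        cls = metamodel[name]
        worklist.extend(cls.supertypes + cls.property_types)
    return {n: metamodel[n] for n in keep}

# Toy "UML-like" meta-model pruned down to what a class-diagram view needs.
mm = {
    "Classifier": MetaClass("Classifier"),
    "Class":      MetaClass("Class", supertypes=["Classifier"], property_types=["Property"]),
    "Property":   MetaClass("Property", property_types=["Classifier"]),
    "UseCase":    MetaClass("UseCase", supertypes=["Classifier"]),
}
print(sorted(prune(mm, {"Class"})))  # ['Class', 'Classifier', 'Property']
```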

    Real-Time Object Tracking via Meta-Learning: Efficient Model Adaptation and One-Shot Channel Pruning

    We propose a novel meta-learning framework for real-time object tracking with efficient model adaptation and channel pruning. Given an object tracker, our framework learns to fine-tune its model parameters in only a few iterations of gradient descent during tracking while pruning its network channels using the target ground truth at the first frame. Such a learning problem is formulated as a meta-learning task, where a meta-tracker is trained by updating its meta-parameters for initial weights, learning rates, and pruning masks through carefully designed tracking simulations. The integrated meta-tracker greatly improves tracking performance by accelerating the convergence of online learning and reducing the cost of feature computation. Experimental evaluation on standard datasets demonstrates its outstanding accuracy and speed compared to state-of-the-art methods. Comment: 9 pages, 5 figures, AAAI 2020 accepted
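    As a rough illustration of the adaptation step described above, the sketch below fine-tunes a tracker for a few gradient steps with per-parameter-tensor learning rates and then applies a channel pruning mask derived at the first frame. It is an assumed PyTorch-style outline, not the paper's implementation; `meta_lrs`, `prune_masks`, and the MSE tracking loss are placeholders.

```python
# Hedged sketch of few-step adaptation plus one-shot channel pruning (not the paper's code).
import torch
import torch.nn.functional as F

def adapt_and_prune(model, meta_lrs, prune_masks, frame, target, steps=3):
    """Fine-tune `model` on the first frame for a few steps, then prune channels."""
    params = list(model.parameters())
    for _ in range(steps):
        loss = F.mse_loss(model(frame), target)          # placeholder tracking loss
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g, lr in zip(params, grads, meta_lrs):
                p -= lr * g                               # meta-learned step size per tensor
    with torch.no_grad():
        for conv, mask in prune_masks.items():            # mask: (out_channels,) of 0/1
            conv.weight *= mask.view(-1, 1, 1, 1)         # zero out pruned channels
    return model
```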

    Investigating Evaluation Measures in Ant Colony Algorithms for Learning Decision Tree Classifiers

    Ant-Tree-Miner is a decision tree induction algorithm that is based on the Ant Colony Optimization (ACO) meta-heuristic. Ant-Tree-Miner-M is a recently introduced extension of Ant-Tree-Miner that learns multi-tree classification models. A multi-tree model consists of multiple decision trees, one for each class value, where each class-based decision tree is responsible for discriminating between its class value and all other values present in the class domain (one vs. all). In this paper, we investigate the use of 10 different classification quality evaluation measures in Ant-Tree-Miner-M, which are used for both candidate model evaluation and model pruning. Our experimental results, using 40 popular benchmark datasets, identify several quality functions that substantially improve on the simple Accuracy quality function that was previously used in Ant-Tree-Miner-M.
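    To make the role of such evaluation measures concrete, the snippet below contrasts simple accuracy with the F-measure on a binary (one-vs-all) confusion matrix; both are the kind of quality function that could drive candidate evaluation and pruning. It is an illustrative sketch and does not reproduce the paper's actual set of 10 measures.

```python
# Two example quality functions over a binary one-vs-all confusion matrix (illustrative only).

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def f_measure(tp, fp, tn, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# On an imbalanced one-vs-all split, accuracy can look high while the minority
# class is poorly modelled; the F-measure exposes that difference.
print(accuracy(tp=5, fp=2, tn=90, fn=3))   # 0.95
print(f_measure(tp=5, fp=2, tn=90, fn=3))  # ~0.667
```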

    PruMUX: Augmenting Data Multiplexing with Model Compression

    As language models increase in size by the day, methods for efficient inference are critical to leveraging their capabilities for various applications. Prior work has investigated techniques like model pruning, knowledge distillation, and data multiplexing to increase model throughput without sacrificing accuracy. In this paper, we combine two such methods -- structured pruning and data multiplexing -- to compound the speedup gains obtained by either method. Our approach, PruMUX, obtains up to 7.5-29.5X throughput improvement over the BERT-base model with accuracy thresholds from 80% to 74%. We further study various combinations of parameters (such as sparsity and multiplexing factor) in the two techniques to provide a comprehensive analysis of the tradeoff between accuracy and throughput in the resulting models. We then propose Auto-PruMUX, a meta-level model that can predict high-performance parameters for pruning and multiplexing given a desired accuracy loss budget, providing a practical method to leverage the combination effectively. Comment: Published at Findings of the Association for Computational Linguistics (ACL 2023).
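    The Auto-PruMUX idea of choosing pruning and multiplexing parameters under an accuracy budget can be sketched as a simple selection over measured or predicted configurations, as below. The numbers are made-up placeholders and the selection rule is an assumption, not the paper's learned performance model.

```python
# Hedged sketch: pick the fastest (sparsity, multiplexing) configuration whose
# accuracy drop stays within a budget. All numbers are fabricated placeholders.

configs = [
    # (sparsity, multiplex_factor, accuracy, throughput_speedup)
    (0.60, 2, 0.83, 4.1),
    (0.80, 2, 0.81, 6.3),
    (0.80, 5, 0.77, 12.8),
    (0.90, 10, 0.72, 24.0),
]

def pick(configs, baseline_acc, loss_budget):
    ok = [c for c in configs if baseline_acc - c[2] <= loss_budget]
    return max(ok, key=lambda c: c[3]) if ok else None

print(pick(configs, baseline_acc=0.84, loss_budget=0.08))  # -> (0.8, 5, 0.77, 12.8)
```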