4 research outputs found

    Subspace regularizers for few-shot class incremental learning

    Full text link
    Few-shot class incremental learning---the problem of updating a trained classifier to discriminate among an expanded set of classes with limited labeled data---is a key challenge for machine learning systems deployed in non-stationary environments. Existing approaches to the problem rely on complex model architectures and training procedures that are difficult to tune and re-use. In this paper, we present an extremely simple approach that enables the use of ordinary logistic regression classifiers for few-shot incremental learning. The key to this approach is a new family of subspace regularization schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes. When combined with pretrained convolutional feature extractors, logistic regression models trained with subspace regularization outperform specialized, state-of-the-art approaches to few-shot incremental image classification by up to 23% on the miniImageNet dataset. Because of its simplicity, subspace regularization can be straightforwardly configured to incorporate additional background information about the new classes (including class names and descriptions specified in natural language); this offers additional control over the trade-off between existing and new classes. Our results show that simple geometric regularization of class representations offers an effective tool for continual learning.
    University of California, Berkeley. https://openreview.net/forum?id=boJy41J-tnQ (First author draft)
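
    One way to sketch the geometric idea described above is shown below, in Julia (the only language named elsewhere on this page). The function and variable names are illustrative assumptions, not the authors' released code: the penalty measures how far a new class's weight vector falls outside the span of the existing classes' weights.

```julia
using LinearAlgebra

# Hypothetical names, for illustration only.
# W_old: d × K matrix whose columns are the weight vectors of the K existing classes.
# w_new: weight vector being fit for a new class from few labeled examples.
function subspace_penalty(W_old::AbstractMatrix, w_new::AbstractVector)
    c = W_old \ w_new              # least-squares coordinates of w_new in the old subspace
    residual = w_new - W_old * c   # component of w_new outside span(W_old)
    return sum(abs2, residual)     # squared distance to the subspace
end

# Example: 5 existing classes in a 64-dimensional feature space.
W_old = randn(64, 5)
w_new = randn(64)
penalty = subspace_penalty(W_old, w_new)
```

    In a training loop, a term like this would presumably be weighted by a regularization coefficient and added to the logistic-regression objective for the new classes' weights.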

    BACKpropagation through BACK substitution with a BACKslash

    Full text link
    We present a linear algebra formulation of backpropagation which allows the calculation of gradients by using a generically written ``backslash'' or Gaussian elimination on triangular systems of equations. Generally the matrix elements are operators. This paper has three contributions: 1. It is of intellectual value to replace traditional treatments of automatic differentiation with a (left acting) operator theoretic, graph-based approach. 2. Operators can be readily placed in matrices in software in programming languages such as Julia as an implementation option. 3. We introduce a novel notation, the ``transpose dot'' operator $\{\}^{T_\bullet}$, that allows the reversal of operators. We demonstrate the elegance of the operators approach in a suitable programming language consisting of generic linear algebra operators such as Julia \cite{bezanson2017julia}, and that it is possible to realize this abstraction in code. Our implementation shows how generic linear algebra can allow operators as elements of matrices, and without rewriting any code, the software carries through to completion giving the correct answer.
    Comment: 21 pages
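
    As a quick illustration of the triangular-system view, here is a toy scalar Julia sketch (not the paper's operator-valued implementation): the reverse-mode sweep for a three-step chain is obtained by applying backslash to an upper-triangular adjoint system. In the paper's formulation the matrix entries become operators rather than scalars, and the abstract reports that the same generic code carries through unchanged.

```julia
using LinearAlgebra

# Toy scalar chain x -> y1 -> y2 -> y3, where y3 plays the role of the loss.
x  = 0.3
y1 = sin(x);   d1 = cos(x)    # dy1/dx
y2 = y1^2;     d2 = 2y1       # dy2/dy1
y3 = exp(y2);  d3 = exp(y2)   # dy3/dy2 (y3 itself is the loss value)

# The adjoints g_i = dy3/dy_i satisfy (I - D) g = e3, with the local
# derivatives on the superdiagonal of D; backslash on the upper-triangular
# system performs the back-substitution sweep of reverse-mode differentiation.
D = [0.0  d2   0.0;
     0.0  0.0  d3;
     0.0  0.0  0.0]
g = UpperTriangular(I - D) \ [0.0, 0.0, 1.0]
grad_x = d1 * g[1]

# Agrees with the hand-derived derivative of exp(sin(x)^2).
@assert isapprox(grad_x, 2sin(x) * cos(x) * exp(sin(x)^2))
```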

    Compositional Models for Few Shot Sequence Learning

    No full text
    Flexible neural sequence models outperform grammar- and automaton-based counterparts on a variety of tasks. However, neural models perform poorly in settings requiring compositional generalization beyond the training data, particularly to rare or unseen subsequences. Past work has found symbolic scaffolding (e.g. grammars or automata) essential in these settings. We describe two simpler and more general modeling approaches that enable a large category of compositional generalizations without appeal to latent symbolic structure. The first is a data augmentation scheme called R&R, built from two components: recombination of original training examples via a prototype-based generative model, and resampling of the generated examples to encourage extrapolation. Training an ordinary neural sequence model on a dataset augmented with recombined and resampled examples significantly improves generalization in two language processing problems, the SCAN instruction-following benchmark and the SIGMORPHON 2018 morphological analysis task, where R&R enables learning of new constructions and tenses from as few as eight initial examples. The second is a lexical translation mechanism for neural sequence modeling. Previous work shows that many failures of systematic generalization arise from neural models' inability to disentangle lexical phenomena from syntactic ones. To address this, we augment neural decoders with a lexical translation mechanism that generalizes existing copy mechanisms to incorporate learned, decontextualized, token-level translation rules. We describe how to initialize this mechanism using a variety of lexicon learning algorithms, and show that it improves systematic generalization on a diverse set of sequence modeling tasks drawn from cognitive science, logical semantics, and machine translation.
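
    To make the recombine-and-resample recipe concrete, here is a deliberately toy Julia sketch. The actual R&R method uses a learned, prototype-based generative model for recombination; in this sketch a hand-written primitive-substitution rule stands in for it, and all data and names are hypothetical, so treat it as an illustration of the data flow only.

```julia
# Tiny SCAN-like training set (hypothetical).
train = ["jump" => "JUMP",
         "walk" => "WALK",
         "jump twice" => "JUMP JUMP"]

primitives = Dict("jump" => "JUMP", "walk" => "WALK")

# Recombination: splice each known primitive into each observed template.
function recombine(examples, primitives)
    out = Pair{String,String}[]
    for (src, tgt) in examples, (p_in, p_out) in primitives, (q_in, q_out) in primitives
        if p_in != q_in && occursin(p_in, src)
            push!(out, replace(src, p_in => q_in) => replace(tgt, p_out => q_out))
        end
    end
    return out
end

# Resampling: keep only candidates whose outputs never appear in training,
# biasing the augmented data toward extrapolation rather than memorization.
resample(cands, train) = [c for c in cands if last(c) ∉ last.(train)]

augmented = resample(recombine(train, primitives), train)
# "walk twice" => "WALK WALK" is synthesized even though it was never observed;
# an ordinary sequence model is then trained on the original plus augmented data.
```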

    Lexicon Learning for Few Shot Sequence Modeling

    No full text