
Learning Compressible Models

By Yi Zhang, Jeff Schneider and Artur Dubrawski

Abstract

In this paper, we study the combination of compression and ℓ1-norm regularization in a machine learning context: learning compressible models. By including a compression operation in the ℓ1 regularization, the assumption of model sparsity is relaxed to compressibility: model coefficients are compressed before being penalized, and sparsity is achieved in a compressed domain rather than the original space. We focus on the design of different compression operations, by which we can encode various compressibility assumptions and inductive biases, e.g., piecewise local smoothness, compacted energy in the frequency domain, and semantic correlation. We show that the use of a compression operation provides an opportunity to leverage auxiliary information from various sources, e.g., domain knowledge, coding theories, and unlabeled data. We conduct extensive experiments on brain-computer interfacing, handwritten character recognition, and text classification. Empirical results show clear improvements in prediction performance from including compression in ℓ1 regularization. We also analyze the learned model coefficients under appropriate compressibility assumptions, which further demonstrates the advantages of learning compressible models instead of sparse models.
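As a rough illustration of the formulation the abstract describes (the notation below is ours, not necessarily the authors'), a compressible model can be learned by penalizing the compressed coefficients rather than the coefficients themselves:

    \hat{\beta} \;=\; \arg\min_{\beta}\; L(\beta) \;+\; \lambda \,\lVert P\beta \rVert_{1}

where L is an empirical loss, \beta is the coefficient vector, P is a compression operator encoding the chosen compressibility assumption (e.g., a differencing matrix for piecewise local smoothness, or a DCT matrix for energy compaction in the frequency domain), and \lambda > 0 controls the strength of regularization. Taking P to be the identity recovers ordinary ℓ1 (lasso-style) regularization, i.e., plain sparsity in the original coefficient space.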

Topics: linear models, ℓ1 regularization, sparse models
Year: 2011
OAI identifier: oai:CiteSeerX.psu:10.1.1.187.1829
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text, but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://www.siam.org/proceeding... (external link)