Sparse Radial Sampling LBP for Writer Identification
In this paper we present the use of Sparse Radial Sampling Local Binary
Patterns (SRS-LBP), a variant of Local Binary Patterns (LBP), for
text-as-texture classification. By adapting and extending the standard LBP
operator to the particularities of text, we obtain a generic text-as-texture
classification scheme and apply it to writer identification. In experiments on
the CVL and ICDAR 2013 datasets, the proposed feature set demonstrates
state-of-the-art (SOA) performance. Among SOA methods, ours is the only one
based on dense extraction of a single local feature descriptor. This makes it
fast and applicable at the earliest stages of a document image analysis (DIA)
pipeline without the need for segmentation, binarization, or extraction of
multiple features.
Comment: Submitted to the 13th International Conference on Document Analysis and Recognition (ICDAR 2015).
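As a rough sketch of the text-as-texture idea, the snippet below pools dense LBP codes over a page image into normalized histograms computed at several radii and concatenates them into one descriptor. It uses the standard uniform LBP operator from scikit-image rather than the paper's sparse radial sampling variant; the radii, bin counts, and function names are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: text-as-texture page descriptors from dense LBP codes.
# Standard uniform LBP (scikit-image), NOT the paper's SRS-LBP variant.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(page, points=8, radius=1):
    """Dense LBP codes over the whole page, pooled into one histogram."""
    codes = local_binary_pattern(page, P=points, R=radius, method="uniform")
    n_bins = points + 2  # 'uniform' LBP yields P + 2 distinct codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)  # L1-normalize

def page_descriptor(page, radii=(1, 2, 3, 5, 8)):
    """Sparse sampling of radii: concatenate one histogram per radius."""
    return np.concatenate([lbp_histogram(page, radius=r) for r in radii])

# Toy usage on a synthetic grayscale "page"; a real pipeline would load a
# scanned page and compare descriptors with e.g. chi-squared distance to
# retrieve the nearest gallery page and its writer identity.
page = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)
print(page_descriptor(page).shape)  # (50,) = 5 radii * 10 bins
```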
Domain-adaptive deep network compression
Deep Neural Networks trained on large datasets can be easily transferred to
new domains with far fewer labeled examples by a process called fine-tuning.
This has the advantage that representations learned in the large source domain
can be exploited on smaller target domains. However, networks designed to be
optimal for the source task are often prohibitively large for the target task.
In this work we address the compression of networks after domain transfer.
We focus on compression algorithms based on low-rank matrix decomposition.
Existing methods base compression solely on learned network weights and ignore
the statistics of network activations. We show that domain transfer leads to
large shifts in network activations and that it is desirable to take this into
account when compressing. We demonstrate that considering activation statistics
when compressing weights leads to a rank-constrained regression problem with a
closed-form solution. Because our method takes the target domain into account,
it can remove redundancy in the weights more effectively. Experiments show
that our Domain Adaptive Low Rank (DALR) method significantly outperforms
existing low-rank compression techniques. With our approach, the fc6 layer of
VGG19 can be compressed more than 4x further than with truncated SVD alone,
with little or no loss in accuracy. When applied to domain-transferred
networks, it allows compression down to only 5-20% of the original number of
parameters with only a minor drop in performance.
Comment: Accepted at ICCV 2017.
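As a concrete illustration of why activation statistics matter, the hedged numpy sketch below solves the rank-constrained regression min ||W X - W_hat X||_F over rank-k W_hat: when the activation matrix X has full row rank, the closed-form solution truncates the SVD of the outputs Y = W X instead of the SVD of the weights W. The layer sizes, toy data, and names are illustrative assumptions; this shows the general construction rather than the exact DALR formulation.

```python
# Hedged sketch: activation-aware low-rank compression of one linear layer.
# Closed form of  min_{rank(W_hat)<=k} ||W X - W_hat X||_F  for full-row-rank X:
# truncate the SVD of the outputs Y = W X, not the weights W.
import numpy as np

def activation_aware_lowrank(W, X, k):
    """Return factors (A, B) with rank-k W_hat = A @ B."""
    Y = W @ X                                    # layer outputs on target data
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    A = U[:, :k]                                 # (out_dim, k) output basis
    B = A.T @ W                                  # (k, in_dim); W_hat = A A^T W
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))              # pretrained layer weights
# Anisotropic target-domain activations: a few input directions dominate.
X = rng.standard_normal((512, 1000)) * np.geomspace(1.0, 1e-2, 512)[:, None]

A, B = activation_aware_lowrank(W, X, k=32)

# Baseline: plain truncated SVD of W, which ignores activation statistics.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_svd = U[:, :32] @ np.diag(s[:32]) @ Vt[:32]

print("activation-aware output error:", np.linalg.norm(W @ X - (A @ B) @ X))
print("truncated-SVD output error:   ", np.linalg.norm(W @ X - W_svd @ X))
```

In a network, the factors A and B replace the original layer as two smaller linear layers, which is where the parameter savings come from.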