Scale Selective Extended Local Binary Pattern for Texture Classification
In this paper, we propose a new texture descriptor, scale selective extended
local binary pattern (SSELBP), to characterize texture images with scale
variations. We first utilize multi-scale extended local binary patterns (ELBP)
with rotation-invariant and uniform mappings to capture robust local micro- and
macro-features. Then, we build a scale space using Gaussian filters and
calculate the histogram of multi-scale ELBPs for the image at each scale.
Finally, we select the maximum values from the corresponding bins of
multi-scale ELBP histograms at different scales as scale-invariant features. A
comprehensive evaluation on public texture databases (KTH-TIPS and UMD) shows
that the proposed SSELBP has high accuracy comparable to state-of-the-art
texture descriptors on gray-scale-, rotation-, and scale-invariant texture
classification but uses only one-third of the feature dimension.Comment: IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), 201
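The pipeline in the abstract can be sketched in a few steps: compute rotation-invariant uniform LBP histograms at several radii, repeat at each level of a Gaussian scale space, and take the per-bin maximum across scales. The sketch below, in NumPy only, illustrates that idea under stated simplifications; the paper's full ELBP also encodes intensity and radial-difference components, which are omitted here, and the radii, sigmas, and function names are illustrative choices, not the authors' exact configuration.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing (simple stand-in for a scale-space level)."""
    if sigma == 0:
        return img.astype(float)
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                              img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def lbp_riu2(img, P=8, R=1):
    """Rotation-invariant uniform LBP (riu2) histogram, nearest-neighbour sampling.
    Uniform codes (<= 2 bit transitions) map to their 1-count; the rest share
    one bin, giving P + 2 bins in total."""
    H, W = img.shape
    center = img[R:H - R, R:W - R]
    bits = []
    for p in range(P):
        theta = 2 * np.pi * p / P
        dy = int(round(-R * np.sin(theta)))
        dx = int(round(R * np.cos(theta)))
        neigh = img[R + dy:H - R + dy, R + dx:W - R + dx]
        bits.append((neigh >= center).astype(np.int32))
    bits = np.stack(bits)                                   # (P, h, w)
    transitions = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
    ones = bits.sum(axis=0)
    codes = np.where(transitions <= 2, ones, P + 1)
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

def sselbp(img, radii=(1, 2), sigmas=(0.0, 1.0, 2.0)):
    """Scale-selective feature: per-bin max over the scale space of the
    concatenated multi-radius LBP histograms."""
    per_scale = []
    for sigma in sigmas:
        sm = gaussian_blur(img, sigma)
        per_scale.append(np.concatenate([lbp_riu2(sm, R=r) for r in radii]))
    return np.max(np.stack(per_scale), axis=0)
```

Because the maximum is taken bin-wise across scales, the feature length stays at `len(radii) * (P + 2)` regardless of how many scale levels are searched, which is how the descriptor gains scale invariance without growing the dimension.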
Hybrid networks: Improving deep learning networks via integrating two views of images
© 2018, Springer Nature Switzerland AG. The principal component analysis network (PCANet) is an unsupervised, parsimonious deep network that uses principal components as the filters in its layers. It creates an amalgamated view of the data by transforming it into column vectors, which destroys the spatial structure while obtaining the principal components. In this research, we first propose a tensor-factorization based method, referred to as the Tensor Factorization Network (TFNet). The TFNet retains the spatial structure of the data by preserving its individual modes, providing a minutiae view of the data while extracting matrix factors. However, each of the above methods is restricted to a single representation of the data and thus incurs information loss. To alleviate this loss, we propose the Hybrid Network (HybridNet), which simultaneously learns filters from both views of the data. Comprehensive results on multiple benchmark datasets validate the superiority of integrating both views of the data in our proposed HybridNet.
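The "amalgamated view" the abstract attributes to PCANet comes from its filter-learning stage: all k×k patches are vectorized, mean-removed, and the top principal components of the resulting matrix become convolution filters. The NumPy sketch below illustrates that stage only; the function name and parameter defaults are illustrative, and the full PCANet/TFNet/HybridNet pipelines (multiple stages, binarization, block histograms, mode-wise tensor factors) are not reproduced here.

```python
import numpy as np

def pca_filters(images, k=5, n_filters=8):
    """PCANet-style stage-1 filter learning (sketch).

    Every k*k patch is flattened into a column vector -- this is the step
    that discards spatial structure ('amalgamated view'); TFNet would
    instead factor the patches mode-wise to keep their 2-D layout.
    """
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())        # remove per-patch mean
    X = np.stack(patches)                           # (num_patches, k*k)
    # Top right singular vectors of X = principal components of the
    # patch covariance; reshape each back into a k*k filter.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, k, k)
```

The learned filters are orthonormal by construction (rows of `Vt`), and convolving the input with them yields the feature maps that the next stage of such a network would process.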