Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition and Remote Sensing Scene Classification
Designing discriminative, powerful texture features robust to realistic
imaging conditions is a challenging computer vision problem with many
applications, including material recognition and analysis of satellite or
aerial imagery. In the past, most texture description approaches were based on
dense, orderless statistical distributions of local features. However, most
recent approaches to texture recognition and remote sensing scene
classification are based on Convolutional Neural Networks (CNNs). The d facto
practice when learning these CNN models is to use RGB patches as input with
training performed on large amounts of labeled data (ImageNet). In this paper,
we show that Binary Patterns encoded CNN models, codenamed TEX-Nets, trained
using mapped coded images with explicit texture information provide
complementary information to the standard RGB deep models. Additionally, two
deep architectures, namely early and late fusion, are investigated to combine
the texture and color information. To the best of our knowledge, we are the
first to investigate Binary Patterns encoded CNNs and different deep network
fusion architectures for texture recognition and remote sensing scene
classification. We perform comprehensive experiments on four texture
recognition datasets and four remote sensing scene classification benchmarks:
UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with
7 categories and the recently introduced large scale aerial image dataset (AID)
with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary
information to the standard RGB deep model of the same network architecture. Our
late fusion TEX-Net architecture always improves the overall performance
compared to the standard RGB network on both recognition problems. Our final
combination outperforms the state-of-the-art without employing fine-tuning or
ensemble of RGB network architectures.

Comment: To appear in ISPRS Journal of Photogrammetry and Remote Sensing
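The "mapped coded images" above are built from local binary patterns (LBP), which encode each pixel by thresholding its neighbourhood against the centre value. A minimal sketch of the basic 3x3 LBP coding (function name, bit ordering, and the use of `>=` as the comparison are illustrative assumptions, not the paper's exact mapping):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 local binary pattern codes for a grayscale image.

    Each interior pixel is compared with its 8 neighbours; every neighbour
    that is >= the centre contributes one bit to an 8-bit code.
    (Illustrative sketch; TEX-Nets' exact LBP mapping may differ.)
    """
    img = np.asarray(img, dtype=np.float64)
    centre = img[1:-1, 1:-1]
    # Neighbour offsets in clockwise order, each paired with a bit weight.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes
```

The resulting code image (optionally remapped to a smaller label set) can then be fed to a CNN in place of, or alongside, the RGB channels, which is the idea behind the early- and late-fusion variants described above.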
Convolutional Neural Network on Three Orthogonal Planes for Dynamic Texture Classification
Dynamic Textures (DTs) are sequences of images of moving scenes, such as smoke,
vegetation and fire, that exhibit certain stationarity properties in time. The
analysis of DT is important for recognition, segmentation, synthesis or
retrieval for a range of applications including surveillance, medical imaging
and remote sensing. Deep learning methods have shown impressive results and are
now the new state of the art for a wide range of computer vision tasks
including image and video recognition and segmentation. In particular,
Convolutional Neural Networks (CNNs) have recently proven to be well suited for
texture analysis with a design similar to a filter bank approach. In this
paper, we develop a new approach to DT analysis based on a CNN method applied
on three orthogonal planes xy, xt and yt. We train CNNs on spatial frames
and temporal slices extracted from the DT sequences and combine their outputs
to obtain a competitive DT classifier. Our results on a wide range of commonly
used DT classification benchmark datasets prove the robustness of our approach.
Significant improvement of the state of the art is shown on the larger
datasets.

Comment: 19 pages, 10 figures
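The three orthogonal planes of a (T, H, W) sequence can be extracted by simple tensor slicing, and the per-plane classifier outputs combined by late fusion. A small sketch under assumed names (the averaging rule is one plausible combination, not necessarily the paper's exact one):

```python
import numpy as np

def orthogonal_plane_slices(video):
    """Extract the three orthogonal-plane views of a (T, H, W) sequence.

    xy: ordinary spatial frames; xt: one slice per row index, showing how
    each image row evolves over time; yt: one slice per column index.
    """
    video = np.asarray(video)
    t, h, w = video.shape
    xy = [video[i] for i in range(t)]        # T frames of shape (H, W)
    xt = [video[:, j, :] for j in range(h)]  # H slices of shape (T, W)
    yt = [video[:, :, k] for k in range(w)]  # W slices of shape (T, H)
    return xy, xt, yt

def late_fuse(prob_xy, prob_xt, prob_yt):
    """Illustrative late fusion: average the per-plane class probabilities."""
    return (np.asarray(prob_xy) + np.asarray(prob_xt)
            + np.asarray(prob_yt)) / 3.0
```

Each list of slices would be scored by the corresponding CNN, and the averaged probabilities give the final DT label.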
Classification of Time-Series Images Using Deep Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have achieved great success in image
recognition tasks by automatically learning a hierarchical feature
representation from raw data. While the majority of Time-Series Classification
(TSC) literature is focused on 1D signals, this paper uses Recurrence Plots
(RP) to transform time-series into 2D texture images and then takes advantage of
the deep CNN classifier. Image representation of time-series introduces
different feature types that are not available for 1D signals, and therefore
TSC can be treated as a texture image recognition task. A CNN model also allows
learning different levels of representations together with a classifier,
jointly and automatically. Therefore, using RP and CNN in a unified framework
is expected to boost the recognition rate of TSC. Experimental results on the
UCR time-series classification archive demonstrate competitive accuracy of the
proposed approach, compared not only to the existing deep architectures, but
also to state-of-the-art TSC algorithms.

Comment: The 10th International Conference on Machine Vision (ICMV 2017)
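A recurrence plot turns a 1D series into a 2D image by marking which pairs of time points have similar values. A minimal sketch of the standard construction (the threshold `eps` and the unthresholded fallback are assumptions for illustration):

```python
import numpy as np

def recurrence_plot(series, eps=None):
    """Turn a 1D series into a 2D recurrence image.

    R[i, j] = 1 when |x_i - x_j| <= eps, else 0.  If eps is None, the
    unthresholded distance matrix is returned instead, which can serve
    as a grayscale texture image for the CNN.
    """
    x = np.asarray(series, dtype=np.float64)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distances
    if eps is None:
        return dist
    return (dist <= eps).astype(np.uint8)
```

The resulting symmetric image carries texture-like structure (diagonal lines for repeated subsequences, blocks for plateaus), which is what makes a texture-oriented CNN classifier applicable.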
FoodNet: Recognizing Foods Using Ensemble of Deep Networks
In this work we propose a methodology for an automatic food classification
system which recognizes the contents of the meal from the images of the food.
We developed a multi-layered deep convolutional neural network (CNN)
architecture that takes advantage of features from other deep networks and
improves efficiency. Numerous classical handcrafted features and approaches
are explored, among which CNNs are chosen as the best performing features.
Networks are trained and fine-tuned using preprocessed images and the filter
outputs are fused to achieve higher accuracy. Experimental results on the
largest real-world food recognition database ETH Food-101 and newly contributed
Indian food image database demonstrate the effectiveness of the proposed
methodology as compared to many other benchmark deep learned CNN frameworks.

Comment: 5 pages, 3 figures, 3 tables, IEEE Signal Processing Letters
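The fusion of per-network outputs described above can be sketched as a weighted average of the networks' class probabilities. The function names and the optional `weights` parameter are assumptions for illustration; the paper's exact fusion rule may differ:

```python
import numpy as np

def softmax(z):
    """Convert a vector of logits to class probabilities."""
    z = np.asarray(z, dtype=np.float64)
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(logit_list, weights=None):
    """Combine several networks' outputs by averaging their probabilities.

    weights: optional per-network mixing coefficients; equal weighting
    is used when omitted.  (Illustrative ensemble fusion sketch.)
    """
    probs = np.stack([softmax(l) for l in logit_list])
    if weights is None:
        weights = np.ones(len(logit_list)) / len(logit_list)
    weights = np.asarray(weights, dtype=np.float64)
    return np.tensordot(weights, probs, axes=1)
```

Averaging probabilities rather than raw logits keeps each network's contribution on a comparable scale, which is one common reason ensembles of heterogeneous CNNs fuse at the probability level.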