Attributed Graph Classification via Deep Graph Convolutional Neural Networks
From social networks to biological networks, graphs are a natural way to represent a diverse set of real-world data. This research presents the attributed graph convolutional neural network with a pooling layer (AGCP for short), a novel end-to-end deep neural network model that captures the higher-order latent attributes of weighted, labeled, undirected, attributed graphs of arbitrary size. The architecture of AGCP is an efficient variant of the convolutional neural network (CNN): a linear filter function convolves over the fixed topological structure of a graph to learn its local and global attributes. Convolution is followed by a pooling layer that coarsens the graph using information gain while preserving the global structure of the original input graph. Meanwhile, advances in high-throughput technologies for next-generation sequencing have enabled machine learning research to acquire and extract knowledge from biological networks. We apply AGCP to three bioinformatics datasets: ENZYMES, D&D, and GINA, a graph dataset of gene interaction networks with genomic mutation attributes on the vertices. In several experiments on these datasets, we demonstrate that AGCP outperforms previously proposed models in classification accuracy by a considerable margin.
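The propagation step described above can be illustrated with a minimal sketch: node features are mixed along the fixed graph topology and then projected by a learned weight matrix. The row normalization, self-loops, and toy values below are assumptions for illustration, not the paper's exact layer.

```python
import numpy as np

def graph_conv(adj, feats, weights):
    """One linear graph-convolution step: normalized propagation, then projection."""
    adj_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # node degrees
    propagated = (adj_hat / deg) @ feats      # average each node with its neighbors
    return propagated @ weights               # learned linear filter

adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)      # a 3-node path graph
feats = np.eye(3)                             # one-hot vertex attributes
weights = np.ones((3, 2))                     # toy weight matrix
out = graph_conv(adj, feats, weights)         # shape (3, 2)
```

A pooling layer would then merge vertices of the coarsened graph; which vertices to merge is what AGCP selects via information gain.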
PANDA: Pose Aligned Networks for Deep Attribute Modeling
We propose a method for inferring human attributes (such as gender, hair
style, clothes style, expression, action) from images of people under large
variation of viewpoint, pose, appearance, articulation and occlusion.
Convolutional Neural Nets (CNN) have been shown to perform very well on large
scale object recognition problems. In the context of attribute classification,
however, the signal is often subtle and it may cover only a small part of the
image, while the image is dominated by the effects of pose and viewpoint.
Discounting for pose variation would require training on very large labeled
datasets which are not presently available. Part-based models, such as poselets
and DPM, have been shown to perform well for this problem, but they are limited
by shallow low-level features. We propose a new method which combines
part-based models and deep learning by training pose-normalized CNNs. We show
substantial improvement vs. state-of-the-art methods on challenging attribute
classification tasks in unconstrained settings. Experiments confirm that our
method outperforms both the best part-based methods on this problem and
conventional CNNs trained on the full bounding box of the person.
Comment: 8 pages
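The pose-normalized pipeline can be sketched roughly as follows: crop part regions around detected keypoints, featurize each pose-aligned crop with the same feature function, and concatenate the part features for an attribute classifier. The part names, crop size, and the stand-in featurizer below are illustrative assumptions, not PANDA's actual poselet detectors or CNN.

```python
import numpy as np

def crop(image, center, size=8):
    """Crop a size x size patch around a keypoint (a stand-in for a poselet region)."""
    y, x = center
    h = size // 2
    return image[max(0, y - h):y + h, max(0, x - h):x + h]

def featurize(patch):
    # stand-in for a part CNN: simple statistics of the pose-aligned patch
    return np.array([patch.mean(), patch.std()])

image = np.random.default_rng(0).random((32, 32))
keypoints = {"head": (6, 16), "torso": (18, 16)}   # assumed part detections
features = np.concatenate(
    [featurize(crop(image, kp)) for kp in keypoints.values()]
)  # pose-normalized part features, fed to a linear attribute classifier
```

The key design choice this mirrors is that each part is featurized in a canonical pose frame, so the classifier does not have to discount pose variation itself.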
End-to-End Localization and Ranking for Relative Attributes
We propose an end-to-end deep convolutional network to simultaneously
localize and rank relative visual attributes, given only weakly-supervised
pairwise image comparisons. Unlike previous methods, our network jointly learns
the attribute's features, localization, and ranker. The localization module of
our network discovers the most informative image region for the attribute,
which is then used by the ranking module to learn a ranking model of the
attribute. Our end-to-end framework also runs significantly faster than
previous methods. We show state-of-the-art ranking results
on various relative attribute datasets, and our qualitative localization
results clearly demonstrate our network's ability to learn meaningful image
patches.
Comment: Appears in European Conference on Computer Vision (ECCV), 201
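The weakly-supervised training signal here is a pairwise comparison: which of two images shows the attribute more strongly. A toy version of such a pairwise ranking objective (a RankNet-style logistic loss, used as an illustrative assumption rather than the paper's exact loss) looks like this:

```python
import numpy as np

def pairwise_rank_loss(score_a, score_b, a_stronger):
    """Logistic loss that prefers score_a > score_b when a_stronger is True."""
    margin = score_a - score_b if a_stronger else score_b - score_a
    return np.log1p(np.exp(-margin))  # small when the pair is ordered correctly

# a correctly ordered pair incurs a small loss; a swapped pair a larger one
good = pairwise_rank_loss(2.0, 0.5, a_stronger=True)
bad = pairwise_rank_loss(0.5, 2.0, a_stronger=True)
```

Because the loss depends only on the ordering of scores, the localization and scoring modules can be trained jointly from pairwise labels alone, with no per-image attribute strengths.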
Irregular Convolutional Neural Networks
Convolutional kernels are basic and vital components of deep Convolutional
Neural Networks (CNN). In this paper, we equip convolutional kernels with shape
attributes to generate the deep Irregular Convolutional Neural Networks (ICNN).
Compared to traditional CNNs, which apply regular, fixed-shape convolutional
kernels, our approach trains irregular kernel shapes to better fit the
geometric variations of input features. In other words, shapes are learnable
parameters in addition to weights. The kernel shapes and weights are learned
simultaneously during end-to-end training with the standard back-propagation
algorithm. Experiments for semantic segmentation are implemented to validate
the effectiveness of our proposed ICNN.
Comment: 7 pages, 5 figures, 3 tables
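The idea of a shape-parameterized kernel can be illustrated by giving each kernel tap its own spatial offset, so the sampling pattern need not be a square grid. The offsets below are hand-picked to show the mechanism; in ICNN they would be learned jointly with the weights by back-propagation, so treat this as a sketch under that assumption.

```python
import numpy as np

def irregular_conv_at(image, y, x, offsets, weights):
    """Apply one irregular kernel centered at (y, x): each tap samples at its own offset."""
    return sum(w * image[y + dy, x + dx]
               for (dy, dx), w in zip(offsets, weights))

image = np.arange(25, dtype=float).reshape(5, 5)
offsets = [(-1, 0), (0, -1), (0, 0), (0, 1), (2, 0)]  # a non-square sampling pattern
weights = [0.2] * 5
value = irregular_conv_at(image, 2, 2, offsets, weights)
```

A regular 3x3 kernel is the special case where the offsets form the fixed grid {-1, 0, 1} x {-1, 0, 1}; making the offsets trainable is what lets the kernel shape adapt to the geometry of the input.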
Exploiting Local Features from Deep Networks for Image Retrieval
Deep convolutional neural networks have been successfully applied to image
classification tasks. When these same networks have been applied to image
retrieval, the assumption has been made that the last layers would give the
best performance, as they do in classification. We show that for instance-level
image retrieval, lower layers often perform better than the last layers in
convolutional neural networks. We present an approach for extracting
convolutional features from different layers of the networks, and adopt VLAD
encoding to encode features into a single vector for each image. We investigate
the effect of different layers and scales of input images on the performance of
convolutional features using the recent deep networks OxfordNet and GoogLeNet.
Experiments demonstrate that intermediate layers or higher layers with finer
scales produce better results for image retrieval, compared to the last layer.
When using compressed 128-D VLAD descriptors, our method obtains
state-of-the-art results and outperforms other VLAD and CNN based approaches on
two out of three test datasets. Our work provides guidance for transferring
deep networks trained on image classification to image retrieval tasks.
Comment: CVPR DeepVision Workshop 201
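The VLAD step the abstract relies on can be sketched compactly: assign each local convolutional descriptor to its nearest codeword, accumulate the residuals per codeword, and L2-normalize the flattened result. The codebook size and the random descriptors below are illustrative assumptions (in practice the codebook comes from k-means over training descriptors).

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """Encode a set of local descriptors into one VLAD vector."""
    k, d = codebook.shape
    # nearest-codeword assignment for every local descriptor
    dists = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    assign = dists.argmin(axis=1)
    enc = np.zeros((k, d))
    for i, c in enumerate(assign):
        enc[c] += descriptors[i] - codebook[c]   # accumulate residuals
    enc = enc.ravel()
    norm = np.linalg.norm(enc)
    return enc / norm if norm > 0 else enc

rng = np.random.default_rng(0)
descriptors = rng.random((50, 4))   # e.g. conv-layer activations, one per location
codebook = rng.random((8, 4))       # assumed k-means cluster centers
vec = vlad_encode(descriptors, codebook)  # single fixed-length image vector
```

Because the output length is fixed (k times d) regardless of how many local features an image produces, the same encoder works across layers and input scales, which is what makes the layer-wise comparison in the paper possible.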