3D Human Activity Recognition with Reconfigurable Convolutional Neural Networks
Human activity understanding with 3D/depth sensors has received increasing
attention in multimedia processing and interaction. This work develops a
novel deep model for automatic activity recognition from RGB-D videos. We
represent each human activity as an ensemble of cubic-like video segments,
and learn to discover the temporal structures for a category of activities,
i.e. how the activities can be decomposed for classification. Our model can
be regarded as a structured deep architecture, as
it extends the convolutional neural networks (CNNs) by incorporating structure
alternatives. Specifically, we build a network consisting of 3D convolution
and max-pooling operators over the video segments, and introduce latent
variables in each convolutional layer that manipulate the activation of neurons.
Our model thus advances existing approaches in two aspects: (i) it acts
directly on the raw inputs (grayscale-depth data) to conduct recognition
instead of relying on hand-crafted features, and (ii) the model structure can
be dynamically adjusted accounting for the temporal variations of human
activities, i.e. the network configuration is allowed to be partially activated
during inference. For model training, we propose an EM-type optimization method
that iteratively (i) discovers the latent structure by determining the
decomposed actions for each training example, and (ii) learns the network
parameters by using the back-propagation algorithm. Our approach is validated
in challenging scenarios, and outperforms state-of-the-art methods. A large
human activity database of RGB-D videos is additionally presented.
Comment: This manuscript has 10 pages with 9 figures, and a preliminary
version was published at the ACM MM'14 conference.
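The EM-type training loop described above can be sketched in miniature. Everything below is an illustrative assumption rather than the authors' code: a toy per-segment loss and a plain gradient step stand in for the CNN and back-propagation, and an activity is decomposed into contiguous segments.

```python
# Toy sketch of an EM-type alternation between latent structure discovery
# and parameter learning (all names and the loss are hypothetical).

def decompositions(n_frames, k):
    """Enumerate ways to cut n_frames into k contiguous segment lengths."""
    if k == 1:
        yield [n_frames]
        return
    for first in range(1, n_frames - k + 2):
        for rest in decompositions(n_frames - first, k - 1):
            yield [first] + rest

def loss(video, split, w):
    """Toy loss: squared error between per-segment mean features and weights."""
    total, start = 0.0, 0
    for seg_len, wi in zip(split, w):
        seg = video[start:start + seg_len]
        mean = sum(seg) / len(seg)
        total += (mean - wi) ** 2
        start += seg_len
    return total

def em_train(videos, k=2, lr=0.1, iters=20):
    w = [0.0] * k  # stand-in for the network parameters
    for _ in range(iters):
        # (i) E-like step: pick the best latent decomposition per video.
        best = [min(decompositions(len(v), k), key=lambda s: loss(v, s, w))
                for v in videos]
        # (ii) M-like step: one gradient step on w (stand-in for backprop).
        for v, split in zip(videos, best):
            start = 0
            for j, seg_len in enumerate(split):
                seg = v[start:start + seg_len]
                mean = sum(seg) / len(seg)
                w[j] -= lr * 2 * (w[j] - mean)
                start += seg_len
    return w
```

The alternation mirrors the abstract's two steps: the latent decomposition is re-estimated under the current parameters, then the parameters are updated given the fixed decomposition.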
Text Coherence Analysis Based on Deep Neural Network
In this paper, we propose a novel deep coherence model (DCM) using a
convolutional neural network architecture to capture text coherence. The
text coherence problem is investigated from a new perspective: learning
sentence distributional representations and modeling text coherence
simultaneously. In particular, the model captures the interactions between
sentences by computing the similarities of their distributional
representations. Further, it can be easily trained in an end-to-end fashion.
The proposed model is evaluated on a standard Sentence Ordering task. The
experimental results demonstrate its effectiveness and promise in coherence
assessment, outperforming the state of the art by a wide margin.
Comment: 4 pages, 2 figures, CIKM 2017
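The core idea of scoring coherence from similarities between representations of adjacent sentences can be illustrated with a minimal sketch. The bag-of-words vectors below are a stand-in assumption for the learned distributional representations, and none of the names come from the paper.

```python
import math
from collections import Counter

def sentence_vector(sentence):
    # Bag-of-words stand-in for a learned distributional representation.
    return Counter(sentence.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def coherence_score(sentences):
    """Average similarity over adjacent sentence pairs."""
    vecs = [sentence_vector(s) for s in sentences]
    pairs = list(zip(vecs, vecs[1:]))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)
```

A text whose neighboring sentences share vocabulary scores higher than one interrupted by an unrelated sentence, which is the intuition the model learns end-to-end.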
Kernel Graph Convolutional Neural Networks
Graph kernels have been successfully applied to many graph classification
problems. Typically, a kernel is first designed, and then an SVM classifier is
trained based on the features defined implicitly by this kernel. This two-stage
approach decouples data representation from learning, which is suboptimal. On
the other hand, Convolutional Neural Networks (CNNs) have the capability to
learn their own features directly from the raw data during training.
Unfortunately, they cannot handle irregular data such as graphs. We address
this challenge by using graph kernels to embed meaningful local neighborhoods
of the graphs in a continuous vector space. A set of filters is then convolved
with these patches, pooled, and the output is then passed to a feedforward
network. With limited parameter tuning, our approach outperforms strong
baselines on 7 out of 10 benchmark datasets.
Comment: Accepted at ICANN '18
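The two-step recipe the abstract outlines, embedding local neighborhoods via a graph kernel and then treating the embeddings as patches, can be sketched with a toy degree-histogram kernel. The kernel and helper names below are assumptions for illustration, not the kernel used in the paper.

```python
# Graphs are adjacency dicts: node -> list of neighbor nodes.

def neighborhood(adj, node):
    """1-hop neighborhood patch around `node` (node plus its neighbors)."""
    return {node} | set(adj[node])

def patch_signature(adj, patch, max_degree=4):
    """Degree histogram of the induced subgraph: a simple kernel feature map."""
    hist = [0] * (max_degree + 1)
    for u in patch:
        deg = sum(1 for v in adj[u] if v in patch)
        hist[min(deg, max_degree)] += 1
    return hist

def kernel(sig_a, sig_b):
    """Linear kernel between two patch signatures."""
    return sum(a * b for a, b in zip(sig_a, sig_b))

def embed_graph(adj, anchors):
    """Embed each node's patch by its kernel values against anchor patches."""
    sigs = [patch_signature(adj, neighborhood(adj, n)) for n in adj]
    return [[kernel(s, a) for a in anchors] for s in sigs]
```

The resulting fixed-length vectors play the role of image patches: filters can be convolved over them and the pooled output passed to a feedforward network.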
CoupleNet: Coupling Global Structure with Local Parts for Object Detection
The region-based Convolutional Neural Network (CNN) detectors such as Faster
R-CNN or R-FCN have already shown promising results for object detection by
combining the region proposal subnetwork and the classification subnetwork
together. Although R-FCN achieves higher detection speed while maintaining
detection performance, the global structure information is ignored by the
position-sensitive score maps. To fully explore the local and global
properties, in this paper, we propose a novel fully convolutional network,
named as CoupleNet, to couple the global structure with local parts for object
detection. Specifically, the object proposals obtained by the Region Proposal
Network (RPN) are fed into the coupling module, which consists of two
branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to
capture the local part information of the object, while the other employs the
RoI pooling to encode the global and contextual information. Next, we design
different coupling strategies and normalization schemes to make full use of
the complementary advantages of the global and local branches. Extensive
experiments demonstrate the effectiveness of our approach. We achieve
state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7%
on VOC07, 80.4% on VOC12, and 34.4% on COCO. Code will be made publicly
available.
Comment: Accepted by ICCV 2017
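The coupling of the two branch outputs can be sketched as follows. The normalize-then-sum strategy and all helper names are illustrative assumptions; the paper compares several coupling and normalization variants rather than prescribing this one.

```python
import math

def l2_normalize(scores):
    """Scale a per-class score vector to unit L2 norm."""
    norm = math.sqrt(sum(s * s for s in scores))
    return [s / norm for s in scores] if norm else scores

def couple(local_scores, global_scores, strategy="sum"):
    """Combine per-class scores from the local (PSRoI) and global (RoI) branches."""
    u = l2_normalize(local_scores)
    v = l2_normalize(global_scores)
    if strategy == "sum":
        return [a + b for a, b in zip(u, v)]
    if strategy == "max":
        return [max(a, b) for a, b in zip(u, v)]
    raise ValueError("unknown coupling strategy: " + strategy)
```

Normalizing before combining keeps one branch from dominating when the two branches produce scores on different scales.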