3,419 research outputs found
Deciding How to Decide: Dynamic Routing in Artificial Neural Networks
We propose and systematically evaluate three strategies for training
dynamically-routed artificial neural networks: graphs of learned
transformations through which different input signals may take different paths.
Though some approaches have advantages over others, the resulting networks are
often qualitatively similar. We find that, in dynamically-routed networks
trained to classify images, layers and branches become specialized to process
distinct categories of images. Additionally, given a fixed computational
budget, dynamically-routed networks tend to perform better than comparable
statically-routed networks.
Comment: ICML 2017. Code at https://github.com/MasonMcGill/multipath-nn Video
abstract at https://youtu.be/NHQsDaycwy
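The abstract describes these networks as graphs of learned transformations in which a per-input routing decision selects which path a signal takes. Below is a minimal PyTorch sketch of that idea; the two branches, the sigmoid gate, and the 0.5 threshold are illustrative assumptions rather than the paper's architecture (the linked repository contains the actual implementation).

```python
import torch
import torch.nn as nn

class RoutedBlock(nn.Module):
    """Minimal illustration of dynamic routing: a learned gate decides, per
    example, which of two branches processes the features. Branch shapes and
    the hard threshold are assumptions, not the paper's design."""

    def __init__(self, dim):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())          # cheap path
        self.branch_b = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                      nn.Linear(dim, dim), nn.ReLU())          # expensive path
        self.gate = nn.Linear(dim, 1)  # the routing decision is itself learned

    def forward(self, x):
        p = torch.sigmoid(self.gate(x))      # probability of taking branch_b
        take_b = (p > 0.5).float()           # hard routing at inference time
        # A soft mixture p * branch_b(x) + (1 - p) * branch_a(x) keeps the gate
        # differentiable during training; the paper compares several such
        # training strategies.
        return take_b * self.branch_b(x) + (1 - take_b) * self.branch_a(x)
```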
VideoCapsuleNet: A Simplified Network for Action Detection
The recent advances in Deep Convolutional Neural Networks (DCNNs) have shown
extremely good results for video human action classification; however, action
detection is still a challenging problem. The current action detection
approaches follow a complex pipeline which involves multiple tasks such as tube
proposals, optical flow, and tube classification. In this work, we present a
more elegant solution for action detection based on the recently developed
capsule network. We propose a 3D capsule network for videos, called
VideoCapsuleNet: a unified network for action detection which can jointly
perform pixel-wise action segmentation along with action classification. The
proposed network is a generalization of capsule network from 2D to 3D, which
takes a sequence of video frames as input. The 3D generalization drastically
increases the number of capsules in the network, making capsule routing
computationally expensive. We introduce capsule-pooling in the convolutional
capsule layer to address this issue which makes the voting algorithm tractable.
The routing-by-agreement in the network inherently models the action
representations, and various action characteristics are captured by the
predicted capsules. This inspired us to utilize the capsules for action
localization: the class-specific capsules predicted by the network are used
to determine a pixel-wise localization of actions. The localization is further
improved by parameterized skip connections with the convolutional capsule
layers and the network is trained end-to-end with a classification as well as
localization loss. The proposed network achieves state-of-the-art performance on
multiple action detection datasets including UCF-Sports, J-HMDB, and UCF-101
(24 classes) with an impressive ~20% improvement on UCF-101 and ~15%
improvement on J-HMDB in terms of v-mAP scores.
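The efficiency device named in the abstract is capsule-pooling in the convolutional capsule layers: capsules inside a receptive field are reduced to a single capsule before voting, so routing cost no longer grows with the receptive-field volume. The sketch below shows one way to realize this by averaging capsule poses; the tensor layout and pooling parameters are assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def capsule_pooling(poses, kernel=3, stride=1):
    """Sketch of capsule-pooling for a convolutional capsule layer: capsules of
    the same type within a receptive field are averaged into one capsule, so a
    single vote per type and output location is computed instead of one per
    input capsule. Assumed layout:
    poses: (batch, num_types, pose_dim, T, H, W) for a 3D (video) feature map."""
    b, n, d, t, h, w = poses.shape
    x = poses.reshape(b, n * d, t, h, w)
    # Mean over the spatio-temporal receptive field = one averaged capsule
    # per capsule type and output location.
    pooled = F.avg_pool3d(x, kernel_size=kernel, stride=stride)
    return pooled.reshape(b, n, d, *pooled.shape[-3:])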
SECaps: A Sequence Enhanced Capsule Model for Charge Prediction
Automatic charge prediction aims to predict appropriate final charges
according to the fact descriptions for a given criminal case. Automatic charge
prediction plays a critical role in assisting judges and lawyers to improve the
efficiency of legal decisions, and thus has received much attention.
Nevertheless, most existing works on automatic charge prediction perform
adequately on high-frequency charges but are not yet capable of predicting
few-shot charges with limited cases. In this paper, we propose a Sequence
Enhanced Capsule model, dubbed SECaps, to address this problem.
Specifically, following the work of capsule networks, we propose the seq-caps
layer, which considers sequence information and spatial information of legal
texts simultaneously. Then we design an attention residual unit, which provides
auxiliary information for charge prediction. In addition, our SECaps model
introduces focal loss, which relieves the problem of imbalanced charges.
Compared with state-of-the-art methods, our SECaps model obtains 4.5% and 6.4%
absolute improvements in Macro F1 on Criminal-S and Criminal-L respectively.
The experimental results consistently demonstrate the superiority and
competitiveness of our proposed model.
Comment: 13 pages, 3 figures, 5 tables
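Of the components listed, focal loss is the standard remedy the abstract cites for imbalanced charge frequencies. A minimal multi-class version is sketched below for reference; gamma = 2.0 is the commonly used default (Lin et al., 2017) rather than a value reported in the paper, and the shapes in the usage comment are illustrative.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Minimal multi-class focal loss sketch: down-weights well-classified
    examples so that rare (few-shot) charges contribute more to the gradient."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of the true charge
    pt = log_pt.exp()
    # (1 - p_t)^gamma shrinks the loss of confident, correct predictions.
    return (-(1.0 - pt) ** gamma * log_pt).mean()

# Example: 4 cases, 10 candidate charges (shapes are illustrative).
# loss = focal_loss(torch.randn(4, 10), torch.tensor([1, 3, 0, 7]))
```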