Multitask Learning for Network Traffic Classification
Traffic classification has various applications in today's Internet, from
resource allocation, billing and QoS purposes in ISPs to firewall and malware
detection in clients. Classical machine learning algorithms and deep learning
models have been widely used to solve the traffic classification task. However,
training such models requires a large amount of labeled data. Labeling data is
often the most difficult and time-consuming process in building a classifier.
To address this challenge, we reformulate traffic classification as a
multi-task learning problem in which the bandwidth requirement and duration of a
flow are predicted along with the traffic class. The motivation of this
approach is twofold: First, bandwidth requirement and duration are useful in
many applications, including routing, resource allocation, and QoS
provisioning. Second, these two values can be obtained from each flow easily
without the need for human labeling or capturing flows in a controlled and
isolated environment. We show that with a large amount of easily obtainable
data samples for bandwidth and duration prediction tasks, and only a few data
samples for the traffic classification task, one can achieve high accuracy. We
conduct two experiments with the ISCX and QUIC public datasets and show the
efficacy of our approach.
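The abstract's key trick is that the two auxiliary targets (bandwidth and duration) are available for every flow, while class labels exist for only a few. A minimal sketch of how such a combined loss could be masked per flow — the function name, weights, and masking scheme are illustrative assumptions, not the paper's exact formulation:

```python
def multitask_loss(cls_loss, bw_loss, dur_loss, has_label, w=(1.0, 1.0, 1.0)):
    """Combine the three task losses for a single flow.

    Bandwidth and duration targets come for free from the flow itself,
    so their regression losses are always counted; the classification
    loss is masked out unless the flow carries a human-provided label.
    """
    total = w[1] * bw_loss + w[2] * dur_loss
    if has_label:
        total += w[0] * cls_loss
    return total
```

Averaging this over a batch lets the shared encoder train on the plentiful unlabeled flows while the classification head only receives gradients from the labeled few.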
Listening for Sirens: Locating and Classifying Acoustic Alarms in City Scenes
This paper is about alerting acoustic event detection and sound source
localisation in an urban scenario. Specifically, we are interested in spotting
the presence of horns and sirens of emergency vehicles. In order to obtain a
reliable system able to operate robustly despite the presence of traffic noise,
which can be copious, unstructured and unpredictable, we propose to treat the
spectrograms of incoming stereo signals as images, and apply semantic
segmentation, based on a Unet architecture, to extract the target sound from
the background noise. In a multi-task learning scheme, together with signal
denoising, we perform acoustic event classification to identify the nature of
the alerting sound. Lastly, we use the denoised signals to localise the
acoustic source on the horizon plane, by regressing the direction of arrival of
the sound through a CNN architecture. Our experimental evaluation shows an
average classification rate of 94%, and a median absolute error on the
localisation of 7.5° when operating on audio frames of 0.5s, and of
2.5° when operating on frames of 2.5s. The system offers excellent
performance in particularly challenging scenarios, where the noise level is
remarkably high.

Comment: 6 pages, 9 figures
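The reported localisation figure is a median absolute error on the direction of arrival, which must account for angular wraparound (a prediction of 359° against a ground truth of 1° is off by 2°, not 358°). A small self-contained sketch of that evaluation metric, under the assumption of scalar angles in degrees:

```python
def median_abs_error_deg(pred_deg, true_deg):
    """Median absolute error between predicted and true directions of
    arrival, with wraparound so 359° vs 1° counts as a 2° error."""
    def ang_diff(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    errs = sorted(ang_diff(p, t) for p, t in zip(pred_deg, true_deg))
    n = len(errs)
    mid = n // 2
    # median: middle element for odd n, average of the two middle ones for even n
    return errs[mid] if n % 2 else 0.5 * (errs[mid - 1] + errs[mid])
```

The median (rather than the mean) keeps the occasional gross localisation failure from dominating the reported error.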
S-OHEM: Stratified Online Hard Example Mining for Object Detection
One of the major challenges in object detection is to propose detectors with
highly accurate localization of objects. The online sampling of high-loss
region proposals (hard examples) uses the multitask loss with equal weight
settings across all loss types (e.g., classification and localization, rigid and
non-rigid categories) and ignores the influence of different loss distributions
throughout the training process, which we find essential to the training
efficacy. In this paper, we present the Stratified Online Hard Example Mining
(S-OHEM) algorithm for training detectors with higher efficiency and accuracy.
S-OHEM exploits OHEM with stratified sampling, a widely-adopted sampling
technique, to choose the training examples according to this influence during
hard example mining, and thus enhance the performance of object detectors. We
show through systematic experiments that S-OHEM yields an average precision
(AP) improvement of 0.5% on rigid categories of PASCAL VOC 2007 for both the
IoU threshold of 0.6 and 0.7. For KITTI 2012, both results of the same metric
are 1.6%. Regarding the mean average precision (mAP), a relative increase of
0.3% and 0.5% (1% and 0.5%) is observed for VOC07 (KITTI12) using the same set
of IoU thresholds. Also, S-OHEM is easy to integrate with existing region-based
detectors and is capable of acting with post-recognition level regressors.

Comment: 9 pages, 3 figures, accepted by CCCV 201
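The core idea — mining hard examples within strata rather than from one global loss ranking — can be sketched in a few lines. This is a simplified illustration, not the paper's exact sampling procedure; the stratum tags and proportional quota rule are assumptions:

```python
from collections import defaultdict

def s_ohem_select(proposals, k):
    """Stratified hard-example selection (simplified sketch).

    proposals: iterable of (stratum, loss) pairs, where the stratum tags
    which loss distribution a proposal belongs to (e.g. classification-
    dominated vs. localization-dominated). The mining budget k is split
    across strata in proportion to their size, and the highest-loss
    examples are kept within each stratum.
    """
    strata = defaultdict(list)
    for stratum, loss in proposals:
        strata[stratum].append(loss)
    total = sum(len(v) for v in strata.values())
    selected = []
    for losses in strata.values():
        quota = max(1, round(k * len(losses) / total))
        selected.extend(sorted(losses, reverse=True)[:quota])
    return selected
```

Because each stratum gets at least one slot and quotas are rounded, the returned set can differ slightly from k; plain OHEM is the degenerate case with a single stratum.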
Latent Multi-task Architecture Learning
Multi-task learning (MTL) allows deep neural networks to learn from related
tasks by sharing parameters with other networks. In practice, however, MTL
involves searching an enormous space of possible parameter sharing
architectures to find (a) the layers or subspaces that benefit from sharing,
(b) the appropriate amount of sharing, and (c) the appropriate relative weights
of the different task losses. Recent work has addressed each of the above
problems in isolation. In this work we present an approach that learns a latent
multi-task architecture that jointly addresses (a)--(c). We present experiments
on synthetic data and data from OntoNotes 5.0, including four different tasks
and seven different domains. Our extension consistently outperforms previous
approaches to learning latent architectures for multi-task problems and
achieves up to 15% average error reductions over common approaches to MTL.

Comment: To appear in Proceedings of AAAI 201
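The kind of sharing being searched over — how much of one task's hidden representation flows into another's at a given layer — can be illustrated with a learned mixing matrix. A hedged sketch, with a fixed 2x2 alpha passed in for clarity where a real model would make it a trainable parameter:

```python
def mix_layers(h_a, h_b, alpha):
    """Combine the hidden representations of two tasks with mixing
    weights alpha (a 2x2 matrix). alpha = [[1, 0], [0, 1]] means no
    sharing at this layer; off-diagonal mass lets the tasks exchange
    features, letting training decide the amount of sharing.
    """
    (a_aa, a_ab), (a_ba, a_bb) = alpha
    mixed_a = [a_aa * x + a_ab * y for x, y in zip(h_a, h_b)]
    mixed_b = [a_ba * x + a_bb * y for x, y in zip(h_a, h_b)]
    return mixed_a, mixed_b
```

Stacking such mixers at several depths turns the discrete architecture choices (a)-(c) into continuous parameters that can be learned jointly with the task losses.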
Panoptic Segmentation
We propose and study a task we name panoptic segmentation (PS). Panoptic
segmentation unifies the typically distinct tasks of semantic segmentation
(assign a class label to each pixel) and instance segmentation (detect and
segment each object instance). The proposed task requires generating a coherent
scene segmentation that is rich and complete, an important step toward
real-world vision systems. While early work in computer vision addressed
related image/scene parsing tasks, these are not currently popular, possibly
due to lack of appropriate metrics or associated recognition challenges. To
address this, we propose a novel panoptic quality (PQ) metric that captures
performance for all classes (stuff and things) in an interpretable and unified
manner. Using the proposed metric, we perform a rigorous study of both human
and machine performance for PS on three existing datasets, revealing
interesting insights about the task. The aim of our work is to revive the
interest of the community in a more unified view of image segmentation.

Comment: accepted to CVPR 201
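The panoptic quality metric has a compact closed form: predicted and ground-truth segments are matched when their IoU exceeds 0.5, and PQ = Σ_TP IoU / (|TP| + ½|FP| + ½|FN|). A minimal sketch, assuming the matching has already been done upstream:

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """Panoptic quality from already-matched segment pairs.

    matched_ious: IoU of each matched (predicted, ground-truth) segment
    pair (true positives, matched at IoU > 0.5). Unmatched predicted
    segments are false positives; unmatched ground-truth segments are
    false negatives. Both errors are penalized with weight 1/2.
    """
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matched_ious) / denom if denom else 0.0
```

PQ factorizes into a segmentation term (mean IoU over matches) times an F1-style recognition term, which is what makes it interpretable across both stuff and thing classes.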