Multitask Learning for Network Traffic Classification
Traffic classification has various applications in today's Internet, from
resource allocation, billing, and QoS provisioning at ISPs to firewall and malware
detection on clients. Classical machine learning algorithms and deep learning
models have been widely used to solve the traffic classification task. However,
training such models requires a large amount of labeled data. Labeling data is
often the most difficult and time-consuming process in building a classifier.
To address this challenge, we reformulate traffic classification as a
multi-task learning problem in which the bandwidth requirement and duration of a
flow are predicted along with the traffic class. The motivation of this
approach is twofold: First, bandwidth requirement and duration are useful in
many applications, including routing, resource allocation, and QoS
provisioning. Second, these two values can be obtained from each flow easily
without the need for human labeling or capturing flows in a controlled and
isolated environment. We show that with a large amount of easily obtainable
data samples for bandwidth and duration prediction tasks, and only a few data
samples for the traffic classification task, one can achieve high accuracy. We
conduct two experiments on the public ISCX and QUIC datasets to show the
efficacy of our approach.
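The multi-task setup described above can be sketched as a shared encoder feeding three task heads, with a weighted sum of a classification loss and two regression losses. This is only an illustrative NumPy sketch under assumed dimensions and loss weights; all names (`W_shared`, `multitask_loss`, the head weights) are hypothetical and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: per-flow feature vector -> shared hidden representation.
n_flows, n_features, n_hidden, n_classes = 32, 20, 16, 5

# Shared encoder weights plus one head per task (all names illustrative).
W_shared = rng.normal(scale=0.1, size=(n_features, n_hidden))
W_class = rng.normal(scale=0.1, size=(n_hidden, n_classes))  # traffic-class head
W_bw = rng.normal(scale=0.1, size=(n_hidden, 1))             # bandwidth head
W_dur = rng.normal(scale=0.1, size=(n_hidden, 1))            # duration head

def forward(X):
    """Shared representation feeding three task-specific heads."""
    h = np.tanh(X @ W_shared)
    logits = h @ W_class           # classification logits
    bw = (h @ W_bw).ravel()        # regression target: bandwidth
    dur = (h @ W_dur).ravel()      # regression target: duration
    return logits, bw, dur

def multitask_loss(logits, bw, dur, y_class, y_bw, y_dur, w=(1.0, 0.5, 0.5)):
    """Weighted sum of cross-entropy and two mean-squared-error terms."""
    z = logits - logits.max(axis=1, keepdims=True)       # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(y_class)), y_class].mean()
    mse_bw = ((bw - y_bw) ** 2).mean()
    mse_dur = ((dur - y_dur) ** 2).mean()
    return w[0] * ce + w[1] * mse_bw + w[2] * mse_dur

# Synthetic batch: labels for the class task are scarce in practice, while
# bandwidth/duration targets come for free from the flow records themselves.
X = rng.normal(size=(n_flows, n_features))
y_class = rng.integers(0, n_classes, size=n_flows)
y_bw = rng.normal(size=n_flows)
y_dur = rng.normal(size=n_flows)

logits, bw, dur = forward(X)
loss = multitask_loss(logits, bw, dur, y_class, y_bw, y_dur)
```

The point of the shared encoder is that gradients from the two cheaply-labeled regression tasks shape a representation the scarce-label classification head can reuse.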
Automated Website Fingerprinting through Deep Learning
Several studies have shown that the network traffic that is generated by a
visit to a website over Tor reveals information specific to the website through
the timing and sizes of network packets. By capturing traffic traces between
users and their Tor entry guard, a network eavesdropper can leverage this
meta-data to reveal which website Tor users are visiting. The success of such
attacks heavily depends on the particular set of traffic features that are used
to construct the fingerprint. Typically, these features are manually engineered
and, as such, any change introduced to the Tor network can render these
carefully constructed features ineffective. In this paper, we show that an
adversary can automate the feature engineering process, and thus automatically
deanonymize Tor traffic by applying our novel method based on deep learning. We
collect a dataset comprised of more than three million network traces, which is
the largest dataset of web traffic ever used for website fingerprinting, and
find that the performance achieved by our deep learning approaches is
comparable to that of known methods, which build on research efforts spanning
multiple years. The obtained success rate exceeds 96% for a closed world
of 100 websites and 94% for our biggest closed world of 900 classes. In our
open world evaluation, the most performant deep learning model is 2% more
accurate than the state-of-the-art attack. Furthermore, we show that the
implicit features automatically learned by our approach are far more resilient
to dynamic changes of web content over time. We conclude that the ability to
automatically construct the most relevant traffic features and perform accurate
traffic recognition makes our deep learning based approach an efficient,
flexible and robust technique for website fingerprinting.
Comment: To appear in the 25th Symposium on Network and Distributed System
Security (NDSS 2018).
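The automated feature-engineering idea above amounts to learning filters over the raw trace instead of hand-crafting statistics. A minimal sketch, assuming (as is common in this line of work, not stated here) that a trace is reduced to a ±1 packet-direction sequence: random 1-D convolution filters followed by ReLU and global max pooling yield a fixed-size fingerprint vector. Trace length, filter count, and kernel size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed input encoding: +1 for an outgoing packet, -1 for an incoming one.
trace = rng.choice([-1.0, 1.0], size=200)

def conv1d(x, kernels):
    """Valid 1-D convolution of one trace with a bank of filters."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)  # (len-k+1, k)
    return windows @ kernels.T                                # (len-k+1, n_filters)

n_filters, kernel_size = 8, 5
# In a trained model these filters are learned; random values stand in here.
kernels = rng.normal(scale=0.1, size=(n_filters, kernel_size))

feature_maps = np.maximum(conv1d(trace, kernels), 0.0)  # ReLU activation
pooled = feature_maps.max(axis=0)  # global max pooling -> fixed-size fingerprint
```

Because the filters are learned from data rather than hand-engineered, a change to Tor's traffic shape shifts the learned weights at retraining time instead of invalidating a fixed feature set, which is the resilience property the abstract claims.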