Weakly supervised deep learning for the detection of domain generation algorithms
Domain generation algorithms (DGAs) have become commonplace in malware that seeks to establish command and control communication between an infected machine and the botmaster. DGAs dynamically generate large volumes of malicious domain names; only a few of these are registered by the botmaster, within a short time window around their generation, and are subsequently resolved when the malware on an infected machine tries to access them. Deep neural networks that can classify domain names as benign or malicious are of great interest in the real-time defense against DGAs. In contrast with traditional machine learning models, deep networks do not rely on human-engineered features. Instead, they can learn features automatically from data, provided that they are supplied with sufficiently large amounts of suitable training data. Obtaining cleanly labeled ground truth data is difficult and time-consuming. Heuristically labeled data could potentially provide a source of training data for weakly supervised training of DGA detectors. We propose a set of heuristics for automatically labeling domain names monitored in real traffic, and then train and evaluate classifiers with the proposed heuristically labeled dataset. We show through experiments on a dataset with 50 million domain names that such heuristically labeled data is very useful in practice for improving the predictive accuracy of deep learning-based DGA classifiers, and that these deep neural networks significantly outperform a random forest classifier with human-engineered features.
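The abstract does not spell out the labeling heuristics. As a toy illustration of the weak-labeling idea, the sketch below labels a domain by the character entropy and length of its second-level label; the `heuristic_label` function name and the entropy/length thresholds are invented for this sketch, not taken from the paper.

```python
import math
from collections import Counter


def heuristic_label(domain: str) -> str:
    """Weakly label a domain as 'dga' or 'benign' using simple
    string heuristics (illustrative thresholds, not the paper's)."""
    # Use the second-level label, e.g. 'xj4k9q2m' from 'xj4k9q2m.net'.
    name = domain.split(".")[0]
    counts = Counter(name)
    # Shannon entropy of the character distribution, in bits.
    entropy = -sum(
        (c / len(name)) * math.log2(c / len(name)) for c in counts.values()
    )
    # Algorithmically generated names tend to be long and high-entropy.
    return "dga" if entropy > 3.5 and len(name) >= 12 else "benign"
```

Such noisy labels could then serve as weak supervision for training a character-level classifier, with the understanding that some fraction of them are wrong.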
Dynamic Adaptation on Non-Stationary Visual Domains
Domain adaptation aims to learn models on a supervised source domain that perform well on an unlabeled target domain. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e., static datasets. However, with large-scale or dynamic data sources, data from a given domain is not usually available all at once. In a streaming-data scenario, for instance, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts that is applicable to large-scale and streaming-data scenarios. The model is adapted sequentially over incoming batches of unlabeled streaming data. This enables improvements over several batches without the need for any additional annotated data. To demonstrate the effectiveness of the proposed framework, we modify associative domain adaptation to work well on source and target batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy, not only on the currently adapted batch but also on future stream batches. Furthermore, we show the applicability of our associative-learning modifications to semantic segmentation, where we achieve competitive results.
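The paper builds on associative domain adaptation; as a much simpler stand-in, the sketch below shows the sequential-adaptation loop on a nearest-centroid classifier that pseudo-labels each unlabeled batch and updates its class centroids with an exponential moving average. The class name, the momentum value, and the EMA update are all hypothetical, chosen only to make the batch-by-batch adaptation pattern concrete.

```python
import numpy as np


class StreamingCentroidAdapter:
    """Toy sequential adapter: pseudo-label each incoming batch,
    then drift the class centroids toward the batch statistics."""

    def __init__(self, centroids, momentum: float = 0.9):
        self.centroids = np.asarray(centroids, dtype=float)
        self.momentum = momentum

    def predict(self, X: np.ndarray) -> np.ndarray:
        # Assign each sample to its nearest centroid.
        dists = np.linalg.norm(
            X[:, None, :] - self.centroids[None, :, :], axis=2
        )
        return dists.argmin(axis=1)

    def adapt(self, X: np.ndarray) -> np.ndarray:
        # Pseudo-label the unlabeled batch with the current model,
        # then update each centroid by an exponential moving average.
        y = self.predict(X)
        for k in range(len(self.centroids)):
            mask = y == k
            if mask.any():
                self.centroids[k] = (
                    self.momentum * self.centroids[k]
                    + (1.0 - self.momentum) * X[mask].mean(axis=0)
                )
        return y
```

Feeding successive unlabeled batches through `adapt` moves the decision boundary with the drifting distribution, so later batches benefit from adaptation performed on earlier ones, mirroring the streaming setting described above.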