265 research outputs found
Efficient classification using parallel and scalable compressed model and Its application on intrusion detection
To achieve high classification efficiency in intrusion detection, this paper
proposes a compressed model that combines horizontal compression with vertical
compression. OneR is used for horizontal compression (attribute reduction), and
affinity propagation serves as vertical compression, selecting a small set of
representative exemplars from large training data. To compress large volumes of
training data scalably, a MapReduce-based parallelization approach is then
implemented and evaluated for each step of the model-compression process, after
which common but efficient classification methods can be applied directly. An
experimental study on two publicly available intrusion-detection datasets,
KDD99 and CMDC2012, demonstrates that classification with the proposed
compressed model speeds up the detection procedure by up to 184 times, at the
cost of an average accuracy loss of less than 1%.
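The "vertical compression" step described above can be illustrated in a few
lines: affinity propagation picks a small set of exemplar records that stand in
for the full training data. The sketch below uses synthetic stand-in data and
scikit-learn's AffinityPropagation; the actual features, datasets, and
MapReduce parallelization from the paper are not reproduced here.

```python
# Sketch of vertical compression via affinity propagation: select a small set
# of representative exemplars from a larger training set. The data below is a
# synthetic stand-in for preprocessed intrusion-detection records.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 0.3, size=(60, 4)),  # toy "normal traffic" cluster
    rng.normal(3.0, 0.3, size=(60, 4)),  # toy "attack traffic" cluster
])

# Affinity propagation chooses exemplars by message passing; the exemplars
# (cluster centers drawn from the data itself) form the compressed training set.
ap = AffinityPropagation(random_state=0).fit(X)
exemplars = X[ap.cluster_centers_indices_]
print(f"compressed {len(X)} records to {len(exemplars)} exemplars")
```

Any fast classifier (the paper's "common but efficient" methods) can then be
trained on `exemplars` instead of the full data.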
Open Vocabulary Multi-Label Classification with Dual-Modal Decoder on Aligned Visual-Textual Features
In computer vision, multi-label recognition is an important task with many
real-world applications, but classifying previously unseen labels remains a
significant challenge. In this paper, we propose a novel algorithm, Aligned
Dual moDality ClaSsifier (ADDS), which includes a Dual-Modal decoder
(DM-decoder) with alignment between visual and textual features, for
open-vocabulary multi-label classification tasks. We then design a simple yet
effective method called Pyramid-Forwarding to enhance performance on
high-resolution inputs. Moreover, Selective Language Supervision is applied to
further improve model performance. Extensive experiments on several standard
benchmarks, NUS-WIDE, ImageNet-1k, ImageNet-21k, and MS-COCO, demonstrate that
our approach significantly outperforms previous methods and achieves
state-of-the-art performance for open-vocabulary multi-label classification,
conventional multi-label classification, and an extreme case called
single-to-multi label classification, where models trained on single-label
datasets (ImageNet-1k, ImageNet-21k) are tested on multi-label ones (MS-COCO
and NUS-WIDE). Comment: preprint
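The core idea of scoring open-vocabulary labels against aligned visual-textual
features can be sketched as follows. The embeddings below are synthetic
placeholders, and the thresholding rule is a generic illustration; ADDS's
DM-decoder and its trained encoders are not reproduced here.

```python
# Minimal sketch of open-vocabulary multi-label scoring: project image and
# label-text embeddings into a shared space, then score each label by cosine
# similarity. All vectors here are hand-made placeholders, not model outputs.
import numpy as np

def normalize(v):
    """L2-normalize so dot products become cosine similarities."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical text embeddings for an open label vocabulary.
label_names = ["cat", "dog", "beach"]
text_emb = normalize(np.array([[1.0, 0.1, 0.0],
                               [0.1, 1.0, 0.0],
                               [0.0, 0.0, 1.0]]))

# Hypothetical visual feature for an image of a cat on a beach.
img_emb = normalize(np.array([0.7, 0.05, 0.7]))

# Multi-label prediction: every label whose similarity clears a threshold.
scores = text_emb @ img_emb
predicted = [n for n, s in zip(label_names, scores) if s > 0.5]
print(predicted)  # → ['cat', 'beach']
```

Because labels are scored against text embeddings rather than fixed classifier
heads, the same mechanism applies to labels never seen during training.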
- …