Pairwise Confusion for Fine-Grained Visual Classification
Fine-Grained Visual Classification (FGVC) datasets contain small sample
sizes, along with significant intra-class variation and inter-class similarity.
While prior work has addressed intra-class variation using localization and
segmentation techniques, inter-class similarity may also affect feature
learning and reduce classification performance. In this work, we address this
problem using a novel optimization procedure for end-to-end neural network
training on FGVC tasks. Our procedure, called Pairwise Confusion (PC), reduces
overfitting by intentionally introducing confusion in the activations. With
PC regularization, we obtain state-of-the-art performance on six of the most
widely-used FGVC datasets and demonstrate improved localization ability. PC
is easy to implement, does not need excessive hyperparameter tuning during
training, and does not add significant overhead at test time.
Comment: Camera-ready version for ECCV 2018
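The confusion penalty described above can be illustrated as a distance between the predicted class distributions of a pair of training samples. A minimal NumPy sketch, assuming the squared-Euclidean form of the penalty and a hypothetical weight `lam` (neither the exact functional form nor the coefficient is given in this abstract):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def euclidean_confusion(logits_a, logits_b):
    """Confusion term for one sample pair: squared Euclidean distance
    between the two predicted class distributions."""
    p, q = softmax(logits_a), softmax(logits_b)
    return np.sum((p - q) ** 2, axis=-1)

def pc_loss(ce_loss, logits_a, logits_b, lam=10.0):
    """Total training loss: the usual cross-entropy plus the confusion
    penalty, weighted by the hypothetical coefficient `lam`."""
    return ce_loss + lam * np.mean(euclidean_confusion(logits_a, logits_b))
```

Pulling paired distributions toward each other discourages over-confident, sample-specific activations, which is how the abstract frames the reduction in overfitting.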
Comparison of Secret Splitting, Secret Sharing and Recursive Threshold Visual Cryptography for Security of Handwritten Images
Secret sharing is a method to protect the confidentiality and integrity of secret messages by distributing shares of the message to several recipients. The secret message cannot be revealed unless the recipients exchange and combine their shares to reconstruct the actual message. Even if an attacker intercepts share shadows during the exchange, it is infeasible for the attacker to recover the correct share. Several algorithms have been developed for secret sharing, e.g. secret splitting, the Asmuth-Bloom secret sharing protocol, and visual cryptography. An open question in this research is which method provides the best level of security and efficiency in securing messages. In this paper, we evaluate the performance of three methods, i.e. secret splitting, secret sharing, and recursive threshold visual cryptography, for handwritten image security in terms of execution time and mean squared error (MSE) simulation. Simulation results show that the secret splitting algorithm produces the shortest execution time. On the other hand, the MSE simulation results show that all three methods can reconstruct the original image very well.
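As an illustration of the simplest of the compared primitives, here is a minimal XOR-based two-share secret-splitting sketch. This is one common construction; the abstract does not specify the exact variant the paper benchmarks:

```python
import os

def split_secret(secret: bytes) -> tuple[bytes, bytes]:
    """XOR-based secret splitting: one share is uniformly random, the
    other is the secret XORed with it. Either share alone is
    statistically independent of the secret."""
    share1 = os.urandom(len(secret))
    share2 = bytes(s ^ r for s, r in zip(secret, share1))
    return share1, share2

def reconstruct(share1: bytes, share2: bytes) -> bytes:
    """Recover the secret by XORing the two shares back together."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

Because reconstruction is a single XOR pass, this construction is consistent with the paper's finding that secret splitting has the shortest execution time of the three methods.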
Inline Detection of Domain Generation Algorithms with Context-Sensitive Word Embeddings
Domain generation algorithms (DGAs) are frequently employed by malware to
generate domains used for connecting to command-and-control (C2) servers.
Recent work in DGA detection leveraged deep learning architectures like
convolutional neural networks (CNNs) and character-level long short-term memory
networks (LSTMs) to classify domains. However, these classifiers perform poorly
with wordlist-based DGA families, which generate domains by pseudorandomly
concatenating dictionary words. We propose a novel approach that combines
context-sensitive word embeddings with a simple fully-connected classifier to
perform classification of domains based on word-level information. The word
embeddings were pre-trained on a large unrelated corpus and left frozen during
the training on domain data. The resulting small number of trainable parameters
enabled extremely short training durations, while the transfer of language
knowledge stored in the representations allowed for high-performing models with
small training datasets. We show that this architecture reliably outperformed
existing techniques on wordlist-based DGA families with just 30 DGA training
examples and achieved state-of-the-art performance with around 100 DGA training
examples, all while requiring an order of magnitude less time to train compared
to current techniques. Of special note is the technique's performance on the
matsnu DGA: the classifier attained an 89.5% detection rate with a 1:1,000 false
positive rate (FPR) after training on only 30 examples of the DGA domains, and
a 91.2% detection rate with a 1:10,000 FPR after 90 examples. Considering that
some of these DGAs have wordlists of several hundred words, our results
demonstrate that this technique does not rely on the classifier learning the
DGA wordlists. Instead, the classifier is able to learn the semantic signatures
of the wordlist-based DGA families.
Comment: 6 pages, 5 figures, 2 tables
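The word-level pipeline described above (segment the domain label into dictionary words, look up frozen pre-trained embeddings, pool them, and feed the result to a small fully-connected classifier) can be sketched as follows. The tiny vocabulary, greedy longest-match tokenizer, and random stand-in embedding table are illustrative assumptions, not the paper's actual components:

```python
import numpy as np

# Hypothetical frozen embedding table. In the paper's setup these would
# be context-sensitive embeddings pre-trained on a large unrelated
# corpus and left frozen while training on domain data.
VOCAB = {"mail": 0, "secure": 1, "login": 2, "update": 3, "<unk>": 4}
rng = np.random.default_rng(0)
EMBEDDINGS = rng.standard_normal((len(VOCAB), 8))  # frozen, not trained

def tokenize(domain: str) -> list[int]:
    """Greedy longest-match segmentation of the leftmost domain label
    into dictionary words; unmatched characters map to <unk>."""
    label = domain.split(".")[0]
    ids, i = [], 0
    while i < len(label):
        for j in range(len(label), i, -1):
            if label[i:j] in VOCAB:
                ids.append(VOCAB[label[i:j]])
                i = j
                break
        else:
            ids.append(VOCAB["<unk>"])
            i += 1
    return ids

def embed(domain: str) -> np.ndarray:
    """Mean-pool the frozen word embeddings into one fixed-size vector.
    Only a small fully-connected classifier on top of this vector would
    be trained, which keeps the trainable parameter count tiny."""
    return EMBEDDINGS[tokenize(domain)].mean(axis=0)
```

Keeping the embeddings frozen is what the abstract credits for both the very short training times and the strong performance from as few as 30 DGA examples.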