How to Compare the Scientific Contributions between Research Groups
We present a method to compare the scientific contributions of research
groups. Given multiple research groups, we construct their journal/proceeding
graphs and then compute the similarity/gap between them using network analysis.
This analysis can be used to measure the similarity/gap between research
groups' scientific contributions in terms of topics and quality. We demonstrate
the practicality of our method by comparing the scientific contributions of
Korean researchers with those of global researchers in information security
from 2006 to 2008. The empirical analysis shows that current security research
in South Korea has been isolated from the global research trend.
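The venue-graph comparison can be sketched in simplified form. The following hypothetical Python example uses Jaccard overlap of publication-venue sets as a stand-in for the paper's network-analysis similarity; the venue names and groupings are illustrative, not the paper's data.

```python
# Simplified stand-in for the paper's journal/proceeding-graph similarity:
# compare two research groups by the overlap of the venues they publish in.

def venue_similarity(group_a, group_b):
    """Jaccard similarity between two groups' sets of publication venues."""
    a, b = set(group_a), set(group_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

# Illustrative venue sets (not the paper's actual data)
korea = {"ICISC", "WISA", "IEICE Trans."}
world = {"IEEE S&P", "CCS", "USENIX Security", "WISA"}
print(round(venue_similarity(korea, world), 3))  # 1 shared venue of 6 -> 0.167
```

A low score like this would indicate the kind of isolation from the global trend that the abstract reports; the full method works on venue graphs rather than flat sets.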
Hybrid Spam Filtering for Mobile Communication
Spam messages are an increasing threat to mobile communication. Several
mitigation techniques have been proposed, including white and black listing,
challenge-response and content-based filtering. However, none are perfect and
it makes sense to use a combination rather than just one. We propose an
anti-spam framework based on the hybrid of content-based filtering and
challenge-response. There is a trade-off between the accuracy of anti-spam
classifiers and the communication overhead. Experimental results show how,
depending on the proportion of spam messages, different filtering
parameters should be set.
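The hybrid idea can be sketched as a two-stage pipeline: a cheap content-based classifier handles clear cases, and only borderline messages incur the communication overhead of a challenge-response round. The scoring function, word list, and thresholds below are illustrative assumptions, not the paper's actual parameters.

```python
# Hypothetical sketch of a hybrid spam filter: content-based filtering first,
# challenge-response only for uncertain messages (trading classifier accuracy
# against communication overhead).

def content_score(message, spam_words=("free", "winner", "prize")):
    """Toy content-based score: fraction of known spam words present."""
    text = message.lower()
    return sum(w in text for w in spam_words) / len(spam_words)

def hybrid_filter(message, low=0.2, high=0.7):
    """Return 'ham', 'spam', or 'challenge' (defer to challenge-response)."""
    score = content_score(message)
    if score >= high:
        return "spam"
    if score <= low:
        return "ham"
    return "challenge"  # only borderline senders incur the extra round-trip

print(hybrid_filter("You are a winner of a free prize"))  # spam
print(hybrid_filter("Meeting at 10am tomorrow"))          # ham
```

Raising `high` or lowering `low` widens the "challenge" band: fewer misclassifications, but more challenge traffic, which is the trade-off the abstract describes.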
Crumbled Cookie: Exploring E-commerce Websites' Cookie Policies with Data Protection Regulations
Despite stringent data protection regulations such as the General Data
Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and
other country-specific regulations, many websites continue to use cookies to
track user activities. Recent studies have revealed several data protection
violations, resulting in significant penalties, especially for multinational
corporations. Motivated by the question of why these data protection violations
continue to occur despite strong data protection regulations, we examined 360
popular e-commerce websites in multiple countries to analyze whether they
comply with regulations to protect user privacy from a cookie perspective.
SingleADV: Single-Class Target-Specific Attack Against Interpretable Deep Learning Systems
In this paper, we present SingleADV, a novel single-class, target-specific adversarial attack. The goal of SingleADV is to generate a universal perturbation that deceives the target model into confusing a specific category of objects with a target category while ensuring highly relevant and accurate interpretations. The universal perturbation is stochastically and iteratively optimized by minimizing an adversarial loss designed to account for both classifier and interpreter costs in targeted and non-targeted categories. In this optimization framework, guided by first- and second-moment estimates, the desired loss surface promotes high confidence and high interpretation scores for adversarial samples. By avoiding unintended misclassification of samples from other categories, SingleADV enables more effective targeted attacks on interpretable deep learning systems in both white-box and black-box scenarios. To evaluate the effectiveness of SingleADV, we conduct experiments using four model architectures (ResNet-50, VGG-16, DenseNet-169, and Inception-V3) coupled with three interpretation models (CAM, Grad, and MASK). Through extensive empirical evaluation, we demonstrate that SingleADV effectively deceives the target deep learning models and their associated interpreters under various conditions and settings. Our experimental results show that SingleADV is effective, achieving an average fooling ratio of 0.74 and an adversarial confidence level of 0.78 when generating deceptive adversarial samples. Finally, we discuss several countermeasures against SingleADV, including a transfer-based learning approach and existing preprocessing defenses.
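The first- and second-moment-guided optimization of the universal perturbation can be sketched with an Adam-style update. The gradient here is a stand-in: the actual SingleADV loss combines classifier and interpreter terms over targeted and non-targeted categories, and the hyperparameters below are conventional defaults, not the paper's settings.

```python
import numpy as np

# Hypothetical sketch of one moment-based update step on a universal
# perturbation `delta`, as in Adam-style optimization.

def moment_update(delta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One step using first- (m) and second-moment (v) estimates."""
    m = b1 * m + (1 - b1) * grad           # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    delta = delta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return delta, m, v

delta = np.zeros(4)                        # universal perturbation
m = np.zeros(4)
v = np.zeros(4)
grad = np.array([0.5, -0.2, 0.1, 0.0])     # stand-in adversarial-loss gradient
for t in range(1, 11):
    delta, m, v = moment_update(delta, grad, m, v, t)
    delta = np.clip(delta, -0.05, 0.05)    # keep the perturbation small
# delta moves opposite to the gradient, bounded by the clip radius
```

Clipping after each step is one common way to keep a universal perturbation imperceptible; the paper's exact constraint may differ.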
Evaluating the Effectiveness and Robustness of Visual Similarity-based Phishing Detection Models
Phishing attacks pose a significant threat to Internet users, with
cybercriminals elaborately replicating the visual appearance of legitimate
websites to deceive victims. Visual similarity-based detection systems have
emerged as an effective countermeasure, but their effectiveness and robustness
in real-world scenarios have remained largely unexplored. In this paper, we comprehensively
scrutinize and evaluate state-of-the-art visual similarity-based anti-phishing
models using a large-scale dataset of 450K real-world phishing websites. Our
analysis reveals that while certain models maintain high accuracy, others
exhibit notably lower performance than results on curated datasets,
highlighting the importance of real-world evaluation. In addition, we observe
a real-world tactic in which phishing attackers manipulate visual components
to circumvent detection systems. To assess the robustness of existing models
against adversarial attacks, we apply visible and perturbation-based
manipulations to website logos, which adversaries typically target. We then
evaluate the models' robustness in handling these
adversarial samples. Our findings reveal vulnerabilities in several models,
emphasizing the need for more robust visual similarity techniques capable of
withstanding sophisticated evasion attempts. We provide actionable insights for
enhancing the security of phishing defense systems, encouraging proactive
actions. To the best of our knowledge, this work represents the first
large-scale, systematic evaluation of visual similarity-based models for
phishing detection in real-world settings, underscoring the need for more
effective and robust defenses.
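A perturbation-based logo manipulation of the kind used to probe the detectors can be sketched as follows. This is an illustrative assumption: real visual-similarity models compare learned features rather than raw pixels, and the noise bound here is arbitrary.

```python
import numpy as np

# Hypothetical sketch: add small bounded noise to a (random stand-in) logo
# image and check that a naive pixel-level similarity score barely changes,
# even though such noise can flip a learned model's decision.

rng = np.random.default_rng(0)

def similarity(a, b):
    """Naive similarity in [0, 1]: 1 minus mean absolute pixel difference."""
    return 1.0 - float(np.mean(np.abs(a - b)))

logo = rng.random((32, 32))                   # stand-in for a logo image
noise = rng.uniform(-0.05, 0.05, logo.shape)  # visually imperceptible noise
perturbed = np.clip(logo + noise, 0.0, 1.0)

print(similarity(logo, perturbed) > 0.9)  # True: pixels barely change
```

The gap between "pixels barely change" and "the model's output changes" is exactly what makes such adversarial samples a useful robustness probe.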
Expectations Versus Reality: Evaluating Intrusion Detection Systems in Practice
Our paper provides an empirical, objective comparison of recent IDSs to help
users choose the most appropriate solution for their requirements. Our results
show that no single solution is best; performance depends on external
variables such as the attack types, complexity, and network environment
captured in the dataset. For example, the BoT_IoT and
Stratosphere IoT datasets both capture IoT-related attacks, but the deep neural
network performed the best when tested using the BoT_IoT dataset while HELAD
performed best when tested on the Stratosphere IoT dataset. Thus, although
we found that a deep neural network solution had the highest average F1 score
across the tested datasets, it is not always the best-performing one. We further
discuss difficulties in using IDSs from the literature and project
repositories, which complicated drawing definitive conclusions regarding IDS selection.
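The paper's central observation, that the best-on-average IDS need not win on every dataset, can be sketched directly. The F1 scores below are made up for illustration; only the qualitative pattern matches the abstract.

```python
# Hypothetical sketch: an IDS can have the best *average* F1 across datasets
# yet lose to another IDS on an individual dataset. Scores are illustrative.

f1 = {
    "deep_nn": {"BoT_IoT": 0.96, "Stratosphere_IoT": 0.80},
    "HELAD":   {"BoT_IoT": 0.85, "Stratosphere_IoT": 0.88},
}

def best_on_average(scores):
    """IDS with the highest mean F1 across all datasets."""
    return max(scores, key=lambda ids: sum(scores[ids].values()) / len(scores[ids]))

def best_on_dataset(scores, dataset):
    """IDS with the highest F1 on one specific dataset."""
    return max(scores, key=lambda ids: scores[ids][dataset])

print(best_on_average(f1))                        # deep_nn
print(best_on_dataset(f1, "Stratosphere_IoT"))    # HELAD
```

This is why the abstract recommends choosing an IDS based on the expected attack types and network environment rather than a single leaderboard number.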