An Evasion Attack against ML-based Phishing URL Detectors
Background: Over the years, Machine Learning Phishing URL classification
(MLPU) systems have gained tremendous popularity for detecting phishing URLs
proactively. Despite this popularity, the security vulnerabilities of MLPU
systems remain mostly unknown. Aim: To address this concern, we conduct a
study to understand the test-time security vulnerabilities of state-of-the-art
MLPU systems, aiming to provide guidelines for the future development of these
systems.
Method: In this paper, we propose an evasion attack framework against MLPU
systems. To achieve this, we first develop an algorithm to generate adversarial
phishing URLs. We then reproduce 41 MLPU systems and record their baseline
performance. Finally, we simulate an evasion attack to evaluate these MLPU
systems against our generated adversarial URLs. Results: Compared with
previous work, our attack is: (i) effective, as it evades all the models with
average success rates of 66% for famous phishing targets (such as Netflix and
Google) and 85% for less popular ones (e.g., Wish, JBHIFI, Officeworks);
(ii) realistic, as it requires only 23 ms to produce a new adversarial URL
variant that is available for registration at a median cost of only
$11.99/year. We also found that popular online services such as Google
SafeBrowsing and VirusTotal are unable to detect these URLs. (iii) Adversarial
training (a common defence against evasion attacks) does not significantly
improve the robustness of these systems, as it decreases the success rate of
our attack by only 6% on average across all the models. (iv) Further, we
identify the security vulnerabilities of the considered MLPU systems. Our
findings lead to promising directions for future research.
Conclusion: Our study not only illustrates vulnerabilities in MLPU systems but
also highlights implications for future work on assessing and improving these
systems.
Comment: Draft for ACM TOP
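To make the attack setting concrete, one common family of adversarial URL perturbations is homoglyph substitution (replacing characters with visually similar ones in a target's domain). The sketch below is a hypothetical illustration of that idea only; it is not the paper's generation algorithm, and the `HOMOGLYPHS` table and `homoglyph_variants` function are invented for this example.

```python
# Minimal sketch of homoglyph-style typosquatting, assuming a small
# substitution table. This is NOT the paper's adversarial URL algorithm.
HOMOGLYPHS = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4"}

def homoglyph_variants(domain):
    """Yield domain variants with a single character replaced by a
    visually similar one (one common typosquatting perturbation)."""
    name, dot, tld = domain.partition(".")
    variants = []
    for i, ch in enumerate(name):
        if ch in HOMOGLYPHS:
            variants.append(name[:i] + HOMOGLYPHS[ch] + name[i + 1:] + dot + tld)
    return variants

print(homoglyph_variants("netflix.com"))
# → ['n3tflix.com', 'netf1ix.com', 'netfl1x.com']
```

Each variant differs from the brand domain by one character, which is why such URLs can remain cheap to register while fooling both users and character-level classifiers.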
Understanding the Heterogeneity of Contributors in Bug Bounty Programs
Background: While bug bounty programs are not new in software development, an
increasing number of companies, as well as open source projects, rely on
external parties to perform the security assessment of their software for
reward. However, there is relatively little empirical knowledge about the
characteristics of bug bounty program contributors. Aim: This paper aims to
understand those contributors by highlighting the heterogeneity among them.
Method: We analyzed the histories of 82 bug bounty programs and 2,504 distinct
bug bounty contributors, and conducted a quantitative and qualitative survey.
Results: We found that there are project-specific and non-specific contributors
who have different motivations for contributing to the products and
organizations. Conclusions: Our findings provide insights for making bug
bounty programs better and for further studies of new software development
roles.
Comment: 6 pages, ESEM 201
The 2004 UTfit Collaboration Report on the Status of the Unitarity Triangle in the Standard Model
Using the latest determinations of several theoretical and experimental
parameters, we update the Unitarity Triangle analysis in the Standard Model.
The basic experimental constraints come from the measurements of |V_ub/V_cb|,
Delta M_d, the lower limit on Delta M_s, epsilon_K, and the measurement of the
phase of the B_d - anti B_d mixing amplitude through the time-dependent CP
asymmetry in B^0 to J/psi K^0 decays. In addition, we consider the direct
determination of alpha, gamma, 2 beta + gamma and cos(2 beta) from the
measurements of new CP-violating quantities, recently performed at the B
factories. We also discuss the opportunities offered by improving the
precision of the various physical quantities entering the determination of the
Unitarity Triangle parameters. The results and the plots presented in this
paper can also be found at http://www.utfit.org, where they are continuously
updated with the newest experimental and theoretical results.
Comment: 32 pages, 17 figures. High-resolution figures and updates can be
found at http://www.utfit.org. v2: misprints corrected
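For context, the angles alpha, beta, and gamma quoted above are the standard angles of the Unitarity Triangle, which follows from the unitarity of the CKM matrix. The following relations are textbook definitions, not results from the paper:

```latex
V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0 ,
\qquad
\alpha \equiv \arg\!\left(-\frac{V_{td}V_{tb}^{*}}{V_{ud}V_{ub}^{*}}\right),
\quad
\beta \equiv \arg\!\left(-\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right),
\quad
\gamma \equiv \arg\!\left(-\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}\right).
```

The measured quantities listed in the abstract (|V_ub/V_cb|, Delta M_d, Delta M_s, epsilon_K, sin(2 beta)) each constrain the sides or angles of this triangle.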
Interpretability and Transparency-Driven Detection and Transformation of Textual Adversarial Examples (IT-DT)
Transformer-based text classifiers like BERT, RoBERTa, T5, and GPT-3 have
shown impressive performance in NLP. However, their vulnerability to
adversarial examples poses a security risk. Existing defense methods lack
interpretability, making it hard to understand adversarial classifications and
identify model vulnerabilities. To address this, we propose the
Interpretability and Transparency-Driven Detection and Transformation (IT-DT)
framework. It focuses on interpretability and transparency in detecting and
transforming textual adversarial examples. IT-DT utilizes techniques like
attention maps, integrated gradients, and model feedback for interpretability
during detection. This helps identify salient features and perturbed words
contributing to adversarial classifications. In the transformation phase, IT-DT
uses pre-trained embeddings and model feedback to generate optimal replacements
for perturbed words. By finding suitable substitutions, we aim to convert
adversarial examples into non-adversarial counterparts that align with the
model's intended behavior while preserving the text's meaning. Transparency is
emphasized through human expert involvement. Experts review and provide
feedback on detection and transformation results, enhancing decision-making,
especially in complex scenarios. The framework generates insights and threat
intelligence empowering analysts to identify vulnerabilities and improve model
robustness. Comprehensive experiments demonstrate the effectiveness of IT-DT in
detecting and transforming adversarial examples. The approach enhances
interpretability, provides transparency, and enables accurate identification
and successful transformation of adversarial inputs. By combining technical
analysis and human expertise, IT-DT significantly improves the resilience and
trustworthiness of transformer-based text classifiers against adversarial
attacks.
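The detection idea described above (using attribution signals plus model feedback to locate perturbed words) can be illustrated with a toy occlusion-based attribution, a simple stand-in for the attention maps and integrated gradients that IT-DT uses. Everything here is hypothetical: `toy_score` is an invented placeholder classifier, not the paper's model, and the threshold is arbitrary.

```python
# Hypothetical sketch: score each token by how much removing it changes
# the classifier's output (occlusion attribution), then flag high-impact
# tokens as candidate adversarial perturbations. Not the IT-DT code.
def toy_score(tokens):
    """Toy 'suspiciousness' score: fraction of tokens containing digits,
    mimicking a classifier sensitive to character-level perturbations."""
    if not tokens:
        return 0.0
    return sum(any(c.isdigit() for c in t) for t in tokens) / len(tokens)

def token_attributions(tokens, score_fn):
    """Occlusion attribution: drop in score when each token is removed."""
    base = score_fn(tokens)
    return {
        tok: base - score_fn(tokens[:i] + tokens[i + 1:])
        for i, tok in enumerate(tokens)
    }

def flag_perturbed(tokens, score_fn, threshold=0.1):
    """Flag tokens whose attribution exceeds the threshold; in IT-DT these
    would be handed to the transformation phase for replacement."""
    attrs = token_attributions(tokens, score_fn)
    return [tok for tok, a in attrs.items() if a > threshold]

print(flag_perturbed(["please", "ver1fy", "your", "account"], toy_score))
```

In the full framework, flagged tokens would then be mapped back to clean replacements via pre-trained embeddings, with a human expert reviewing ambiguous cases.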