Predicting Exploitation of Disclosed Software Vulnerabilities Using Open-source Data
Each year, thousands of software vulnerabilities are discovered and reported
to the public. Unpatched known vulnerabilities are a significant security risk.
It is imperative that software vendors quickly provide patches once
vulnerabilities are known and users quickly install those patches as soon as
they are available. However, most vulnerabilities are never actually exploited.
Since writing, testing, and installing software patches can involve
considerable resources, it would be desirable to prioritize the remediation of
vulnerabilities that are likely to be exploited. Several published research
studies have reported moderate success in applying machine learning techniques
to the task of predicting whether a vulnerability will be exploited. These
approaches typically use features derived from vulnerability databases (such as
the summary text describing the vulnerability) or social media posts that
mention the vulnerability by name. However, these prior studies share multiple
methodological shortcomings that inflate the apparent predictive power of these
approaches. We replicate key portions of the prior work, compare their
approaches, and show how the selection of training and test data critically
affects the estimated performance of predictive models. The results of this
study point to important methodological considerations that should be taken
into account so that results reflect real-world utility.
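To make the train/test selection issue concrete, the sketch below (not the
authors' code; the file, feature columns, and cutoff date are hypothetical)
contrasts a random split, which can leak post-disclosure information from the
future into training, with a temporal split that trains only on vulnerabilities
disclosed before a cutoff.

```python
# Minimal sketch (not the study's code): random vs. temporal train/test splits
# for exploit prediction. The CSV file, feature columns, and cutoff date are
# hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("vulns.csv", parse_dates=["disclosure_date"])  # hypothetical data
features = ["n_tweets", "cvss_score", "has_public_poc"]         # hypothetical features

# Random split: training rows may postdate test rows, leaking future information.
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["exploited"],
                                          test_size=0.3, random_state=0)
auc_random = roc_auc_score(
    y_te, LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

# Temporal split: train strictly on vulnerabilities disclosed before a cutoff.
cutoff = pd.Timestamp("2016-01-01")
train, test = df[df.disclosure_date < cutoff], df[df.disclosure_date >= cutoff]
clf = LogisticRegression(max_iter=1000).fit(train[features], train["exploited"])
auc_temporal = roc_auc_score(test["exploited"],
                             clf.predict_proba(test[features])[:, 1])

print(f"random-split AUC={auc_random:.3f}  temporal-split AUC={auc_temporal:.3f}")
```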
Pitfalls in Language Models for Code Intelligence: A Taxonomy and Survey
Modern language models (LMs) have been successfully employed in source code
generation and understanding, leading to a significant increase in research
focused on learning-based code intelligence, such as automated bug repair and
test case generation. Despite their great potential, language models for code
intelligence (LM4Code) are susceptible to potential pitfalls, which hinder
realistic performance and further impact their reliability and applicability in
real-world deployment. Such challenges drive the need for a comprehensive
understanding - not just identifying these issues but delving into their
possible implications and existing solutions to build more reliable language
models tailored to code intelligence. Based on a well-defined systematic
research approach, we conducted an extensive literature review to uncover the
pitfalls inherent in LM4Code. In total, 67 primary studies from top-tier venues
were identified. After carefully examining these studies, we designed a
taxonomy of pitfalls in LM4Code research and conducted a systematic study to
summarize the issues, implications, current solutions, and challenges of
different pitfalls for LM4Code systems. We developed a comprehensive
classification scheme that dissects pitfalls across four crucial aspects: data
collection and labeling, system design and learning, performance evaluation,
and deployment and maintenance. Through this study, we aim to provide a roadmap
for researchers and practitioners, facilitating their understanding and
utilization of LM4Code in reliable and trustworthy ways.
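As one concrete example of a data collection and labeling pitfall covered by
such a taxonomy, the sketch below screens an evaluation set for exact
duplicates of training snippets after a crude whitespace normalization; this
check is illustrative only (it is not code from the survey), and real
near-duplicate detection would compare tokens or ASTs.

```python
# Illustrative train/test overlap check for code datasets (not from the survey).
# Whitespace-normalized SHA-256 hashing catches only trivially reformatted
# clones; near-duplicates need token- or AST-level comparison.
import hashlib

def normalize(code: str) -> str:
    # Collapse all whitespace so reformatted copies hash identically.
    return " ".join(code.split())

def leaked(train_snippets, test_snippets):
    train_hashes = {hashlib.sha256(normalize(s).encode()).hexdigest()
                    for s in train_snippets}
    return [s for s in test_snippets
            if hashlib.sha256(normalize(s).encode()).hexdigest() in train_hashes]

train = ["def add(a, b):\n    return a + b"]
test = ["def add(a, b): return a + b", "def sub(a, b):\n    return a - b"]
print(leaked(test_snippets=test, train_snippets=train))  # first test snippet duplicates training
```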
How Effective Are Neural Networks for Fixing Security Vulnerabilities
Security vulnerability repair is a difficult task that is in dire need of
automation. Two groups of techniques have shown promise: (1) large code
language models (LLMs) that have been pre-trained on source code for tasks such
as code completion, and (2) automated program repair (APR) techniques that use
deep learning (DL) models to automatically fix software bugs.
This paper is the first to study and compare Java vulnerability repair
capabilities of LLMs and DL-based APR models. The contributions include that we
(1) apply and evaluate five LLMs (Codex, CodeGen, CodeT5, PLBART and InCoder),
four fine-tuned LLMs, and four DL-based APR techniques on two real-world Java
vulnerability benchmarks (Vul4J and VJBench), (2) design code transformations
to address the training and test data overlapping threat to Codex, (3) create a
new Java vulnerability repair benchmark VJBench, and its transformed version
VJBench-trans and (4) evaluate LLMs and APR techniques on the transformed
vulnerabilities in VJBench-trans.
Our findings include that (1) existing LLMs and APR models fix very few Java
vulnerabilities; Codex fixes 10.2 vulnerabilities (20.4%), the most of all the
evaluated techniques.
(2) Fine-tuning with general APR data improves LLMs' vulnerability-fixing
capabilities. (3) Our new VJBench reveals that LLMs and APR models fail to fix
many Common Weakness Enumeration (CWE) types, such as CWE-325 Missing
cryptographic step and CWE-444 HTTP request smuggling. (4) Codex still fixes
8.3 transformed vulnerabilities, outperforming all the other LLMs and APR
models on transformed vulnerabilities. The results call for innovations to
enhance automated Java vulnerability repair such as creating larger
vulnerability repair training data, tuning LLMs with such data, and applying
code simplification transformations to facilitate vulnerability repair.
Comment: This paper has been accepted to appear in the proceedings of the 32nd
ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA
2023) and to be presented at the conference, held in Seattle, USA, 17-21 July
2023.
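One of the paper's contributions is a set of semantics-preserving code
transformations used to probe whether models such as Codex succeed only
because benchmark code overlaps their training data. Purely as an illustration
of that idea (not the paper's tooling), the sketch below applies a naive,
regex-based identifier renaming to a Java snippet; the rename map is
hypothetical, and a real tool would rename via a Java parser to avoid touching
keywords, API names, or string literals.

```python
# Illustrative only (not the paper's tooling): a naive, regex-based identifier
# renaming applied to a Java snippet, in the spirit of semantics-preserving
# transformations that reduce verbatim overlap with a model's training data.
# The rename map is hypothetical.
import re

RENAMES = {"userInput": "v1", "sanitize": "m1", "result": "v2"}  # hypothetical map

def rename_identifiers(java_src: str, renames: dict) -> str:
    for old, new in renames.items():
        java_src = re.sub(rf"\b{re.escape(old)}\b", new, java_src)
    return java_src

buggy = "String result = sanitize(userInput);\nreturn result;"
print(rename_identifiers(buggy, RENAMES))
# String v2 = m1(v1);
# return v2;
```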
A Survey of Learning-based Automated Program Repair
Automated program repair (APR) aims to fix software bugs automatically and
plays a crucial role in software development and maintenance. With the recent
advances in deep learning (DL), an increasing number of APR techniques have
been proposed to leverage neural networks to learn bug-fixing patterns from
massive open-source code repositories. Such learning-based techniques usually
treat APR as a neural machine translation (NMT) task, where buggy code snippets
(i.e., source language) are translated into fixed code snippets (i.e., target
language) automatically. Benefiting from the powerful capability of DL to learn
hidden relationships from previous bug-fixing datasets, learning-based APR
techniques have achieved remarkable performance. In this paper, we provide a
systematic survey to summarize the current state-of-the-art research in the
learning-based APR community. We illustrate the general workflow of
learning-based APR techniques and detail the crucial components, including
fault localization, patch generation, patch ranking, patch validation, and
patch correctness phases. We then discuss the widely-adopted datasets and
evaluation metrics and outline existing empirical studies. We discuss several
critical aspects of learning-based APR techniques, such as repair domains,
industrial deployment, and the open science issue. We highlight several
practical guidelines on applying DL techniques for future APR studies, such as
exploring explainable patch generation and utilizing code features. Overall,
our paper can help researchers gain a comprehensive understanding of the
achievements of the existing learning-based APR techniques and promote the
practical application of these techniques. Our artifacts are publicly available
at \url{https://github.com/QuanjunZhang/AwesomeLearningAPR}.
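The buggy-to-fixed "translation" formulation described above can be sketched
with an off-the-shelf encoder-decoder checkpoint. The snippet below uses
Salesforce/codet5-base from Hugging Face purely as an example backbone (it is
not singled out by the survey, and without fine-tuning on bug-fix pairs its
outputs are illustrative only), with beam search standing in for the
patch-ranking step.

```python
# Minimal sketch of learning-based APR as neural machine translation:
# buggy code in, ranked candidate patches out. Salesforce/codet5-base is only
# an example backbone; without fine-tuning on bug-fix pairs the generated
# candidates are illustrative, not real patches.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/codet5-base")

buggy = "if (i <= list.size()) { return list.get(i); }"  # off-by-one bug
inputs = tok(buggy, return_tensors="pt")

# Beam search produces a ranked list of candidates (the patch-ranking phase);
# each candidate would then go through patch validation, e.g., running tests.
candidates = model.generate(**inputs, num_beams=5, num_return_sequences=5,
                            max_length=64)
for cand in candidates:
    print(tok.decode(cand, skip_special_tokens=True))
```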
SoK: Prudent Evaluation Practices for Fuzzing
Fuzzing has proven to be a highly effective approach to uncover software bugs
over the past decade. After AFL popularized the groundbreaking concept of
lightweight coverage feedback, the field of fuzzing has seen a vast amount of
scientific work proposing new techniques, improving methodological aspects of
existing strategies, or porting existing methods to new domains. All such work
must demonstrate its merit by showing its applicability to a problem, measuring
its performance, and often showing its superiority over existing works in a
thorough, empirical evaluation. Yet, fuzzing is highly sensitive to its target,
environment, and circumstances, e.g., randomness in the testing process. After
all, relying on randomness is one of the core principles of fuzzing, governing
many aspects of a fuzzer's behavior. Combined with an environment that is often
highly difficult to control, the reproducibility of experiments is a crucial
concern and requires a prudent evaluation setup. To address these threats to
validity, several works, most notably Evaluating Fuzz Testing by Klees et al.,
have outlined how a carefully designed evaluation setup should be implemented,
but it remains unknown to what extent their recommendations have been adopted
in practice. In this work, we systematically analyze the evaluation of 150
fuzzing papers published at the top venues between 2018 and 2023. We study how
existing guidelines are implemented and observe potential shortcomings and
pitfalls. We find a surprising disregard of the existing guidelines regarding
statistical tests and systematic errors in fuzzing evaluations. For example,
when investigating reported bugs, we find that the search for vulnerabilities
in real-world software leads to authors requesting and receiving CVEs of
questionable quality. Extending our literature analysis to the practical
domain, we attempt to reproduce the claims of eight fuzzing papers. These case
studies allow us to assess the practical reproducibility of fuzzing research
and identify archetypal pitfalls in the evaluation design. Unfortunately, our
reproduced results reveal several deficiencies in the studied papers, and we
are unable to fully support and reproduce the respective claims. To help the
field of fuzzing move toward a scientifically reproducible evaluation strategy,
we propose updated guidelines for conducting a fuzzing evaluation that future
work should follow.
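One guideline this line of work keeps returning to (following Klees et al.) is
to repeat each fuzzing trial many times and compare fuzzers with a statistical
test and an effect size rather than a single run. The sketch below shows one
common realization of that advice, a two-sided Mann-Whitney U test plus the
Vargha-Delaney A12 effect size over per-trial coverage; the coverage numbers
are made up, not data from the surveyed papers.

```python
# Illustrative, statistically grounded fuzzer comparison in the spirit of the
# guidelines discussed above: repeated trials, a two-sided Mann-Whitney U test,
# and the Vargha-Delaney A12 effect size. The coverage numbers are made up.
from scipy.stats import mannwhitneyu

def a12(x, y):
    # Probability that a random trial of x beats a random trial of y (ties count half).
    greater = sum(1 for a in x for b in y if a > b)
    equal = sum(1 for a in x for b in y if a == b)
    return (greater + 0.5 * equal) / (len(x) * len(y))

# Branch coverage after a fixed time budget, one value per independent trial.
fuzzer_a = [1412, 1388, 1450, 1395, 1431, 1402, 1418, 1440, 1399, 1425]
fuzzer_b = [1380, 1391, 1375, 1410, 1366, 1385, 1372, 1398, 1369, 1377]

stat, p = mannwhitneyu(fuzzer_a, fuzzer_b, alternative="two-sided")
print(f"Mann-Whitney U={stat:.1f}, p={p:.4f}, A12={a12(fuzzer_a, fuzzer_b):.2f}")
```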