A Survey of Prevent and Detect Access Control Vulnerabilities
Broken access control is one of the most common security vulnerabilities in
web applications. These vulnerabilities are a major cause of data breach
incidents, which result in privacy concerns and revenue loss. However,
preventing and detecting access control vulnerabilities proactively in web
applications is difficult. Currently, these vulnerabilities are largely
detected by bug bounty hunters after deployment, which leaves attack windows
open for malicious access. Solving this problem proactively requires security
awareness and expertise from developers, which calls for systematic solutions.
This survey aims to provide a structured overview of approaches that tackle
access control vulnerabilities. It first discusses the unique features of
access control vulnerabilities, then reviews existing work on tackling them in
web applications, spanning the software development spectrum from software
design and implementation, through software analysis and testing, to runtime
monitoring. Finally, we discuss open problems in this field.
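To make the class of bugs concrete, the following minimal sketch shows an insecure direct object reference (IDOR), one common form of broken access control, next to a version with the missing ownership check. All names (`DOCUMENTS`, `get_document_checked`, etc.) are illustrative, not from the survey:

```python
# Hypothetical in-memory "database" keyed by document id.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's notes"},
    2: {"owner": "bob", "body": "bob's notes"},
}

def get_document_vulnerable(user, doc_id):
    # Vulnerable: trusts the client-supplied doc_id and never checks
    # whether the requesting user is allowed to read this document.
    return DOCUMENTS[doc_id]["body"]

def get_document_checked(user, doc_id):
    # Fixed: verify ownership before returning the document body.
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != user:
        raise PermissionError("access denied")
    return doc["body"]
```

The vulnerable variant lets any authenticated user read any document simply by guessing ids, which is exactly the post-deployment behavior bug bounty hunters probe for.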
The Evolution of Wikipedia's Norm Network
Social norms have traditionally been difficult to quantify. In any particular
society, their sheer number and complex interdependencies often limit a
system-level analysis. One exception is that of the network of norms that
sustain the online Wikipedia community. We study the fifteen-year evolution of
this network using the interconnected set of pages that establish, describe,
and interpret the community's norms. Despite Wikipedia's reputation for
ad hoc governance, we find that its normative evolution is highly
conservative. The earliest users create norms that both dominate the network
and persist over time. These core norms govern both content and interpersonal
interactions using abstract principles such as neutrality, verifiability, and
assume good faith. As the network grows, norm neighborhoods decouple
topologically from each other, while increasing in semantic coherence. Taken
together, these results suggest that the evolution of Wikipedia's norm network
is akin to bureaucratic systems that predate the information age.
Comment: 22 pages, 9 figures. Matches published version. Data available at
http://bit.ly/wiki_nor
Content blaster : the online show generator
Integrated Master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 200
Polyglot Semantic Parsing in APIs
Traditional approaches to semantic parsing (SP) work by training individual
models for each available parallel dataset of text-meaning pairs. In this
paper, we explore the idea of polyglot semantic translation, or learning
semantic parsing models that are trained on multiple datasets and natural
languages. In particular, we focus on translating text to code signature
representations using the software component datasets of Richardson and Kuhn
(2017a,b). The advantage of such models is that they can be used for parsing a
wide variety of input natural languages and output programming languages, or
mixed input languages, using a single unified model. To facilitate modeling of
this type, we develop a novel graph-based decoding framework that achieves
state-of-the-art performance on the above datasets, and apply this method to
two other benchmark SP tasks.
Comment: accepted for NAACL-2018 (camera-ready version)
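One common recipe for this kind of single unified model, sketched below purely for illustration (this is not the paper's code, and the tag format is hypothetical), is to mark each training pair with its natural language and target API so that examples from all datasets can be pooled:

```python
# Illustrative sketch: tag each (text, signature) pair with its source
# natural language and target API, then train one model over the union.
def make_polyglot_example(text, signature, nl_lang, api):
    # Prepend language/API tags so a single model can condition on them,
    # including for mixed-language input.
    src = f"<{nl_lang}> <{api}> {text}"
    return (src, signature)

pairs = [
    make_polyglot_example("return the maximum of two values",
                          "int max(int a, int b)", "en", "stdlib"),
    make_polyglot_example("gibt das Maximum zweier Werte zurueck",
                          "int max(int a, int b)", "de", "stdlib"),
]
```

Under this setup, one set of parameters serves every input language and output representation, which is the practical advantage the abstract highlights.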
CARBON: A Counterfactual Reasoning based Framework for Neural Code Comprehension Debiasing
Previous studies have demonstrated that code intelligence models are
sensitive to program transformations, among which identifier renaming is
particularly easy to apply and effective: by simply renaming one identifier in
the source code, the models can output completely different results. The prior
research generally mitigates the problem by generating more training samples.
Such an approach is less than ideal since its effectiveness depends on the
quantity and quality of the generated samples. Unlike these studies, we focus
on adjusting models to explicitly distinguish the influence of identifier
names on the results, called naming bias in this paper, thereby making the
models robust to identifier renaming. Specifically, we formulate the
naming bias with a structural causal model (SCM), and propose a counterfactual
reasoning based framework named CARBON for eliminating the naming bias in
neural code comprehension. CARBON explicitly captures the naming bias through
multi-task learning in the training stage, and reduces the bias by
counterfactual inference in the inference stage. We evaluate CARBON on three
neural code comprehension tasks, including function naming, defect detection
and code classification. Experimental results show that CARBON achieves
better performance (e.g., +0.5% in F1 score on the function naming task) than
the baseline models on the original benchmark datasets, and a significant
improvement (e.g., +37.9% in F1 score on the function naming task) on the
datasets with identifiers renamed. The proposed framework provides a causal
view for improving the robustness of code intelligence models.