Determinants of quality, latency, and amount of Stack Overflow answers about recent Android APIs
Stack Overflow is a popular crowdsourced question-and-answer website for programming-related issues. It is an invaluable resource for software developers; on average, questions posted there get answered within minutes to an hour. Questions about well-established topics, e.g., the coercion operator in C++ or the difference between canonical and class names in Java, get asked often in one form or another and are answered very quickly. On the other hand, questions on previously unseen or niche topics take a while to get a good answer. This is particularly the case for questions about updates to existing application programming interfaces (APIs) or the introduction of new ones. In a hyper-competitive online market, getting good answers to current programming questions sooner could increase the chances of an app getting released and used. So, can developers do anything to hasten the arrival of good answers to questions about new APIs? Here, we empirically study Stack Overflow questions pertaining to new Android APIs and their associated answers. We contrast the interest in these questions, their answer quality, and the timeliness of their answers with those of questions about old APIs. We find that Stack Overflow answerers in general prioritize with respect to recency: questions about new APIs do get more answers, but good-quality answers take longer. We also find that incentives in the form of question bounties, if used appropriately, can significantly shorten the time to a good answer and increase answer quality; interestingly, no operationalization of the bounty amount shows significance in our models. In practice, our findings confirm the value of bounties in enhancing expert participation. They also show that the Stack Overflow style of crowdsourcing, for all its glory in providing answers about established programming knowledge, is less effective for questions about new APIs.
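For concreteness, here is a minimal Java sketch (ours, not the paper's) of the "canonical vs. class name" distinction cited above as a typical well-established question:

    // Class.getName() returns the JVM-internal binary name, while
    // getCanonicalName() returns the name as written in source code.
    public class NameDemo {
        static class Inner {}

        public static void main(String[] args) {
            System.out.println(Inner.class.getName());             // NameDemo$Inner
            System.out.println(Inner.class.getCanonicalName());    // NameDemo.Inner
            System.out.println(String[].class.getName());          // [Ljava.lang.String;
            System.out.println(String[].class.getCanonicalName()); // java.lang.String[]
        }
    }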
Assessing Code Authorship: The Case of the Linux Kernel
Code authorship is key information in large-scale open-source systems. Among other uses, it allows maintainers to assess the division of work and identify key collaborators. Interestingly, open-source communities lack guidelines on how to manage authorship. This could be mitigated by setting out to build an empirical body of knowledge on how authorship-related measures evolve in successful open-source communities. Toward that direction, we perform a case study on the Linux kernel. Our results show that: (a) only a small portion of developers (26%) makes significant contributions to the code base; (b) the distribution of the number of files per author is highly skewed: a small group of top authors (3%) is responsible for hundreds of files, while most authors (75%) are responsible for at most 11 files; (c) most authors (62%) have a specialist profile; (d) authors with a high number of co-authorship connections tend to collaborate with others with fewer connections.

Comment: Accepted at the 13th International Conference on Open Source Systems (OSS). 12 pages.
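As an illustration of the kind of measure involved, here is a minimal sketch (our approximation, not the paper's tooling, whose authorship measure may be more elaborate) that counts distinct touched files per commit author from a repository's git history:

    // A rough files-per-author tally: attribute each file to every commit
    // author who touched it, then count distinct files per author.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.*;

    public class FilesPerAuthor {
        public static void main(String[] args) throws Exception {
            Process git = new ProcessBuilder(
                    "git", "log", "--name-only", "--pretty=format:@%an")
                    .redirectErrorStream(true).start();
            Map<String, Set<String>> touched = new HashMap<>();
            String author = null;
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(git.getInputStream()))) {
                for (String line; (line = r.readLine()) != null; ) {
                    if (line.startsWith("@")) {
                        author = line.substring(1);   // author line of a commit
                    } else if (!line.isBlank() && author != null) {
                        touched.computeIfAbsent(author, a -> new HashSet<>()).add(line);
                    }
                }
            }
            touched.forEach((a, f) -> System.out.println(f.size() + "\t" + a));
        }
    }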
The Effect of Security Education and Expertise on Security Assessments: The Case of Software Vulnerabilities
In spite of the growing importance of software security and the industry demand for more cyber security expertise in the workforce, the effect of security education and experience on the ability to assess complex software security problems has only recently been investigated. As a proxy for the full range of software security skills, we considered the problem of assessing the severity of software vulnerabilities by means of a structured analysis methodology widely used in industry (i.e., the Common Vulnerability Scoring System (CVSS) v3), and designed a study to compare how accurately individuals with a background in information technology but different professional experience and education in cyber security are able to assess the severity of software vulnerabilities. Our results provide some structural insights into the complex relationship between the education or experience of assessors and the quality of their assessments. In particular, we find that individual characteristics matter more than professional experience or formal education; apparently, it is the combination of skills that one possesses (including actual knowledge of the system under study), rather than specialization or years of experience, that most influences assessment quality. Similarly, we find that the overall advantage conferred by professional expertise depends significantly on the composition of the individual's security skills as well as on the available information.

Comment: Presented at the Workshop on the Economics of Information Security (WEIS 2018), Innsbruck, Austria, June 2018.
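To give a sense of the structured methodology involved, here is a minimal Java sketch of the CVSS v3.1 base-score arithmetic for the common Scope:Unchanged case; the weights follow the public CVSS v3.1 specification, and the Scope:Changed branch and temporal/environmental metrics are omitted for brevity:

    // CVSS v3.1 base score, Scope:Unchanged only. Example weights:
    // AV:N=0.85, AC:L=0.77, PR:N=0.85, UI:N=0.85; C/I/A: H=0.56, L=0.22, N=0.0
    public class CvssBase {
        static double baseScore(double av, double ac, double pr, double ui,
                                double c, double i, double a) {
            double iss = 1 - (1 - c) * (1 - i) * (1 - a);
            double impact = 6.42 * iss;                       // Scope:Unchanged
            double exploitability = 8.22 * av * ac * pr * ui;
            if (impact <= 0) return 0.0;
            double score = Math.min(impact + exploitability, 10.0);
            return Math.ceil(score * 10) / 10.0;              // "round up" to 1 decimal
        }

        public static void main(String[] args) {
            // CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
            System.out.println(baseScore(0.85, 0.77, 0.85, 0.85, 0.56, 0.56, 0.56));
        }
    }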
How to Ask for Technical Help? Evidence-based Guidelines for Writing Questions on Stack Overflow
Context: The success of Stack Overflow and other community-based question-and-answer (Q&A) sites depends mainly on the willingness of their members to answer others' questions. In fact, when formulating requests on Q&A sites, we are not simply seeking information; we are also asking for other people's help and feedback. Understanding the dynamics of participation in Q&A communities is essential to improving the value of crowdsourced knowledge.
Objective: In this paper, we investigate how information seekers can increase the chance of eliciting a successful answer to their questions on Stack Overflow by focusing on the following actionable factors: affect, presentation quality, and time.
Method: We develop a conceptual framework of factors potentially influencing the success of questions on Stack Overflow. We quantitatively analyze a set of over 87K questions from the official Stack Overflow dump to assess the impact of actionable factors on the success of technical requests. The information seeker's reputation is included as a control factor. Furthermore, to understand the role played by affective states in the success of questions, we qualitatively analyze questions containing positive and negative emotions. Finally, a survey is conducted to understand how Stack Overflow users perceive the suggested guidelines for writing questions.
Results: We found that, regardless of user reputation, successful questions are short, contain code snippets, and do not overuse uppercase characters. As regards affect, successful questions adopt a neutral emotional style.
Conclusion: We provide evidence-based guidelines for writing effective questions on Stack Overflow that software engineers can follow to increase the chance of getting technical help. As for the role of affect, we empirically confirmed community guidelines that suggest avoiding rudeness in question writing.

Comment: Preprint, to appear in Information and Software Technology.
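To make the guidelines actionable, here is a hypothetical lint pass (our illustration, not an artifact of the paper; both thresholds are invented for the example) that flags a draft question against the reported findings:

    // Flags a draft question body against three of the abstract's findings:
    // keep it short, include a code snippet, avoid uppercase abuse.
    public class QuestionLint {
        static java.util.List<String> check(String body) {
            var hints = new java.util.ArrayList<String>();
            long letters = body.chars().filter(Character::isLetter).count();
            long uppers  = body.chars().filter(Character::isUpperCase).count();
            if (body.split("\\s+").length > 300)                 // illustrative length threshold
                hints.add("Consider shortening the question.");
            if (!body.contains("    ") && !body.contains("```")) // no indented/fenced code found
                hints.add("Consider adding a minimal code snippet.");
            if (letters > 0 && (double) uppers / letters > 0.3)  // illustrative ratio threshold
                hints.add("Avoid writing in uppercase.");
            return hints;
        }

        public static void main(String[] args) {
            System.out.println(check("PLEASE HELP ASAP!!! MY CODE DOES NOT WORK"));
        }
    }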