36,304 research outputs found

    HyGen: Generating Random Graphs with Hyperbolic Communities


    Stack Overflow in Github: Any Snippets There?

    When programmers look for how to achieve certain programming tasks, Stack Overflow is a popular destination in search engine results. Over the years, Stack Overflow has accumulated an impressive knowledge base of amply documented code snippets. We are interested in studying how programmers use these snippets in their projects. Can we find Stack Overflow snippets in real projects? When a snippet is used, is the copy literal, or is it adapted? And are these adaptations specializations required by the idiosyncrasies of the target artifact, or are they motivated by specific requirements of the programmer? The large-scale study presented in this paper analyzes 909k non-fork Python projects hosted on GitHub, containing 290M function definitions, against 1.9M Python snippets captured from Stack Overflow. Results are presented as a quantitative analysis of block-level code cloning within and across Stack Overflow and GitHub, and as a qualitative analysis of the programming behaviors revealed by our findings. Comment: 14th International Conference on Mining Software Repositories, 11 pages
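    The abstract does not spell out how snippet-to-function matches are detected; a minimal sketch of block-level clone detection under common assumptions (strip comments and whitespace, fingerprint each block, intersect the fingerprints) is shown below. The function names, the naive normalization rules and the toy inputs are illustrative, not the authors' actual pipeline.

        import hashlib
        import re

        def normalize_block(code: str) -> str:
            """Strip comments (naively), blank lines and surrounding whitespace so
            trivially different copies of the same block hash to the same value."""
            lines = []
            for line in code.splitlines():
                line = re.sub(r"#.*$", "", line).rstrip()  # drop inline comments
                if line.strip():
                    lines.append(line.strip())
            return "\n".join(lines)

        def block_hash(code: str) -> str:
            """Fingerprint a normalized code block."""
            return hashlib.sha1(normalize_block(code).encode("utf-8")).hexdigest()

        def find_clones(snippets: dict, functions: dict) -> list:
            """Return (snippet_id, function_id) pairs whose normalized bodies match.
            `snippets` maps Stack Overflow snippet ids to code; `functions` maps
            GitHub function ids to their source."""
            index = {}
            for sid, code in snippets.items():
                index.setdefault(block_hash(code), []).append(sid)
            clones = []
            for fid, code in functions.items():
                for sid in index.get(block_hash(code), []):
                    clones.append((sid, fid))
            return clones

        # Toy usage: the literal copy is detected, the adapted copy is not.
        snippets = {"so_1": "def add(a, b):\n    return a + b  # sum"}
        functions = {
            "gh_1": "def add(a, b):\n    return a + b",
            "gh_2": "def add(a, b):\n    return a + b + 1",
        }
        print(find_clones(snippets, functions))  # [('so_1', 'gh_1')]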

    Challenges in identifying and interpreting organizational modules in morphology

    Form is a rich concept that agglutinates information about the proportions and topological arrangement of body parts. Modularity is readily measurable in both features: the variation of proportions (variational modules) and the organization of topology (organizational modules). The study of variational modularity and of organizational modularity faces similar challenges regarding the identification of meaningful modules and the validation of generative processes; however, most studies in morphology focus solely on variational modularity, while organizational modularity is much less understood. A possible cause for this bias is the successful development over the last twenty years of morphometrics, and especially geometric morphometrics, to study patterns of variation. This contrasts with the lack of a similar mathematical framework to deal with patterns of organization. Recently, a new mathematical framework has been proposed to study the organization of gross anatomy using tools from network theory, the so-called Anatomical Network Analysis (AnNA). In this essay, I explore the potential use of this new framework, and the challenges it faces in identifying and validating biologically meaningful modules in morphological systems, by providing working examples of a complete analysis of modularity of the human skull and upper limb. Finally, I suggest further directions of research that may bridge the gap between variational and organizational modularity studies, and discuss how alternative modeling strategies of morphological systems using networks can benefit from each other.
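    Anatomical Network Analysis treats anatomy as a graph whose nodes are anatomical elements and whose edges are physical contacts, with organizational modules recovered by community detection. The following sketch illustrates that idea with networkx on a toy network; the listed elements, contacts and the greedy modularity algorithm are stand-ins chosen for illustration, not the paper's actual skull and limb models or its method of choice.

        import networkx as nx
        from networkx.algorithms import community

        # Toy anatomical network: nodes are skeletal elements, edges are physical
        # contacts (articulations/sutures). Elements and contacts are illustrative.
        G = nx.Graph()
        G.add_edges_from([
            ("frontal", "parietal_L"), ("frontal", "parietal_R"),
            ("parietal_L", "parietal_R"), ("parietal_L", "temporal_L"),
            ("parietal_R", "temporal_R"), ("temporal_L", "occipital"),
            ("temporal_R", "occipital"), ("parietal_L", "occipital"),
            ("parietal_R", "occipital"),
            ("humerus", "radius"), ("humerus", "ulna"), ("radius", "ulna"),
            ("radius", "carpals"), ("ulna", "carpals"),
            ("occipital", "humerus"),  # artificial link joining the two blocks
        ])

        # Organizational modules = communities of densely connected elements.
        modules = community.greedy_modularity_communities(G)
        print("modularity:", round(community.modularity(G, modules), 3))
        for i, m in enumerate(modules, 1):
            print(f"module {i}:", sorted(m))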

    Identifying Roles of Software Developers from their Answers on Stack Overflow

    Stack Overflow is the world’s largest community of software developers. Users ask and answer questions on various tagged topics of software development. The set of questions a site user answers is representative of their knowledge base, or “wheelhouse”. It is proposed that clustering users by their wheelhouse yields communities of software developers with similar skill sets. These communities represent the different roles within software development and could be used as the basis to define roles at any point in time in an ever-evolving landscape. A network graph of site users, linked whenever they answered questions on the same topic, was created, and eight distinct communities were identified using the Louvain method. The modularity of this partition was 0.46, indicating community structure that is unlikely to arise at random. The partition was validated against the results of previous research that used data from the same time period: by extracting the top 5 tags from each identified community, a harmonic F1-score of 0.75 was obtained between the communities and the external dataset. A statistical test showed, with 95% confidence, that the identified communities were not identical to the results of the previous research; nonetheless, a strong similarity remains. Hence, it is suggested that Stack Overflow data can be used to identify and define roles within software development. Upon applying this method to 2021 data, a previously unknown community of experts in R, C and Rust was identified. The method used in this research could be applied directly to any of the 177 Stack Exchange sites and could form the basis of job roles for a wide range of industries.
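    The pipeline described above (link users who answered questions on the same tag, partition with the Louvain method, report modularity and each community's top tags) can be sketched roughly as follows. The answer log, tag names and library calls are illustrative; the study's real input is the Stack Overflow data, which is not reproduced here. The sketch assumes networkx >= 2.8 for its built-in Louvain implementation.

        from collections import Counter, defaultdict
        from itertools import combinations
        import networkx as nx
        from networkx.algorithms import community

        # Toy answer log: (user, tag) pairs standing in for the real dataset.
        answers = [
            ("alice", "python"), ("alice", "pandas"), ("bob", "python"),
            ("bob", "django"), ("carol", "r"), ("carol", "rust"),
            ("dave", "r"), ("dave", "c"), ("erin", "javascript"),
            ("erin", "css"), ("frank", "javascript"), ("frank", "react"),
        ]

        # Link two users whenever they answered questions on the same tag.
        users_by_tag = defaultdict(set)
        for user, tag in answers:
            users_by_tag[tag].add(user)
        G = nx.Graph()
        G.add_nodes_from({u for u, _ in answers})
        for tag, users in users_by_tag.items():
            G.add_edges_from(combinations(sorted(users), 2))

        # Louvain partition and its modularity.
        parts = community.louvain_communities(G, seed=0)
        print("modularity:", round(community.modularity(G, parts), 2))

        # Characterise each community by its most frequent ("top") tags.
        tags_by_user = defaultdict(list)
        for user, tag in answers:
            tags_by_user[user].append(tag)
        for i, part in enumerate(parts, 1):
            tag_counts = Counter(t for u in part for t in tags_by_user[u])
            print(f"community {i}:", [t for t, _ in tag_counts.most_common(5)])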

    Benefit Transfer: Choice Experiment Results

    Benefit transfer entails using estimates of non-market values derived at one site as approximations to benefits at other sites. The method finds favour because it can be applied quickly and cheaply; however, the validity of benefit transfer is frequently questioned. Published studies generally indicate that errors from the approach can be extremely large and could result in significant resource misallocations. Assessing the validity of benefit transfer is complicated by differences in the nature of study and policy sites, the changes being valued, valuation methods, time of study, availability of substitutes and complements, and demographic, social and cultural differences. A choice experiment was used to evaluate the transferability of benefit estimates for identical goods between two different populations. The study design allowed most of the confounding factors to be controlled, so it provides a strong test of benefit transfer validity. Several different tests were applied to evaluate benefit transfer validity, with conflicting results. The paper investigates the merits of the alternative tests and concludes that utility functions were different for the two populations.
    Keywords: Choice model, Choice experiment, Benefit transfer, Mitigation, Agricultural and Food Policy, Environmental Economics and Policy, Financial Economics, Land Economics/Use, Research Methods/Statistical Methods, Resource/Energy Economics and Policy
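    The paper's exact specification is not given in the abstract, but the standard random-utility setup behind such transferability tests can be sketched as follows (a generic conditional logit, not necessarily the authors' model). Utility of alternative $j$ for respondent $i$ is

        U_{ij} = \beta' x_{ij} + \varepsilon_{ij}, \qquad
        P_{ij} = \frac{\exp(\beta' x_{ij})}{\sum_{k} \exp(\beta' x_{ik})},

    so marginal willingness to pay for attribute $a$ is $\mathrm{WTP}_a = -\beta_a / \beta_{\mathrm{cost}}$. Equality of utility functions across populations $A$ and $B$ is then commonly assessed with a likelihood-ratio test of a pooled model against separate models,

        \mathrm{LR} = -2\left[\ln L_{A \cup B} - (\ln L_A + \ln L_B)\right] \sim \chi^2_{\dim(\beta)},

    and rejecting equality, as the paper concludes for its two populations, implies that transferred benefit estimates will differ as well.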

    Determinants of quality, latency, and amount of Stack Overflow answers about recent Android APIs

    Stack Overflow is a popular crowdsourced question and answer website for programming-related issues. It is an invaluable resource for software developers; on average, questions posted there get answered within minutes to an hour. Questions about well-established topics, e.g., the coercion operator in C++ or the difference between canonical and class names in Java, get asked often in one form or another and are answered very quickly. On the other hand, questions on previously unseen or niche topics take a while to get a good answer. This is particularly the case for questions about current updates to, or the introduction of, new application programming interfaces (APIs). In a hyper-competitive online market, getting good answers to current programming questions sooner could increase the chances of an app getting released and used. So, can developers do anything to hasten good answers to questions about new APIs? Here, we empirically study Stack Overflow questions pertaining to new Android APIs and their associated answers. We contrast the interest in these questions, their answer quality, and the timeliness of their answers with those of questions about old APIs. We find that Stack Overflow answerers in general prioritize with respect to currentness: questions about new APIs do get more answers, but good-quality answers take longer. We also find that incentives in the form of question bounties, if used appropriately, can significantly shorten the time to a good answer and increase answer quality. Interestingly, no operationalization of bounty amount shows significance in our models. In practice, our findings confirm the value of bounties in enhancing expert participation. In addition, they show that the Stack Overflow style of crowdsourcing, for all its glory in providing answers about established programming knowledge, is less effective with new API questions.
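    As a rough illustration of the kind of model such a study might fit, the sketch below regresses (log) time to a good answer on whether the question concerns a new API and whether a bounty was offered, using statsmodels on synthetic data. The variable names, effect sizes and data-generating process are invented for illustration and are not the paper's data or models.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic toy data standing in for the paper's Stack Overflow sample:
        # one row per question, with (log) hours until a good answer arrives.
        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "new_api": rng.integers(0, 2, n),     # 1 = question about a new Android API
            "has_bounty": rng.integers(0, 2, n),  # 1 = a bounty was attached
        })
        # Assumed data-generating process: new-API questions wait longer,
        # bounties shorten the wait. Effect sizes are invented.
        df["log_hours_to_good_answer"] = (
            1.0 + 0.8 * df["new_api"] - 0.5 * df["has_bounty"] + rng.normal(0, 1, n)
        )

        # A simple linear model of answer latency on question characteristics,
        # in the spirit of (but much simpler than) the models in the paper.
        model = smf.ols(
            "log_hours_to_good_answer ~ new_api + has_bounty", data=df
        ).fit()
        print(model.summary().tables[1])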

    Identifying Impact Factors of Question Quality in Online Health Q&A Communities: an Empirical Analysis on MedHelp

    Online health Q&A communities help patients, doctors and other users conveniently search for and share healthcare information online, and have gained much popularity all over the world. Good-quality questions that spark extensive discussion can drive users’ engagement, which is beneficial for platform operation. However, little attention has been paid to the antecedents of question quality in online health Q&A communities. To investigate healthcare question quality in depth, this research examines impact factors from two aspects that have been neglected in previous research: users’ structural influence and questions’ sentiment. Using a dataset collected from MedHelp, one of the largest online health Q&A communities, we found that questions posted by users with high structural influence, and questions with negative sentiment, are positively associated with the number of answers a question receives. Our research offers meaningful suggestions to platform managers and users.
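    A simplified analogue of the paper's analysis is a count regression of the number of answers a question receives on the asker's structural influence and an indicator of negative sentiment. The sketch below fits such a model with statsmodels on synthetic data; the variable names, the influence measure and the effect sizes are assumptions for illustration only, not the MedHelp dataset or the authors' exact models.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stand-in for the MedHelp sample: one row per question.
        rng = np.random.default_rng(1)
        n = 800
        df = pd.DataFrame({
            # Asker's structural influence, e.g. a degree/PageRank-style score.
            "influence": rng.normal(0, 1, n),
            # 1 = the question text carries negative sentiment.
            "neg_sentiment": rng.integers(0, 2, n),
        })
        # Invented effects mirroring the reported direction of the findings:
        # higher influence and negative sentiment both raise expected answer counts.
        rate = np.exp(0.3 + 0.4 * df["influence"] + 0.3 * df["neg_sentiment"])
        df["n_answers"] = rng.poisson(rate)

        # Count regression of answers received on the two focal predictors,
        # a simplified analogue of the paper's analysis.
        model = smf.poisson("n_answers ~ influence + neg_sentiment", data=df).fit()
        print(model.summary())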