
    Determinants of quality, latency, and amount of Stack Overflow answers about recent Android APIs.

    Stack Overflow is a popular crowdsourced question and answer website for programming-related issues. It is an invaluable resource for software developers; on average, questions posted there get answered within minutes to an hour. Questions about well-established topics, e.g., the coercion operator in C++, or the difference between canonical and class names in Java, get asked often in one form or another, and answered very quickly. On the other hand, questions on previously unseen or niche topics take a while to get a good answer. This is particularly the case with questions about recent updates to, or the introduction of, new application programming interfaces (APIs). In a hyper-competitive online market, getting good answers to current programming questions sooner could increase the chances of an app getting released and used. So, can developers somehow, e.g., through incentives, hasten good answers to questions about new APIs? Here, we empirically study Stack Overflow questions pertaining to new Android APIs and their associated answers. We contrast the interest in these questions, their answer quality, and the timeliness of their answers with those of questions about old APIs. We find that Stack Overflow answerers in general prioritize with respect to currentness: questions about new APIs do get more answers, but good-quality answers take longer. We also find that incentives in the form of question bounties, if used appropriately, can significantly shorten the time to a good answer and increase answer quality. Interestingly, no operationalization of bounty amount shows significance in our models. In practice, our findings confirm the value of bounties in enhancing expert participation. In addition, they show that the Stack Overflow style of crowdsourcing, for all its strength in providing answers about established programming knowledge, is less effective with new API questions.

    The Sky is Not Falling, Todd Newman: The Ninth Circuit Endorses a Measured Reading of Newman's Definition of Personal Benefit for Insider Trading Liability in United States v. Salman

    On July 6, 2015, the U.S. Court of Appeals for the Ninth Circuit, in United States v. Salman, declined to adopt the novel definition of the personal-benefit element for insider trading articulated by the U.S. Court of Appeals for the Second Circuit in United States v. Newman in December 2014. In so doing, the court's decision presented the first significant resistance to the longevity of the Newman court's apparent holding that the personal-benefit element requires proof of a pecuniary exchange in all instances. This Comment argues that the court in Salman correctly declined to extend the Newman personal-benefit definition beyond its facts, that the two cases are reconcilable, and that together they illustrate the difference between "friends" and family for the purposes of establishing tipper-tippee insider-trading liability.

    Report of the Marine Protected Areas Working Group meeting, Penang, Malaysia, 11-12 February, 2014

    The objectives of the workshop were to review and update Marine Protected Area (MPA) data, finalise policy briefs for each country and recommend future actions and policies for sustainable management of MPAs

    Report of the MPA Atlas and the interactive online database portal of the Bay of Bengal Large Marine Ecosystem

    This report describes the process and details of developing an interactive online database portal for the BOBLME region. The MPA (Marine Protected Area) Atlas website, created by WorldFish, was designed to provide public access to the latest information relevant to marine scientists, managers, and conservationists. The main features include: the BOBLME MPA database; interactive geospatial maps; and information about important habitats such as coral reefs, BOBLME boundaries, and bathymetry.

    A survey of the use of crowdsourcing in software engineering

    The term 'crowdsourcing' was initially introduced in 2006 to describe an emerging distributed problem-solving model by online workers. Since then it has been widely studied and practiced to support software engineering. In this paper we provide a comprehensive survey of the use of crowdsourcing in software engineering, seeking to cover all literature on this topic. We first review the definitions of crowdsourcing and derive our definition of Crowdsourcing Software Engineering together with its taxonomy. Then we summarise industrial crowdsourcing practice in software engineering and corresponding case studies. We further analyse the software engineering domains, tasks and applications for crowdsourcing and the platforms and stakeholders involved in realising Crowdsourced Software Engineering solutions. We conclude by exposing trends, open issues and opportunities for future research on Crowdsourced Software Engineering

    I'll Know It When I See It...I Think: United States v. Newman and Insider Trading Legislation

    The Second Circuit's decision in United States v. Newman has reinvigorated an important and longstanding debate about insider trading: whether insider trading should be explicitly prohibited by statute. In response to the Second Circuit's decision, Congress introduced three bills to codify insider trading liability. Each bill takes a different approach. Between the three bills, two general approaches emerged. One is to impose a broad prohibition on insider trading that arguably leaves the existing insider trading regime untouched. The second develops a narrower, carefully delineated standard of liability that departs from the current insider trading regime in important ways. Both approaches deserve careful scrutiny if Congress decides to move forward with codifying insider trading liability by statute. First, to provide a foundation, this Comment briefly traces the judicial development of insider trading liability through the U.S. Supreme Court's previous decisions on insider trading. Next, this Comment discusses United States v. Newman and the executive and judicial responses to that decision. This Comment then discusses the need for codifying insider trading liability by statute and the potential benefits of codification. Next, a careful analysis of each bill identifies its strengths and weaknesses. Even small differences between bills impose vastly different standards of liability and provide varying levels of guidance for market actors, prosecutors, and the courts. Finally, this Comment proposes changes to the bills' established frameworks and outlines other factors Congress should weigh if it decides to codify insider trading liability by statute.

    Simultaneous regression and classification for drug sensitivity prediction using an advanced random forest method

    Machine learning methods trained on cancer cell line panels are intensively studied for the prediction of optimal anti-cancer therapies. While classification approaches distinguish effective from ineffective drugs, regression approaches aim to quantify the degree of drug effectiveness. However, the high specificity of most anti-cancer drugs induces a skewed distribution of drug response values in favor of the more drug-resistant cell lines, negatively affecting the classification performance (class imbalance) and regression performance (regression imbalance) for the sensitive cell lines. Here, we present a novel approach called SimultAneoUs Regression and classificatiON Random Forests (SAURON-RF) based on the idea of performing a joint regression and classification analysis. We demonstrate that SAURON-RF improves the classification and regression performance for the sensitive cell lines at the expense of a moderate loss for the resistant ones. Furthermore, our results show that simultaneous classification and regression can be superior to regression or classification alone.
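    The core idea above can be illustrated with a toy sketch. This is not the authors' SAURON-RF implementation; it is a minimal, hand-rolled example (all data, the threshold, and the stump model are invented for illustration) showing how one weighted regression model can serve both tasks: sample weights upweight the rare "sensitive" class to counter imbalance, and a class label is derived by thresholding the continuous prediction.

    ```python
    # Toy sketch (NOT SAURON-RF itself): a weighted regression stump whose
    # continuous prediction is also thresholded into a sensitive/resistant call.

    def fit_stump(x, y, w):
        """Fit a depth-1 regression tree minimizing weighted squared error."""
        def wmean(pairs):
            return sum(yi * wi for yi, wi in pairs) / sum(wi for _, wi in pairs)
        best = None
        for t in sorted(set(x)):
            left = [(yi, wi) for xi, yi, wi in zip(x, y, w) if xi <= t]
            right = [(yi, wi) for xi, yi, wi in zip(x, y, w) if xi > t]
            if not left or not right:
                continue
            ml, mr = wmean(left), wmean(right)
            sse = (sum(wi * (yi - ml) ** 2 for yi, wi in left)
                   + sum(wi * (yi - mr) ** 2 for yi, wi in right))
            if best is None or sse < best[0]:
                best = (sse, t, ml, mr)
        _, t, ml, mr = best
        return lambda xi: ml if xi <= t else mr

    # Invented data: low response = sensitive (rare), high = resistant.
    x = [0.1, 0.2, 0.3, 1.5, 1.6, 1.7, 1.8, 1.9]   # a cell-line feature
    y = [0.2, 0.3, 0.25, 2.0, 2.1, 2.2, 1.9, 2.3]  # drug response
    threshold = 1.0  # sensitive if predicted response < threshold
    # Upweight the minority (sensitive) class to counter class imbalance.
    w = [3.0 if yi < threshold else 1.0 for yi in y]

    predict = fit_stump(x, y, w)
    for xi in [0.15, 1.75]:
        yhat = predict(xi)
        label = "sensitive" if yhat < threshold else "resistant"
        print(f"x={xi}: predicted response {yhat:.2f} -> {label}")
    ```

    A real random forest would average many such trees over bootstrap samples; the point here is only that regression output and class label come from one jointly weighted model.
    
    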

    TF-IDF Inspired Detection for Cross-Language Source Code Plagiarism and Collusion

    Several computing courses allow students to choose which programming language they want to use for completing a programming task. This can lead to cross-language code plagiarism and collusion, in which the copied code file is rewritten in another programming language. In response, this paper proposes a detection technique which is able to accurately compare code files written in various programming languages, with limited effort needed to accommodate such languages at the development stage. The only language-dependent component used in the technique is the source code tokeniser, and no code conversion is applied. The impact of coincidental similarity is reduced by applying a TF-IDF-inspired weighting, in which rare matches are prioritised. Our evaluation shows that the technique outperforms common techniques in academia for handling language-conversion disguises. Further, it is comparable to those techniques when dealing with conventional disguises.
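    The weighting idea can be sketched as follows. This is a simplified illustration, not the paper's detector: the regex tokeniser, the tiny invented corpus, and the set-based similarity measure are all assumptions for demonstration. Tokens shared by every file get an IDF of zero, so matches on ubiquitous syntax (e.g. `int`, `return`, parentheses) contribute little, while a match on a rare identifier carries most of the weight.

    ```python
    # Sketch (NOT the paper's technique): IDF-weighted token overlap, so that
    # rare shared tokens dominate the similarity score.
    import math
    import re

    def tokens(code):
        # Naive language-agnostic tokeniser; the paper uses a proper
        # per-language source code tokeniser instead.
        return re.findall(r"[A-Za-z_]\w*|\d+|\S", code)

    def idf(corpus):
        n = len(corpus)
        docs = [set(tokens(c)) for c in corpus]
        vocab = set().union(*docs)
        return {t: math.log(n / sum(t in d for d in docs)) for t in vocab}

    def similarity(a, b, weights):
        ta, tb = set(tokens(a)), set(tokens(b))
        num = sum(weights.get(t, 0.0) for t in ta & tb)
        den = sum(weights.get(t, 0.0) for t in ta | tb)
        return num / den if den else 0.0

    snippets = [
        "int addTotals(int a, int b) { return a + b; }",  # original (C-like)
        "def addTotals(a, b): return a + b",              # same logic, Python
        "int sub(int a, int b) { return a - b; }",
        "int mul(int a, int b) { return a * b; }",
        "int neg(int a) { return -a; }",
    ]
    w = idf(snippets)
    # The cross-language copy (shared rare identifier 'addTotals') scores
    # higher than the unrelated file written in the same language.
    print(round(similarity(snippets[0], snippets[1], w), 3))
    print(round(similarity(snippets[0], snippets[3], w), 3))
    ```

    Using document frequency rather than raw counts is what makes the comparison robust to boilerplate: syntax tokens common to every submission are down-weighted automatically, with no language-specific rules.
    
    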