
    Can Who-Edits-What Predict Edit Survival?

    As the number of contributors to online peer-production systems grows, it becomes increasingly important to predict whether the edits that users make will eventually be beneficial to the project. Existing solutions either rely on a user reputation system or consist of a highly specialized predictor tailored to a specific peer-production system. In this work, we explore a different point in the solution space that goes beyond user reputation but does not involve any content-based feature of the edits. We view each edit as a game between the editor and the component of the project being edited. We posit that the probability that an edit is accepted is a function of the editor's skill, of the difficulty of editing the component, and of a user-component interaction term. Our model is broadly applicable, as it only requires observing who makes an edit, what the edit affects, and whether the edit survives. We apply our model to Wikipedia and the Linux kernel, two examples of large-scale peer-production systems, and we seek to understand whether it can effectively predict edit survival: in both cases, we provide a positive answer. Our approach significantly outperforms those based solely on user reputation and bridges the gap with specialized predictors that use content-based features. It is simple to implement and computationally inexpensive, and it additionally enables us to discover interesting structure in the data.
    Comment: Accepted at KDD 2018
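    The abstract does not give the exact formulation, but the ingredients it names (editor skill, component difficulty, a user-component interaction term) suggest a logistic model over their sum. The sketch below is a minimal illustration under that assumption; all parameter names and values are hypothetical, not the authors' fitted model.

```python
import numpy as np

def edit_survival_probability(skill, difficulty, user_vec, comp_vec):
    """Probability that an edit survives, modeled as a logistic
    function of editor skill, component difficulty, and a latent
    user-component interaction term (illustrative sketch only)."""
    logit = skill - difficulty + user_vec @ comp_vec
    return 1.0 / (1.0 + np.exp(-logit))

# Hypothetical parameters for one editor and one component.
rng = np.random.default_rng(0)
skill, difficulty = 1.2, 0.4      # scalar skill and difficulty
user_vec = rng.normal(size=8)     # latent editor embedding
comp_vec = rng.normal(size=8)     # latent component embedding

print(edit_survival_probability(skill, difficulty, user_vec, comp_vec))
```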

    AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

    Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions may be known at an abstract level, they are not asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. To turn these challenges and pitfalls into a positive recommendation, I conclude by drawing on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs better contribute to the Common Good.
    Comment: to appear in Paladyn, Journal of Behavioral Robotics; accepted on 27-10-2018

    Recommender Systems Based on Deep Learning Techniques

    Master's thesis in Data Science, Universidade de Lisboa, Faculdade de Ciências, 2020.
    The current increase in available options makes individuals feel overwhelmed whenever facing a decision, resulting in a frustrating and time-consuming user experience. Recommender systems are a fundamental tool for mitigating this issue, filtering out the options that are most likely to be irrelevant for each person. Developing these systems presents a vast number of challenges, making it a difficult task to accomplish. To this end, various frameworks have been proposed to aid their development, helping to reduce development costs by offering reusable tools as well as implementations of common strategies and popular models. However, it is still hard to find a framework that also provides full abstraction over data set conversion, support for deep learning-based approaches, extensible models, and reproducible evaluations. This work introduces DRecPy, a novel framework that provides several modules not only to avoid repetitive development work, but also to assist practitioners with the above challenges. DRecPy contains modules to deal with: data set import and conversion tasks; splitting data sets for model training, validation, and testing; sampling data points using distinct strategies; and creating extensible and complex recommenders that follow a defined but flexible model structure; together with many evaluation procedures that provide deterministic results by default. To evaluate this new framework, its consistency is analyzed by comparing the results generated by DRecPy against the results published by others using the same algorithms. Also, to show that DRecPy can be a valuable tool for the recommender systems community, several framework characteristics, such as extensibility, reusability, and reproducibility, are evaluated and compared against existing tools.
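    The abstract highlights deterministic-by-default evaluation. DRecPy's actual API is not shown here, so the sketch below only illustrates the underlying idea with a seeded, reproducible data split; the function and variable names are hypothetical, not DRecPy's.

```python
import random

def train_test_split_interactions(interactions, test_ratio=0.2, seed=42):
    """Deterministic split of (user, item, rating) interactions.
    Fixing the seed makes every evaluation run reproducible by
    default, the property the framework above advocates."""
    shuffled = interactions[:]             # copy; leave input untouched
    random.Random(seed).shuffle(shuffled)  # seeded, hence deterministic
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

interactions = [("u1", "i1", 5), ("u1", "i2", 3),
                ("u2", "i1", 4), ("u3", "i3", 2), ("u3", "i1", 1)]
train, test = train_test_split_interactions(interactions)
print(len(train), "train /", len(test), "test")  # same output on every run
```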

    Deep Learning Framework for Online Interactive Service Recommendation in Iterative Mashup Development

    Recent years have witnessed the rapid development of service-oriented computing technologies. The boom in Web services increases the selection burden on software developers building service-based systems (such as mashups). How to recommend suitable follow-up component services for developing new mashups has become a fundamental problem in service-oriented software engineering. Most existing service recommendation approaches are designed for mashup development in a single-round recommendation scenario, so it is hard for them to update recommendation results in time according to developers' requirements and behaviors (e.g., instant service selection). To address this issue, we propose a deep-learning-based interactive service recommendation framework named DLISR, which aims to capture the interactions among the target mashup, the selected services, and the next service to recommend. Moreover, an attention mechanism is employed in DLISR to weight the selected services when recommending the next service. We also design two separate models for learning interactions from the perspectives of content information and historical invocation information, respectively, as well as a hybrid model called HISR. Experiments on a real-world dataset indicate that HISR outperforms several state-of-the-art service recommendation methods in the online interactive scenario for developing new mashups iteratively.
    Comment: 15 pages, 6 figures, and 3 tables
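    As an illustration of the weighting step described above, here is a simplified sketch of attention over already-selected services when scoring a candidate next service. It is not DLISR's actual architecture (which learns these interactions with deep networks); the embeddings and dimensions are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def score_candidate(mashup_vec, selected_vecs, candidate_vec):
    """Score one candidate service for the next recommendation step.
    Already-selected services are weighted by attention computed
    against the candidate (simplified illustration, not DLISR)."""
    attn = softmax(selected_vecs @ candidate_vec)  # one weight per selected service
    context = attn @ selected_vecs                 # attention-weighted summary
    return mashup_vec @ candidate_vec + context @ candidate_vec

rng = np.random.default_rng(1)
mashup = rng.normal(size=16)          # target mashup embedding
selected = rng.normal(size=(3, 16))   # three services chosen so far
candidates = rng.normal(size=(5, 16)) # five candidate next services
scores = [score_candidate(mashup, selected, c) for c in candidates]
print(int(np.argmax(scores)))         # index of the recommended service
```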

    Professionalism, Golf Coaching and a Master of Science Degree: A commentary

    As a point of reference, I congratulate Simon Jenkins on tackling the issue of professionalism in coaching. As he points out, coaching is not a profession, but this does not mean that coaching would not benefit from going through a professionalization process. As things stand, I find that the stimulus article unpacks some critically important issues of professionalism, broadly within the context of golf coaching. However, I am not sure enough is made of understanding what professional (golf) coaching actually is, or how the development of a professional golf coach can be facilitated by a Master of Science Degree (M.Sc.). I will focus my commentary on these two issues.

    Classifying the Correctness of Generated White-Box Tests: An Exploratory Study

    White-box test generator tools rely only on the code under test to select test inputs, and they capture the implementation's output as assertions. If there is a fault in the implementation, it can get encoded in the generated tests. Tool evaluations usually measure fault-detection capability using the number of such fault-encoding tests. However, these faults are only detected if the developer can recognize that the encoded behavior is faulty. We designed an exploratory study to investigate how developers perform in classifying generated white-box tests as faulty or correct. We carried out the study in a laboratory setting with 54 graduate students. The tests were generated for two open-source projects with the help of the IntelliTest tool. The performance of the participants was analyzed using binary classification metrics and by coding their observed activities. The results showed that participants incorrectly classified a large number of both fault-encoding and correct tests (with median misclassification rates of 33% and 25%, respectively). Thus, the real fault-detection capability of test generators could be much lower than typically reported, and we suggest taking this human factor into account when evaluating generated white-box tests.
    Comment: 13 pages, 7 figures
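    To make the notion of a fault-encoding test concrete, here is a hypothetical example (in Python rather than the .NET code IntelliTest targets, but the idea is language-independent): the generator records the implementation's current, buggy output as the expected value, so the test passes, and the fault is only caught if a developer recognizes the asserted value as wrong.

```python
def apply_discount(price, percent):
    """Implementation under test. Bug: divides by 10 instead of 100,
    so a 20% discount removes far too much."""
    return price - price * percent / 10

# A white-box generator captures the implementation's actual output
# as the assertion, so this test *encodes* the fault: it passes as
# written, and only a developer who recognizes that 80.0 is the right
# answer for a 20% discount on 100 would classify it as fault-encoding.
def test_apply_discount_generated():
    assert apply_discount(100, 20) == -100.0
```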

    Static Analysis of Android Secure Application Development Process with FindSecurityBugs

    Mobile devices have been growing more and more powerful in recent decades, evolving from simple devices for SMS messages and phone calls to smart devices that can install third-party apps. People are becoming more heavily reliant on their mobile devices. Due to this increase in usage, security threats to mobile applications are also growing explosively. Mobile app flaws and security defects can provide opportunities for hackers to break into applications and access sensitive information. Defensive coding needs to be an integral part of coding practices to improve the security of our code. We need to consider data protection from the start and verify security early in the development lifecycle, rather than fixing security holes after malicious attacks and data leaks take place. Early elimination of known security vulnerabilities will help increase the security of our software, reduce the number of vulnerabilities in our programs, and mitigate the consequences and damage caused by potential malicious attacks. However, many software development professionals lack the necessary security knowledge and skills at the development stage, and secure mobile software development is not yet well represented in most schools' computing curricula. In this paper, we present a static security analysis approach with the FindSecurityBugs plugin for Android secure mobile software development, based on OWASP mobile security recommendations, to promote secure mobile software development education and meet emerging industrial and educational needs.
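    FindSecurityBugs itself runs as a SpotBugs plugin over compiled JVM code, so the sketch below is only a toy illustration of the kind of pattern-based rule such analyzers apply: flagging ECB-mode cipher requests, a weakness covered by the OWASP mobile recommendations. The rule, file contents, and scanner are hypothetical and are not the plugin's implementation.

```python
import re

# Toy rule in the spirit of a static-analysis check: flag Java source
# lines that request ECB mode, a weakness that analyzers such as
# FindSecurityBugs also report (illustrative only; the real plugin
# analyzes bytecode, not source text).
ECB_RULE = re.compile(r'Cipher\.getInstance\("[^"]*ECB[^"]*"\)')

def scan_source(lines):
    """Yield (line_no, line) for every match of the insecure pattern."""
    for no, line in enumerate(lines, start=1):
        if ECB_RULE.search(line):
            yield no, line.strip()

java_snippet = [
    'Cipher ok = Cipher.getInstance("AES/GCM/NoPadding");',
    'Cipher bad = Cipher.getInstance("AES/ECB/PKCS5Padding");',
]
for no, line in scan_source(java_snippet):
    print(f"line {no}: ECB mode detected -> {line}")
```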