    Mining team characteristics to predict Wikipedia article quality

    In this study, we examined which characteristics of virtual teams are good predictors of the quality of their output. The experiment involved obtaining the Spanish Wikipedia database dump and applying data mining techniques suitable for large data sets to label the whole set of articles according to their quality (by comparison with the Featured/Good Articles, or FA/GA). We then created attributes describing the characteristics of the teams that produced the articles and, using decision tree methods, identified the team characteristics most relevant to producing FA/GA. The team's maximum efficiency and the total length of contribution are the most important predictors. This article contributes to the literature on virtual team organization.
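
    The abstract above mentions labeling articles and then mining team attributes with decision trees. A minimal sketch of that kind of pipeline is given below, assuming hypothetical feature names (team_max_efficiency, total_contribution_length, num_editors) and an input file team_features.csv; these are illustrative stand-ins, not the study's actual attributes or data.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical input: one row per article with team attributes and an FA/GA label.
df = pd.read_csv("team_features.csv")
X = df[["team_max_efficiency", "total_contribution_length", "num_editors"]]
y = df["is_fa_ga"]  # 1 if the article is Featured/Good, 0 otherwise

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=0)
tree.fit(X_train, y_train)

print("held-out accuracy:", tree.score(X_test, y_test))
# The printed rules show which team attributes the tree splits on first.
print(export_text(tree, feature_names=list(X.columns)))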

    Same but Different: Distant Supervision for Predicting and Understanding Entity Linking Difficulty

    Entity Linking (EL) is the task of automatically identifying entity mentions in a piece of text and resolving them to a corresponding entity in a reference knowledge base such as Wikipedia. A large number of EL tools are available for different types of documents and domains, yet EL remains a challenging task where a lack of precision on particularly ambiguous mentions often spoils the usefulness of automated disambiguation results in real applications. A priori approximations of the difficulty of linking a particular entity mention can facilitate flagging of critical cases as part of semi-automated EL systems, while detecting latent factors that affect EL performance, such as corpus-specific features, can provide insights on how to improve a system based on the special characteristics of the underlying corpus. In this paper, we first introduce a consensus-based method to generate difficulty labels for entity mentions on arbitrary corpora. The difficulty labels are then exploited as training data for a supervised classification task able to predict the EL difficulty of entity mentions using a variety of features. Experiments over a corpus of news articles show that EL difficulty can be estimated with high accuracy, also revealing latent features that affect EL performance. Finally, evaluation results demonstrate the effectiveness of the proposed method in informing semi-automated EL pipelines. Comment: Preprint of a paper accepted for publication in the 34th ACM/SIGAPP Symposium on Applied Computing (SAC 2019).
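
    The consensus-based labeling idea lends itself to a compact sketch: if several independent EL systems fail to agree on a mention's gold entity, the mention is marked difficult, and those labels train an ordinary supervised classifier. Everything below (the toy mentions, feature vectors, and error threshold) is an illustrative assumption, not the paper's exact procedure.

from sklearn.ensemble import RandomForestClassifier  # any supervised classifier would do

def consensus_difficulty(system_links, gold_entity, max_errors=1):
    """system_links: entity IDs predicted for one mention by different EL tools."""
    errors = sum(1 for link in system_links if link != gold_entity)
    return 1 if errors > max_errors else 0  # 1 = difficult, 0 = easy

# Toy mentions: (predictions of three EL tools, gold entity, mention feature vector).
mentions = [
    (["Q1", "Q1", "Q1"], "Q1", [0.9, 12, 3]),   # all systems agree -> easy
    (["Q5", "Q7", "Q9"], "Q5", [0.3, 4, 41]),   # systems disagree  -> difficult
    (["Q2", "Q2", "Q8"], "Q2", [0.6, 8, 17]),   # one error only    -> easy
]
X = [feats for _, _, feats in mentions]
y = [consensus_difficulty(preds, gold) for preds, gold, _ in mentions]

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[0.4, 5, 30]]))  # predicted difficulty for a new mention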

    Results from the Centers for Disease Control and Prevention's Predict the 2013-2014 Influenza Season Challenge

    Background: Early insights into the timing of the start, peak, and intensity of the influenza season could be useful in planning influenza prevention and control activities. To encourage development and innovation in influenza forecasting, the Centers for Disease Control and Prevention (CDC) organized a challenge to predict the 2013-2014 United States influenza season. Methods: Challenge contestants were asked to forecast the start, peak, and intensity of the 2013-2014 influenza season at the national level and at any or all Health and Human Services (HHS) region level(s). The challenge ran from December 1, 2013 to March 27, 2014; contestants were required to submit 9 biweekly forecasts at the national level to be eligible. The selection of the winner was based on expert evaluation of the methodology used to make the prediction and the accuracy of the prediction as judged against the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet). Results: Nine teams submitted 13 forecasts for all required milestones. The first forecast was due on December 2, 2013; 3/13 of the forecasts received correctly predicted the start of the influenza season within one week, 1/13 predicted the peak within 1 week, 3/13 predicted the peak ILINet percentage within 1%, and 4/13 predicted the season duration within 1 week. For the prediction due on December 19, 2013, the number of forecasts that correctly forecast the peak week increased to 2/13, the peak percentage to 6/13, and the duration of the season to 6/13. As the season progressed, the forecasts became more stable and were closer to the season milestones. Conclusion: Forecasting has become technically feasible, but further efforts are needed to improve forecast accuracy so that policy makers can reliably use these predictions. CDC and challenge contestants plan to build upon the methods developed during this contest to improve the accuracy of influenza forecasts.
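
    The scoring described above reduces to checking whether each forecast falls within a fixed tolerance of the observed ILINet milestone. The short sketch below illustrates that check; the numbers are invented placeholders, not the 2013-2014 surveillance values.

def within(predicted, observed, tolerance):
    """True when the forecast hits the milestone within the stated tolerance."""
    return abs(predicted - observed) <= tolerance

# Invented example values (MMWR weeks / ILI percentages), not real surveillance data.
observed = {"start_week": 48, "peak_week": 52, "peak_ili_pct": 4.6, "duration_weeks": 14}
forecast = {"start_week": 49, "peak_week": 51, "peak_ili_pct": 3.9, "duration_weeks": 15}

checks = {
    "season start within 1 week": within(forecast["start_week"], observed["start_week"], 1),
    "peak week within 1 week":    within(forecast["peak_week"], observed["peak_week"], 1),
    "peak ILI% within 1 point":   within(forecast["peak_ili_pct"], observed["peak_ili_pct"], 1.0),
    "duration within 1 week":     within(forecast["duration_weeks"], observed["duration_weeks"], 1),
}
for name, hit in checks.items():
    print(f"{name}: {hit}")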

    Social Interactions vs. Revisions: What Is Important for Promotion in Wikipedia?

    In epistemic communities, people are said to be selected on the basis of their knowledge contributions to the project (articles, code, etc.). However, the socialization process is an important factor for inclusion, sustainability as a contributor, and promotion. So what matters for being promoted: being a good contributor, being a good animator, or knowing the boss? We explore this question by looking at the process of election to administrator in the English Wikipedia community. We modeled the candidates according to their revisions and/or social attributes. These attributes are used to construct a predictive model of promotion success, based on the candidates' past behavior and computed with a random forest algorithm. Our model, combining knowledge contribution variables and social networking variables, successfully explains 78% of the results, which is better than previous models. It also helps to refine the criteria for election. While the number of knowledge contributions is the most important element, social interactions come a close second in explaining the election. Being connected with one's future peers (the admins) can make the difference between success and failure, making this epistemic community a very social community too.
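
    As a rough sketch of the modeling strategy described above, and only that, the snippet below trains a random forest on a mix of contribution and social-network features per candidacy and inspects feature importances. The file name rfa_candidates.csv and the feature names are hypothetical stand-ins, not the paper's variables.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical input: one row per adminship candidacy.
df = pd.read_csv("rfa_candidates.csv")
features = ["article_edits", "talk_edits", "reverts",        # knowledge contribution
            "distinct_interlocutors", "ties_to_admins"]      # social networking
X, y = df[features], df["promoted"]                          # 1 = election succeeded

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
for name, importance in sorted(zip(features, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")  # which attribute groups drive the prediction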

    How Stable is Knowledge Base Knowledge?

    Knowledge Bases (KBs) provide a structured representation of the real world in the form of extensive collections of facts about real-world entities, their properties, and relationships. They are ubiquitous in large-scale intelligent systems that exploit structured information, such as in structured search, question answering, and reasoning, and hence their data quality is paramount. The inevitability of change in the real world brings us to a central property of KBs: they are highly dynamic, in that the information they contain is constantly subject to change. In other words, KBs are unstable. In this paper, we investigate the notion of KB stability, specifically the problem of KBs changing due to real-world change. Some entity-property pairs no longer undergo change in reality (e.g., Einstein-children or Tesla-founders), while others might well change in the future (e.g., Tesla-board member or Ronaldo-occupation as of 2022). This notion of real-world grounded change is different from other changes that affect the data only, notably correction and delayed insertion, which have already received attention in data cleaning, vandalism detection, and completeness estimation. To analyze KB stability, we proceed in three steps. (1) We present heuristics to delineate changes due to world evolution from delayed completions and corrections, and use these to study the real-world evolution behaviour of diverse Wikidata domains, finding a high skew in terms of properties. (2) We evaluate heuristics to identify entities and properties unlikely to change due to real-world change, and filter inherently stable entities and properties. (3) We evaluate the possibility of predicting stability post hoc, specifically predicting change in a property of an entity, finding that this is possible with up to 83% F1 score on a balanced binary stability prediction task. Comment: Incomplete draft. 12 pages.
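
    Step (3) is a balanced binary classification task scored with F1. The sketch below shows the shape of such an experiment on synthetic data; the features stand in for signals such as a pair's past edit frequency or the property's volatility across the KB, and none of it reflects the paper's actual setup.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for per-(entity, property) features.
X = rng.normal(size=(1000, 3))
# Synthetic "will change" label, loosely correlated with the first and third features.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("F1 on held-out pairs:", round(f1_score(y_te, clf.predict(X_te)), 3))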

    Tour recommendation for groups

    Consider a group of people visiting a major tourist city such as New York, Paris, or Rome. It is reasonable to assume that each member of the group has his or her own interests and preferences about places to visit, which in general may differ from those of other members. Still, people almost always want to hang out together, so the following question naturally arises: what is the best tour that the group could take together in the city? This problem raises several challenges, ranging from understanding people’s expected attitudes towards potential points of interest to modeling and providing good and viable solutions. Formulating the problem is challenging because of multiple competing objectives: for example, making the entire group as happy as possible generally conflicts with the objective that no member becomes disappointed. In this paper, we address the algorithmic implications of the above problem by providing various formulations that take into account overall group satisfaction, individual satisfaction, and the length of the tour. We then study the computational complexity of these formulations, provide effective and efficient practical algorithms, and, finally, evaluate them on datasets constructed from real city data.
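
    One way to make the competing objectives concrete is a toy formulation that trades off average group satisfaction against the satisfaction of the least-happy member under a time budget, as sketched below. The points of interest, utilities, weights, and the brute-force search are illustrative assumptions, not the paper's formulations or algorithms.

from itertools import combinations

# Per-member utility for each point of interest, plus a visit cost in hours (invented).
pois = {
    "museum":    {"utility": [5, 1, 4], "cost": 2.0},
    "old_town":  {"utility": [3, 4, 3], "cost": 1.5},
    "viewpoint": {"utility": [2, 5, 1], "cost": 1.0},
    "market":    {"utility": [4, 3, 5], "cost": 1.5},
}
BUDGET = 4.0  # total hours available for the joint tour
ALPHA = 0.5   # weight between average satisfaction and fairness (min satisfaction)

def score(selection, n_members=3):
    totals = [sum(pois[p]["utility"][m] for p in selection) for m in range(n_members)]
    return ALPHA * sum(totals) / n_members + (1 - ALPHA) * min(totals)

feasible = (s for r in range(1, len(pois) + 1) for s in combinations(pois, r)
            if sum(pois[p]["cost"] for p in s) <= BUDGET)
best = max(feasible, key=score)
print("best tour:", best, "score:", round(score(best), 2))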
    • …