
    The Extended Dawid-Skene Model: Fusing Information from Multiple Data Schemas

    While label fusion from multiple noisy annotations is a well-understood concept in data wrangling (tackled for example by the Dawid-Skene (DS) model), we consider the extended problem of carrying out learning when the labels themselves are not consistently annotated with the same schema. We show that even if annotators use disparate, albeit related, label-sets, we can still draw inferences for the underlying full label-set. We propose the Inter-Schema AdapteR (ISAR) to translate the fully-specified label-set to the one used by each annotator, enabling learning under such heterogeneous schemas, without the need to re-annotate the data. We apply our method to a mouse behavioural dataset, achieving significant gains (compared with DS) in out-of-sample log-likelihood (-3.40 to -2.39) and F1-score (0.785 to 0.864). Comment: Updated with Author-Preprint version following publication in P. Cellier and K. Driessens (Eds.): ECML PKDD 2019 Workshops, CCIS 1167, pp. 121 - 136, 202
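    The core of ISAR can be pictured as a deterministic map from the full label-set onto each annotator's coarser schema, composed with a Dawid-Skene-style confusion matrix. The Python sketch below is only an illustration of that idea, not the paper's implementation: the behaviour labels, schemas, and confusion matrix are all invented.

```python
import numpy as np

# Hypothetical full label-set and two annotator schemas that group the
# full labels into coarser, annotator-specific label-sets.
FULL_LABELS = ["groom", "sniff", "rear", "walk"]
SCHEMAS = {
    "annotator_A": {"groom": "maintenance", "sniff": "explore",
                    "rear": "explore", "walk": "locomotion"},
    "annotator_B": {"groom": "other", "sniff": "other",
                    "rear": "active", "walk": "active"},
}

def schema_adapter(schema, full_labels):
    """Build a 0/1 matrix mapping each full label to the annotator's coarse label."""
    coarse = sorted(set(schema.values()))
    m = np.zeros((len(full_labels), len(coarse)))
    for i, label in enumerate(full_labels):
        m[i, coarse.index(schema[label])] = 1.0
    return coarse, m

# A toy Dawid-Skene-style confusion matrix over the full label-set
# (rows: true label, columns: reported label); each row sums to 1.
confusion_full = np.eye(len(FULL_LABELS)) * 0.80 + 0.05

for name, schema in SCHEMAS.items():
    coarse, adapter = schema_adapter(schema, FULL_LABELS)
    # Marginalise through the adapter to obtain
    # P(reported coarse label | true full label) for this annotator.
    confusion_coarse = confusion_full @ adapter
    print(name, coarse)
    print(confusion_coarse.round(2))
```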

    Garbage In, Garbage Out? Do Machine Learning Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From?

    Many machine learning projects for new application areas involve teams of humans who label data for a particular purpose, from hiring crowdworkers to the paper's authors labeling the data themselves. Such a task is quite similar to (or a form of) structured content analysis, which is a longstanding methodology in the social sciences and humanities, with many established best practices. In this paper, we investigate to what extent a sample of machine learning application papers in social computing, specifically papers from ArXiv and traditional publications performing an ML classification task on Twitter data, give specific details about whether such best practices were followed. Our team conducted multiple rounds of structured content analysis of each paper, making determinations such as: Does the paper report who the labelers were, what their qualifications were, whether they independently labeled the same items, whether inter-rater reliability metrics were disclosed, what level of training and/or instructions were given to labelers, whether compensation for crowdworkers was disclosed, and whether the training data is publicly available. We find a wide divergence in whether such practices were followed and documented. Much of machine learning research and education focuses on what is done once a "gold standard" of training data is available, but we discuss issues around the equally important aspect of whether such data is reliable in the first place. Comment: 18 pages, includes appendix
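    One of the reporting practices audited above is whether inter-rater reliability metrics were disclosed. Purely as an illustration (the paper does not prescribe a specific metric or implementation), a common such metric for two labelers, Cohen's kappa, can be computed as follows on invented labels.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items (illustrative sketch)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy example: two raters labeling ten tweets as relevant (1) or not (0).
print(cohens_kappa([1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
                   [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]))  # ~0.58
```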

    Minimizing efforts in validating crowd answers

    In recent years, crowdsourcing has become essential in a wide range of Web applications. One of the biggest challenges of crowdsourcing is the quality of crowd answers, as workers have wide-ranging levels of expertise and the worker community may contain faulty workers. Although various techniques for quality control have been proposed, a post-processing phase in which crowd answers are validated is still required. Validation is typically conducted by experts, whose availability is limited and who incur high costs. Therefore, we develop a probabilistic model that helps to identify the most beneficial validation questions in terms of both improvement of result correctness and detection of faulty workers. Our approach allows us to guide the experts' work by collecting input on the most problematic cases, thereby achieving a set of high-quality answers even if the expert does not validate the complete answer set. Our comprehensive evaluation using both real-world and synthetic datasets demonstrates that our techniques save up to 50% of expert effort compared to baseline methods when striving for perfect result correctness. In absolute terms, for most cases, we achieve close to perfect correctness after expert input has been sought for only 20% of the questions.
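    The paper's probabilistic model is not reproduced here; as a minimal sketch of the general idea of spending limited expert effort where it helps most, one simple proxy is to rank questions by the entropy of the aggregated crowd answer distribution and validate the most uncertain ones first. The distributions and the budget below are invented.

```python
import math

def entropy(dist):
    """Shannon entropy of an answer distribution (higher = more uncertain)."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical aggregated crowd answer distributions per question.
crowd_posteriors = {
    "q1": {"yes": 0.95, "no": 0.05},
    "q2": {"yes": 0.55, "no": 0.45},
    "q3": {"A": 0.40, "B": 0.35, "C": 0.25},
}

# Send the most ambiguous questions to the expert first, up to a fixed budget.
budget = 2
to_validate = sorted(crowd_posteriors,
                     key=lambda q: entropy(crowd_posteriors[q]),
                     reverse=True)[:budget]
print(to_validate)  # ['q3', 'q2']
```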

    Global access to surgical care: a modelling study

    Background: More than 2 billion people are unable to receive surgical care based on operating theatre density alone. The vision of the Lancet Commission on Global Surgery is universal access to safe, affordable surgical and anaesthesia care when needed. We aimed to estimate the number of individuals worldwide without access to surgical services as defined by the Commission's vision. Methods: We modelled access to surgical services in 196 countries with respect to four dimensions: timeliness, surgical capacity, safety, and affordability. We built a chance tree for each country to model the probability of surgical access with respect to each dimension, and from this we constructed a statistical model to estimate the proportion of the population in each country that does not have access to surgical services. We accounted for uncertainty with one-way sensitivity analyses, multiple imputation for missing data, and probabilistic sensitivity analysis. Findings: At least 4·8 billion people (95% posterior credible interval 4·6–5·0), or 67% (64–70) of the world's population, do not have access to surgery. The proportion of the population without access varied widely when stratified by epidemiological region: greater than 95% of the population in south Asia and central, eastern, and western sub-Saharan Africa do not have access to care, whereas less than 5% of the population in Australasia, high-income North America, and western Europe lack access. Interpretation: Most of the world's population does not have access to surgical care, and access is inequitably distributed. The near absence of access in many low-income and middle-income countries represents a crisis, and as the global health community continues to support the advancement of universal health coverage, increasing access to surgical services will play a central role in ensuring health care for all.
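    As a rough sketch of the chance-tree logic, assume (this is a simplification, not necessarily the study's exact parameterisation) that a person has access only when all four dimensions are satisfied, so a country's access probability is the product of its per-dimension probabilities. All numbers below are invented.

```python
# Per-dimension probabilities of meeting the access criterion; invented values.
countries = {
    "country_X": {"timeliness": 0.30, "capacity": 0.40, "safety": 0.60, "affordability": 0.50},
    "country_Y": {"timeliness": 0.95, "capacity": 0.98, "safety": 0.97, "affordability": 0.90},
}
populations = {"country_X": 50_000_000, "country_Y": 10_000_000}

without_access = 0.0
for name, dims in countries.items():
    p_access = 1.0
    for p in dims.values():
        p_access *= p  # access only if every dimension is cleared
    without_access += populations[name] * (1 - p_access)
    print(f"{name}: P(access) = {p_access:.3f}")

print(f"People without access (toy estimate): {without_access:,.0f}")
```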

    Finding the “Dark Matter” in Human and Yeast Protein Network Prediction and Modelling

    Accurate modelling of biological systems requires a deeper and more complete knowledge about the molecular components and their functional associations than we currently have. Traditionally, new knowledge on protein associations generated by experiments has played a central role in systems modelling, in contrast to generally less trusted bio-computational predictions. However, we will not achieve realistic modelling of complex molecular systems if the current experimental designs lead to biased screenings of real protein networks and leave large, functionally important areas poorly characterised. To assess the likelihood of this, we have built comprehensive network models of the yeast and human proteomes by using a meta-statistical integration of diverse computationally predicted protein association datasets. We have compared these predicted networks against combined experimental datasets from seven biological resources at different levels of statistical significance. These eukaryotic predicted networks resemble all the topological and noise features of the experimentally inferred networks in both species, and we also show that this observation is not due to random behaviour. In addition, the topology of the predicted networks contains information on true protein associations, beyond the constitutive first-order binary predictions. We also observe that most of the reliable predicted protein associations are experimentally uncharacterised in our models, constituting the hidden or “dark matter” of networks, by analogy to astronomical systems. Some of this dark matter shows enrichment of particular functions and contains key functional elements of protein networks, such as hubs associated with important functional areas like the regulation of Ras protein signal transduction in human cells. Thus, characterising this large and functionally important dark matter, elusive to established experimental designs, may be crucial for modelling biological systems. In any case, these predictions provide a valuable guide to these experimentally elusive regions.
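    The meta-statistical integration itself is not detailed in this abstract; as one minimal, hypothetical sketch of how several independent computational predictors could be fused for a candidate protein pair, Fisher's method combines their p-values into a single significance level (this assumes SciPy is available and is not the authors' actual procedure).

```python
from math import log
from scipy.stats import chi2

def fisher_combined_pvalue(pvalues):
    """Fisher's method: combine independent p-values into one (illustrative)."""
    statistic = -2 * sum(log(p) for p in pvalues)
    return chi2.sf(statistic, df=2 * len(pvalues))

# Hypothetical p-values for one candidate protein association,
# from three independent computational predictors.
print(fisher_combined_pvalue([0.04, 0.10, 0.02]))  # ~0.004
```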