
    Gender equality and girls' education: Investigating frameworks, disjunctures and meanings of quality education

    The article draws on qualitative educational research across a diversity of low-income countries to examine gendered inequalities in education as complex, multi-faceted and situated, rather than as a series of barriers to be overcome through linear input–output processes focused on isolated dimensions of quality. It argues that frameworks for thinking about educational quality often result in analyses of gender inequalities that are fragmented and incomplete. Instead, by considering education quality more broadly as a terrain, the article investigates questions of educational transitions, teacher supply and community participation, and develops understandings of how education is experienced by learners and teachers in their gendered lives and teaching practices. Taking an approach based on theories of human development, the article identifies dynamics of power that underpin gender inequalities in the literature and that play out in diverse settings shaped by social, cultural and historical contexts. The review and discussion indicate that attaining gender-equitable quality education requires recognition and understanding of the ways in which inequalities intersect and interrelate, in order to seek out multi-faceted strategies that not only address different dimensions of girls' and women's lives but also engage with gendered relationships and structurally entrenched inequalities between women and men, girls and boys.

    Temporal and spatial instability in neutral and adaptive (MHC) genetic variation in marginal salmon populations

    The role of marginal populations in the long-term maintenance of species' genetic diversity and evolutionary potential is a particularly timely question in view of the range shifts caused by climate change. The Centre-Periphery hypothesis predicts that marginal populations should bear reduced genetic diversity and have low evolutionary potential. We analysed temporal stability of neutral microsatellite and adaptive MHC genetic variation over five decades in four marginal Atlantic salmon populations located at the southern limit of the species' distribution, which have a complicated demographic history that includes stocking with foreign and native salmon for at least two decades. We found a temporal increase in neutral genetic variation, as well as temporal instability in population structuring, highlighting the importance of temporal analyses in studies that examine the genetic diversity of peripheral populations at the margins of the species' range, particularly in the face of climate change.
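As a concrete illustration of the kind of per-locus diversity statistic tracked across temporal samples in studies like this, the sketch below computes expected heterozygosity (He = 1 - Σ p_i²) from allele observations; the allele labels and counts are made up for illustration and do not come from the study.

```python
# Expected heterozygosity at a single locus: He = 1 - sum(p_i^2),
# a standard per-locus measure of genetic diversity.
from collections import Counter

def expected_heterozygosity(alleles):
    """alleles: list of allele labels observed at one locus in one temporal sample."""
    counts = Counter(alleles)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical allele observations from two sampling decades at one microsatellite locus.
sample_1970s = ["A1", "A1", "A2", "A3", "A3", "A3"]
sample_2010s = ["A1", "A2", "A2", "A3", "A4", "A5"]
print(expected_heterozygosity(sample_1970s))  # lower diversity (~0.61)
print(expected_heterozygosity(sample_2010s))  # higher diversity (~0.78)
```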

    Scaling up data curation using deep learning: An application to literature triage in genomic variation resources.

    Manually curating biomedical knowledge from publications is necessary to build a knowledge-based service that provides highly precise and organized information to users. The process of retrieving relevant publications for curation, also known as document triage, is usually carried out by querying and reading articles in PubMed. However, this query-based method often obtains unsatisfactory precision and recall on the retrieved results, and it is difficult to manually generate optimal queries. To address this, we propose a machine-learning-assisted triage method. We collect previously curated publications from two databases, UniProtKB/Swiss-Prot and the NHGRI-EBI GWAS Catalog, and use them as a gold-standard dataset for training deep learning models based on convolutional neural networks. We then use the trained models to classify and rank new publications for curation. For evaluation, we apply our method to the real-world manual curation process of UniProtKB/Swiss-Prot and the GWAS Catalog. We demonstrate that our machine-assisted triage method outperforms the current query-based triage methods, improves efficiency, and enriches curated content. Our method achieves a precision 1.81 and 2.99 times higher than that obtained by the current query-based triage methods of UniProtKB/Swiss-Prot and the GWAS Catalog, respectively, without compromising recall. In fact, our method retrieves many additional relevant publications that the query-based method of UniProtKB/Swiss-Prot could not find. As these results show, our machine-learning-based method can make the triage process more efficient and is being implemented in production so that human curators can focus on more challenging tasks to improve the quality of knowledge bases.
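A minimal sketch of what a CNN-based triage classifier of this kind might look like; the architecture, hyperparameters, and data format below are illustrative assumptions, not the configuration used in the study.

```python
# Sketch of a CNN text classifier for document triage: abstracts are encoded
# as integer token sequences and scored for curation relevance.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000  # assumed vocabulary size
MAX_LEN = 500       # assumed maximum document length in tokens

def build_triage_model():
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, 128),
        layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(publication is relevant for curation)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
    return model

# X_*: integer-encoded token sequences; y_*: 1 = previously curated, 0 = not.
# model = build_triage_model()
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=5, batch_size=64)
# ranking = np.argsort(-model.predict(X_new).ravel())  # rank new publications for curators
```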

    An expanded evaluation of protein function prediction methods shows an improvement in accuracy

    Background: A major bottleneck in our understanding of the molecular underpinnings of life is the assignment of function to proteins. While molecular experiments provide the most reliable annotation of proteins, their relatively low throughput and restricted purview have led to an increasing role for computational function prediction. However, assessing methods for protein function prediction and tracking progress in the field remain challenging. Results: We conducted the second critical assessment of functional annotation (CAFA), a timed challenge to assess computational methods that automatically assign protein function. We evaluated 126 methods from 56 research groups for their ability to predict biological functions using the Gene Ontology and gene-disease associations using the Human Phenotype Ontology, on a set of 3681 proteins from 18 species. CAFA2 featured expanded analysis compared with CAFA1 with regard to dataset size, variety, and assessment metrics. To review progress in the field, the analysis compared the best methods from CAFA1 to those of CAFA2. Conclusions: The top-performing methods in CAFA2 outperformed those from CAFA1. This increased accuracy can be attributed to a combination of the growing number of experimental annotations and improved methods for function prediction. The assessment also revealed that the definition of top-performing algorithms is ontology specific, showed how different performance metrics can be used to probe the nature of accurate predictions, and exposed the relative diversity of predictions in the biological process and human phenotype ontologies. While there was methodological improvement between CAFA1 and CAFA2, the interpretation of results and the usefulness of individual methods remain context-dependent.
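For readers unfamiliar with how such assessments score predictors, the sketch below computes the protein-centric Fmax measure commonly used in CAFA evaluations. It is simplified (no propagation of predictions over the ontology graph) and the data structures are illustrative.

```python
# Protein-centric Fmax: for each score threshold, average precision over proteins
# with at least one prediction and average recall over all proteins, then take
# the best harmonic mean across thresholds.
import numpy as np

def fmax(predictions, truth, thresholds=np.linspace(0.01, 1.0, 100)):
    """predictions: {protein: {term: score}}; truth: {protein: set of true terms}."""
    best = 0.0
    for t in thresholds:
        precisions, recalls = [], []
        for prot, true_terms in truth.items():
            pred_terms = {g for g, s in predictions.get(prot, {}).items() if s >= t}
            if pred_terms:
                precisions.append(len(pred_terms & true_terms) / len(pred_terms))
            recalls.append(len(pred_terms & true_terms) / len(true_terms))
        if precisions:
            pr, rc = np.mean(precisions), np.mean(recalls)
            if pr + rc > 0:
                best = max(best, 2 * pr * rc / (pr + rc))
    return best

# Toy example with two benchmark proteins.
preds = {"P1": {"GO:0001": 0.9, "GO:0002": 0.4}, "P2": {"GO:0003": 0.8}}
gold = {"P1": {"GO:0001"}, "P2": {"GO:0003", "GO:0004"}}
print(round(fmax(preds, gold), 3))
```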

    Describing the antimicrobial usage patterns of companion animal veterinary practices; free text analysis of more than 4.4 million consultation records.

    Antimicrobial resistance is a global crisis to which veterinarians contribute through their use of antimicrobials in animals. Antimicrobial stewardship has been shown to be an effective means of reducing antimicrobial resistance in hospital environments. Effective monitoring of antimicrobial usage patterns is an essential part of antimicrobial stewardship and is critical in reducing the development of antimicrobial resistance. The aim of this study is to describe how frequently antimicrobials were used in veterinary consultations and to identify the most frequently used antimicrobials. Using VetCompass Australia, natural language processing techniques, and the Australian Strategic Technical Advisory Group's (ASTAG) rating system to classify the importance of antimicrobials, descriptive analysis was performed on the antimicrobials prescribed in consultations from 137 companion animal veterinary clinics in Australia between 2013 and 2017 (inclusive). Of the 4,400,519 consultations downloaded, 595,089 involved antimicrobials prescribed to dogs or cats. Antimicrobials were dispensed in 145 of every 1000 canine consultations, and 38 per 1000 involved antimicrobials of high importance rating. Similarly, for cats, 108 per 1000 consultations had antimicrobials dispensed, and in 47 per 1000 an antimicrobial of high importance rating was administered. The most common antimicrobials given to cats and dogs were cefovecin and amoxycillin clavulanate, respectively. The most common topical antimicrobial, and the most common high-rated topical antimicrobial, given to dogs and cats was polymyxin B. This study provides a descriptive analysis of antimicrobial usage patterns in Australia using methods that can be automated to inform antimicrobial use surveillance programs and promote antimicrobial stewardship.
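The headline figures above reduce to a simple rate calculation; a sketch of how it could be reproduced over a consultation-level extract is shown below. The column names are hypothetical, not those of the actual VetCompass Australia dataset.

```python
# Rate of antimicrobial dispensing per 1000 consultations, split by species
# and by whether a high-importance-rated antimicrobial was involved.
import pandas as pd

df = pd.read_csv("consultations.csv")  # one row per consultation (hypothetical extract)

def rate_per_1000(frame, flag_col):
    return 1000 * frame[flag_col].sum() / len(frame)

for species in ("dog", "cat"):
    sub = df[df["species"] == species]
    print(f"{species}: "
          f"any antimicrobial = {rate_per_1000(sub, 'antimicrobial_given'):.0f}/1000, "
          f"high importance = {rate_per_1000(sub, 'high_importance_antimicrobial'):.0f}/1000")
```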

    Representing annotation compositionality and provenance for the Semantic Web

    BACKGROUND: Though the annotation of digital artifacts with metadata has a long history, the bulk of that work focuses on the association of single terms or concepts with single targets. As annotation efforts expand to capture more complex information, annotations will need to be able to refer to knowledge structures formally defined in terms of more atomic knowledge structures. Existing provenance efforts in the Semantic Web domain primarily focus on tracking provenance at the level of whole triples and do not provide enough detail to track how individual triple elements of annotations were derived from triple elements of other annotations. RESULTS: We present a task- and domain-independent ontological model for capturing annotations and their linkage to the knowledge representations they denote, which can be singular concepts or more complex sets of assertions. We have implemented this model as an extension of the Information Artifact Ontology in OWL and made it freely available, and we show how it can be integrated with several prominent annotation and provenance models. We present several application areas for the model, ranging from linguistic annotation of text to the annotation of disease associations in genome sequences. CONCLUSIONS: With this model, progressively more complex annotations can be composed from other annotations, and the provenance of compositional annotations can be represented at the annotation level or at the level of the individual elements of the RDF triples composing the annotations. This in turn allows progressively richer annotations to be constructed from previous annotation efforts, whose precise provenance recording facilitates evidence-based inference and error tracking.
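As a rough illustration of element-level provenance, the sketch below uses rdflib with the Web Annotation (oa) and PROV vocabularies as stand-ins rather than the paper's IAO-based model; all URIs and resource names are hypothetical.

```python
# A compositional annotation whose body (a variant-disease assertion) records
# which simpler annotation one of its individual elements was derived from.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")
OA = Namespace("http://www.w3.org/ns/oa#")
PROV = Namespace("http://www.w3.org/ns/prov#")

g = Graph()
g.bind("oa", OA)
g.bind("prov", PROV)

annotation = EX["diseaseAssociationAnnotation1"]
variant = EX["variant_chr17_g12345A_G"]          # one element of the asserted triple
simpler_annotation = EX["sequenceAnnotation42"]  # earlier annotation it was derived from

g.add((annotation, RDF.type, OA.Annotation))
g.add((annotation, OA.hasBody, EX["variantDiseaseAssertion1"]))  # the denoted assertion
g.add((annotation, OA.hasTarget, EX["genomeRegion_chr17_12300_12400"]))
g.add((variant, PROV.wasDerivedFrom, simpler_annotation))        # element-level provenance

print(g.serialize(format="turtle"))
```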

    Establishing a baseline for literature mining human genetic variants and their relationships to disease cohorts

    BACKGROUND: The Variome corpus, a small collection of published articles about inherited colorectal cancer, includes annotations of 11 entity types and 13 relation types related to the curation of the relationship between genetic variation and disease. Due to the richness of these annotations, the corpus provides a good testbed for evaluating biomedical literature information extraction systems. METHODS: In this paper, we focus on assessing performance on extracting the relations in the corpus, using gold-standard entities as a starting point, to establish a baseline for extraction of the relations important for capturing genetic variant information from the literature. We test the application of the Public Knowledge Discovery Engine for Java (PKDE4J) system, a natural language processing system designed for information extraction of entities and relations in text, on the relation extraction task using this corpus. RESULTS: For the relations attested at least 100 times in the Variome corpus, we achieve performance ranging from 0.78 to 0.84 precision-weighted F-score, depending on the relation. We find that the PKDE4J system adapted straightforwardly to the range of relation types represented in the corpus; some extensions to the original methodology were required to handle the multi-relational classification context. The results are competitive with state-of-the-art relation extraction performance on more heavily studied corpora, although the analysis shows that the recall of a co-occurrence baseline outweighs the benefit of improved precision for many relations, indicating the value of simple semantic constraints on relations. CONCLUSIONS: This work represents the first attempt to apply relation extraction methods to the Variome corpus. The results demonstrate that automated methods have good potential to structure the information about genetic variants expressed in the published literature, connecting mutations to genes, diseases, and patient cohorts. Further development of such approaches will facilitate more efficient biocuration of genetic variant information into structured databases, leveraging the knowledge embedded in the vast published literature.
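A minimal sketch of how per-relation precision, recall, and F-score can be computed against gold annotations. The tuple representation, relation labels, and the use of balanced F1 (rather than the precision-weighted F-score reported above) are illustrative assumptions, not the Variome corpus format.

```python
# Per-relation-type evaluation: each relation instance is a
# (doc_id, entity1, entity2, relation_type) tuple.
from collections import defaultdict

def evaluate(predicted, gold):
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    gold_set, pred_set = set(gold), set(predicted)
    for rel in pred_set:
        counts[rel[3]]["tp" if rel in gold_set else "fp"] += 1
    for rel in gold_set - pred_set:
        counts[rel[3]]["fn"] += 1
    scores = {}
    for rtype, c in counts.items():
        p = c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else 0.0
        r = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        scores[rtype] = {"precision": p, "recall": r, "f1": f}
    return scores

# Hypothetical gold and predicted relation instances.
gold = [("doc1", "BRCA2", "breast cancer", "relatedTo"),
        ("doc1", "c.100delA", "BRCA2", "hasVariant")]
pred = [("doc1", "BRCA2", "breast cancer", "relatedTo"),
        ("doc1", "c.100delA", "MLH1", "hasVariant")]
print(evaluate(pred, gold))
```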

    A close look at protein function prediction evaluation protocols

    BACKGROUND: The recently held Critical Assessment of Function Annotation challenge (CAFA2) required its participants to submit predictions for a large number of target proteins regardless of whether they had previous annotations or not. This is in contrast to the original CAFA challenge, in which participants were asked to submit predictions only for proteins with no existing annotations. The CAFA2 task is more realistic, in that it more closely mimics the accumulation of annotations over time. In this study we compare these tasks in terms of their difficulty and determine whether cross-validation provides a good estimate of performance. RESULTS: The CAFA2 task is a combination of two subtasks: making predictions on previously annotated proteins and making predictions on previously unannotated proteins. In this study we analyze the performance of several function prediction methods in these two scenarios. Our results show that several methods (structured support vector machine, binary support vector machines and guilt-by-association methods) do not usually achieve the same level of accuracy on these two tasks as that achieved by cross-validation, and that predicting novel annotations for previously annotated proteins is a harder problem than predicting annotations for uncharacterized proteins. We also find that different methods have different performance characteristics in these tasks, and that cross-validation is not adequate for estimating performance or ranking methods. CONCLUSIONS: These results have implications for the design of computational experiments in the area of automated function prediction and can provide useful insight for the understanding and design of future CAFA competitions.
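A sketch of how benchmark proteins can be split into the two scenarios discussed above, based on two snapshots of the annotation database; the data structures and function names are illustrative, not the CAFA benchmark format.

```python
# Split benchmark proteins into CAFA2-style subtasks: proteins with no prior
# annotations ("no-knowledge") versus proteins that already had annotations and
# only gained new terms afterwards ("limited-knowledge").
def split_benchmark(annotations_at_t0, annotations_at_t1):
    """Both arguments: {protein: set of terms}, snapshots at two time points."""
    no_knowledge, limited_knowledge = {}, {}
    for prot, later in annotations_at_t1.items():
        earlier = annotations_at_t0.get(prot, set())
        new_terms = later - earlier
        if not new_terms:
            continue                       # nothing accumulated; not a benchmark protein
        if earlier:
            limited_knowledge[prot] = new_terms   # evaluate only on the new terms
        else:
            no_knowledge[prot] = new_terms
    return no_knowledge, limited_knowledge

# Toy example: P1 gains a new term on top of existing ones, P2 is newly annotated.
t0 = {"P1": {"GO:0001"}}
t1 = {"P1": {"GO:0001", "GO:0002"}, "P2": {"GO:0003"}}
print(split_benchmark(t0, t1))
```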