369 research outputs found

    GENERATING KNOWLEDGE STRUCTURES FROM OPEN DATASETS' TAGS - AN APPROACH BASED ON FORMAL CONCEPT ANALYSIS

    Under the influence of data transparency initiatives, a variety of institutions have published a significant number of datasets. In most cases, data publishers rely on open data portals (ODPs) to make their datasets publicly available. To improve the datasets' discoverability, ODPs group open datasets into categories using various criteria such as publishers, institutions, formats, and descriptions. For these purposes, portals take advantage of the metadata accompanying datasets. However, some metadata may be missing, incomplete, or redundant. Each of these situations makes it difficult for users to find appropriate datasets and obtain the desired information, and the problem becomes more noticeable as the number of available datasets grows. This paper takes a first step towards mitigating this problem by building knowledge structures to be used in situations where part of a dataset's metadata is missing. In particular, we focus on developing knowledge structures capable of suggesting the best-matching category for an uncategorized dataset. Our approach relies on the dataset descriptions that users provide as tags. We use formal concept analysis (FCA) to reveal the shared conceptualization implicit in tag usage by building a concept lattice for each category of open datasets. Since tags are free-text metadata entered by users, we also present a method of optimizing their usage by means of semantic similarity measures based on natural language processing. Finally, we demonstrate the advantage of our proposal by comparing concept lattices generated with FCA before and after the optimization. Our main experimental results show that the approach reduces the number of nodes within a lattice by more than 40%.
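As a concrete illustration of the lattice-building step described above, the following sketch computes all formal concepts of a tiny, invented dataset-tag context. The dataset names and tags are made up for illustration; the paper's real contexts come from ODP metadata.

```python
from itertools import combinations

# Toy formal context: open datasets (objects) described by tags
# (attributes). Names and tags are invented, not from any real ODP.
context = {
    "ds1": {"health", "hospital", "statistics"},
    "ds2": {"health", "budget"},
    "ds3": {"budget", "statistics"},
    "ds4": {"health", "hospital"},
}

def concept_intents(ctx):
    """All concept intents of the context: the object intents closed
    under intersection, plus the full attribute set (the intent of the
    empty extent)."""
    rows = [frozenset(tags) for tags in ctx.values()]
    intents = set(rows)
    intents.add(frozenset.union(*rows))  # all attributes
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(intents), 2):
            c = a & b
            if c not in intents:
                intents.add(c)
                changed = True
    return intents

def extent(ctx, intent):
    """Objects whose tag sets contain every attribute of the intent."""
    return {obj for obj, tags in ctx.items() if intent <= tags}

lattice = concept_intents(context)
print(len(lattice))                        # number of lattice nodes
print(sorted(extent(context, {"health"})))  # datasets sharing the tag
```

Merging semantically similar tags (e.g., treating two near-synonymous tags as a single attribute) collapses columns of the context, which is how an optimization step of the kind the paper describes can shrink the resulting lattice.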

    Digital literacy in six micro-bites

    "'Digital Literacy in Six Micro-Bites' is a series of six essays disseminated as part of “L’arca in the loop,” ARCA’s monthly newsletter, from December 2019 to May 2020. The series is also published in posts on ARCA’s website. Each of these six posts focuses on a key concept in digital culture, providing definitions and outlining technical considerations in plain language, along with insights on how digital technologies might support artist-run centre personnel in their day-to-day activities." -- page 3

    PHA4GE quality control contextual data tags: standardized annotations for sharing public health sequence datasets with known quality issues to facilitate testing and training

    As public health laboratories expand their genomic sequencing and bioinformatics capacity for the surveillance of different pathogens, they must carry out robust validation, training, and optimization of wet- and dry-lab procedures. Achieving these goals for algorithms, pipelines, and instruments often requires that lower-quality datasets be made available for analysis and comparison alongside those of higher quality. This range of data quality can complicate the sharing of sub-optimal datasets, which are nonetheless vital for the community and for the reproducibility of assays. Sharing useful but sub-optimal datasets requires careful annotation and documentation of known issues, so that the data can be interpreted appropriately, are not mistaken for better-quality information, and remain easily identifiable (along with their derivatives) in repositories. Unfortunately, there are currently no standardized attributes or mechanisms for tagging poor-quality datasets, or datasets generated for a specific purpose, to maximize their utility, searchability, accessibility, and reuse. The Public Health Alliance for Genomic Epidemiology (PHA4GE) is an international community of scientists from public health, industry, and academia focused on improving the reproducibility, interoperability, portability, and openness of public health bioinformatic software, skills, tools, and data. To address the challenges of sharing lower-quality datasets, PHA4GE has developed a set of standardized contextual data tags, i.e., fields and terms, that can be included in public repository submissions as a means of flagging pathogen sequence data with known quality issues, increasing their discoverability.
The contextual data tags were developed through consultations with the community, including input from the International Nucleotide Sequence Data Collaboration (INSDC), and have been standardized using ontologies: community-based resources for defining the tag properties and the relationships between them. The standardized tags are agnostic to the organism and the sequencing technique used, and can thus be applied to data generated from any pathogen using an array of sequencing techniques. The tags can also be applied to synthetic (lab-created) data. The list of standardized tags is maintained by PHA4GE and can be found at https://github.com/pha4ge/contextual_data_QC_tags. Definitions, ontology IDs, examples of use, and a JSON representation are provided. The PHA4GE QC tags were tested, and are now implemented, by the FDA's GenomeTrakr laboratory network as part of its routine submission process for SARS-CoV-2 wastewater surveillance. We hope that these simple, standardized tags will help improve communication regarding quality control in public repositories, in addition to making datasets of variable quality more easily identifiable. Suggestions for additional tags can be submitted to PHA4GE via the New Term Request Form in the GitHub repository. By providing a mechanism for feedback and suggestions, we also expect the tags to evolve with the needs of the community.
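A minimal sketch of how such tags might be attached to a submission record and serialized to JSON. The tag names, ontology IDs, and field name below are invented placeholders, not the actual PHA4GE vocabulary; the authoritative list and JSON representation live in the GitHub repository above.

```python
import json

# Placeholder tag vocabulary with invented ontology IDs, for
# illustration only; see https://github.com/pha4ge/contextual_data_QC_tags
# for the real, maintained terms.
ALLOWED_TAGS = {
    "low_quality_sequence": "PLACEHOLDER:0000001",
    "synthetic_construct": "PLACEHOLDER:0000002",
}

def tag_submission(record, tags):
    """Return a copy of the record with validated QC tags attached."""
    unknown = set(tags) - ALLOWED_TAGS.keys()
    if unknown:
        raise ValueError(f"unrecognized QC tags: {sorted(unknown)}")
    tagged = dict(record)
    tagged["quality_control_tags"] = [
        {"tag": t, "ontology_id": ALLOWED_TAGS[t]} for t in sorted(tags)
    ]
    return tagged

submission = {"accession": "SAMPLE-001", "organism": "SARS-CoV-2"}
print(json.dumps(tag_submission(submission, {"low_quality_sequence"}), indent=2))
```

Validating against a controlled vocabulary at submission time is what keeps the tags machine-searchable across repositories.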

    Recipe popularity prediction in Finnish social media by machine learning models

    Abstract. In recent times, the internet has emerged as a primary source of cooking inspiration, eating experiences, and social gatherings around food, with a majority of individuals turning to online recipes, surpassing the usage of traditional cookbooks. However, there is growing concern about the healthiness of online recipes. This thesis focuses on unraveling the determinants of online recipe popularity by analyzing a dataset comprising more than 5000 recipes from Valio, one of Finland's leading corporations. Valio's website serves as a representation of diverse cooking preferences among users in Finland. Through examination of recipe attributes such as nutritional content (energy, fat, salt, etc.), food preparation complexity (cooking time, number of steps, required ingredients, etc.), and user engagement (number of comments, ratings, sentiment of comments, etc.), we aim to pinpoint the critical elements influencing the popularity of online recipes. Our predictive model, logistic regression (classification accuracy 0.93 and F1 score 0.90), substantiates the existence of pertinent recipe characteristics that significantly influence their ratings. The dataset we employ is notably influenced by user engagement features, particularly the number of received ratings and comments. In other words, recipes that garner more attention in terms of comments and ratings tend to have higher rating values (i.e., are more popular). Additionally, our findings reveal that a substantial portion of Valio's recipes falls within the medium Food Standards Agency (FSA) health score range and, intriguingly, recipes deemed less healthy tend to receive higher average ratings from users.
This study advances our comprehension of the factors contributing to the popularity of online recipes, providing valuable insights into contemporary cooking preferences in Finland as well as guiding future dietary policy shifts.

    Tiivistelmä (translated from Finnish). The internet has recently become a primary source of inspiration for cooking, and most people have switched to online recipes instead of traditional cookbooks. Concern about the healthiness of online recipes is, however, growing. This thesis focuses on identifying the factors that influence the popularity of online recipes by analyzing a dataset of over 5000 recipes from Valio, Finland's leading dairy company. The recipes on Valio's website broadly represent the cooking habits of Finnish users. By examining recipe attributes such as nutritional content (energy, fat, salt, etc.), preparation complexity (cooking time, number of steps, required ingredients, etc.), and user engagement (number of comments, ratings, sentiment of comments, etc.), we aim to pinpoint the critical factors affecting the popularity of online recipes. Our predictive model, logistic regression (classification accuracy 0.93 and F1 score 0.90), demonstrated the existence of significant recipe characteristics that markedly influenced recipe popularity. The datasets we used were strongly influenced by user engagement features, particularly the number of ratings and comments received. In other words, recipes that received more attention in comments and ratings tended to be more popular. In addition, a substantial share of Valio's recipes fell within the medium health-score range (as measured by the FSA score) and, interestingly, recipes considered less healthy generally received higher average ratings from users. This study advances our understanding of the factors affecting the popularity of online recipes and offers valuable insight into present-day cooking habits in Finland.
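The modelling setup the thesis describes (engagement and health features, logistic regression, accuracy and F1 as metrics) can be sketched on synthetic data. The rows, feature scaling, and popularity threshold below are invented; only the overall approach mirrors the study, not Valio's actual dataset or results.

```python
import math
import random

random.seed(0)

def make_row():
    # Synthetic "recipe": engagement counts plus an FSA-style health
    # score (higher = less healthy). Popularity here is driven by
    # engagement, loosely mirroring the thesis's finding.
    comments = random.randint(0, 50)
    ratings = random.randint(0, 100)
    fsa = random.uniform(4, 12)
    label = 1 if comments + ratings > 60 else 0
    return [comments / 50, ratings / 100, fsa / 12], label

data = [make_row() for _ in range(400)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression fitted by batch gradient descent on the log loss.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 1.0
for _ in range(500):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in data:
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    w = [wi - lr * gi / len(data) for wi, gi in zip(w, gw)]
    b -= lr * gb / len(data)

def predict(x):
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5 else 0

tp = sum(1 for x, y in data if predict(x) == 1 and y == 1)
fp = sum(1 for x, y in data if predict(x) == 1 and y == 0)
fn = sum(1 for x, y in data if predict(x) == 0 and y == 1)
accuracy = sum(1 for x, y in data if predict(x) == y) / len(data)
f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
print(f"accuracy={accuracy:.2f} f1={f1:.2f}")
```

Because the synthetic labels depend only on the engagement features, the fitted weights on comments and ratings dominate while the health-score weight stays near zero, the same qualitative pattern the thesis reports for its real data.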

    Archives, Access and Artificial Intelligence: Working with Born-Digital and Digitized Archival Collections

    Digital archives are transforming the Humanities and the Sciences. Digitized collections of newspapers and books have pushed scholars to develop new, data-rich methods. Born-digital archives are now better preserved and managed thanks to the development of open-access and commercial software. Digital Humanities have moved from the fringe to the center of academia. Yet, the path from the appraisal of records to their analysis is far from smooth. This book explores crossovers between various disciplines to improve the discoverability, accessibility, and use of born-digital archives and other cultural assets

    Accelerating science with human-aware artificial intelligence

    Artificial intelligence (AI) models trained on published scientific findings have been used to invent valuable materials and targeted therapies, but they typically ignore the human scientists who continually alter the landscape of discovery. Here we show that incorporating the distribution of human expertise, by training unsupervised models on simulated inferences cognitively accessible to experts, dramatically improves (up to 400%) AI prediction of future discoveries beyond models focused on research content alone, especially when relevant literature is sparse. These models succeed by predicting human predictions and the scientists who will make them. By tuning human-aware AI to avoid the crowd, we can generate scientifically promising "alien" hypotheses unlikely to be imagined or pursued without intervention until the distant future, which hold promise to punctuate scientific advance beyond questions currently pursued. Whether accelerating human discovery or probing its blind spots, human-aware AI enables us to move toward and beyond the contemporary scientific frontier.
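One ingredient of the "human-aware" idea can be sketched very simply: estimate how cognitively accessible a candidate hypothesis is to the research community by counting the scientists whose work touches both of its endpoints. The papers, authors, and entities below are invented; this is only a toy proxy for the expertise-distribution signal the abstract describes, not the paper's actual model.

```python
from collections import defaultdict

# Invented toy literature: each paper links authors to the scientific
# entities (materials, properties) it mentions.
papers = [
    {"authors": {"a1", "a2"}, "entities": {"matX", "propA"}},
    {"authors": {"a2"},       "entities": {"matX", "propB"}},
    {"authors": {"a3"},       "entities": {"matY", "propA"}},
]

# Map each entity to the set of authors who have written about it.
ent_authors = defaultdict(set)
for p in papers:
    for e in p["entities"]:
        ent_authors[e] |= p["authors"]

def human_accessibility(e1, e2):
    """Scientists whose work touches both entities: a crude proxy for
    how likely humans are to imagine the (e1, e2) hypothesis soon."""
    return len(ent_authors[e1] & ent_authors[e2])

# Hypotheses bridged by many scientists are "crowded"; content-plausible
# pairs with few bridging scientists are candidate "alien" hypotheses.
print(human_accessibility("matX", "propA"))  # 2 bridging scientists
print(human_accessibility("matY", "propB"))  # 0: no one spans both
```

Tuning a model to prefer low-accessibility but content-plausible pairs is, in spirit, the "avoid the crowd" setting the abstract mentions.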