8,620 research outputs found

    Open Datasets for Evaluating the Interpretation of Bibliographic Records

    The transformation of legacy MARC catalogs to FRBR catalogs (FRBRization) is a complex and important challenge for libraries. Although many FRBRization tools have provided experimental validation, it is difficult to evaluate and compare these systems on a fair basis due to a lack of common datasets. This poster presents two public datasets (T42 and BIB-RCAT) intended to support the validation of the FRBRization process.

    Software tools for conducting bibliometric analysis in science: An up-to-date review

    Bibliometrics has become an essential tool for assessing and analyzing the output of scientists, cooperation between universities, the effect of state-owned science funding on national research and development performance, and educational efficiency, among other applications. Professionals and scientists therefore need a range of theoretical and practical tools to measure experimental data. This review provides an up-to-date survey of the tools available for conducting bibliometric and scientometric analyses, including sources of data acquisition, performance analysis, and visualization tools. The included tools are divided into three categories: general bibliometric and performance analysis, science mapping analysis, and libraries; a description of each is provided. A comparative analysis of database source support, pre-processing capabilities, and analysis and visualization options is also provided to facilitate their understanding. Although there are numerous bibliometric databases from which to obtain data, each was developed for a different purpose: the number of exportable records ranges from 500 to 50,000, and coverage of the different science fields is unequal across databases. Concerning the analyzed tools, Bibliometrix contains the most extensive set of techniques and is accessible to practitioners through Biblioshiny. VOSviewer offers excellent visualization and can load and export information from many sources. SciMAT stands out for its powerful pre-processing and export capabilities. Given the variability of features, users need to decide on the desired analysis output and choose the option that best fits their aims.
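To make the "performance analysis" this review refers to concrete, one metric that all three tool families compute is the h-index: the largest h such that h papers have at least h citations each. A minimal sketch (the citation counts below are invented for illustration):

```python
# h-index: largest h such that h papers each have >= h citations.
# Citation counts are invented example data.

def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i      # the i-th ranked paper still has >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Tools such as Bibliometrix compute this and many related indicators directly from exported database records; the sketch only shows the arithmetic behind the simplest of them.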

    Applying FRBR model to bibliographic works on Al-Quran

    This study explores the feasibility of applying the object-oriented Functional Requirements for Bibliographic Records (FRBR) model to MARC-based bibliographic records on the Al-Quran. Based on a content analysis of 127 MARC-based bibliographic records on the Al-Quran from the International Islamic University Malaysia (IIUM) OPAC system, this paper reports on the process of mapping FRBR entities to a set of works on the Al-Quran. The attributes of the bibliographic works in the MARC records were identified and grouped according to the FRBR entities. The findings suggest that, overall, most of the MARC-based bibliographic records on the Al-Quran were sufficient to represent the FRBR model. However, several issues were identified as affecting the process of creating an entity-relationship model for “FRBRizing” bibliographic works on the Al-Quran. These include inconsistencies in romanizing records in Arabic script, difficulties in identifying complex works, missing fields for subject headings, and missing fields for record-object relationship identification. A major conclusion is therefore that the quality of MARC records is an important factor in ensuring that bibliographic records hold complete, correct, and reliable data for the FRBRization process.
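The mapping step described above — grouping flat MARC records under FRBR Group 1 entities — can be sketched as follows. The field tags, the grouping key (uniform title, MARC 240), and the record structure are illustrative assumptions drawn from common FRBRization practice, not the mapping used in this study:

```python
# Sketch of grouping MARC-like records into FRBR Work sets.
# Field tags and record structure are hypothetical examples.

from collections import defaultdict

# Illustrative MARC-tag -> FRBR Group 1 entity mapping.
MARC_TO_FRBR = {
    "240": "work",           # uniform title      -> Work
    "041": "expression",     # language code      -> Expression
    "250": "manifestation",  # edition statement  -> Manifestation
    "260": "manifestation",  # imprint            -> Manifestation
}

def frbrize(records):
    """Group flat MARC-like records into Work sets keyed on uniform title."""
    works = defaultdict(list)
    for rec in records:
        key = rec.get("240") or rec.get("245")  # fall back to title proper
        entity = {MARC_TO_FRBR.get(tag, "other"): value
                  for tag, value in rec.items()}
        works[key].append(entity)
    return dict(works)

records = [
    {"240": "Quran", "041": "ara", "250": "2nd ed."},
    {"240": "Quran", "041": "eng", "260": "Kuala Lumpur, 2005"},
]
print(len(frbrize(records)["Quran"]))  # two Expressions of one Work -> 2
```

The record-quality issues the study reports (inconsistent romanization, missing fields) would surface here as records falling back to the title proper or landing in spurious Work sets, which is why the authors stress MARC data quality.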

    From the web of bibliographic data to the web of bibliographic meaning: structuring, interlinking and validating ontologies on the semantic web

    Bibliographic data sets have achieved good levels of technical interoperability by observing the principles and good practices of linked data. However, their quality from the semantic point of view is low, due to many factors: the lack of a common conceptual framework for a diversity of standards often used together, the small number of links between the ontologies underlying data sets, the proliferation of heterogeneous vocabularies, the underuse of semantic mechanisms in data structures, "ontology hijacking" (Feeney et al., 2018), point-to-point mappings, and the limitations of semantic web languages for the requirements of bibliographic data interoperability. After reviewing these issues, a research direction is proposed to overcome the misalignments found, by means of a reference model and a superontology, using the Shapes Constraint Language (SHACL) to address current limitations of RDF languages.
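As a minimal illustration of the kind of SHACL-based validation proposed here — the shape, prefixes, and constraint choices below are hypothetical, not taken from the paper — a shape constraining a FRBR Work might look like:

```turtle
@prefix sh:   <http://www.w3.org/ns/shacl#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix frbr: <http://purl.org/vocab/frbr/core#> .
@prefix ex:   <http://example.org/shapes#> .

# Hypothetical shape: every frbr:Work must carry exactly one dct:title.
ex:WorkShape
    a sh:NodeShape ;
    sh:targetClass frbr:Work ;
    sh:property [
        sh:path dct:title ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
```

A SHACL processor run against a bibliographic data set would then report every Work lacking a title or carrying duplicates — exactly the kind of cross-vocabulary constraint checking that plain RDF Schema and OWL do not provide.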

    Is there cross-fertilization in macroeconomics? A quantitative exploration of the interactions between DSGE and macro agent-based models

    This paper compares Dynamic Stochastic General Equilibrium (DSGE) models and Macro Agent-Based Models (MABMs) by adopting a mainly distant-reading perspective. A set of 2,299 papers is retrieved from Scopus using keywords related to the MABM and DSGE domains. The interactions between the two streams of literature are explored along a social axis (the co-authorship network) and an intellectual axis (cited references and bibliographic coupling). The results are consistent neither with a unitary structure of macroeconomics nor with a simple dichotomy of alternative paradigms and separated academic communities. The co-authorship network shows that DSGE and MABM authors form fragmented communities, still belonging to two larger MABM and DSGE communities that are rather neatly separated. Collaboration occurs mainly inside the smaller groups and within each of the two larger DSGE and MABM communities; the co-authorship network shows no evidence of systematic collaboration between MABM and DSGE authors. From an intellectual point of view, the data show that DSGE and MABM articles refer to two different sets of bibliographic references. When a measure of paper similarity is adopted, the DSGE literature splits into four groups while the MABM articles cluster together in a single group. Hence, the DSGE approach is less monolithic than at the time of the New Synthesis: a large and growing literature has developed at the margins of the core DSGE approach, incorporating elements of heterogeneous-agent modelling, social interactions, experiments, expectations formation, learning, and so on. The analysis found no evidence of cross-fertilization between the DSGE and MABM literatures; rather, it suggests a wholly asymmetric influence of DSGE over the MABM literature, i.e., only MABM modelers look at DSGE, not vice versa. The paper questions the capacity of the current dominant approach to benefit from cross-fertilization.
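The bibliographic coupling used for the paper-similarity measure above can be sketched simply: two papers are coupled in proportion to the references they share. The paper IDs and reference lists below are invented; the study itself works on Scopus records:

```python
# Bibliographic coupling sketch: Jaccard similarity of reference sets.
# Paper IDs and reference lists are invented example data.

from itertools import combinations

refs = {
    "dsge_1": {"smets2007", "christiano2005", "gali2008"},
    "dsge_2": {"smets2007", "gali2008", "woodford2003"},
    "mabm_1": {"tesfatsion2006", "delligatti2011", "smets2007"},
}

def coupling_strength(a, b):
    """Share of references two papers have in common (Jaccard index)."""
    return len(refs[a] & refs[b]) / len(refs[a] | refs[b])

for a, b in combinations(sorted(refs), 2):
    print(a, b, round(coupling_strength(a, b), 2))
```

Clustering papers on such pairwise strengths is what splits the DSGE corpus into four groups while leaving the MABM corpus in one; the asymmetric coupling of MABM papers to DSGE references (but not vice versa) is visible in the same matrix.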

    Measuring metadata quality


    Readers and Reading in the First World War

    This essay consists of three individually authored and interlinked sections. In ‘A Digital Humanities Approach’, Francesca Benatti looks at datasets and databases (including the UK Reading Experience Database) and shows how a systematic, macro-analytical use of digital humanities tools and resources might yield answers to some key questions about reading in the First World War. In ‘Reading behind the Wire in the First World War’, Edmund G. C. King scrutinizes the reading practices and preferences of Allied prisoners of war in Mainz, showing that reading circumscribed by the contingencies of a prison camp created a unique literary community, whose legacy can be traced through their literary output after the war. In ‘Book-hunger in Salonika’, Shafquat Towheed examines the record of a single reader on a specific and fairly static front line, and argues that in the case of the Salonika campaign, reading communities emerged in close proximity to existing centres of print culture. The focus of the essay moves from the general to the particular, from the scoping of large datasets to the analysis of identified readers within a specific geographical and temporal space. The authors engage with the wider issues and problems of recovering, interpreting, visualizing, narrating, and representing readers in the First World War.

    BlogForever: D3.1 Preservation Strategy Report

    This report describes the preservation planning approaches and strategies recommended by the BlogForever project as a core component of a weblog repository design. More specifically, we start by discussing why we would want to preserve weblogs in the first place and what exactly it is that we are trying to preserve. We then review past and present work and highlight why current practices in web archiving do not adequately address the needs of weblog preservation. We make three distinct contributions in this volume: a) we propose transferable, practical workflows for applying a combination of established metadata and repository standards in developing a weblog repository; b) we provide an automated approach to identifying significant properties of weblog content that draws on the notion of communities, and discuss how this affects previous strategies; and c) we propose a sustainability plan that draws upon community knowledge through innovative repository design.