
    Knowledge organization

    Since Svenonius analyzed the research base in bibliographic control in 1990, the intervening years have seen major shifts in the focus of information organization in academic libraries. New technologies continue to reshape the nature and content of catalogs, stretch the boundaries of classification research, and provide new alternatives for the organization of information. Research studies have rigorously analyzed the structure of the Anglo-American Cataloguing Rules using entity-relationship modeling and expanded on the bibliographic and authority relationship research to develop new data models (Functional Requirements for Bibliographic Records [FRBR] and Functional Requirements and Numbering of Authority Records [FRANAR]). Applied research into the information organization process has led to the development of cataloguing tools and harvesting applications for bibliographic data collection and automatic record creation. A growing international perspective has focused research on multilingual subject access, transliteration problems in surrogate records, and user studies to improve Online Public Access Catalog (OPAC) displays for large retrieval sets resulting from federated searches. The need to organize local and remote electronic resources led to metadata research that developed general and domain-specific metadata schemes. Ongoing research in this area focuses on record structures and architectural models to enable interoperability among the various schemes and differing application platforms.
Research in the area of subject access and classification is strong, covering areas such as vocabulary mapping, automatic facet construction and deconstruction for Web resources, development of expert systems for automatic classification, dynamically altered classificatory structures linked to domain-specific thesauri, cross-cultural conceptual structures in classification, identification of semantic relationships for vocabulary mapped to classification systems, and the expanded use of traditional classification systems as switching languages in the global Web environment. Finally, descriptive research into library and information science (LIS) education and curricula for knowledge organization continues. All of this research is applicable to knowledge organization in academic and research libraries. This chapter examines this body of research in depth, describes the research methodologies employed, and identifies lacunae in need of further research.

    Implications of BIBFRAME and Linked Data for Libraries and Publishers

    This article considers the current transition from the machine-readable cataloging (MARC) formats to the Bibliographic Framework Initiative (BIBFRAME) data model, and the further step of organizing and publishing catalog information with the emerging linked data technology. The definition and development of new tools to realize the required changes are discussed, and an outline is provided of the steps being taken by Casalini Libri to ensure the compliance of its bibliographical production and services with the new standards and to offer assistance to libraries and publishers in their implementation.

    Applying FRBR model to bibliographic works on Al-Quran

    This study explores the feasibility of applying the object-oriented Functional Requirements for Bibliographic Records (FRBR) model to MARC-based bibliographic records on the Al-Quran. Based on a content analysis of 127 MARC-based bibliographic records on the Al-Quran from the International Islamic University Malaysia (IIUM) OPAC system, this paper reports on the process of mapping FRBR entities to a set of works on the Al-Quran. The attributes of the bibliographic works in the MARC records were identified and grouped according to the FRBR entities. The findings suggest that, overall, most of the MARC-based bibliographic records on the Al-Quran were sufficient to represent the FRBR model. However, several issues were identified as affecting the process of creating an entity-relationship model for “FRBRizing” bibliographic works on the Al-Quran. These include inconsistencies in romanizing records in Arabic scripts, difficulties in identifying complex works, missing fields for subject headings, and missing fields for record-object relationship identification. A major conclusion drawn is that the quality of MARC records is an important aspect of ensuring that bibliographic records have complete, correct, and reliable data for the FRBRization process.
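The grouping step this abstract describes can be sketched in a few lines: fields of a flat MARC record are bucketed into FRBR Group 1 entities. The tag-to-entity assignment below is illustrative only, not the authors' actual mapping, and the sample record is invented.

```python
# Illustrative assignment of common MARC 21 tags to FRBR entities
# (a hypothetical mapping for demonstration, not the study's own).
FRBR_GROUPS = {
    "work":          {"100", "130", "240"},  # main entry, uniform titles
    "expression":    {"041", "546"},         # language of the text
    "manifestation": {"245", "260", "300"},  # title statement, imprint, extent
}

def frbrize(record):
    """Split a flat MARC record (tag -> value) into FRBR entity buckets."""
    entities = {name: {} for name in FRBR_GROUPS}
    for tag, value in record.items():
        for entity, tags in FRBR_GROUPS.items():
            if tag in tags:
                entities[entity][tag] = value
    return entities

# Invented sample record for demonstration
record = {
    "100": "Author name",
    "240": "al-Qur'an",
    "041": "ara",
    "245": "The Holy Qur'an : text and translation",
    "260": "Kuala Lumpur : IIUM Press, 2010",
}
print(frbrize(record)["work"])
```

A real FRBRization also has to detect complex works and aggregate records across romanization variants, which is exactly where the abstract reports data-quality problems.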

    Review of Understanding FRBR: What It Is and How It Will Affect Our Retrieval Tools, edited by Arlene G. Taylor

    The article reviews the book Understanding FRBR: What It Is and How It Will Affect Our Retrieval Tools, edited by Arlene G. Taylor.

    datos.bne.es and MARiMbA: an insight into Library Linked Data

    Purpose – Linked data is gaining great interest in the cultural heritage domain as a new way of publishing, sharing, and consuming data. The paper aims to provide a detailed method, and a tool called MARiMbA, for publishing linked data from library catalogues in the MARC 21 format, along with their application to the catalogue of the National Library of Spain in the datos.bne.es project. Design/methodology/approach – First, the background of the case study is introduced. Second, the method and the process of its application are described. Third, each of the activities and tasks is defined and a discussion of their application to the case study is provided. Findings – The paper shows that the FRBR model can be applied to MARC 21 records following linked data best practices, that librarians can successfully participate in the process of linked data generation following a systematic method, and that data source quality can be improved as a result of the process. Originality/value – The paper proposes a detailed method for publishing and linking linked data from MARC 21 records, provides practical examples, and discusses the main issues found in its application to a real case. It also proposes the integration of a data curation activity and the participation of librarians in the linked data generation process.
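The core of any MARC-to-linked-data pipeline like the one described here is a mapping from MARC tags to RDF predicates, applied record by record. This minimal sketch emits N-Triples; the namespace, the record identifier, and the tag-to-predicate table are placeholders for illustration, not the actual output of MARiMbA or datos.bne.es.

```python
BASE = "http://example.org/resource/"  # placeholder namespace, not datos.bne.es

# Illustrative mapping from MARC tags to Dublin Core predicates
TAG_TO_PREDICATE = {
    "100": "http://purl.org/dc/terms/creator",
    "245": "http://purl.org/dc/terms/title",
}

def marc_to_ntriples(record_id, fields):
    """Emit one N-Triples line per mapped MARC field of a record."""
    subject = f"<{BASE}{record_id}>"
    triples = []
    for tag, value in fields.items():
        predicate = TAG_TO_PREDICATE.get(tag)
        if predicate:  # unmapped tags are silently skipped in this sketch
            triples.append(f'{subject} <{predicate}> "{value}" .')
    return triples

# Invented example record
for line in marc_to_ntriples("rec001", {"100": "Cervantes", "245": "Don Quijote"}):
    print(line)
```

A production method additionally needs FRBR entity extraction, URI design, and the data curation step the paper argues librarians should perform.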

    From many records to one graph: Heterogeneity conflicts in the Linked data restructuring cycle

    Introduction. During the last couple of years the library community has developed a number of comprehensive metadata standardization projects inspired by the idea of linked data, such as the BIBFRAME model. Linked data is a set of best-practice principles for publishing and exposing data on the Web using a graph-based data model enriched with semantics and cross-domain relationships. In the light of libraries' traditional metadata practices, the best practices of linked data imply a restructuring process from a collection of semi-structured bibliographic records to a semantic graph of unambiguously defined entities. Successful interlinking of entities in this graph with entities in external data sets requires a minimum level of semantic interoperability. Method. The examination is carried out through a review of the relevant research within the field and of the essential documents that describe the key concepts. Analysis. A high-level examination of the concepts of the semantic Web and linked data is provided, with a particular focus on the challenges they entail for libraries and their metadata practices in the perspective of the extensive restructuring process that has already started. Conclusion. We demonstrate that a set of heterogeneity conflicts, threatening the level of semantic interoperability, can be associated with various phases of this restructuring process, from analysis and modelling to conversion and external interlinking. We also claim that these conflicts and their potential solutions are mutually dependent across the phases.

    Evaluation of Mappings from MARC to Linked Data

    The purpose of this study is to assess the quality and compatibility of library linked data (LLD) schemas in use or proposed for library resources. Linked data (LD) has the potential to provide high-quality metadata on the web, with the ability to incorporate existing structured data from MARC via a mapping. Researchers selected representative libraries such as Harvard University Library, LC BIBFRAME (Library of Congress Bibliographic Framework), OCLC (Online Computer Library Center) WorldCat, and the National Library of Spain. For the LD frameworks, four resources were matched into specific categories with MARC (MAchine-Readable Cataloging) tags so that they could be retrieved in both OCLC LD and BIBFRAME with the conversion tool at bibframe.org: (1) classic, ebook, and fiction; (2) multiple authors, part of a series, and non-fiction; (3) varying title, translation, and fiction; and (4) subtitle, non-fiction. This study revealed that the choices and elements each library makes in local decisions may create interoperability issues for LD services owing to metadata quality issues in record creation.
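One simple way to quantify the interoperability gap this study points at is to measure how many of a record's MARC tags each target schema mapping can express. The tag sets below are invented stand-ins, not the actual BIBFRAME or OCLC mappings.

```python
def mapping_coverage(marc_tags, mapping):
    """Return (coverage ratio, unmapped tags) for one schema mapping."""
    covered = marc_tags & mapping
    return len(covered) / len(marc_tags), marc_tags - mapping

# Invented tag sets for illustration only
record_tags   = {"100", "245", "260", "490", "650"}
mapping_rich  = {"100", "245", "260", "490", "650", "700"}  # hypothetical
mapping_lean  = {"100", "245", "260"}                        # hypothetical

ratio, missing = mapping_coverage(record_tags, mapping_lean)
print(ratio, sorted(missing))  # lower coverage signals lost detail in conversion
```

Tags that fall outside a mapping (here, series and subject fields) are precisely where locally varying cataloguing decisions stop round-tripping between LD services.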

    Producing Linked Open Dataset from Bibliographic Data with Integration of External Data Sources for Academic Libraries

    This paper focuses on the transformation of bibliographic data into linked open data (LOD) in the RDF (Resource Description Framework) triple model, with the integration of external resources. Library and information centres and knowledge centres deal with various types of databases, such as bibliographic databases, full-text databases, archival databases, statistical databases, and CD/DVD-ROM databases. Web technology is rapidly changing storage, processing, and dissemination services. Semantic web technology is an advanced web platform that provides structured data on the web for description and retrieval by organizations and institutions, and it can supply users with additional information from external resources. The main objective of this paper is the transformation of MARC 21-based library bibliographic data into the RDF triple format as LOD, enriched from external LOD datasets such as OpenLibrary, VIAF, Wikidata, DBpedia, and GeoNames. We have proposed a workflow model (Figure 1) to visualize the detailed steps, activities, and components for transforming bibliographic data into an LOD dataset. We used the open source tool OpenRefine (version 3.2), formerly known as Google Refine, to manage and organize the messy data, with operations including row and column manipulation, reconciliation, and conversion among formats such as XML, JSON, N-Triples, and RDF. In this work, OpenRefine was used to insert URI columns, generate links, reconcile data against external sources, and convert the source format to RDF. After conversion, the whole body of bibliographic data forms an RDF triple dataset that can be treated as an LOD dataset, and the production stage yields an RDF file of the bibliographic data. This LOD dataset may further be used by organizations or institutions for their advanced bibliographic services.
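The reconciliation step described above can be reduced to its essence: match a local heading against an external authority and, on success, emit a `sameAs` link. The lookup table below is a stand-in for a live VIAF/Wikidata reconciliation service, and both URIs are placeholders.

```python
# Stand-in for an external reconciliation service; the VIAF-style URI is a
# placeholder, not a real identifier.
AUTHORITY = {
    "Tagore, Rabindranath": "http://viaf.org/viaf/0000000000",
}

SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

def reconcile(local_uri, heading):
    """Return an owl:sameAs N-Triple if the heading matches an authority entry,
    else None (in OpenRefine this is a reconciliation column plus URI export)."""
    target = AUTHORITY.get(heading)
    if target is None:
        return None
    return f"<{local_uri}> <{SAME_AS}> <{target}> ."

print(reconcile("http://example.org/person/1", "Tagore, Rabindranath"))
```

In OpenRefine the same effect is achieved interactively: a reconciliation pass fills a column with candidate URIs, and the RDF export template turns matched cells into link triples.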