
    Linked Data Model in Libraries and Future of Bibliographic Records

    The ongoing development of digital culture and Web technologies has directly influenced and changed the standards, principles, rules, approaches and implementations of information organization and access. Today there is a growing need to enhance the visibility of library information resources on the web, to integrate library data into the web, to enable different information systems to exchange data with each other, and to reveal the relationships and connections between data. In particular, semantic web and linked data applications provide libraries with significant opportunities to manage bibliographic records better and to ensure better access to information. In this context, libraries are re-shaping their bibliographic description services and working hard to adapt to these developments. This study explains the concepts of the semantic web and linked data and discusses their effects on libraries and the applications of linked data in libraries.
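
    As an illustration of what integrating library data into the web can look like in practice, the sketch below (not taken from the study) represents a single bibliographic record as linked data using Python's rdflib with Dublin Core terms; the record, author and subject URIs are hypothetical placeholders.

    # Illustrative sketch, assuming rdflib is installed; all URIs and values
    # below are hypothetical placeholders, not real catalogue data.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    BIBO = Namespace("http://purl.org/ontology/bibo/")  # Bibliographic Ontology

    g = Graph()
    book = URIRef("http://example.org/catalog/record/123")  # placeholder record URI

    g.add((book, RDF.type, BIBO.Book))
    g.add((book, DCTERMS.title, Literal("Linked Data for Libraries")))       # placeholder title
    g.add((book, DCTERMS.creator, URIRef("http://viaf.org/viaf/0000000")))   # placeholder authority URI
    g.add((book, DCTERMS.issued, Literal("2016")))
    g.add((book, DCTERMS.subject, URIRef("http://id.loc.gov/authorities/subjects/sh0000000")))  # placeholder LCSH URI

    print(g.serialize(format="turtle"))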

    Knowledge organization

    Since Svenonius analyzed the research base in bibliographic control in 1990, the intervening years have seen major shifts in the focus of information organization in academic libraries. New technologies continue to reshape the nature and content of catalogs, stretch the boundaries of classification research, and provide new alternatives for the organization of information. Research studies have rigorously analyzed the structure of the Anglo-American Cataloguing Rules using entity-relationship modeling and expanded on the bibliographic and authority relationship research to develop new data models (Functional Requirements for Bibliographic Records [FRBR] and Functional Requirements and Numbering of Authority Records [FRANAR]). Applied research into the information organization process has led to the development of cataloguing tools and harvesting applications for bibliographic data collection and automatic record creation. A growing international perspective has focused research on multilingual subject access, transliteration problems in surrogate records, and user studies to improve Online Public Access Catalog (OPAC) displays for large retrieval sets resulting from federated searches. The need to organize local and remote electronic resources led to metadata research that developed general and domain-specific metadata schemes. Ongoing research in this area focuses on record structures and architectural models to enable interoperability among the various schemes and differing application platforms. Research in the area of subject access and classification is strong, covering areas such as vocabulary mapping, automatic facet construction and deconstruction for Web resources, development of expert systems for automatic classification, dynamically altered classificatory structures linked to domain-specific thesauri, cross-cultural conceptual structures in classification, identification of semantic relationships for vocabulary mapped to classification systems, and the expanded use of traditional classification systems as switching languages in the global Web environment. Finally, descriptive research into library and information science (LIS) education and curricula for knowledge organization continues. All of this research is applicable to knowledge organization in academic and research libraries. This chapter examines this body of research in depth, describes the research methodologies employed, and identifies lacunae in need of further research.
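
    Because the chapter leans heavily on the entity-relationship modeling behind FRBR, a minimal sketch of the FRBR Group 1 entities (Work, Expression, Manifestation, Item) as Python dataclasses may help; the attributes shown are a small illustrative subset rather than the full FRBR attribute set, and the example record is invented.

    # Minimal illustration of FRBR Group 1 entities as an entity-relationship
    # model; attributes are an illustrative subset, not the full FRBR set.
    from dataclasses import dataclass, field
    from typing import List


    @dataclass
    class Item:
        barcode: str            # a single physical or digital exemplar
        location: str


    @dataclass
    class Manifestation:
        isbn: str               # a particular publication (edition)
        publisher: str
        items: List[Item] = field(default_factory=list)


    @dataclass
    class Expression:
        language: str           # a particular realisation, e.g. a translation
        form: str               # text, spoken word, notation, ...
        manifestations: List[Manifestation] = field(default_factory=list)


    @dataclass
    class Work:
        title: str              # the abstract intellectual creation
        creator: str
        expressions: List[Expression] = field(default_factory=list)


    # "Hamlet" as a Work realised by an English-text Expression, embodied in
    # one Manifestation, exemplified by one Item (placeholder identifiers).
    hamlet = Work("Hamlet", "William Shakespeare", [
        Expression("en", "text", [
            Manifestation("978-0-00-000000-0", "Example Press", [
                Item("39000000000000", "Main stacks")
            ])
        ])
    ])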

    Knowledge Organization Research in the last two decades: 1988-2008

    We apply an automatic topic mapping system to records of publications in knowledge organization (KO) published between 1988 and 2008. The data were collected from the Web of Science (WoS) database, covering journals that publish articles in the KO field. The results showed that while the topics of the first decade (1988-1997) were more traditional, the second decade (1998-2008) was marked by a more technological orientation and by the appearance of more specialized topics driven by the pervasiveness of the Web environment.
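
    A toy sketch of the underlying idea is shown below: compare topic-term frequencies between the two periods from keyword-tagged publication records. The records are invented, and the paper's actual topic-mapping system is considerably more sophisticated.

    # Toy sketch of decade-level topic comparison over invented records.
    from collections import Counter

    records = [
        {"year": 1991, "keywords": ["classification", "thesauri", "cataloguing"]},
        {"year": 1996, "keywords": ["classification", "indexing"]},
        {"year": 2001, "keywords": ["ontologies", "metadata", "web resources"]},
        {"year": 2006, "keywords": ["semantic web", "metadata", "folksonomies"]},
    ]

    def topic_counts(recs, start, end):
        """Aggregate keyword frequencies for records published in [start, end]."""
        counts = Counter()
        for r in recs:
            if start <= r["year"] <= end:
                counts.update(r["keywords"])
        return counts

    first_decade = topic_counts(records, 1988, 1997)
    second_decade = topic_counts(records, 1998, 2008)
    print("1988-1997:", first_decade.most_common(3))
    print("1998-2008:", second_decade.most_common(3))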

    Improving Open Science Using Linked Open Data: The CONICET Digital Use Case

    Scientific publication services are changing drastically: researchers demand intelligent search services to discover and relate scientific publications, and publishers need to incorporate semantic information to organize their digital assets better and make publications more discoverable. In this paper, we present ongoing work to publish a subset of the scientific publications of CONICET Digital as Linked Open Data. The objective of this work is to improve the recovery and reuse of data through Semantic Web technologies and Linked Data in the domain of scientific publications. To achieve these goals, Semantic Web standards and reference RDF schemas (Dublin Core, FOAF, VoID, etc.) have been taken into account. The conversion and publication process is guided by the methodological guidelines for publishing government linked data. We also outline how these data can be linked to other datasets on the web of data, such as DBLP, Wikidata and DBpedia. Finally, we show some examples of queries that answer questions that CONICET Digital does not initially allow.
    Authors: Zárate, Marcos Daniel (Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - Centro Nacional Patagónico, Centro para el Estudio de Sistemas Marinos, Argentina); Buckle, Carlos (Universidad Nacional de la Patagonia "San Juan Bosco", Argentina); Mazzanti, Renato (Universidad Nacional de la Patagonia "San Juan Bosco", Argentina); Samec, Gustavo Daniel (Universidad Nacional de la Patagonia "San Juan Bosco", Argentina)
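
    The kind of query such a linked dataset enables could look like the sketch below, which asks a SPARQL endpoint for the titles and dates of publications by a given author using Dublin Core and FOAF terms. The endpoint URL and author name are hypothetical placeholders, not the actual CONICET Digital service.

    # Illustrative SPARQL query over a hypothetical publications endpoint,
    # assuming the SPARQLWrapper package is installed.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://example.org/conicet/sparql")  # placeholder endpoint
    sparql.setQuery("""
        PREFIX dct:  <http://purl.org/dc/terms/>
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>

        SELECT ?article ?title ?date WHERE {
            ?article dct:creator ?author ;
                     dct:title   ?title ;
                     dct:issued  ?date .
            ?author  foaf:name   "Jane Doe" .   # placeholder author name
        }
        LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["title"]["value"], row["date"]["value"])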

    Report of the Stanford Linked Data Workshop

    The Stanford University Libraries and Academic Information Resources (SULAIR), with the Council on Library and Information Resources (CLIR), conducted a week-long workshop on the prospects for a large-scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of and navigation among the rapidly, chaotically expanding array of academic information resources. As preparation for the workshop, CLIR sponsored a survey by Jerry Persons, Chief Information Architect emeritus of SULAIR, that was published originally for workshop participants as background to the workshop and is now publicly available. The original intention of the workshop was to devise a plan for such a prototype. However, such was the diversity of knowledge, experience, and views of the potential of Linked Data approaches that the workshop participants turned to two more fundamental goals: building common understanding and enthusiasm on the one hand, and identifying opportunities and challenges to be confronted in the preparation of the intended prototype and its operation on the other. In pursuit of those objectives, the workshop participants produced: (1) a value statement addressing the question of why a Linked Data approach is worth prototyping; (2) a manifesto for Linked Libraries (and Museums and Archives and …); (3) an outline of the phases in a life cycle of Linked Data approaches; (4) a prioritized list of known issues in generating, harvesting & using Linked Data; (5) a workflow with notes for converting library bibliographic records and other academic metadata to URIs; (6) examples of potential "killer apps" using Linked Data; and (7) a list of next steps and potential projects. This report includes a summary of the workshop agenda, a chart showing the use of Linked Data in cultural heritage venues, and short biographies and statements from each of the participants.
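
    Item (5) of that list, converting bibliographic records and other metadata to URIs, could begin with something as small as the sketch below, which mints stable HTTP URIs from local record identifiers. The base namespace and identifiers are placeholders; this is not the workshop's actual workflow.

    # Minimal sketch of minting dereferenceable URIs for bibliographic records.
    from urllib.parse import quote

    BASE = "http://example.edu/id/bib/"   # placeholder institutional namespace

    def mint_uri(record_id: str) -> str:
        """Return a stable HTTP URI for a local record identifier."""
        return BASE + quote(record_id, safe="")

    for rid in ["ocm12345678", "990001234560203776"]:   # placeholder identifiers
        print(rid, "->", mint_uri(rid))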

    Semantic Technologies for Manuscript Descriptions — Concepts and Visions

    The contribution at hand relates recent developments in the area of the World Wide Web to codicological research. In recent years, an informational extension of the internet has been discussed and extensively researched: the Semantic Web. It has already been applied in many areas, including digital information processing of cultural heritage data. The Semantic Web facilitates the organisation and linking of data across websites, according to a given semantic structure. Software can then process this structural and semantic information to extract further knowledge. In the area of codicological research, many institutions are making efforts to improve the online availability of handwritten codices. If these resources could also employ Semantic Web techniques, considerable research potential could be unleashed. However, data acquisition from less structured data sources will be problematic. In particular, data stemming from unstructured sources needs to be made accessible to Semantic Web tools through information extraction techniques. In the area of museum research, the CIDOC Conceptual Reference Model (CRM) has been widely examined and is being adopted successfully. The CRM translates well to Semantic Web research, and its concentration on the contextualization of objects could support approaches in codicological research. Further concepts for the creation and management of bibliographic coherences and structured vocabularies related to the CRM will be considered in this chapter. Finally, a user scenario showing all processing steps in their context will be elaborated on.
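
    To make the CRM's event-centred contextualization more concrete, the sketch below describes a manuscript and its production event with CIDOC CRM classes via rdflib. The manuscript URIs are invented, and the class and property identifiers should be checked against the CRM version actually adopted by a given project.

    # Sketch of a manuscript description using CIDOC CRM classes, assuming
    # rdflib is installed; all example.org URIs and labels are placeholders.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
    EX = Namespace("http://example.org/manuscripts/")   # placeholder namespace

    g = Graph()
    ms = EX["cod-123"]                   # the codex itself
    production = EX["cod-123/production"]
    scribe = EX["scribe-1"]

    g.add((ms, RDF.type, CRM["E22_Man-Made_Object"]))
    g.add((ms, RDFS.label, Literal("Codex 123 (placeholder shelfmark)")))
    g.add((ms, CRM["P108i_was_produced_by"], production))
    g.add((production, RDF.type, CRM["E12_Production"]))
    g.add((production, CRM["P14_carried_out_by"], scribe))
    g.add((scribe, RDF.type, CRM["E21_Person"]))
    g.add((scribe, RDFS.label, Literal("Unknown scribe")))

    print(g.serialize(format="turtle"))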

    Towards a Cloud-Based Service for Maintaining and Analyzing Data About Scientific Events

    We propose the new cloud-based service OpenResearch for managing and analyzing data about scientific events such as conferences and workshops in a persistent and reliable way. This includes data about scientific articles, participants, acceptance rates, submission numbers and impact values, as well as organizational details such as program committees, chairs, fees and sponsors. OpenResearch is a centralized repository for scientific events and supports researchers in collecting, organizing, sharing and disseminating information about scientific events in a structured way. An additional feature currently under development is the possibility to archive web pages along with the extracted semantic data in order to lift the burden of maintaining new and old conference web sites from public research institutions. However, the main advantage is that this cloud-based repository enables a comprehensive analysis of conference data. Based on extracted semantic data, it is possible to determine quality estimations, scientific communities and research trends, as well as the development of acceptance rates, fees, and numbers of participants in a continuous way, complemented by projections into the future. Furthermore, data about research articles can be systematically explored using a content-based analysis as well as citation linkage. All data maintained in this crowd-sourcing platform is made freely available through an open SPARQL endpoint, which allows for analytical queries in a flexible and user-defined way.
    Comment: A completed version of this paper has been accepted at the SAVE-SD 2017 workshop at the WWW conference.
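
    One of the analyses described, the development of acceptance rates with projections into the future, reduces to something like the toy sketch below. The event data is invented and the projection is a deliberately naive linear extrapolation, not OpenResearch's actual analytics.

    # Toy sketch: acceptance-rate trend plus a naive one-year projection.
    events = [  # (year, submissions, accepted) for a hypothetical conference series
        (2014, 400, 120),
        (2015, 450, 126),
        (2016, 520, 135),
    ]

    rates = [(year, accepted / submissions) for year, submissions, accepted in events]
    for year, rate in rates:
        print(year, f"{rate:.1%}")

    # Extend the average year-over-year change by one year.
    deltas = [b[1] - a[1] for a, b in zip(rates, rates[1:])]
    next_rate = rates[-1][1] + sum(deltas) / len(deltas)
    print(rates[-1][0] + 1, f"{next_rate:.1%} (projected)")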

    Automatic Metadata Generation using Associative Networks

    In spite of its tremendous value, metadata is generally sparse and incomplete, thereby hampering the effectiveness of digital information services. Many of the existing mechanisms for the automated creation of metadata rely primarily on content analysis, which can be costly and inefficient. The automatic metadata generation system proposed in this article leverages resource relationships generated from existing metadata as a medium for propagation from metadata-rich to metadata-poor resources. Because of its independence from content analysis, it can be applied to a wide variety of resource media types and is shown to be computationally inexpensive. The proposed method operates through two distinct phases. Occurrence and co-occurrence algorithms first generate an associative network of repository resources leveraging existing repository metadata. Second, using the associative network as a substrate, metadata associated with metadata-rich resources is propagated to metadata-poor resources by means of a discrete-form spreading activation algorithm. This article discusses the general framework for building associative networks, an algorithm for disseminating metadata through such networks, and the results of an experiment and validation of the proposed method using a standard bibliographic dataset.
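
    A much simplified sketch of the two phases can make the idea concrete: build an associative network from co-occurring metadata values, then run one discrete spreading-activation pass to suggest subjects for a metadata-poor resource. The records, edge weights and decay factor below are invented, and the article's actual occurrence/co-occurrence and propagation algorithms are more elaborate.

    # Simplified sketch of metadata propagation over an associative network.
    from collections import defaultdict
    from itertools import combinations

    records = {
        "r1": {"authors": {"smith", "lee"}, "subjects": {"metadata", "digital libraries"}},
        "r2": {"authors": {"lee"},          "subjects": {"spreading activation"}},
        "r3": {"authors": {"smith", "lee"}, "subjects": set()},   # metadata-poor resource
    }

    # Phase 1: weight an edge between two resources by the number of
    # metadata values (here: authors) they share.
    edges = {}
    for a, b in combinations(records, 2):
        shared = records[a]["authors"] & records[b]["authors"]
        if shared:
            edges[(a, b)] = edges[(b, a)] = float(len(shared))

    # Phase 2: one discrete spreading-activation step; each subject held by a
    # resource sends decayed, edge-weighted energy to its neighbours.
    DECAY = 0.5
    initial = {rid: {s: 1.0 for s in rec["subjects"]} for rid, rec in records.items()}
    propagated = defaultdict(lambda: defaultdict(float))
    for (source, target), weight in edges.items():
        for subject, energy in initial[source].items():
            if subject not in records[target]["subjects"]:
                propagated[target][subject] += DECAY * weight * energy

    # Rank suggested subjects for the metadata-poor resource r3.
    print(sorted(propagated["r3"].items(), key=lambda kv: -kv[1]))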