
    Effective Creation of Self-Referencing Citation Records

    Acquiring citation records from online resources has become a popular approach to building a bibliography for one's publications. The LaTeX document preparation system is the most popular platform for typesetting publications in academia, and it uses BibTeX to describe and process lists of references. In this article we present a simple method that automatically creates a full self-referencing citation record for a collection of papers typeset and published within a single conference proceedings. This greatly facilitates access to the bibliography entries for anyone who wishes to use them as part of their own publication.
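    The abstract does not spell the method out, but as a rough illustration, here is a minimal Python sketch that emits one @inproceedings BibTeX record per paper in a proceedings volume. The field names, citation-key scheme, and `papers` structure are my own assumptions for illustration, not the authors' method.

```python
# Minimal sketch: emit one @inproceedings BibTeX record per paper in a
# proceedings volume. Field names and key scheme are illustrative only.

def bibtex_entry(paper, booktitle, year):
    """Render a single @inproceedings record as BibTeX source."""
    first_author = paper["author"].split(",")[0].strip().lower()
    key = f"{first_author}{year}"  # naive key; collisions possible in practice
    return (
        f"@inproceedings{{{key},\n"
        f"  author    = {{{paper['author']}}},\n"
        f"  title     = {{{paper['title']}}},\n"
        f"  booktitle = {{{booktitle}}},\n"
        f"  year      = {{{year}}},\n"
        f"  pages     = {{{paper['pages']}}},\n"
        f"}}\n\n"
    )

papers = [
    {"author": "Novak, Jan", "title": "An Example Paper", "pages": "1--10"},
    {"author": "Svoboda, Petr", "title": "Another Paper", "pages": "11--20"},
]
with open("proceedings.bib", "w", encoding="utf-8") as f:
    for p in papers:
        f.write(bibtex_entry(p, "Proceedings of an Example Conference", 2010))
```

    In a real workflow the `papers` list would be extracted from the proceedings' own LaTeX sources rather than typed by hand, which is what makes the record "self-referencing".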

    Community next steps for making globally unique identifiers work for biocollections data

    Biodiversity data are being digitized and made available online at a rapidly increasing rate, but current practices typically do not preserve linkages between these data, which impedes interoperation, provenance tracking, and assembly of larger datasets. For data associated with biocollections, the biodiversity community has long recognized that an essential part of establishing and preserving linkages is to apply globally unique identifiers at the point when data are generated in the field and to persist these identifiers downstream, but this is seldom implemented in practice. There has been neither coalescence towards one single identifier solution (as in some other domains) nor even a set of recommended best practices and standards to support multiple identifier schemes sharing consistent responses. In order to make further progress towards a broader community consensus, a group of biocollections and informatics experts assembled in Stockholm in October 2014 to discuss community next steps to overcome current roadblocks. The workshop participants divided into four groups focusing on: identifier practice in current field biocollections; identifier application for legacy biocollections; identifiers as applied to biodiversity data records as they are published and made available in semantically marked-up publications; and cross-cutting identifier solutions that bridge across these domains. The main outcome was consensus on key issues, including recognition of differences between legacy and new biocollections processes, the need for identifier metadata profiles that can report information on identifier persistence missions, and the unambiguous indication of the type of object associated with the identifier. Current identifier characteristics are also summarized, and an overview of available schemes and practices is provided.
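    To make the "identifier metadata profile" idea concrete, here is a hypothetical sketch of such a profile as a small data structure. Every field name below is an assumption for illustration; the workshop did not publish this schema.

```python
# Illustrative sketch only: one possible shape for an identifier metadata
# profile reporting persistence commitments and the type of object the
# identifier denotes. Field names are assumptions, not a published standard.
from dataclasses import dataclass

@dataclass
class IdentifierProfile:
    identifier: str              # e.g. a UUID, DOI, or HTTP URI
    scheme: str                  # "doi", "uuid", "http-uri", ...
    object_type: str             # what is identified: "specimen", "record", ...
    persistence_statement: str   # who commits to resolving it, and for how long
    issued_at_source: bool       # True if minted at the point of field collection

profile = IdentifierProfile(
    identifier="https://example.org/specimen/0001",  # invented example URI
    scheme="http-uri",
    object_type="specimen",
    persistence_statement="Resolved by host institution; 10-year commitment",
    issued_at_source=True,
)
```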

    Telescope Bibliographies: an Essential Component of Archival Data Management and Operations

    Assessing the impact of astronomical facilities rests upon an evaluation of the scientific discoveries which their data have enabled. Telescope bibliographies, which link data products with the literature, provide a way to use bibliometrics as an impact measure for the underlying data. In this paper we argue that the creation and maintenance of telescope bibliographies should be considered an integral part of an observatory's operations. We review the existing tools, services, and workflows which support these curation activities, giving an estimate of the effort and expertise required to maintain an archive-based telescope bibliography.
    Comment: 10 pages, 3 figures, to appear in SPIE Astronomical Telescopes and Instrumentation, SPIE Conference Series 844
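    At its core, a telescope bibliography is a many-to-many mapping between publications and data products, which is what makes bibliometric impact measures for the archive straightforward to compute. The sketch below illustrates that idea; the bibcodes and dataset identifiers are invented, and this is not the authors' tooling.

```python
# Hedged sketch: a telescope bibliography as publication-to-data-product
# links, plus a simple impact measure. All identifiers below are invented.
from collections import defaultdict

links = [
    ("2012SPIE.0000E..01A", "OBS-001"),
    ("2012SPIE.0000E..01A", "OBS-002"),
    ("2013ApJ...000..000B", "OBS-002"),
]

papers_per_dataset = defaultdict(set)
for bibcode, dataset in links:
    papers_per_dataset[dataset].add(bibcode)

# Publications enabled per dataset: one possible archive impact measure.
for dataset, bibcodes in sorted(papers_per_dataset.items()):
    print(dataset, len(bibcodes))
```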

    mSpace meets EPrints: a Case Study in Creating Dynamic Digital Collections

    In this case study we look at issues involved in (a) generating dynamic digital libraries that are on a particular topic but span heterogeneous collections at distinct sites, (b) supplementing the artefacts in such a collection with additional information available either from databases at the artefact's home site or from the Web at large, and (c) providing an interaction paradigm that will support effective exploration of this new resource. We describe how we used two available frameworks, mSpace and EPrints, to support this kind of collection building. The result of the study is a set of recommendations to improve the connectivity of remote resources both to one another and to related Web resources, and to reduce problems such as co-referencing, in order to enable the creation of new collections on demand.
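    One concrete face of the co-referencing problem is that the same artefact, harvested from two sites, must collapse to a single entry in the dynamic collection. The sketch below merges records by a normalized DOI; this matching rule is an illustrative assumption, and real co-reference resolution is considerably harder.

```python
# Minimal sketch of cross-site record merging keyed on a normalized DOI.
# Requires Python 3.9+ for str.removeprefix.

def normalize_doi(doi):
    """Reduce a DOI string or DOI URL to a canonical lowercase form."""
    return doi.strip().lower().removeprefix("https://doi.org/")

def merge_collections(*collections):
    """Collapse records that share a DOI; later sources enrich earlier ones."""
    merged = {}
    for records in collections:
        for rec in records:
            key = normalize_doi(rec["doi"])
            merged.setdefault(key, {}).update(rec)
    return list(merged.values())

site_a = [{"doi": "10.1000/x1", "title": "Paper X"}]
site_b = [{"doi": "https://doi.org/10.1000/X1", "title": "Paper X",
           "abstract": "Supplementary metadata from the home repository."}]
print(merge_collections(site_a, site_b))  # one merged record, not two
```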

    Mejorando la Ciencia Abierta Usando Datos Abiertos Enlazados: Caso de Uso CONICET Digital

    Scientific publication services are changing drastically: researchers demand intelligent search services to discover and relate scientific publications, and publishers need to incorporate semantic information to better organize their digital assets and make publications more discoverable. In this paper, we present ongoing work to publish a subset of the scientific publications of CONICET Digital as Linked Open Data. The objective of this work is to improve the recovery and reuse of data through Semantic Web and Linked Data technologies in the domain of scientific publications. To achieve these goals, Semantic Web standards and reference RDF schemas (Dublin Core, FOAF, VoID, etc.) have been taken into account. The conversion and publication process is guided by the methodological guidelines for publishing government linked data. We also outline how these data can be linked on the web of data to other datasets such as DBLP, Wikidata, and DBpedia. Finally, we show some examples of queries that answer questions that CONICET Digital initially could not answer.
    Affiliations: Zárate, Marcos Daniel (Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico CONICET - Centro Nacional Patagónico, Centro para el Estudio de Sistemas Marinos, Argentina); Buckle, Carlos (Universidad Nacional de la Patagonia "San Juan Bosco", Argentina); Mazzanti, Renato (Universidad Nacional de la Patagonia "San Juan Bosco", Argentina); Samec, Gustavo Daniel (Universidad Nacional de la Patagonia "San Juan Bosco", Argentina)
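    In the spirit of the paper, here is a minimal rdflib sketch expressing one publication with Dublin Core and FOAF. The URIs and property choices are illustrative assumptions, not CONICET Digital's published data model.

```python
# Minimal rdflib sketch: one publication described with Dublin Core and
# FOAF. The example.org namespace and resource paths are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, FOAF, RDF

EX = Namespace("http://example.org/conicet/")

g = Graph()
g.bind("dc", DC)
g.bind("foaf", FOAF)

pub = URIRef(EX["publication/123"])
author = URIRef(EX["person/zarate"])

g.add((pub, DC.title, Literal("Improving Open Science Using Linked Open Data")))
g.add((pub, DC.creator, author))
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Marcos Daniel Zárate")))

print(g.serialize(format="turtle"))
```

    Linking to DBLP, Wikidata, or DBpedia would then amount to adding, for instance, owl:sameAs triples from these local resources to the corresponding external URIs.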

    CURBING PLAGIARISM: STRATEGIES ADOPTED BY LIBRARIANS IN UNIVERSITIES IN THE AGE OF ICT IN NIGERIA

    This study investigated the strategies used by librarians to curb plagiarism in Nigerian universities in the age of Information and Communication Technology. The population comprised librarians in universities in Nigeria; a total of 15 universities took part in the study, and a random sampling technique was used to select 89 respondents. A self-constructed online questionnaire was designed and distributed to collect data from the respondents, of which 75 returned questionnaires were valid and used for analysis. The data were analysed using simple percentages and mean scores. The results show that librarians were aware of strategies for curbing plagiarism. They employ strategies such as proper referencing of all cited works, use of free online plagiarism checkers, and software-assisted paraphrasing of works. They encounter challenges such as unavailability of authors' detailed information, lack of information retrieval skills, and incomplete bibliographic details of information sources. It was therefore recommended, among other things, that librarians should work with IT personnel to develop new and cost-effective software that will effectively check for plagiarism, and that librarians should conduct training and retraining on information search and retrieval skills.
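    As background to the plagiarism-checker strategy the respondents reported, here is a toy illustration of the overlap test at the heart of such software. This is not any specific tool from the study, only the basic n-gram comparison idea.

```python
# Toy plagiarism check: measure how many word 5-grams of a submission
# also occur in a source text. Real checkers are far more sophisticated.

def ngrams(text, n=5):
    """Return the set of word n-grams in a text, case-folded."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission, source, n=5):
    """Fraction of the submission's n-grams that appear in the source."""
    a, b = ngrams(submission, n), ngrams(source, n)
    return len(a & b) / len(a) if a else 0.0
```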

    Introduction: looking beyond the walls

    In its consideration of the remarkable extent and variety of non-university researchers, this book takes a broader view of ‘knowledge’ and ‘research’ than is found in the many hot debates about today’s knowledge society, ‘learning age’, or organisation of research. It goes beyond the commonly held image of ‘knowledge’ as something produced and owned by full-time experts to look at those engaged in active knowledge building outside the university walls.

    Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach.

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty of moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a "containerized" approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining, and (meta-)analysis. The EEG Study Schema (ESS) comprises three data "Levels," each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. The ESS schema and tools are freely available at www.eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).
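    Since a container described by XML study metadata is the central idea, here is a hedged sketch of reading such a description. The folder layout and element names below are assumptions made for illustration; the authoritative ESS schemas live at www.eegstudy.org.

```python
# Hedged sketch: reading a study-level description from an ESS-style
# container. The file name and XML element names are assumptions, not the
# actual ESS schema published at www.eegstudy.org.
import xml.etree.ElementTree as ET
from pathlib import Path

def load_study(container: Path) -> dict:
    """Parse a hypothetical study_description.xml at the container root."""
    root = ET.parse(container / "study_description.xml").getroot()
    return {
        "title": root.findtext("title"),
        "level": root.findtext("level"),  # e.g. "1" (raw) or "2" (after PREP)
        "recordings": [r.findtext("filename") for r in root.iter("recording")],
    }
```

    The point of the convention is exactly this: any pipeline that understands the schema can operate on a shipped study as a single unit, without a central database.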