
    Liquid Journals: Knowledge Dissemination in the Web Era

    In this paper we redefine the notion of a "scientific journal" to bring it into the age of the Web. We explore the historical reasons behind the current journal model and show that this model remains essentially unchanged today, even though the Web has made dissemination essentially free. We propose a notion of liquid, personal journals that evolve continuously over time and are targeted to serve individuals or communities of arbitrarily small or large scale. Liquid journals provide "interesting" content, in the form of "scientific contributions" that are "related" to a certain paper, topic, or area and that are posted (on web sites, in repositories, or in traditional journals) by "inspiring" researchers. As such, the liquid journal separates the notion of "publishing" (which can be achieved by submitting to traditional peer-reviewed journals or simply by posting content on the Web) from the appearance of contributions in journals, which are essentially collections of content. In this paper we introduce the liquid journal model and demonstrate, through examples, its value to individuals and communities. Finally, we describe an architecture and a working prototype that implements the proposed model.
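
    The separation between "publishing" and appearance in a journal can be pictured as a small data model. The sketch below is our own illustrative encoding with hypothetical class and field names, not the prototype's actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contribution:
    """Any posted scientific content: a paper, dataset, slide deck, or blog post."""
    title: str
    author: str
    source_url: str  # where it was published: a web site, repository, or journal

@dataclass
class LiquidJournal:
    """An evolving, personal collection of already-published contributions."""
    topic: str
    editors: List[str] = field(default_factory=list)
    contents: List[Contribution] = field(default_factory=list)

    def add(self, contribution: Contribution) -> None:
        # Adding a contribution does not re-publish it; it only makes the
        # content appear in this continuously evolving collection.
        self.contents.append(contribution)
```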

    Access to Digital Cultural Heritage: Innovative Applications of Automated Metadata Generation Chapter 1: Digitization of Cultural Heritage – Standards, Institutions, Initiatives

    The first chapter, "Digitization of Cultural Heritage – Standards, Institutions, Initiatives", provides an introduction to the area of digitisation. The main pillars of the process of creating, preserving, and accessing cultural heritage in digital space are surveyed. The importance of metadata in the process of accessing information is outlined, and the metadata schemas and standards used in cultural heritage are discussed. To make digital objects reachable in virtual space, they are organized into digital libraries. Contemporary digital libraries try to deliver richer and better functionality, which is usually user-oriented and depends on current IT trends. Additionally, the chapter focuses on some initiatives at the world and European level that have, over the years, driven the process of digitization and the organization of digital objects in the cultural heritage domain. In recent years, the main focus in the creation of digital resources has shifted from "system-centred" to "user-centred", since most of the issues around this content concern making it accessible and usable for real users. User studies, and the involvement of users in the early stages of designing and planning the functionality of the product being developed, therefore take a leading position.
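
    Dublin Core is one of the metadata schemas widely used in the cultural heritage domain; a minimal record can be sketched as a plain mapping. The field values below are invented for illustration.

```python
# A minimal Dublin Core description of a digitised object, sketched as a
# Python mapping (element names follow the Dublin Core schema; the values
# are invented for illustration).
dublin_core_record = {
    "dc:title": "Gospel manuscript, 14th century",
    "dc:creator": "Unknown scribe",
    "dc:subject": "Manuscripts, Medieval",
    "dc:description": "Digitised parchment codex, 212 leaves.",
    "dc:date": "1350",
    "dc:type": "Text",
    "dc:format": "image/tiff",
    "dc:identifier": "https://example.org/objects/ms-0042",
    "dc:language": "grc",
    "dc:rights": "Public domain",
}
```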

    Tools and collaborative environments for bioinformatics research

    Advanced research requires intensive interaction among a multitude of actors, often possessing different expertise and usually working at a distance from each other. The field of collaborative research aims to establish suitable models and technologies to properly support these interactions. In this article, we first present the reasons why Bioinformatics should take an interest in this context, also suggesting some research domains that could benefit from collaborative research. We then review the principles and some of the most relevant applications of social networking, with special attention to networks supporting scientific collaboration, also highlighting some critical issues, such as the identification of users and the standardization of formats. We then introduce some systems for collaborative document creation, including wiki systems and tools for ontology development, and review some of the most interesting biological wikis. We also review the principles of Collaborative Development Environments for software and show some examples in Bioinformatics. Finally, we present the principles and some examples of Learning Management Systems. In conclusion, we outline some of the goals to be achieved in the short term for the exploitation of these technologies.
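
    Since many biological wikis run on MediaWiki, programmatic access can be sketched in a few lines. The wiki URL below is a placeholder rather than an endpoint named in the article, and the query assumes the TextExtracts extension is installed on the target wiki.

```python
import requests

# Hypothetical MediaWiki-based biological wiki; substitute a real instance.
API_URL = "https://example-biowiki.org/w/api.php"

def fetch_page_extract(title: str) -> str:
    """Fetch the plain-text extract of a wiki page via the MediaWiki API."""
    params = {
        "action": "query",
        "prop": "extracts",      # requires the TextExtracts extension
        "explaintext": "1",
        "titles": title,
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    pages = response.json()["query"]["pages"]
    # The API keys results by internal page id; take the first (only) page.
    return next(iter(pages.values())).get("extract", "")
```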

    Science of Digital Libraries (SciDL)

    Our purpose is to ensure that people and institutions better manage information through digital libraries (DLs). Thus we address a fundamental human and social need, which is particularly urgent in the modern Information (and Knowledge) Age. Our goal is to significantly advance both the theory and state-of-the-art of DLs (and other advanced information systems), thoroughly validating our approach using highly visible testbeds. Our research objective is to leverage our formal, theory-based approach to the problems of defining, understanding, modeling, building, personalizing, and evaluating DLs. We will construct models and tools based on that theory so organizations and individuals can easily create and maintain fully functional DLs, whose components can interoperate with corresponding components of related DLs. This research should be highly meritorious intellectually. We bring together a team of senior researchers with expertise in information retrieval, human-computer interaction, scenario-based design, personalization, and componentized system development, and expect to make important contributions in each of those areas. Of crucial import, however, is that we will integrate our prior research and experience to achieve breakthrough advances in the field of DLs, regarding theory, methodology, systems, and evaluation. We will extend the 5S theory, which has identified five key dimensions or constructs underlying effective DLs: Streams, Structures, Spaces, Scenarios, and Societies. We will use that theory to describe and develop metamodels, models, and systems, which can be tailored to disciplines and/or groups, as well as personalized. We will disseminate our findings and provide toolkits as open source software, encouraging wide use. We will validate our work using testbeds, ensuring broad impact. We will put powerful tools into the hands of digital librarians so they may easily plan and configure tailored systems to support an extensible set of services, including publishing, discovery, searching, browsing, recommending, and access control, handling diverse types of collections and varied genres and classes of digital objects. With these tools, end-users will be able to design personal DLs. Testbeds are crucial to validate scientific theories and will be thoroughly integrated into SciDL research and evaluation. We will focus on two application domains, which together should allow comprehensive validation and increase the significance of SciDL's impact on scholarly communities. One is education (through CITIDEL); the other is libraries (through DLA and OCKHAM). CITIDEL deals with content from publishers (e.g., ACM Digital Library), corporate research efforts (e.g., CiteSeer), volunteer initiatives (e.g., DBLP, based on the database and logic programming literature), CS departments (e.g., NCSTRL, mostly technical reports), educational initiatives (e.g., Computer Science Teaching Center), and universities (e.g., theses and dissertations). DLA is a unit of the Virginia Tech library that virtually publishes scholarly communication such as faculty-edited journals and rare and unique resources, including image collections and finding aids from Special Collections. The OCKHAM initiative, calling for simplicity in the library world, emphasizes a three-part solution: lightweight protocols, component-based development, and open reference models. It provides a framework to research the deployment of the SciDL approach in libraries. Thus our choice of testbeds will also ensure that our research has additional benefit to and impact on the fields of computing and library and information science, supporting transformations in how we learn and deal with information.
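
    As a reading aid, the five 5S constructs can be sketched as lightweight types; this encoding is our own illustration, not code from the SciDL project.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Stream:
    """A sequence of items of some type: text, audio, video, bits on a wire."""
    items: list

@dataclass
class Structure:
    """A labeled graph imposing organization: catalogs, hypertexts, taxonomies."""
    nodes: List[str]
    edges: List[Tuple[str, str, str]]  # (from_node, to_node, label)

@dataclass
class Space:
    """A set with operations obeying constraints, e.g. a vector space for retrieval."""
    dimensions: int

@dataclass
class Scenario:
    """A sequence of events describing how a service is used, e.g. searching."""
    steps: List[str]

@dataclass
class Society:
    """The people and software agents that use and sustain the library."""
    members: List[str]
```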

    Sharing brain mapping statistical results with the neuroimaging data model

    Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Results graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publicly available at: http://nidm.nidash.org/specs/nidm-results.html
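
    NIDM-Results documents are RDF graphs, so one plausible way to inspect an export is with rdflib. The file name and query below are illustrative; the exact vocabulary is defined in the specification.

```python
from rdflib import Graph

# Load an NIDM-Results document (hypothetical local file, Turtle serialization).
g = Graph()
g.parse("nidm.ttl", format="turtle")

# List every typed entity in the graph to see what the export contains.
query = """
SELECT DISTINCT ?entity ?type
WHERE { ?entity a ?type . }
"""
for entity, rdf_type in g.query(query):
    print(entity, rdf_type)
```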

    Developing a service endpoint to integrate semantic collection data from botanical databases and other information systems

    The digitization of botanical collections has increasingly brought biodiversity research activities online. In order to make these data usable in the most efficient way, various obstacles have to be overcome. One such obstacle is a limited ability to integrate information from other sources. Agreed-upon, machine-understandable data standards such as ABCD provide concepts that can already be described semantically, yet these are often transmitted as free-text information. The use of identifiers for collectors has created opportunities for integrating data from external information systems. However, since the identifiers used are not standardized and vary from institution to institution, this work aims to develop a web service demonstrating that this problem can be overcome by applying appropriate Linked Data methods to centralized knowledge bases such as Wikidata. After eliciting requirements from participating CETAF institutions, an API was designed and implemented on this basis that can integrate biographic, bibliographic, and collection data into a single semantic file format by leveraging multiple endpoints. The work thus shows that the diverse identifiers used in collection databases need not be a problem. Moreover, missing IDs for important information sources such as Wikidata can be found and used. Heterogeneous data from different sources can be merged using previously defined mappings, even where such data are not available in semantic formats. Further sources of information could thus be added in the future. A future focus on annotated geographic identifiers is also conceivable, in order to additionally integrate semantic data on the locations where collection objects were found.
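
    Resolving a collector against Wikidata, as described above, can be sketched as a single SPARQL request. The property used here (P496, ORCID iD) is just one plausible identifier among those such a service might support.

```python
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def find_collector_by_orcid(orcid: str) -> list:
    """Look up Wikidata items whose ORCID iD (P496) matches the given value."""
    query = f"""
    SELECT ?person ?personLabel WHERE {{
      ?person wdt:P496 "{orcid}" .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}
    """
    response = requests.get(
        WIKIDATA_SPARQL,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "collector-resolver-sketch/0.1"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"]["bindings"]

# Example, using ORCID's well-known sample iD (Josiah Carberry):
# bindings = find_collector_by_orcid("0000-0002-1825-0097")
```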

    I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets

    Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the dataset's contents in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
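
    BDBags build on the BagIt packaging format, whose essential layout (a data/ payload plus a checksum manifest) can be sketched with the standard library alone. This is our own minimal illustration, not the bdbag tool's API.

```python
import hashlib
from pathlib import Path

def make_minimal_bag(bag_dir: str, payload: dict) -> None:
    """Create a minimal BagIt-style bag: data/ payload plus checksum manifest."""
    root = Path(bag_dir)
    data = root / "data"
    data.mkdir(parents=True, exist_ok=True)

    manifest_lines = []
    for name, content in payload.items():
        (data / name).write_bytes(content)
        digest = hashlib.sha256(content).hexdigest()
        manifest_lines.append(f"{digest}  data/{name}")

    # Bag declaration required by the BagIt specification (RFC 8493).
    (root / "bagit.txt").write_text(
        "BagIt-Version: 1.0\nTag-File-Character-Encoding: UTF-8\n"
    )
    (root / "manifest-sha256.txt").write_text("\n".join(manifest_lines) + "\n")

make_minimal_bag("example_bag", {"readme.txt": b"one dataset member\n"})
```

    A full BDBag additionally lists remote members in a fetch.txt file and carries a Research Object description of the contents, which is what allows dataset members to reside in multiple repositories without being copied up front.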

    Knowledge-based Biomedical Data Science 2019

    Knowledge-based biomedical data science (KBDS) involves the design and implementation of computer systems that act as if they knew about biomedicine. Such systems depend on formally represented knowledge, often in the form of knowledge graphs. Here we survey the progress of the last year in systems that use formally represented knowledge to address data science problems in both clinical and biological domains, as well as in approaches for creating knowledge graphs. Major themes include the relationships between knowledge graphs and machine learning, the use of natural language processing, and the expansion of knowledge-based approaches to novel domains, such as Traditional Chinese Medicine and biodiversity.