    Protocols for Scholarly Communication

    CERN, the European Organization for Nuclear Research, has operated an institutional preprint repository for more than 10 years. The repository contains over 850,000 records, of which more than 450,000 are full-text OA preprints, mostly in the field of particle physics, and it is integrated with the library's holdings of books, conference proceedings, journals and other grey literature. To encourage effective propagation of and open access to scholarly material, CERN is integrating a range of innovative library services into its document repository: automatic keywording, reference extraction, collaborative management tools and bibliometric tools. Some of these services, such as user reviewing and automatic metadata extraction, could form an interesting testbed for future publishing solutions and certainly provide an exciting environment for e-science possibilities. The future protocol for scientific communication should naturally guide authors towards OA publication, and CERN aims to help establish a fully open access publishing environment for the particle physics community and the related sciences within the next few years.
    Comment: 8 pages, to appear in Library and Information Systems in Astronomy
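
    Of the services listed above, reference extraction is the most mechanical; a minimal sketch of how numbered references might be pulled from the plain text of a preprint (the regex and function name are illustrative assumptions, not CERN's actual implementation):

        import re

        # Matches numbered entries such as "[12] A. Author, Title, J. Phys. G 34 (2007) 995".
        REFERENCE_LINE = re.compile(r"^\s*\[(\d+)\]\s+(.+)$")

        def extract_references(fulltext: str) -> dict[int, str]:
            """Collect numbered reference entries from a preprint's plain text."""
            refs = {}
            for line in fulltext.splitlines():
                match = REFERENCE_LINE.match(line)
                if match:
                    refs[int(match.group(1))] = match.group(2).strip()
            return refs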

    A Video Processing and Data Retrieval Framework for Fish Population Monitoring

    In this work we present a framework for fish population monitoring through the analysis of underwater videos. We focus specifically on users' information needs and on the dynamic data extraction and retrieval mechanisms that support them. Sophisticated though a software tool may be, it is ultimately important that its interface satisfies users' actual needs and that users can easily focus on the specific data of interest. In the case of fish population monitoring, marine biologists must interact with a system that not only provides information from a biological point of view, but also offers instruments that let them guide the video processing task, both for video and for algorithm selection. This paper describes the low-level details of the system's underlying video processing and workflow, and their connection to the user interface for on-demand data retrieval by biologists.
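
    A minimal sketch of the user-driven video and algorithm selection described above, assuming a registry of pluggable detectors queried on demand (the Detection fields and all names are invented for illustration):

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Detection:
            video_id: str
            frame: int
            species: str
            confidence: float

        # Registry of pluggable detection algorithms the biologist can choose from.
        ALGORITHMS: dict[str, Callable[[str], list[Detection]]] = {}

        def register(name: str):
            def wrap(fn):
                ALGORITHMS[name] = fn
                return fn
            return wrap

        @register("background-subtraction")
        def background_subtraction(video_id: str) -> list[Detection]:
            # A real detector would process the video frames here.
            return [Detection(video_id, frame=120, species="unknown", confidence=0.83)]

        def retrieve(video_ids: list[str], algorithm: str,
                     min_confidence: float = 0.5) -> list[Detection]:
            """Run the chosen algorithm on the chosen videos and filter on demand."""
            detector = ALGORITHMS[algorithm]
            return [d for v in video_ids for d in detector(v)
                    if d.confidence >= min_confidence]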

    MASADA USER GUIDE

    This user guide accompanies the MASADA tool, a public tool for the detection of built-up areas from remote sensing data. MASADA stands for Massive Spatial Automatic Data Analytics. It has been developed in the frame of the “Global Human Settlement Layer” (GHSL) project of the European Commission’s Joint Research Centre, with the overall objective of supporting the production of settlement layers at regional scale by processing high and very high resolution satellite imagery. The tool builds on the Symbolic Machine Learning (SML) classifier, a supervised classification method for remotely sensed data that extracts built-up information using a coarse-resolution settlement map or land cover information to train the classifier. The image classification workflow incorporates radiometric, textural and morphological features as inputs for information extraction. Though originally developed for built-up area extraction, the SML classifier is a multi-purpose classifier that can be used for general land cover mapping, provided there is an appropriate training data set. The tool supports several types of multispectral optical imagery. It includes ready-to-use workflows for specific sensors, but at the same time it allows the parametrization and customization of the workflow by the user. Currently it includes predefined workflows for SPOT-5, SPOT-6/7, RapidEye and CBERS-4, but it has also been tested with various high and very high resolution sensors such as GeoEye-1, WorldView-2/3, Pléiades and Quickbird.
    JRC.E.1 - Disaster Risk Management
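
    The excerpt above does not detail the SML statistics; the following toy sketch only illustrates the general idea of learning built-up evidence from a coarse reference layer, using plain co-occurrence frequencies rather than the actual SML association measures:

        import numpy as np

        def classify_with_coarse_labels(features, coarse_labels, n_bins=32):
            """Toy stand-in for the SML idea: quantize an image band into
            symbols, associate each symbol with a coarse built-up layer,
            and relabel pixels by the stronger association.

            features:      2-D float array (a radiometric or textural band)
            coarse_labels: 2-D bool array, True where the coarse map is built-up
            """
            # Quantize the feature band into discrete symbols.
            edges = np.linspace(features.min(), features.max(), n_bins + 1)
            symbols = np.clip(np.digitize(features, edges) - 1, 0, n_bins - 1)

            # Count symbol occurrences inside / outside built-up training areas
            # (add-one smoothing keeps empty bins defined).
            pos = np.bincount(symbols[coarse_labels], minlength=n_bins) + 1
            neg = np.bincount(symbols[~coarse_labels], minlength=n_bins) + 1

            # A symbol votes built-up when it is relatively more frequent
            # in the positive training evidence.
            built_up_symbol = pos / pos.sum() > neg / neg.sum()
            return built_up_symbol[symbols]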

    Using 3-D Shape Models to Guide Segmentation of MR Brain Images

    Accurate segmentation of medical images poses one of the major challenges in computer vision. Approaches that rely solely on intensity information frequently fail because similar intensity values appear in multiple structures. This paper presents a method for using shape knowledge to guide the segmentation process, applying it to the task of finding the surface of the brain. A 3-D model that includes local shape constraints is fitted to an MR volume dataset. The resulting low-resolution surface is used to mask out regions far from the cortical surface, enabling an isosurface extraction algorithm to isolate a more detailed surface boundary. The surfaces generated by this technique are comparable to those achieved by other methods, without requiring user adjustment of a large number of ad hoc parameters.
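
    A minimal sketch of the masking-and-isosurface step described above (the shape-model fitting itself is omitted; the distance threshold and the use of scipy/skimage here are assumptions, not the paper's implementation):

        import numpy as np
        from scipy.ndimage import distance_transform_edt
        from skimage.measure import marching_cubes

        def refine_surface(volume, coarse_mask, max_dist_vox=5.0, iso_level=None):
            """Mask out voxels far from the fitted low-resolution surface,
            then extract a detailed isosurface from what remains.

            volume:      3-D MR intensity array
            coarse_mask: 3-D bool array, True inside the fitted surface
            """
            # Distance (in voxels) from every voxel to the coarse region.
            dist = distance_transform_edt(~coarse_mask)
            near_surface = dist <= max_dist_vox

            # Suppress intensities far from the cortical surface estimate.
            masked = np.where(near_surface, volume, volume.min())

            if iso_level is None:
                iso_level = masked[near_surface].mean()
            verts, faces, normals, values = marching_cubes(masked, level=iso_level)
            return verts, faces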

    Towards Customizable Chart Visualizations of Tabular Data Using Knowledge Graphs

    Scientific articles are typically published as PDF documents, thus rendering the extraction and analysis of results a cumbersome, error-prone, and often manual effort. New initiatives, such as ORKG, focus on transforming the content and results of scientific articles into structured, machine-readable representations using Semantic Web technologies. In this article, we focus on the tabular data of scientific articles, which provide an organized and compressed representation of information. Chart visualizations, however, can additionally facilitate their comprehension. We present an approach that employs a human-in-the-loop paradigm during the data acquisition phase to define additional semantics for tabular data. The additional semantics guide the creation of chart visualizations for meaningful representations of tabular data. Our approach organizes tabular data into different information groups, which are analyzed to select suitable visualizations. The set of suitable visualizations then serves as a user-driven selection of visual representations. Additionally, customization of the visual representations facilitates the understanding and sense-making of the information.
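
    A minimal sketch of mapping information groups to candidate chart types; the grouping rules and chart list below are illustrative assumptions, not the ORKG implementation:

        SUITABLE_CHARTS = {
            ("temporal", "numerical"): ["line chart", "area chart"],
            ("categorical", "numerical"): ["bar chart", "pie chart"],
            ("numerical", "numerical"): ["scatter plot"],
        }

        def column_group(values: list[str]) -> str:
            """Assign a column to a coarse information group (crude rules)."""
            def numeric(v: str) -> bool:
                try:
                    float(v)
                    return True
                except ValueError:
                    return False
            if all(len(v) == 4 and v.isdigit() for v in values):
                return "temporal"      # looks like a column of years
            if all(numeric(v) for v in values):
                return "numerical"
            return "categorical"

        def suggest_charts(table: dict[str, list[str]]) -> list[str]:
            """Suggest visualizations for a two-column table."""
            groups = tuple(column_group(col) for col in table.values())
            return SUITABLE_CHARTS.get(groups, ["table view"])

        # e.g. suggest_charts({"Year": ["2019", "2020"], "F1": ["0.81", "0.85"]})
        # -> ["line chart", "area chart"]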

    Relating Developers’ Concepts and Artefact Vocabulary in a Financial Software Module

    Developers working on unfamiliar systems are challenged to accurately identify where and how high-level concepts are implemented in the source code. Without additional help, concept location can become a tedious, time-consuming and error-prone task. In this paper we study an industrial financial application for which we had access to the user guide, the source code, and some change requests. We compared the relative importance of the domain concepts, as understood by developers, in the user manual and in the source code. We also searched the code for the concepts occurring in change requests, to see if they could point developers to the code to be modified. We varied the searches (using exact and stem matching, discarding stop words, etc.) and present the resulting precision and recall. We discuss the implications of our results for maintenance.
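
    A minimal sketch of the search variants compared in the study (exact vs. stem matching, stop-word removal) and of the precision/recall computation; the crude suffix stemmer and stop-word list are placeholders for real tooling:

        import re

        STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "for"}

        def stem(word: str) -> str:
            """Very crude suffix stripping, standing in for a real stemmer."""
            for suffix in ("ing", "ed", "es", "s"):
                if word.endswith(suffix) and len(word) > len(suffix) + 2:
                    return word[: -len(suffix)]
            return word

        def search(concept: str, files: dict[str, str], use_stems: bool = True) -> set[str]:
            """Return the files whose text mentions every term of the concept."""
            terms = [w for w in re.findall(r"[a-z]+", concept.lower())
                     if w not in STOP_WORDS]
            hits = set()
            for path, text in files.items():
                words = re.findall(r"[a-z]+", text.lower())
                vocab = {stem(w) for w in words} if use_stems else set(words)
                if all((stem(t) if use_stems else t) in vocab for t in terms):
                    hits.add(path)
            return hits

        def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
            tp = len(retrieved & relevant)
            precision = tp / len(retrieved) if retrieved else 0.0
            recall = tp / len(relevant) if relevant else 0.0
            return precision, recall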

    AVIR – Audio-Visual Indexing and Retrieval for Non IT Expert Users

    The AVIR proposal originates from the demand for new solutions that allow ordinary users to easily access, store and retrieve relevant audio-visual information from the vast amount of resources at their disposal. The next generation of television systems will be connected to many sources of information and entertainment (TV and radio from air, cable or satellite, video and audio libraries, video tape/disk recorders, the Internet). Hundreds of channels will soon be offered to the user, who could be disoriented by this overload of information. Users will not pay simply for more channels; they will appreciate content that is easily accessible and, more importantly, easily selected according to their personal interests. This can only be achieved if the broadcaster delivers meta-data describing the actual content in sufficient detail to enable automatic handling by agents residing on the end user's system. AVIR investigates novel procedures for the automatic analysis and indexing of audio-visual information, specifically meant to support consumer services. The objective of this project is to investigate and experiment with end-to-end solutions for delivering new added-value services on top of digital video broadcast services, enabling a better exploitation of multimedia information resources by non-IT experts. As a result, the project is building a prototype service user platform and will demonstrate its feasibility on a broadcast delivery chain. It takes into account the extraction of high-quality meta-data and the electronic delivery of meta-data associated with audio-visual content, including the adaptation of consumer receivers and recorders towards a personalized multimedia repository. Intelligent agents based on a user interest profile will help the user browse and access the most relevant programmes via an intelligent, personal electronic guide. A low-cost, high-capacity home storage device will also be used to extend the capabilities of the consumer system. Thanks to the received descriptors, advanced retrieval features can be implemented on the stored assets and, in combination with the user's profile, automatic recording becomes possible. A visual navigation system, a search engine and agents will help the user identify video material of interest on the home video recorder, transforming it into a personal multimedia repository.
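
    A minimal sketch of the profile-driven guide and automatic-recording decision, assuming the delivered meta-data arrives as simple genre/keyword sets (all field names and thresholds are invented):

        from dataclasses import dataclass

        @dataclass
        class Programme:
            title: str
            genres: set[str]
            keywords: set[str]

        def score(programme: Programme, profile: dict[str, float]) -> float:
            """Sum the profile weights of every term the meta-data carries."""
            return sum(profile.get(term, 0.0)
                       for term in programme.genres | programme.keywords)

        def personal_guide(programmes: list[Programme],
                           profile: dict[str, float], top_n: int = 10) -> list[Programme]:
            """Rank broadcast programmes against the user's interest profile."""
            return sorted(programmes, key=lambda p: score(p, profile),
                          reverse=True)[:top_n]

        def should_record(programme: Programme, profile: dict[str, float],
                          threshold: float = 1.0) -> bool:
            """Automatic-recording decision for the home storage device."""
            return score(programme, profile) >= threshold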

    Artequakt: Generating tailored biographies from automatically annotated fragments from the web

    The Artequakt project seeks to automatically generate narrative biographies of artists from knowledge that has been extracted from the Web and maintained in a knowledge base. An overview of the system architecture is presented here, and the three key components of that architecture are explained in detail, namely knowledge extraction, information management and biography construction. Conclusions are drawn from the initial experiences of the project, and future progress is detailed.
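
    A skeletal sketch of the three components named above, under the assumption that they communicate through a shared fact store (every name below is invented for illustration):

        def extract_knowledge(pages: list[str]) -> list[dict]:
            """Knowledge extraction: pull artist facts from fetched pages.
            A real system would use ontology-driven annotation, not this stub."""
            return [{"artist": "Rembrandt", "born": 1606, "source": url}
                    for url in pages]

        def manage_information(kb: dict, facts: list[dict]) -> dict:
            """Information management: merge new facts into the knowledge
            base, keeping the first occurrence of each duplicate."""
            for fact in facts:
                kb.setdefault((fact["artist"], fact["born"]), fact)
            return kb

        def construct_biography(kb: dict, artist: str) -> str:
            """Biography construction: render stored facts as narrative text."""
            facts = [f for (name, _), f in kb.items() if name == artist]
            if not facts:
                return f"No information found about {artist}."
            return f"{artist} was born in {facts[0]['born']}."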