
    The Materials Science Procedural Text Corpus: Annotating Materials Synthesis Procedures with Shallow Semantic Structures

    Materials science literature contains millions of materials synthesis procedures described in unstructured natural language text. Large-scale analysis of these synthesis procedures would facilitate deeper scientific understanding of materials synthesis and enable automated synthesis planning. Such analysis requires extracting structured representations of synthesis procedures from the raw text as a first step. To facilitate the training and evaluation of synthesis extraction models, we introduce a dataset of 230 synthesis procedures annotated by domain experts with labeled graphs that express the semantics of the synthesis sentences. The nodes in these graphs are synthesis operations and their typed arguments, and labeled edges specify relations between the nodes. We describe this new resource in detail and highlight some specific challenges of annotating scientific text with shallow semantic structure. We make the corpus available to the community to promote further research and development of scientific information extraction systems.
    Comment: Accepted as a long paper at the Linguistic Annotation Workshop (LAW) at ACL 2019
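    As an illustration of the shallow semantic structure described above, here is a minimal, hypothetical sketch of such a labeled graph in Python. The node and relation labels (`Operation`, `Material`, `Condition-of`, ...) and the example sentence are illustrative assumptions, not the corpus's actual tag set.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        span: str   # text span in the sentence
        label: str  # e.g. "Operation", "Material", "Condition"

    @dataclass
    class Edge:
        head: Node
        tail: Node
        relation: str  # e.g. "Participant", "Condition-of"

    @dataclass
    class SentenceGraph:
        sentence: str
        nodes: list = field(default_factory=list)
        edges: list = field(default_factory=list)

    # A synthesis sentence annotated as an operation node with typed arguments.
    calcine = Node("calcined", "Operation")
    powder = Node("powder", "Material")
    temp = Node("900 C", "Condition")
    g = SentenceGraph(
        "The powder was calcined at 900 C for 4 h.",
        nodes=[calcine, powder, temp],
        edges=[Edge(calcine, powder, "Participant"),
               Edge(calcine, temp, "Condition-of")],
    )
    ```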

    Synthesis of Attributed Feature Models From Product Descriptions: Foundations

    Feature modeling is a widely used formalism to characterize a set of products (also called configurations). As manual elaboration is a long and arduous task, numerous techniques have been proposed to reverse engineer feature models from various kinds of artefacts. But none of them synthesize feature attributes (or constraints over attributes), despite the practical relevance of attributes for documenting the different values across a range of products. In this report, we develop an algorithm for synthesizing attributed feature models given a set of product descriptions. We present sound, complete, and parametrizable techniques for computing all possible hierarchies, feature groups, placements of feature attributes, domain values, and constraints. We perform a complexity analysis w.r.t. the number of features, attributes, configurations, and domain size. We also evaluate the scalability of our synthesis procedure using randomized configuration matrices. This report is a first step that aims to describe the foundations for synthesizing attributed feature models.
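    A small, hypothetical sketch of the starting point for such a synthesis: a configuration matrix of product descriptions, from which simple facts (candidate mandatory features, attribute domains) can be read off. The feature and attribute names are invented for illustration and are not from the report; the full algorithm of course computes much more (hierarchies, groups, constraints).

    ```python
    # Each row is one product description: boolean features plus an attribute.
    products = [
        {"Wifi": True,  "Camera": True,  "Storage": 16},
        {"Wifi": True,  "Camera": False, "Storage": 32},
        {"Wifi": True,  "Camera": True,  "Storage": 64},
    ]
    features = ["Wifi", "Camera"]

    # A feature present in every product is a candidate mandatory feature.
    mandatory = [f for f in features if all(p[f] for p in products)]

    # The domain of an attribute is the set of values observed across products.
    storage_domain = sorted({p["Storage"] for p in products})
    ```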

    Color segmentation and neural networks for automatic graphic relief of the state of conservation of artworks

    This paper proposes a semi-automated methodology based on a sequence of analysis processes performed on multispectral images of artworks, aimed at extracting vector maps of their state of conservation. The graphic relief of the artwork is the main instrument for communicating and synthesizing the information and data acquired on cultural heritage during restoration. Despite the widespread use of informatics tools, these operations currently remain highly subjective and entail long execution times and high costs. In some cases, manual execution is particularly complicated and almost impossible to carry out. The methodology proposed here allows supervised, partial automation of these procedures, avoids approximations, and drastically reduces working time, as it produces a vector drawing by extracting the areas directly from the raster images. We propose a procedure for color segmentation based on principal/independent component analysis (PCA/ICA) and SOM neural networks and, as a case study, present the results obtained on a set of multispectral reproductions of a painting on canvas.
    Authors: Annamaria Amura, Anna Tonazzini, Emanuele Salerno, Stefano Pagnotta, Vincenzo Palleschi
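    The first stage of such a pipeline can be sketched as follows. This is a minimal illustration on random data, with PCA computed via SVD and a plain k-means step standing in for the SOM clustering the paper actually uses; all sizes and the number of classes are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    h, w, bands = 8, 8, 5
    image = rng.random((h, w, bands))      # stand-in multispectral stack
    X = image.reshape(-1, bands)           # one spectral vector per pixel

    # PCA: center the data, then project onto the leading right singular vectors.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:2].T                 # keep 2 principal components

    # Minimal k-means on the PCA scores (in place of the SOM).
    k = 3
    centers = scores[rng.choice(len(scores), k, replace=False)]
    for _ in range(20):
        labels = np.argmin(((scores[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([
            scores[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])

    segmentation = labels.reshape(h, w)    # one class label per pixel
    ```

    The per-pixel class map is the raster precursor of the vector areas the methodology extracts.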

    Designing algorithms to aid discovery by chemical robots

    Recently, automated robotic systems have become very efficient, thanks to improved coupling between sensor systems and algorithms, the latter having gained significance with the growth in computing power over the past few decades. However, intelligent automated chemistry platforms for discovery-oriented tasks must be able to cope with the unknown, which is a profoundly hard problem. In this Outlook, we describe how recent advances in the design and application of algorithms, together with the increased amount of chemical data available and advances in automation and control systems, may enable more productive chemical research and the development of chemical robots able to target discovery. This is shown through examples of workflow and data processing with automation and control, and through the use of both established and cutting-edge algorithms, illustrated with recent studies in chemistry. Finally, several algorithms are presented in relation to chemical robots and chemical intelligence for knowledge discovery.

    Formal Verification of Security Protocol Implementations: A Survey

    Automated formal verification of security protocols has mostly focused on analyzing high-level abstract models which, however, differ significantly from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code implementing the protocol logic rather than the libraries implementing the cryptography. In these approaches, the libraries are assumed to correctly implement the corresponding abstract models; the aim is to derive formal proofs that, under this assumption, give assurance about the application code implementing the protocol logic. The two main approaches of model extraction and code generation are presented, along with the main techniques adopted for each approach.

    The Indo-U.S. Library of Coude Feed Stellar Spectra

    We have obtained spectra for 1273 stars using the 0.9 m Coudé Feed telescope at Kitt Peak National Observatory. This telescope feeds the coudé spectrograph of the 2.1 m telescope. The spectra were obtained with the #5 camera of the coudé spectrograph and a Loral 3K × 1K CCD. Two gratings were used to provide spectral coverage from 3460 Å to 9464 Å, at a resolution of ~1 Å FWHM and at an original dispersion of 0.44 Å/pixel. For 885 stars we have complete spectra over the entire 3460 Å to 9464 Å wavelength region (neglecting small gaps of < 50 Å), and partial spectral coverage for the remaining stars. The 1273 stars were selected to provide broad coverage of the atmospheric parameters T_eff, log g, and [Fe/H], as well as spectral type. The goal of the project is to provide a comprehensive library of stellar spectra for use in the automated classification of stellar and galaxy spectra and in galaxy population synthesis. In this paper we discuss the characteristics of the spectral library, viz., details of the observations, data reduction procedures, and selection of stars. We also present a few illustrations of the quality and information available in the spectra. The first version of the complete spectral library is now publicly available from the National Optical Astronomy Observatory (NOAO) via FTP and HTTP.
    Comment: 18 pages, 6 figures, 4 tables
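    As a small worked example of the numbers quoted above, the linear wavelength grid implied by a 3460 to 9464 Angstrom range sampled at 0.44 Angstrom/pixel can be computed as below. This is a sketch that assumes a strictly linear dispersion, which is only an approximation to the real instrument.

    ```python
    start, stop, dispersion = 3460.0, 9464.0, 0.44  # Angstrom, Angstrom/pixel

    # Number of pixels needed to span the range at this dispersion.
    n_pixels = int(round((stop - start) / dispersion)) + 1

    # Wavelength assigned to each pixel under the linear-dispersion assumption.
    wavelengths = [start + i * dispersion for i in range(n_pixels)]
    ```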