The Materials Science Procedural Text Corpus: Annotating Materials Synthesis Procedures with Shallow Semantic Structures
Materials science literature contains millions of materials synthesis
procedures described in unstructured natural language text. Large-scale
analysis of these synthesis procedures would facilitate deeper scientific
understanding of materials synthesis and enable automated synthesis planning.
Such analysis requires extracting structured representations of synthesis
procedures from the raw text as a first step. To facilitate the training and
evaluation of synthesis extraction models, we introduce a dataset of 230
synthesis procedures annotated by domain experts with labeled graphs that
express the semantics of the synthesis sentences. The nodes in these graphs are
synthesis operations and their typed arguments, and labeled edges specify
relations between the nodes. We describe this new resource in detail and
highlight some specific challenges to annotating scientific text with shallow
semantic structure. We make the corpus available to the community to promote
further research and development of scientific information extraction systems.
Comment: Accepted as a long paper at the Linguistic Annotation Workshop (LAW) at ACL 201
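The labeled-graph annotation scheme described above can be sketched with a minimal data structure. This is an illustrative representation only, assuming node types like "Operation", "Material", and "Condition" and edge labels like "Recipe_target"; it is not the corpus's actual annotation schema.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str        # surface span from the synthesis sentence
    node_type: str   # e.g. "Operation", "Material", "Condition" (illustrative)

@dataclass
class SynthesisGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (head_idx, tail_idx, relation)

    def add_node(self, text, node_type):
        self.nodes.append(Node(text, node_type))
        return len(self.nodes) - 1             # index used as a node handle

    def add_edge(self, head, tail, relation):
        self.edges.append((head, tail, relation))

# Annotating the sentence "The mixture was calcined at 800 C.":
g = SynthesisGraph()
op = g.add_node("calcined", "Operation")
mat = g.add_node("mixture", "Material")
temp = g.add_node("800 C", "Condition")
g.add_edge(op, mat, "Recipe_target")   # the operation acts on the mixture
g.add_edge(op, temp, "Condition_of")   # the operation happens at 800 C
```

An extraction model trained on such data would predict the node spans, their types, and the labeled edges jointly from the raw sentence.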
Synthesis of Attributed Feature Models From Product Descriptions: Foundations
Feature modeling is a widely used formalism to characterize a set of products
(also called configurations). As a manual elaboration is a long and arduous
task, numerous techniques have been proposed to reverse engineer feature models
from various kinds of artefacts. But none of them synthesize feature attributes
(or constraints over attributes) despite the practical relevance of attributes
for documenting the different values across a range of products. In this
report, we develop an algorithm for synthesizing attributed feature models
given a set of product descriptions. We present sound, complete, and
parametrizable techniques for computing all possible hierarchies, feature
groups, placements of feature attributes, domain values, and constraints. We
perform a complexity analysis w.r.t. number of features, attributes,
configurations, and domain size. We also evaluate the scalability of our
synthesis procedure using randomized configuration matrices. This report is a
first step that aims to describe the foundations for synthesizing attributed
feature models.
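One ingredient of such a synthesis procedure can be sketched concretely: from a Boolean configuration matrix, compute the feature-implication relation, since any legal feature hierarchy must respect it (a child feature implies its parent). This is a toy simplification under assumed set-based encodings, not the report's algorithm.

```python
def implications(configs, features):
    """Return pairs (f, g), f != g, such that every config containing f
    also contains g. (Features absent from all configs would be vacuously
    implied by everything; all features below occur at least once.)"""
    result = []
    for f in features:
        for g in features:
            if f != g and all(g in c for c in configs if f in c):
                result.append((f, g))
    return result

# Each configuration (product) is the set of features it selects:
configs = [
    {"car", "engine", "gasoline"},
    {"car", "engine", "electric"},
    {"car", "engine", "electric", "autopilot"},
]
features = {"car", "engine", "gasoline", "electric", "autopilot"}
imp = implications(configs, features)
# Every config with "engine" also has "car", so ("engine", "car") is derived;
# "gasoline" never co-occurs with "electric", so that pair is absent.
```

Computing all candidate hierarchies, feature groups, and attribute placements on top of this relation is where the combinatorial cost analyzed in the report arises.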
Object-oriented views: a novel approach for tool integration in design environments (dissertation)
Object-oriented databases have been proposed to serve as the data management component of integrated design environments. One central database represents a bottleneck, however, requiring all design tools to work on the same information model and preventing the extensibility of the system over time. In this dissertation, I propose a view-based object server that successfully addresses these problems by supporting design views tailored to the needs of individual design tools.

A view on an object-oriented schema corresponds to a virtual subschema graph with restructured generalization and property decomposition hierarchies. I present a methodology for supporting multiple view schemata, called MultiView. MultiView is anchored on the following four ideas: (1) the customization of individual classes using object algebra, (2) the integration of these derived classes into one global schema graph, (3) the extraction of virtual and base classes from the global schema as required by the view, and (4) the generation of a class hierarchy for these selected view classes. MultiView's division of view specification into these well-defined tasks, some of which have been successfully automated, makes it a powerful tool for supporting the specification of views by non-database experts while enforcing view consistency.

In this dissertation, I describe solutions for all four tasks underlying MultiView. For the first task, I have formulated class derivation operators modeled after the well-known relational algebra operators. For the second task, I have developed a classification algorithm that automatically integrates derived classes into one global schema. For the third task, I have designed a view definition language that can be used to declaratively specify the view classes required for a particular view. For the last task, I have developed an algorithm that generates a complete, minimal, and consistent view schema.

I present proofs of correctness, complexity analyses, and numerous illustrative examples for all algorithms. MultiView is applied to address the tool integration problem in a behavioral synthesis system. For this purpose, I first develop a unified design object model for behavioral synthesis. I then formulate customized design views of this model tailored to the needs of particular design tools. The resulting system allows the design tools to work on their views of the information model, while MultiView assures the consistent integration of the diverse design data into one object model.
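The first of the four tasks, deriving a customized class with an object-algebra operator, can be sketched with a projection-style `hide` operator. The operator and class names here are illustrative stand-ins modeled on relational projection, not the dissertation's actual operator set.

```python
class SchemaClass:
    """A class in the schema: a property set plus its object instances."""
    def __init__(self, name, properties, instances=None):
        self.name = name
        self.properties = set(properties)
        self.instances = instances or []

def hide(cls, hidden, new_name):
    """Derive a virtual class that exposes only a subset of properties,
    in the spirit of a relational projection over objects."""
    kept = cls.properties - set(hidden)
    derived = SchemaClass(new_name, kept)
    # Instances of the derived class show only the kept properties:
    derived.instances = [{p: obj[p] for p in kept} for obj in cls.instances]
    return derived

# A unified design object, and a view for a tool that needs only timing data:
design = SchemaClass(
    "DesignObject", {"id", "netlist", "layout", "timing"},
    [{"id": 1, "netlist": "n1", "layout": "l1", "timing": 4.2}],
)
timing_view = hide(design, {"netlist", "layout"}, "TimingView")
```

In the full methodology, such derived classes would then be classified into the global schema graph (task 2) before view extraction and hierarchy generation (tasks 3 and 4).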
Color segmentation and neural networks for automatic graphic relief of the state of conservation of artworks
This paper proposes a semi-automated methodology based on a sequence of analysis processes performed on multispectral images of artworks, aimed at the extraction of vector maps regarding their state of conservation. The graphic relief of the artwork represents the main instrument of communication and synthesis of the information and data acquired on cultural heritage during restoration. Despite the widespread use of informatics tools, these operations currently remain extremely subjective and require high execution times and costs. In some cases, manual execution is particularly complicated and almost impossible to carry out. The methodology proposed here allows supervised, partial automation of these procedures, avoids approximations, and drastically reduces the working times, as it produces a vector drawing by extracting the areas directly from the raster images. We propose a procedure for color segmentation based on principal/independent component analysis (PCA/ICA) and SOM neural networks and, as a case study, present the results obtained on a set of multispectral reproductions of a painting on canvas.
Annamaria Amura, Anna Tonazzini, Emanuele Salerno, Stefano Pagnotta, Vincenzo Palleschi
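The PCA stage of such a color-segmentation pipeline can be sketched as follows: project the multispectral pixels onto their principal components and threshold the first one. This is a rough illustration under synthetic data; the paper couples PCA/ICA with an SOM classifier, which is omitted here.

```python
import numpy as np

def pca_components(pixels, n_components=1):
    """pixels: (N, B) array of N pixels with B spectral bands.
    Returns the projection onto the top principal components."""
    centered = pixels - pixels.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]          # strongest variance first
    return centered @ eigvecs[:, order[:n_components]]

# Synthetic 4-band image: two well-separated pixel populations,
# standing in for "intact surface" vs. "area of degradation".
rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(0.2, 0.02, (100, 4)),
                    rng.normal(0.8, 0.02, (100, 4))])
pc1 = pca_components(pixels)[:, 0]
labels = pc1 > pc1.mean()   # crude two-class segmentation on PC1
```

The resulting binary mask is the kind of raster region that the methodology would then trace into a vector map.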
Designing algorithms to aid discovery by chemical robots
Recently, automated robotic systems have become very efficient, thanks to improved coupling between sensor systems and algorithms, of which the latter have been gaining significance thanks to the increase in computing power over the past few decades. However, intelligent automated chemistry platforms for discovery-oriented tasks need to be able to cope with the unknown, which is a profoundly hard problem. In this Outlook, we describe how recent advances in the design and application of algorithms, coupled with the increased amount of chemical data available and with automation and control systems, may allow more productive chemical research and the development of chemical robots able to target discovery. This is shown through examples of workflow and data processing with automation and control, and through the use of both well-established and cutting-edge algorithms, illustrated using recent studies in chemistry. Finally, several algorithms are presented in relation to chemical robots and chemical intelligence for knowledge discovery.
Formal Verification of Security Protocol Implementations: A Survey
Automated formal verification of security protocols has been mostly focused on analyzing high-level abstract models which, however, are significantly different from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements protocol logic, rather than the libraries that implement cryptography. According to these approaches, libraries are assumed to correctly implement some models. The aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches of model extraction and code generation are presented, along with the main techniques adopted for each approach.
The Indo-U.S. Library of Coude Feed Stellar Spectra
We have obtained spectra for 1273 stars using the 0.9m Coudé Feed telescope
at Kitt Peak National Observatory. This telescope feeds the coudé
spectrograph of the 2.1m telescope. The spectra have been obtained with the #5
camera of the coudé spectrograph and a Loral 3K x 1K CCD. Two gratings have
been used to provide spectral coverage from 3460 Å to 9464 Å, at a
resolution of 1 Å FWHM and at an original dispersion of 0.44 Å/pixel.
For 885 stars we have complete spectra over the entire 3460 Å to 9464 Å
wavelength region (neglecting small gaps of 50 Å), and partial spectral
coverage for the remaining stars. The 1273 stars have been selected to provide
broad coverage of the atmospheric parameters T, log g, and [Fe/H], as
well as spectral type. The goal of the project is to provide a comprehensive
library of stellar spectra for use in the automated classification of stellar
and galaxy spectra and in galaxy population synthesis. In this paper we discuss
the characteristics of the spectral library, viz., details of the observations,
data reduction procedures, and selection of stars. We also present a few
illustrations of the quality and information available in the spectra. The
first version of the complete spectral library is now publicly available from
the National Optical Astronomy Observatory (NOAO) via FTP and HTTP.
Comment: 18 pages, 6 figures, 4 tables
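A quick back-of-the-envelope calculation (not from the paper) shows why two gratings and multiple settings are needed: the full wavelength range at the quoted dispersion spans far more pixels than a single CCD exposure provides. The 3072-pixel length of the "3K" CCD along the dispersion axis is an assumption here.

```python
# All numbers below except ccd_pixels are quoted in the abstract.
span_angstrom = 9464 - 3460     # total coverage: 6004 A
dispersion = 0.44               # A per pixel
pixels_needed = span_angstrom / dispersion   # ~13645 pixels of spectrum
ccd_pixels = 3072               # assumed length of the "3K" CCD axis
settings = pixels_needed / ccd_pixels        # >4 grating settings' worth
```

So the complete 3460-9464 Å spectrum of each star must be stitched together from several exposures, consistent with the small 50 Å gaps the abstract mentions.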