Language, logic and ontology: uncovering the structure of commonsense knowledge
The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best-known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering that structure. Besides suggesting a systematic method for discovering the structure of commonsense knowledge, the method we propose also seems to explain a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is ambitious: nothing less than the systematic 'discovery' of a well-typed ontology of commonsense knowledge, and the subsequent formulation of the long-awaited goal of a meaning algebra.
Natural language and the genetic code: from the semiotic analogy to biolinguistics
With the discovery of the structure of DNA (Watson and Crick, 1953), the idea of DNA as a linguistic code arose (Monod, 1970). Many researchers have treated DNA as a language, pointing out the semiotic parallelism between the genetic code and natural language. This idea has been discussed, nearly dismissed, and in some respects accepted. This paper does not claim that the genetic code is a linguistic structure, but it highlights several important semiotic analogies between DNA and verbal language. The genetic code and natural language share a number of units, structures and operations. The syntactic and semantic parallelisms between these codes should encourage a methodological exchange between biology, linguistics and semiotics. During the 20th century, biology became a pilot science, and many disciplines have formulated their theories on models taken from biology. Computer science has become an almost bio-inspired field thanks to the great development of natural computing and DNA computing. Biology and semiotics are two different sciences challenged by the common goal of deciphering the codes of nature. Linguistics could become another "bio-inspired" science by taking advantage of the structural and "semantic" similarities between the genetic code and natural language. Bio-inspired methods from computer science can be very useful in the field of linguistics, since they provide flexible and intuitive tools for describing natural languages. In this way, we obtain a theoretical framework in which biology, linguistics and computer science exchange methods and interact, thanks to the semiotic parallelism between the genetic code and natural language. The influence of the semiotics of the genetic code on linguistics parallels the need for an implementable formal description of natural language.
In this paper we present an overview of different bio-inspired methods from theoretical computer science that in recent years have been successfully applied to several linguistic issues, from syntax to pragmatics.
Leveraging Language Representation for Material Recommendation, Ranking, and Exploration
Data-driven approaches for material discovery and design have been
accelerated by emerging efforts in machine learning. While there is enormous
progress towards learning the structure-property relationships of materials,
methods that allow for general representations of crystals to effectively
explore the vast material search space and identify high-performance candidates
remain limited. In this work, we introduce a material discovery framework that
uses natural language embeddings derived from material science-specific
language models as representations of compositional and structural features.
The discovery framework consists of a joint scheme that, given a query
material, first recalls candidates based on representational similarity, and
ranks the candidates based on target properties through multi-task learning.
The contextual knowledge encoded in language representations is found to convey
information about material properties and structures, enabling both similarity
analysis for recall, and multi-task learning to share information for related
properties. By applying the discovery framework to thermoelectric materials, we
demonstrate diversified recommendations of prototype structures and identify
under-studied high-performance material spaces, including halide perovskite,
delafossite-like, and spinel-like structures. By leveraging material language
representations, our framework provides a generalized means for effective
material recommendation, which is task-agnostic and can be applied to various
material systems.
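The recall step described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function name, the toy embeddings, and the use of plain cosine similarity are assumptions for illustration; the paper's framework additionally ranks recalled candidates with a multi-task property model.

```python
import numpy as np

def recall_candidates(query_vec, embeddings, names, k=3):
    """Recall the k materials whose (hypothetical) language-model
    embeddings are most similar to the query embedding, by cosine
    similarity. Ranking by predicted properties would follow this step."""
    # L2-normalise rows so the dot product equals cosine similarity.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    sims = emb @ q
    top = np.argsort(-sims)[:k]          # indices of the k most similar rows
    return [(names[i], float(sims[i])) for i in top]
```

A usage example: given a query material's embedding, the function returns candidate names sorted by similarity, which a downstream property-ranking model would then reorder.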
Meemi: A Simple Method for Post-processing and Integrating Cross-lingual Word Embeddings
Word embeddings have become a standard resource in the toolset of any Natural
Language Processing practitioner. While monolingual word embeddings encode
information about words in the context of a particular language, cross-lingual
embeddings define a multilingual space where word embeddings from two or more
languages are integrated together. Current state-of-the-art approaches learn
these embeddings by aligning two disjoint monolingual vector spaces through an
orthogonal transformation which preserves the structure of the monolingual
counterparts. In this work, we propose to apply an additional transformation
after this initial alignment step, which aims to bring the vector
representations of a given word and its translations closer to their average.
Since this additional transformation is non-orthogonal, it also affects the
structure of the monolingual spaces. We show that our approach improves
both the integration of the monolingual spaces and the quality of the
monolingual spaces themselves. Furthermore, because our transformation can be
applied to an arbitrary number of languages, we are able to effectively obtain
a truly multilingual space. The resulting (monolingual and multilingual) spaces
show consistent gains over the current state-of-the-art in standard intrinsic
tasks, namely dictionary induction and word similarity, as well as in extrinsic
tasks such as cross-lingual hypernym discovery and cross-lingual natural
language inference.
Comment: 22 pages, 2 figures, 9 tables. Preprint submitted to Natural Language Engineering.
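The averaging idea in the abstract can be sketched in a few lines of numpy. This is a sketch under assumptions, not the released Meemi code: it assumes the two embedding matrices are already row-aligned by a bilingual dictionary and orthogonally mapped into a shared space, and fits one unconstrained (hence non-orthogonal) linear map per language by least squares toward the pairwise averages.

```python
import numpy as np

def meemi_maps(X, Y):
    """Given row-aligned, already orthogonally aligned embedding matrices
    X (source) and Y (target) for dictionary word pairs, learn one
    unconstrained linear map per language that pulls each word's vector
    toward the average of itself and its translation."""
    A = (X + Y) / 2.0                            # averages of translation pairs
    Wx, *_ = np.linalg.lstsq(X, A, rcond=None)   # non-orthogonal source map
    Wy, *_ = np.linalg.lstsq(Y, A, rcond=None)   # non-orthogonal target map
    return Wx, Wy
```

Because the fitted maps are not constrained to be orthogonal, applying them changes the internal structure of each monolingual space, which is exactly the effect the abstract describes.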
Unsupervised word embeddings capture latent knowledge from materials science literature.
The overwhelming majority of scientific knowledge is published as text, which is difficult to analyse by either traditional statistical analysis or modern machine learning methods. By contrast, the main source of machine-interpretable data for the materials research community has come from structured property databases [1,2], which encompass only a small fraction of the knowledge present in the research literature. Beyond property values, publications contain valuable knowledge regarding the connections and relationships between data items as interpreted by the authors. To improve the identification and use of this knowledge, several studies have focused on the retrieval of information from scientific literature using supervised natural language processing [3-10], which requires large hand-labelled datasets for training. Here we show that materials science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings [11-13] (vector representations of words) without human labelling or supervision. Without any explicit insertion of chemical knowledge, these embeddings capture complex materials science concepts such as the underlying structure of the periodic table and structure-property relationships in materials. Furthermore, we demonstrate that an unsupervised method can recommend materials for functional applications several years before their discovery. This suggests that latent knowledge regarding future discoveries is to a large extent embedded in past publications. Our findings highlight the possibility of extracting knowledge and relationships from the massive body of scientific literature in a collective manner, and point towards a generalized approach to the mining of scientific literature.
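The kind of relational knowledge the abstract describes is typically probed with vector analogies (e.g. Zr : ZrO2 :: Ti : ?). A minimal sketch of that probe is below; the embedding table is a hand-made toy standing in for vectors learned from the literature, not real trained embeddings, and the function name is illustrative.

```python
import numpy as np

# Toy embedding table standing in for literature-trained word vectors;
# the values are illustrative only, chosen so that the oxide relation
# (second coordinate) is shared across elements.
vecs = {
    "Zr":   np.array([1.0, 0.0]),
    "ZrO2": np.array([1.0, 1.0]),
    "Ti":   np.array([0.9, 0.1]),
    "TiO2": np.array([0.9, 1.1]),
    "NiFe": np.array([-1.0, 0.0]),
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by vector arithmetic:
    find the nearest neighbour of (b - a + c) by cosine similarity,
    excluding the three query words themselves."""
    target = vecs[b] - vecs[a] + vecs[c]
    best, best_sim = None, -2.0
    for word, v in vecs.items():
        if word in (a, b, c):
            continue
        sim = v @ target / (np.linalg.norm(v) * np.linalg.norm(target) + 1e-12)
        if sim > best_sim:
            best, best_sim = word, sim
    return best
```

On real embeddings trained without supervision, the same arithmetic is what recovers element-oxide and structure-property regularities of the sort the paper reports.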