
    Fuzzy Natural Logic in IFSA-EUSFLAT 2021

    The present book contains five papers accepted and published in the Special Issue, “Fuzzy Natural Logic in IFSA-EUSFLAT 2021”, of the journal Mathematics (MDPI). These papers are extended versions of contributions presented at “The 19th World Congress of the International Fuzzy Systems Association and the 12th Conference of the European Society for Fuzzy Logic and Technology, jointly with the AGOP, IJCRS, and FQAS conferences”, which took place in Bratislava (Slovakia) from September 19 to September 24, 2021. Fuzzy Natural Logic (FNL) is a system of mathematical fuzzy logic theories that enables us to model natural language terms and rules while accounting for their inherent vagueness, and to reason and argue using the tools developed within those theories. FNL includes, among others, the theory of evaluative linguistic expressions (e.g., small, very large), the theory of fuzzy and intermediate quantifiers (e.g., most, few, many), and the theory of fuzzy/linguistic IF–THEN rules and logical inference. The papers in this Special Issue use the various aspects and concepts of FNL mentioned above and apply them to a wide range of problems, both theoretically and practically oriented. This book will be of interest to researchers working in the areas of fuzzy logic, applied linguistics, generalized quantifiers, and their applications.
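    The core FNL ingredients named above, evaluative linguistic expressions and fuzzy IF–THEN rules, can be made concrete with a small sketch. The following Python fragment is purely illustrative and is not taken from the book: the trapezoidal membership shapes and the Mamdani-style rule firing are common textbook choices, and all parameter values are assumptions.

```python
# A minimal sketch (not from the book) of how an evaluative linguistic
# expression such as "small" can be modelled as a fuzzy set, and how a
# single fuzzy IF-THEN rule fires on a crisp input. The trapezoid
# parameters below are illustrative assumptions.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising on [a, b], plateau [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Evaluative expressions over a normalized scale [0, 1].
def small(x):      return trapezoid(x, -0.01, 0.0, 0.2, 0.4)
def very_large(x): return trapezoid(x, 0.7, 0.9, 1.0, 1.01)

# Fuzzy rule: IF load is small THEN speed is very large.
# Mamdani-style firing: the antecedent degree caps the consequent.
load = 0.25
degree = small(load)  # antecedent truth degree
speed_profile = lambda y: min(degree, very_large(y))

print(f"small(load={load}) = {degree:.2f}")          # 0.75
print(f"clipped consequent at y=0.95: {speed_profile(0.95):.2f}")
```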

    Application of Analogical Reasoning for Use in Visual Knowledge Extraction

    There is a continual push to make Artificial Intelligence (AI) as human-like as possible; however, this is a difficult task because of AI's inability to learn beyond its current comprehension. Analogical reasoning (AR) has been proposed as one method to achieve this goal. The current literature lacks a technical comparison of psychologically inspired and natural-language-processing-produced AR algorithms with consistent metrics on multiple-choice word-based analogy problems. Assessment is based on “correctness” and “goodness” metrics, and the results show that there is no one-size-fits-all algorithm for all textual problems. As a contribution to visual AR, a convolutional neural network (CNN) is integrated with the AR vector space model Global Vectors (GloVe) in the proposed Image Recognition Through Analogical Reasoning Algorithm (IRTARA). Given images outside of the CNN's training data, IRTARA produces contextual information by leveraging semantic information from GloVe. IRTARA's quality of results is measured by definition-based, AR-based, and human-factors evaluation methods, which showed consistency at the extreme ends. The research shows the potential for AR to facilitate a more human-like AI through its ability to understand concepts beyond its foundational knowledge in both a textual and a visual problem space.
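    The pipeline the abstract describes, a CNN prediction enriched with GloVe semantics, can be sketched roughly as follows. This is a hedged illustration, not IRTARA's actual code: the `contextualize` helper and the choice of the `glove-wiki-gigaword-50` vectors are assumptions made here, and the CNN step is stubbed out.

```python
# A minimal sketch (assumptions, not the IRTARA implementation) of the
# idea in the abstract: take a label predicted by a CNN for an unfamiliar
# image and enrich it with contextual information from the GloVe vector
# space. Any image classifier could supply the label.

import gensim.downloader as api

# Pre-trained 50-dimensional GloVe vectors (downloads on first use).
glove = api.load("glove-wiki-gigaword-50")

def contextualize(cnn_label: str, topn: int = 5):
    """Return GloVe's nearest neighbours of a CNN-predicted label,
    i.e. semantic context for an image outside the training data."""
    return glove.most_similar(cnn_label, topn=topn)

# Suppose the CNN (not shown) predicted "wolf" for an image of a husky
# that was absent from its training classes.
for word, similarity in contextualize("wolf"):
    print(f"{word}: {similarity:.2f}")
```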

    Algebraic dependency grammar

    We propose a mathematical formalism called Algebraic Dependency Grammar, with applications to formal linguistics and to formal language theory. In formal linguistics we aim to address the problem of grammaticality, with special attention to cross-linguistic cases. In formal language theory the formalism provides a new perspective that allows an algebraic classification of languages; notably, our approach suggests the existence of so-called anti-classes of languages associated with certain classes of languages. Our notion of a dependency grammar consists of a definition of a set of well-constructed dependency trees (which we call algebraic governance) and a relation which associates word orders to dependency trees (which we call algebraic linearization). For algebraic governance, we define a manifold: a set of dependency trees satisfying an agreement condition throughout a pattern, where a pattern is the algebraic form of a collection of syntactic addresses over the dependency tree and a boolean condition on the words formalizes the notion of agreement. For algebraic linearization, we first observe that the essence of projectivity is that certain substructures of a dependency tree always form an interval in its linearization. We must therefore establish what a substructure is; once again patterns provide the key, generalizing the notion of projectivity with recursive linearization procedures. Combining the two modules yields the formalism: an algebraic dependency grammar is a manifold together with a linearization. Since patterns underlie both manifolds and linearizations, we study their interrelation in terms of a new algebraic classification of classes of languages. The main contributions of the thesis are as follows. Regarding mathematical linguistics, algebraic dependency grammar treats trees and word order as different modules of the architecture, which allows the description of languages with varied word order. Ellipses are permitted, an issue usually avoided because it makes some formalisms undecidable. We differentiate linguistic phenomena structurally by their algebraic description, and the formalism reveals affinities between linguistic constructions which seem superficially different. Regarding formal language theory, we present a new system for understanding a very large family of languages, which permits observing languages in broader contexts. We identify a new class, named anti-context-free languages, containing constructions structurally symmetric to context-free languages. Informally, context-free languages are well-parenthesized, while anti-context-free languages are cross-serially parenthesized; for example, copy languages and “respectively” languages are anti-context-free.
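    The interval characterization of projectivity quoted in the abstract can be stated operationally. The following Python sketch illustrates that classical property only, not the thesis's algebraic machinery; the `heads` encoding of a dependency tree is an assumption made here for brevity.

```python
# A minimal sketch (not the thesis's formalism) of the classical interval
# characterization of projectivity: a dependency tree is projective iff
# the yield of every subtree is a contiguous interval of word positions.

def subtree_yield(heads, root):
    """Positions of all words dominated by `root`, inclusive.
    heads[i] is the head position of word i, or -1 for the root."""
    positions = {root}
    for child in (i for i, h in enumerate(heads) if h == root):
        positions |= subtree_yield(heads, child)
    return positions

def is_projective(heads):
    return all(
        max(y := subtree_yield(heads, i)) - min(y) + 1 == len(y)
        for i in range(len(heads))
    )

print(is_projective([1, 2, -1, 2]))  # True: nested arcs, yields contiguous
print(is_projective([3, -1, 1, 1]))  # False: word 3 dominates {0, 3}, skipping 1-2
```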

    Linear superposition as a core theorem of quantum empiricism

    Clarifying the nature of the quantum state $|\Psi\rangle$ is at the root of the problems with insight into (counterintuitive) quantum postulates. We provide a direct, mathematical-axiom-free, empirical derivation of this object as an element of a vector space. Establishing the linearity of this structure, i.e. quantum superposition, is based on a set-theoretic creation of ensemble formations and invokes the following three principia: (I) quantum statics, (II) the doctrine of a number in the physical theory, and (III) the mathematization of matching two observations with each other (quantum invariance). All of the constructs rest upon a formalization of the minimal experimental entity: the observed micro-event, a detector click. This is sufficient for producing the $\mathbb{C}$-numbers, the axioms of a linear vector space (superposition principle), statistical mixtures of states, eigenstates and their spectra, and the non-commutativity of observables. No use is required of the concept of time. As a result, the foundations of the theory are liberated to a significant extent from the issues associated with physical interpretations, philosophical exegeses, and mathematical reconstruction of the entire quantum edifice.
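    For orientation, the structure whose empirical derivation the paper announces, linear superposition over $\mathbb{C}$, reads in its conventional textbook form as follows; this is the standard statement, not necessarily the paper's own notation.

```latex
% Standard statement of linear superposition: any state is a C-linear
% combination of basis states, and the state space is closed under
% superposition.
\[
  |\Psi\rangle = \sum_{k} c_k\, |e_k\rangle, \qquad c_k \in \mathbb{C},
\]
\[
  |\Psi_1\rangle,\ |\Psi_2\rangle \in \mathcal{H}
  \;\Longrightarrow\;
  \alpha |\Psi_1\rangle + \beta |\Psi_2\rangle \in \mathcal{H},
  \qquad \alpha, \beta \in \mathbb{C}.
\]
```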

    Proceedings of the Conference on Natural Language Processing 2010

    This book contains state-of-the-art contributions to the 10th conference on Natural Language Processing, KONVENS 2010 (Konferenz zur Verarbeitung natürlicher Sprache), with a focus on semantic processing. The KONVENS in general aims at offering a broad perspective on current research and developments within the interdisciplinary field of natural language processing. The central theme draws specific attention towards addressing linguistic aspects of meaning, covering deep as well as shallow approaches to semantic processing. The contributions address both knowledge-based and data-driven methods for modelling and acquiring semantic information, and discuss the role of semantic information in applications of language technology. The articles demonstrate the importance of semantic processing, and present novel and creative approaches to natural language processing in general. Some contributions focus on developing and improving NLP systems for tasks like Named Entity Recognition or Word Sense Disambiguation, on semantic knowledge acquisition and exploitation with respect to collaboratively built resources, or on harvesting semantic information in virtual games. Others are set within the context of real-world applications, such as authoring aids, text summarisation, and information retrieval. The collection highlights the importance of semantic processing for different areas and applications in Natural Language Processing, and provides the reader with an overview of current research in this field.

    Finding structure in language

    Since the Chomskian revolution, it has become apparent that natural language is richly structured, being naturally represented hierarchically, and requiring complex context-sensitive rules to define regularities over these representations. It is widely assumed that the richness of the posited structure has strong nativist implications for mechanisms which might learn natural language, since it seemed unlikely that such structures could be derived directly from the observation of linguistic data (Chomsky 1965). This thesis investigates the hypothesis that simple statistics of a large, noisy, unlabelled corpus of natural language can be exploited to discover some of the structure which exists in natural language automatically. The strategy is to initially assume no knowledge of the structures present in natural language, save that they might be found by analysing statistical regularities which pertain between a word and the words which typically surround it in the corpus. To achieve this, various statistical methods are applied to define similarity between statistical distributions, and to infer a structure for a domain given knowledge of the similarities which pertain within it. Using these tools, it is shown that it is possible to form a hierarchical classification of many domains, including words in natural language. When this is done, it is shown that all the major syntactic categories can be obtained, and the classification is both relatively complete and very much in accord with a standard linguistic conception of how words are classified in natural language. The categorisation derived is then used as the basis of a similar classification of short sequences of words; analysed in the same way, these yield several syntactic categories, including simple noun phrases, various tensed forms of verbs, and simple prepositional phrases. Applying the same technique one level higher, simple sentences and verb phrases, as well as more complicated noun phrases and prepositional phrases, are shown to be derivable.
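    The procedure described above, comparing words by the statistics of their surrounding words and then classifying them hierarchically, can be sketched in a few lines. The following Python fragment is an illustrative reconstruction under stated assumptions, not the thesis's implementation: the toy corpus, the left/right neighbour context features, and the cityblock distance with average-linkage clustering are all choices made here for brevity.

```python
# A minimal sketch (illustrative, not the thesis's exact method) of
# distributional clustering: represent each word by counts of its
# immediate neighbours, then cluster words hierarchically by the
# similarity of those context distributions.

from collections import Counter, defaultdict
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist
import numpy as np

corpus = ("the dog chased the cat the cat saw the dog "
          "a dog bit a cat a man saw a dog the man ran").split()

# Context vectors: counts of words appearing directly before/after.
contexts = defaultdict(Counter)
for i, w in enumerate(corpus):
    if i > 0:
        contexts[w][("L", corpus[i - 1])] += 1
    if i < len(corpus) - 1:
        contexts[w][("R", corpus[i + 1])] += 1

words = sorted(contexts)
features = sorted({f for c in contexts.values() for f in c})
X = np.array([[contexts[w][f] for f in features] for w in words], dtype=float)
X /= X.sum(axis=1, keepdims=True)  # normalize counts to distributions

# Hierarchical clustering on distributional distance; with enough data,
# clusters tend to align with syntactic categories (nouns, verbs, ...).
Z = linkage(pdist(X, metric="cityblock"), method="average")
print(words)
print(Z)  # merge history of the hierarchical classification
```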

    Distributional Semantic Models of Attribute Meaning in Adjectives and Nouns

    Hartung M. Distributional Semantic Models of Attribute Meaning in Adjectives and Nouns. Heidelberg: Universität Heidelberg; 2015