
    Frequency Value Grammar and Information Theory

    I previously laid the groundwork for Frequency Value Grammar (FVG) in papers submitted to the proceedings of the 4th International Conference on Cognitive Science (2003), Sydney, Australia, and the Corpus Linguistics Conference (2003), Lancaster, UK. FVG is a formal syntax based theoretically, in large part, on Information Theory principles. FVG relies on dynamic physical principles external to the corpus which shape and mould the corpus, whereas generative grammar and other formal syntactic theories are based exclusively on patterns (fractals) found within the well-formed portion of the corpus. However, FVG should not be confused with Probability Syntax (PS), as described by Manning (2003). PS is a corpus-based approach that yields the probability distribution of possible syntactic constructions over a fixed corpus. PS makes no distinction between well-formed and ill-formed sentence constructions and assumes everything found in the corpus is well formed. In contrast, FVG’s primary objective is to distinguish between well-formed and ill-formed sentence constructions and, in so doing, it relies on corpus-based parameters which determine sentence competency. In PS, a syntax of high probability will not necessarily yield a well-formed sentence. In FVG, however, a syntax or sentence construction of high ‘frequency value’ will yield a well-formed sentence at least 95% of the time, satisfying most empirical standards. Moreover, in FVG, a sentence construction of high ‘frequency value’ could very well be represented by an underlying syntactic construction of low probability as determined by PS. The characteristic ‘frequency values’ calculated in FVG are not measures of probability but fundamentally determined values derived from exogenous principles which impact and determine corpus-based parameters serving as an index of sentence competency. The theoretical framework of FVG has broad applications beyond formal syntax and NLP.
In this paper, I demonstrate how FVG can be used as a model for improving the upper-bound calculation of the entropy of written English. Generally speaking, when a function word precedes an open-class word, the backward n-gram analysis will be homomorphic with the information source and will result in frequency values more representative of co-occurrences in the information source.
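Upper bounds on source entropy of this kind are conventionally estimated from n-gram statistics. As a minimal illustration (not FVG's own method, whose frequency values are defined differently), the sketch below computes a bigram cross-entropy, which upper-bounds the per-symbol entropy of the source:

```python
from collections import Counter
from math import log2

def bigram_entropy_upper_bound(text):
    """Per-symbol entropy upper bound from a bigram model:
    H <= -sum over pairs (a, b) of p(a, b) * log2 p(b | a)."""
    pairs = list(zip(text, text[1:]))
    pair_counts = Counter(pairs)
    left_counts = Counter(text[:-1])   # counts of the conditioning symbol
    n = len(pairs)
    h = 0.0
    for (a, b), c in pair_counts.items():
        p_ab = c / n                   # joint probability of the pair
        p_b_given_a = c / left_counts[a]
        h -= p_ab * log2(p_b_given_a)
    return h
```

On real text the bound tightens as the n-gram order grows, at the cost of ever sparser counts; a fully predictable stream (e.g. "ababab…") yields a bound of zero.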

    Scale in literature: with reference to the New Testament and other texts in English and Greek

    This dissertation explores "scale" in literature in general, and in the New Testament epistles in particular. All creative activity has its locus at an appropriate point within a wide scale spectrum: literature is no exception. This became apparent in 1965 when scale relationships were observed by the author in cumulative sum graphs of the Pauline epistles. Such scale differences are familiar to architects who use scale as a creative tool, but a wide search through standard reference books, surveys of work on statistical stylometry, linguistics and Biblical studies failed to provide any evidence that scholars were aware of scale in literature. Further investigation revealed that scale differences were to be found in many fields of creativity, in architecture, art, photography, music and engineering. Also explored was an interesting parallel found in the multi-layered scaling associated with the mathematics of chaos. To provide a broader perspective through which to view the Pauline epistles, 80 works by six modern authors and the writings of three ancient Greek authors were selected as test material. Graphs were prepared showing the sentence sequences and distributions of these works comprising over 400,000 words, and scale differences were found, not only between works, but also between sections of individual works. These were related to differences in genre, and this raised serious questions concerning the statistical homogeneity of samples containing scale differences. Care was taken to relate patterns directly to the content of the text and to the findings of Biblical scholarship. Links with theology revealed that the sense of the numinous presence, and the sense of the sublime in art, were on occasion directly reflected in sentence length.
Human moods and feelings were found to have unpredictable but measurable manifestations in terms of scale in literature. The Pauline epistles revealed a common scaling structure of varying degrees of complexity, and a mathematical model was devised to demonstrate that major parts of all thirteen epistles share similar unusual scaling features. Significant patterns of a different kind were also found covering the texts of Hebrews and substantial portions of 1 and 2 Peter. It is submitted that these patterns provide new hard evidence which must be considered together with the evidence from other sources in arriving at conclusions concerning the authorship of the New Testament epistles.
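The cumulative sum graphs referred to above plot, sentence by sentence, the running total of deviations from the mean sentence length; sections written at a different "scale" then appear as sustained slopes in the curve. A minimal sketch of that computation (the sentence lengths used in the test are invented):

```python
def cusum(sentence_lengths):
    """Running total of deviations from the mean sentence length.
    The curve always returns to zero at the end of the text; a section
    whose mean differs from the overall mean shows up as a sustained
    upward or downward slope."""
    mean = sum(sentence_lengths) / len(sentence_lengths)
    out, total = [], 0.0
    for n in sentence_lengths:
        total += n - mean
        out.append(total)
    return out
```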

    Rational series and asymptotic expansion for linear homogeneous divide-and-conquer recurrences

    Among all sequences that satisfy a divide-and-conquer recurrence, those that are rational with respect to a numeration system are certainly the most immediate and most essential. Nevertheless, until recently they had not been studied from the asymptotic standpoint. We show how a mechanical process permits the computation of their asymptotic expansion. It is based on linear algebra, with Jordan normal form, joint spectral radius, and dilation equations. The method is compared with the analytic number theory approach, based on Dirichlet series and residues, and new ways to compute the Fourier series of the periodic functions involved in the expansion are developed. The article comes with an extended bibliography.
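As an illustration of a sequence that is rational with respect to base 2 (the abstract names no examples, so this is a standard textbook case): the binary sum-of-digits sequence s(n), which satisfies the divide-and-conquer recurrence s(2n) = s(n) and s(2n + 1) = s(n) + 1, can be computed by reading the binary digits of n and multiplying one fixed matrix per digit between two boundary vectors:

```python
# One 2x2 matrix per binary digit, acting on the state vector (s, 1).
DIGIT_MATRIX = {
    0: ((1, 0), (0, 1)),  # digit 0: s unchanged
    1: ((1, 1), (0, 1)),  # digit 1: s -> s + 1
}

def apply(m, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def sum_of_digits(n):
    """Binary sum of digits of n via its linear representation:
    multiply the digit matrices in the order the digits are read."""
    v = (0, 1)  # initial state: s = 0
    for d in map(int, bin(n)[2:]):
        v = apply(DIGIT_MATRIX[d], v)
    return v[0]  # read s out of the final state
```

The asymptotic behaviour of such sequences is then governed by the spectral data of these digit matrices (eigenvalues, Jordan structure, joint spectral radius), which is the linear-algebra machinery the paper systematizes.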

    Engineering polymer informatics: Towards the computer-aided design of polymers

    The computer-aided design of polymers is one of the holy grails of modern chemical informatics and of significant interest for a number of communities in polymer science. The paper outlines a vision for the in silico design of polymers and presents an information model for polymers based on modern semantic web technologies, thus laying the foundations for achieving the vision.

    Probabilistic logic as a unified framework for inference

    I argue that a probabilistic logical language incorporates all the features of deductive, inductive, and abductive inference, with the exception of how to generate hypotheses ex nihilo. In the context of abduction, it leads to Bayes' theorem for confirming hypotheses and naturally captures the theoretical virtue of quantitative parsimony. I address common criticisms of this approach, including how to assign probabilities to sentences, the problem of the catch-all hypothesis, and the problem of auxiliary hypotheses. Finally, I make a tentative argument that mathematical deduction fits into the same probabilistic framework as a deterministic limiting case.
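The Bayesian confirmation step referred to above takes only a few lines to make concrete; the prior and likelihoods below are arbitrary illustrative numbers, not values from the paper:

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from the prior P(H) and the likelihoods
    P(E|H) and P(E|not H), via Bayes' theorem."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# A hypothesis that predicts the evidence strongly (0.9) is confirmed
# relative to alternatives that predict it weakly (0.2).
posterior = bayes_update(prior=0.3, likelihood_h=0.9, likelihood_not_h=0.2)
```

Note that the evidence confirms H, i.e. P(H|E) > P(H), exactly when P(E|H) > P(E|not H), which is the quantitative core of the confirmation relation.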

    Teaching Economics As a Science: The 1930 Yale Lectures of Ragnar Frisch

    This paper was prepared for the forthcoming publication of Frisch's 1930 Yale lecture notes, A Dynamic Approach to Economic Theory: The Yale Lectures of Ragnar Frisch (details at: http://www.routledgeeconomics.com/books/A-Dynamic-Approach-to-Economic-Theory-isbn9780415564090). As the lecture series was given just as the Econometric Society was founded in 1930, we provide, as background, a blow-by-blow account of how the Econometric Society was founded, with emphasis on Frisch's role. We then outline how the Yale lecture notes came into being, closely connected to Frisch's econometric work at the time. We comment upon the lectures, relating them to Frisch's later works and, more importantly, to subsequent developments in economics and econometrics.

    Agents for educational games and simulations

    This book consists mainly of revised papers presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    A game-based approach towards human augmented image annotation.

    Image annotation is a difficult task to achieve in an automated way. In this thesis, a human-augmented approach to tackling this problem is discussed and suitable strategies are derived to solve it. The proposed technique is inspired by human-based computation, in what is called “human-augmented” processing, to overcome the limitations of fully automated technology in closing the semantic gap. The approach aims to exploit what millions of individual gamers are keen to do, i.e. enjoy computer games, while annotating media. In this thesis, the image annotation problem is tackled by a game-based framework. This approach combines image processing and a game-theoretic model to gather media annotations. Although the proposed model behaves similarly to a single-player game, the underlying approach is designed on a two-player model which exploits both the current player's contributions and those of previously recorded players to improve annotation accuracy. In addition, the proposed framework is designed to predict the player's intention through Markovian and sequential-sampling inferences in order to detect cheating and improve annotation performance. Finally, the proposed techniques are comprehensively evaluated on three different image datasets, and selected representative results are reported.
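The abstract does not specify the thesis's exact inference model, but a generic example of sequential cheat detection is Wald's sequential probability ratio test over per-annotation agreement observations. In the sketch below, every parameter (agreement rates, error levels) is an invented illustration:

```python
from math import log

def sprt(observations, p_honest, p_cheat, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on Bernoulli observations
    (1 = the player's annotation agrees with consensus, 0 = disagrees).
    p_honest / p_cheat are the assumed agreement rates under each
    hypothesis; alpha and beta are the tolerated error rates.
    Returns 'cheat', 'honest', or 'undecided'."""
    upper = log((1 - beta) / alpha)   # cross this: accept 'cheat'
    lower = log(beta / (1 - alpha))   # cross this: accept 'honest'
    llr = 0.0                         # log-likelihood ratio cheat vs honest
    for x in observations:
        if x:
            llr += log(p_cheat / p_honest)
        else:
            llr += log((1 - p_cheat) / (1 - p_honest))
        if llr >= upper:
            return "cheat"
        if llr <= lower:
            return "honest"
    return "undecided"
```

The appeal of a sequential test in this setting is that it stops as soon as the evidence is strong enough, so obvious cheaters are flagged after very few annotations.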