
    Some Logical Notations for Pragmatic Assertions

    The pragmatic notion of assertion has an important inferential role in logic, and there are many notational forms for expressing assertions in logical systems. This paper reviews, compares and analyses languages with signs for assertions, including explicit signs, such as those of Frege’s and Dalla Pozza’s logical systems, and implicit signs with no specific sign for assertion, such as Peirce’s algebraic and graphical logics and the recent modification of the latter termed Assertive Graphs. We identify and discuss the main ‘points’ of these notations regarding the logical representation of assertions, and evaluate their systems from the perspective of the philosophy of logical notations. Pragmatic assertions turn out to be useful in providing intended interpretations of a variety of logical systems.

    Linguistic content analysis of the Holt, Rinehart and Winston series of high school biology textbooks: a longitudinal study focusing on the use of inquiry

    A content analysis was performed on one series of high school biology textbooks spanning a thirty-year time period from 1956 to 1985. Science textbooks were selected for the study because, in general, the textbook is the major curriculum guide in a science classroom. The subject of biology was selected because it is the science course most frequently taken by high school students, and for many students, it is their only high school science course.

    The textbook series selected for this longitudinal study was Modern Biology, published by Holt, Rinehart and Winston. The Holt series was selected because it has been one of the best selling high school biology textbooks since the 1930s.

    The specific aspect of the content analyzed was the presentation of science as inquiry. The concept of science as inquiry has been a major goal in science education for several years, and received increased attention during the curriculum movements of the 1960s. It was of interest to see whether this theme was in evidence in one of the best selling high school biology textbook series. A four-by-three factorial design was used, with four years of publication crossing three subject areas. The four years of publication, representing roughly 10-year time intervals, were 1956, 1965, 1977 and 1985. To represent contrasts in anticipated levels of inquiry, the introductory, genetic and leaf structure chapters were analyzed. Level of inquiry was measured using linguistic content analysis, and the resulting categorical data were analyzed using logistic regression modeling. The linguistic content analysis method, applied in this study to the analysis of level of inquiry, could also be used to measure other aspects of textbooks.

    The study indicated that level of inquiry varied across the four editions studied. In particular, level of inquiry was higher in the 1965 and 1977 editions, and lower in the 1956 and 1985 editions. Level of inquiry was highest in the introduction chapters and lowest in the leaf structure chapters. Level of inquiry was highest at the beginning of chapters, at the beginning of paragraphs, and in sentences which did not contain technical words. Level of inquiry also exhibited three significant two-way interactions with the above listed main effect terms.
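The analysis described above — categorical predictors (edition year, chapter type) crossed in a factorial design and fit with logistic regression — can be sketched as follows. This is a minimal illustration, not the study's actual model: all observations, category labels, and the simple gradient-descent fitting routine are invented stand-ins for the coded corpus and the statistical software the authors would have used.

```python
# Minimal sketch of logistic regression on dummy-coded categorical data,
# illustrating the four-by-three (year x chapter) design from the abstract.
# The data below are invented: 1 = sentence coded as inquiry, 0 = not.
import math

YEARS = ["1956", "1965", "1977", "1985"]
CHAPTERS = ["introduction", "genetic", "leaf_structure"]

# Invented observations: (year, chapter, inquiry_flag)
data = [
    ("1956", "introduction", 1), ("1956", "genetic", 0), ("1956", "leaf_structure", 0),
    ("1965", "introduction", 1), ("1965", "genetic", 1), ("1965", "leaf_structure", 0),
    ("1977", "introduction", 1), ("1977", "genetic", 1), ("1977", "leaf_structure", 0),
    ("1985", "introduction", 1), ("1985", "genetic", 0), ("1985", "leaf_structure", 0),
]

def one_hot(value, categories):
    """Dummy-code a categorical predictor; the first level is the baseline."""
    return [1.0 if value == c else 0.0 for c in categories[1:]]

def features(year, chapter):
    # Intercept + year dummies + chapter dummies.
    return [1.0] + one_hot(year, YEARS) + one_hot(chapter, CHAPTERS)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(data, lr=0.1, epochs=2000):
    """Fit logistic-regression weights by stochastic gradient ascent
    on the log-likelihood (a toy stand-in for proper model fitting)."""
    w = [0.0] * len(features(YEARS[0], CHAPTERS[0]))
    for _ in range(epochs):
        for year, chapter, y in data:
            x = features(year, chapter)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

w = fit(data)
```

With the invented data above, the fitted model assigns a higher inquiry probability to introduction chapters than to leaf structure chapters, mirroring the direction of the study's reported finding.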

    QNRs: toward language for intelligent machines

    Impoverished syntax and nondifferentiable vocabularies make natural language a poor medium for neural representation learning and applications. Learned, quasilinguistic neural representations (QNRs) can upgrade words to embeddings and syntax to graphs to provide a more expressive and computationally tractable medium. Graph-structured, embedding-based quasilinguistic representations can support formal and informal reasoning, human and inter-agent communication, and the development of scalable quasilinguistic corpora with characteristics of both literatures and associative memory. To achieve human-like intellectual competence, machines must be fully literate, able not only to read and learn, but to write things worth retaining as contributions to collective knowledge. In support of this goal, QNR-based systems could translate and process natural language corpora to support the aggregation, refinement, integration, extension, and application of knowledge at scale. Incremental development of QNR-based models can build on current methods in neural machine learning, and as systems mature, could potentially complement or replace today’s opaque, error-prone “foundation models” with systems that are more capable, interpretable, and epistemically reliable. Potential applications and implications are broad.
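The core structural idea — words upgraded to embeddings, syntax upgraded to graphs — can be sketched with a toy data structure. This is not the paper's implementation; the class names, the two-dimensional "embeddings," and the relation labels are all invented for illustration.

```python
# Toy sketch of a quasilinguistic representation: nodes carry embedding
# vectors (here hand-written lists of floats standing in for learned
# vectors), and labeled edges play the role that syntax plays in language.
from dataclasses import dataclass, field

@dataclass
class QNRNode:
    embedding: list   # stand-in for a learned embedding vector
    gloss: str = ""   # optional human-readable label

@dataclass
class QNRGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (head_id, relation, tail_id)

    def add_node(self, node_id, node):
        self.nodes[node_id] = node

    def add_edge(self, head, relation, tail):
        self.edges.append((head, relation, tail))

    def neighbors(self, node_id):
        """Return the ids of nodes reachable by one outgoing edge."""
        return [t for h, r, t in self.edges if h == node_id]

g = QNRGraph()
g.add_node("agent", QNRNode([0.1, 0.9], "machine"))
g.add_node("action", QNRNode([0.7, 0.2], "reads"))
g.add_edge("agent", "performs", "action")
```

Because the nodes hold vectors rather than word tokens, graph operations like neighborhood lookup can be combined with vector operations (similarity, interpolation), which is the expressiveness the abstract attributes to the graph-plus-embedding medium.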

    Information extraction

    In this paper we present a new approach to extracting relevant information from natural language text by means of knowledge graphs. We give a multiple-level model based on knowledge graphs for describing template information, and investigate the concept of partial structural parsing. Moreover, we point out that the expansion of concepts plays an important role in thinking, so we study the expansion of knowledge graphs, which uses context information for reasoning and for the merging of templates.
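The template-merging step mentioned above can be sketched with knowledge-graph triples. This is an invented illustration, not the paper's system: the triple representation, the shared-concept merge rule, and the example facts are all assumptions made for the sketch.

```python
# Toy sketch: templates as sets of (subject, relation, object) triples.
# Two templates merge when they share at least one concept, letting
# context information from one template extend the other.
def merge_templates(t1, t2):
    """Union two triple sets if they share a concept; else return None."""
    concepts1 = {c for s, _, o in t1 for c in (s, o)}
    concepts2 = {c for s, _, o in t2 for c in (s, o)}
    if concepts1 & concepts2:
        return t1 | t2
    return None  # no shared concept: the templates stay separate

template_a = {("company", "acquired", "startup")}
template_b = {("startup", "founded_in", "2015")}
merged = merge_templates(template_a, template_b)
```

Here the shared concept "startup" licenses the merge, so the combined graph relates the acquisition to the founding date, which neither template expressed alone.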