
    From Frequency to Meaning: Vector Space Models of Semantics

    Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories, and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
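    The term-document matrix mentioned in this abstract is straightforward to sketch: each column is a document, each row a term, and document similarity falls out of comparing columns. The toy corpus and function names below are our own illustration, not from the survey or its open source projects.

```python
import math

# Toy corpus (illustrative; any tokenized text collection works the same way).
docs = {
    "d1": "cats chase mice",
    "d2": "dogs chase cats",
    "d3": "mice eat cheese",
}

# Build the vocabulary and a term-document count matrix (one column per document).
vocab = sorted({w for text in docs.values() for w in text.split()})
matrix = {d: [text.split().count(w) for w in vocab] for d, text in docs.items()}

def cosine(u, v):
    """Cosine similarity between two document column vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# d1 and d2 share two terms ("cats", "chase"); d1 and d3 share only "mice",
# so the term-document VSM rates d1 as more similar to d2 than to d3.
sim_12 = cosine(matrix["d1"], matrix["d2"])
sim_13 = cosine(matrix["d1"], matrix["d3"])
```

    Word-context and pair-pattern matrices follow the same recipe with different rows and columns: word-by-context-word counts for word similarity, and word-pair-by-pattern counts for relational similarity.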

    Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding

    In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).

    A Method for Knowledge Representation to Design Intelligent Problems Solver in Mathematics Based on Rela-Ops Model

    A knowledge base is a fundamental platform in the architecture of an intelligent system. Relations and operators are common kinds of knowledge in practical knowledge domains. In this paper, we propose a knowledge-representation method that combines these kinds of knowledge, called the Rela-Ops model. This model includes foundation components consisting of concepts, relations, operators, and inference rules. It is built based on ontology and object-oriented approaches. Beyond this structure, each concept of the Rela-Ops model is a class of objects that also have behaviors for solving problems on their own. Problem-solving algorithms on the Rela-Ops model combine the knowledge of relations and operators in their reasoning. Furthermore, we also propose a knowledge model for multiple knowledge domains, in which each sub-domain has the form of a Rela-Ops model. These representation methods have been applied to build knowledge bases for an Intelligent Problems Solver (IPS) in mathematics. The knowledge base for 2D-Analytical Geometry in high school is built using the Rela-Ops model, and the knowledge base for Linear Algebra in university is designed using the model for multiple knowledge domains. The IPS system can automatically solve basic and advanced exercises in the respective courses. The reasoning behind their solutions proceeds step by step, similar to how humans solve such problems. The solutions are also pedagogical, suited to the learner’s level, and easy to use for students studying 2D-Analytical Geometry in high school and Linear Algebra in university. This work was supported in part by the Universiti Teknologi Malaysia (UTM) under Research University Grant Vot-20H04, in part by the Malaysia Research University Network (MRUN) Vot 4L876, and in part by the Fundamental Research Grant Scheme (FRGS) Vot 5F073 through the Ministry of Education Malaysia.
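    The object-oriented flavor of the Rela-Ops model (concepts as classes of objects with their own behaviors, plus relations and operators over them) can be sketched loosely as follows. The class, operator, and relation below are our own illustration from 2D analytical geometry, not the authors' implementation.

```python
# Hypothetical sketch: a concept as a class whose objects carry behaviors,
# with operator knowledge (+, dot) and relation knowledge (perpendicularity)
# usable together in reasoning, loosely in the spirit of the Rela-Ops model.

class Vector2D:
    """A concept: a class of objects with their own problem-solving behaviors."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    # Operator knowledge on the concept.
    def __add__(self, other):
        return Vector2D(self.x + other.x, self.y + other.y)

    def dot(self, other):
        return self.x * other.x + self.y * other.y

# Relation knowledge, with its inference rule attached.
def perpendicular(u, v):
    """Relation: u is perpendicular to v iff their dot product is zero."""
    return u.dot(v) == 0

u, v = Vector2D(1, 2), Vector2D(-2, 1)
w = u + v                      # applying an operator
is_perp = perpendicular(u, v)  # testing a relation
```

    A step-by-step solver in this style would chain such operator applications and relation tests, recording each step to produce the human-readable solutions the abstract describes.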

    Modeling of Phenomena and Dynamic Logic of Phenomena

    Modeling of complex phenomena such as the mind presents tremendous computational complexity challenges. Modeling field theory (MFT) addresses these challenges in a non-traditional way. The main idea behind MFT is to match levels of uncertainty of the model (also, problem or theory) with levels of uncertainty of the evaluation criterion used to identify that model. When a model becomes more certain, the evaluation criterion is adjusted dynamically to match that change to the model. This process is called the Dynamic Logic of Phenomena (DLP) for model construction, and it mimics processes of the mind and natural evolution. This paper provides a formal description of DLP by specifying its syntax, semantics, and reasoning system. We also outline links between DLP and other logical approaches. Computational complexity issues that motivate this work are presented using an example of polynomial models.
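    The core dynamic-logic loop described above can be illustrated with a toy estimation problem: start from a vague model and a wide (uncertain) similarity criterion, and tighten the criterion as the model improves. This is our own sketch of the idea, not the paper's formal DLP syntax or semantics.

```python
import math
import random

# Toy data: samples near an unknown mean that the model should recover.
random.seed(0)
data = [random.gauss(5.0, 0.5) for _ in range(200)]

mu = 0.0      # vague initial model
sigma = 10.0  # wide initial uncertainty in the evaluation criterion
for _ in range(20):
    # Weight each sample by its similarity to the current model under
    # the current (still fuzzy) criterion.
    weights = [math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) for x in data]
    total = sum(weights)
    # Refine the model from the similarity-weighted evidence.
    mu = sum(w * x for w, x in zip(weights, data)) / total
    # Dynamically sharpen the criterion as the model becomes more certain.
    sigma = max(0.5, sigma * 0.7)
```

    Starting with a sharp criterion instead would require searching all candidate models at once; matching criterion uncertainty to model uncertainty is what sidesteps that combinatorial blow-up.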

    A Fuzzy Approach to the Synthesis of Cognitive Maps for Modeling Decision Making in Complex Systems

    The object of this study is fuzzy cognitive modeling as a means of studying semistructured socio-economic systems. The features of constructing cognitive maps, providing the ability to choose management decisions in complex semistructured socio-economic systems, are described. It is shown that further improvement of technologies necessary for developing decision support systems and their practical use is still relevant. This work aimed to improve the accuracy of cognitive modeling of semistructured systems based on a fuzzy cognitive map of structuring nonformalized situations (MSNS), with evaluation via root-mean-square error (RMSE) and mean absolute scaled error (MASE) coefficients. In order to achieve the goal, the following main methods were used: systems analysis methods, fuzzy logic and fuzzy sets theory postulates, theory of integral wavelet transform, correlation and autocorrelation analyses. As a result, a new methodology for constructing MSNS was proposed: a map of structuring nonformalized situations that combines the positive properties of previous fuzzy cognitive maps. The solution of modeling problems based on this methodology should increase the reliability and quality of analysis and modeling of semistructured systems and processes under uncertainty. The analysis using open datasets proved that, compared to the classical ARIMA, SVR, MLP, and fuzzy time series models, our proposed model provides better performance in terms of MASE and RMSE metrics, which confirms its advantage. Thus, it is advisable to use our proposed algorithm in the future as a mathematical basis for developing software tools for the analysis and modeling of problems in semistructured systems and processes. DOI: 10.28991/ESJ-2022-06-02-012.
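    The basic inference step shared by fuzzy cognitive maps, including variants like the one proposed above, is a synchronous activation update: each concept's new activation is a squashed sum of its own value plus the weighted influences of the others. The concept values and influence weights below are invented for illustration; the paper's MSNS map is more elaborate.

```python
import math

def sigmoid(x, lam=1.0):
    """Squashing function keeping activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

# Concept activations and a signed influence matrix W[j][i]: the
# strength of concept j's influence on concept i, in [-1, 1].
A = [0.4, 0.7, 0.1]
W = [
    [0.0,  0.6, -0.3],
    [0.5,  0.0,  0.4],
    [0.0, -0.2,  0.0],
]

def step(A, W):
    """One synchronous FCM update: A_i <- f(A_i + sum_j W[j][i] * A_j)."""
    n = len(A)
    return [sigmoid(A[i] + sum(W[j][i] * A[j] for j in range(n)))
            for i in range(n)]

A1 = step(A, W)  # activations after one inference step
```

    Iterating this step until the activations settle yields the scenario forecasts that the RMSE and MASE metrics above then score against observed data.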

    Analyzing collaborative learning processes automatically

    In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners’ interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential for enabling substantially improved on-line instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project, which has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that has been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues such as reliability, validity, and efficiency that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important piece of the work towards making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions that the CSCL community is interested in.
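    The "linguistic pattern detectors" the abstract emphasizes can be pictured as simple boolean features extracted from each coded discourse segment, which a classifier then maps to discourse codes. The codes, features, and segments below are invented for illustration, and the rule-based classifier is a hand-built stand-in for a trained model; TagHelper itself uses a much richer feature set.

```python
# Tiny hypothetical corpus of (segment, code) pairs,
# e.g. "Q" = questioning move, "A" = agreement move.
corpus = [
    ("why do you think that", "Q"),
    ("what is the next step", "Q"),
    ("i agree with your answer", "A"),
    ("that solution looks right to me", "A"),
]

def features(text):
    """Simple pattern detectors ("features") over one segment."""
    words = text.split()
    return {
        "starts_wh": words[0] in {"why", "what", "how", "who"},
        "has_agree": "agree" in words or "right" in words,
    }

def classify(text):
    """Rule-based stand-in for a classifier trained on such features."""
    f = features(text)
    if f["starts_wh"]:
        return "Q"
    return "A" if f["has_agree"] else "Q"

predictions = [classify(text) for text, _ in corpus]
```

    In the project described above, the payoff of reliable features is exactly this: once segments are represented by predictive features, standard classifiers can approximate the human coders' multidimensional coding scheme.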

    Dagstuhl News January - December 2005

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News gives a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented in a short abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic.