Vector-based Approach to Verbal Cognition
Human verbal thinking is the object of many multidisciplinary studies. Verbal cognition is often an integration of complex mental activities such as neurocognitive and psychological processes. In the neurocognitive study of language, the neural architecture and neuropsychological mechanisms of verbal cognition are the basis of vector-based modeling. Human mental states, as constituents of the mental continuum, represent an infinite set of meanings: the number of meanings is not limited, but the numbers of words and rules used for building complex verbal structures are limited. Verbal perception and interpretation of the multiple meanings and propositions in the mental continuum can be modeled by applying tensor methods. Comparing human mental space to a vector space is an effective way of analyzing the human semantic vocabulary, mental representations, and rules of clustering and mapping. As such, Euclidean and non-Euclidean spaces can be applied to describe the human semantic vocabulary and higher-order structures. Additionally, changes in semantics and structure can be analyzed in 3D and higher-dimensional spaces. It is suggested that different forms of verbal representation should be analyzed in the light of vector and tensor transformations. The vector dot and cross products, covariance, and contravariance have been applied to the analysis of semantic transformations and pragmatic change in higher-order syntactic structures. These ideas are supported by empirical data from typologically different languages such as Mongolian, English, and Russian. Moreover, the author argues that the vector-based approach to cognitive linguistics offers new opportunities to develop an alternative version of quantitative semantics and thus to extend the theory of Universal Grammar in a new dimension.
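The basic vector operation behind such comparisons of meanings can be sketched as cosine similarity between meaning vectors. This is a generic illustration of the dot-product idea the abstract invokes, not the author's model; the three-component vectors and the words chosen are purely hypothetical.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two meaning vectors: near 1.0 for
    similar directions (close meanings), near 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-component meaning vectors in an illustrative semantic space.
river = [0.9, 0.1, 0.3]
stream = [0.8, 0.2, 0.4]
bank = [0.3, 0.9, 0.1]

print(cosine_similarity(river, stream))  # high: close meanings
print(cosine_similarity(river, bank))    # lower: more distant meanings
```

The dot product here measures semantic proximity; the abstract's covariant/contravariant transformations would then act on such vectors when the underlying basis (the feature system) changes.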
The Knowledge Level in Cognitive Architectures: Current Limitations and Possible Developments
In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely the limited size and the homogeneous typology of the encoded and processed knowledge.
We argue that these aspects constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviour in general scenarios, but also an epistemological one, since they limit the plausibility of comparing the CAs' knowledge representation and processing mechanisms with those employed by humans in their everyday activities. In the final part of the paper, further directions of research are explored to address these current limitations and future challenges.
Platonic model of mind as an approximation to neurodynamics
The hierarchy of approximations involved in the simplification of microscopic theories, from the sub-cellular to the whole-brain level, is presented. A new approximation to neural dynamics is described, leading to a Platonic-like model of mind based on psychological spaces. Objects and events in these spaces correspond to quasi-stable states of brain dynamics and may be interpreted from a psychological point of view. The Platonic model bridges the gap between the neurosciences and the psychological sciences. Static and dynamic versions of this model are outlined, and Feature Space Mapping, a neurofuzzy realization of the static version of the Platonic model, is described. Categorization experiments with human subjects are analyzed from the neurodynamical and Platonic-model points of view.
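The core idea of a neurofuzzy feature-space model — objects as localized activations around prototypes in a psychological space, categorization as picking the most active prototype — can be sketched as follows. This is a minimal generic sketch of that idea, not the Feature Space Mapping system itself; the prototypes, widths, and feature values are invented for illustration.

```python
import math

def gaussian_membership(x, center, width):
    """Localized activation of a prototype node in feature space:
    highest at the center, decaying with squared distance."""
    d2 = sum((a - c) ** 2 for a, c in zip(x, center))
    return math.exp(-d2 / (2 * width ** 2))

# Hypothetical prototypes (quasi-stable states) in a 2-D psychological space:
# each maps a category name to an illustrative (center, width) pair.
prototypes = {
    "bird": ([0.8, 0.9], 0.3),
    "fish": ([0.1, 0.2], 0.3),
}

def categorize(x):
    """Assign x to the prototype with the highest membership value."""
    return max(prototypes, key=lambda k: gaussian_membership(x, *prototypes[k]))

print(categorize([0.7, 0.8]))  # "bird"
print(categorize([0.0, 0.1]))  # "fish"
```

In such a model, the basins of attraction of the underlying brain dynamics correspond to these localized membership functions, which is what licenses the psychological interpretation.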
A Quantum Many-body Wave Function Inspired Language Modeling Approach
The recently proposed quantum language model (QLM) aims at a principled
approach to modeling term dependency by applying quantum probability
theory. The latest development toward a more effective QLM has adopted word
embeddings as a kind of global dependency information and integrated the
quantum-inspired idea in a neural network architecture. While these
quantum-inspired LMs are theoretically more general and also practically
effective, they have two major limitations. First, they have not taken into
account the interaction among words with multiple meanings, which is common and
important in understanding natural language text. Second, the integration of
the quantum-inspired LM with the neural network was mainly for effective
training of parameters, yet lacking a theoretical foundation accounting for
such integration. To address these two issues, in this paper, we propose a
Quantum Many-body Wave Function (QMWF) inspired language modeling approach. The
QMWF inspired LM can adopt the tensor product to model the aforesaid
interaction among words. It also enables us to reveal the inherent necessity of
using Convolutional Neural Network (CNN) in QMWF language modeling.
Furthermore, our approach delivers a simple algorithm to represent and match
text/sentence pairs. Systematic evaluation shows the effectiveness of the
proposed QMWF-LM algorithm, in comparison with the state of the art
quantum-inspired LMs and a couple of CNN-based methods, on three typical
Question Answering (QA) datasets.
Comment: 10 pages, 4 figures, CIK
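The tensor product the abstract mentions can be illustrated in isolation: the outer product of two word vectors keeps every pairwise interaction between their components, unlike a simple sum of embeddings. This is a generic sketch of that single building block, not the QMWF-LM algorithm; the 2-D embeddings are hypothetical.

```python
def tensor_product(u, v):
    """Outer (tensor) product of two word vectors: entry [i][j] couples
    component i of the first word with component j of the second, so the
    joint representation retains cross-word interactions that an
    element-wise sum of embeddings would discard."""
    return [[a * b for b in v] for a in u]

# Hypothetical 2-D embeddings for two co-occurring words.
w1 = [0.6, 0.8]
w2 = [0.5, 0.5]

joint = tensor_product(w1, w2)
print(joint)  # [[0.3, 0.3], [0.4, 0.4]]
```

For n words of dimension d this joint representation grows as d^n, which is why a practical model needs a compact approximation; the paper's claim is that a CNN arises naturally as such an approximation.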
A Minimal Architecture for General Cognition
A minimalistic cognitive architecture called MANIC is presented. The MANIC
architecture requires only three function approximating models, and one state
machine. Even with so few major components, it is theoretically sufficient to
achieve functional equivalence with all other cognitive architectures, and can
be practically trained. Instead of seeking to transfer architectural
inspiration from biology into artificial intelligence, MANIC seeks to minimize
novelty and follow the most well-established constructs that have evolved
within various sub-fields of data science. From this perspective, MANIC offers
an alternate approach to a long-standing objective of artificial intelligence.
This paper provides a theoretical analysis of the MANIC architecture.
Comment: 8 pages, 8 figures, conference, Proceedings of the 2015 International Joint Conference on Neural Networks
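The shape of an agent built from "three function approximating models and one state machine" can be sketched as below. The roles assigned to the three models here (an observation encoder, a transition model, and a utility model) and the linear stand-in for a function approximator are this sketch's assumptions, not details taken from the MANIC paper.

```python
class LinearModel:
    """Stand-in function approximator: y = W x, with illustrative weights.
    Any trainable regressor could fill this slot."""
    def __init__(self, weights):
        self.weights = weights  # list of row vectors

    def __call__(self, x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in self.weights]

class MinimalAgent:
    """Three function approximators plus one piece of state-machine memory.
    The division of labor below is an assumption for illustration."""
    def __init__(self, encode, transition, utility):
        self.encode = encode          # observation -> internal belief
        self.transition = transition  # belief -> next internal state
        self.utility = utility        # internal state -> scalar score
        self.state = None             # the state machine's single register

    def step(self, observation):
        belief = self.encode(observation)
        self.state = self.transition(belief)
        return self.utility(self.state)[0]

agent = MinimalAgent(
    encode=LinearModel([[1.0, 0.0], [0.0, 1.0]]),
    transition=LinearModel([[0.5, 0.5], [0.0, 1.0]]),
    utility=LinearModel([[1.0, 1.0]]),
)
print(agent.step([2.0, 4.0]))  # 7.0
```

The point of the sketch is the count, not the contents: everything the agent does is routed through three learned functions and one stored state, which is the minimality claim the abstract makes.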
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology,
industry and society. It is developing with the evolving scientific paradigm,
the fourth industrial revolution, and the transformational innovation of
technologies. However, its nature and fundamental challenge have not been
recognized, and its own methodology has not been formed. This paper explores
and answers the following questions: What is big data? What are the basic
methods for representing, managing and analyzing big data? What is the
relationship between big data and knowledge? Can we find a mapping from big
data into knowledge space? What kind of infrastructure is required to support
not only big data management and analysis but also knowledge discovery, sharing
and management? What is the relationship between big data and science paradigm?
What is the nature and fundamental challenge of big data computing? A
multi-dimensional perspective is presented toward a methodology of big data
computing.
Comment: 59 pages
Geometric representations for minimalist grammars
We reformulate minimalist grammars as partial functions on term algebras for
strings and trees. Using filler/role bindings and tensor product
representations, we construct homomorphisms for these data structures into
geometric vector spaces. We prove that the structure-building functions as well
as simple processors for minimalist languages can be realized by piecewise
linear operators in representation space. We also propose harmony, i.e. the
distance of an intermediate processing step from the final well-formed state in
representation space, as a measure of processing complexity. Finally, we
illustrate our findings by means of two particular arithmetic and fractal
representations.
Comment: 43 pages, 4 figures
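The filler/role tensor product binding the abstract builds on can be shown in miniature: each filler (a constituent) is bound to a structural role via the outer product, the bindings are superposed by addition, and with orthonormal role vectors each filler can be recovered exactly by a matrix-vector product. This is a textbook-style sketch of the binding mechanism, not the paper's construction for minimalist grammars; the vectors are invented for illustration.

```python
def outer(f, r):
    """Bind a filler vector to a role vector via the tensor product."""
    return [[fi * rj for rj in r] for fi in f]

def add(a, b):
    """Superpose two bindings by element-wise matrix addition."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def unbind(m, r):
    """With orthonormal roles, a matrix-vector product recovers the filler."""
    return [sum(mij * rj for mij, rj in zip(row, r)) for row in m]

# Hypothetical orthonormal role vectors for left/right daughters of a node.
role_left, role_right = [1.0, 0.0], [0.0, 1.0]

# Hypothetical filler vectors for two terminals.
the, cat = [0.2, 0.9], [0.7, 0.4]

# The bound structure [the cat] is the superposition of both bindings.
node = add(outer(the, role_left), outer(cat, role_right))

print(unbind(node, role_left))   # [0.2, 0.9]
print(unbind(node, role_right))  # [0.7, 0.4]
```

Because binding, superposition, and unbinding are all linear maps, structure-building operations over such representations can be realized by (piecewise) linear operators on the vector space, which is the property the paper exploits.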