The SP theory of intelligence: benefits and applications
This article describes existing and expected benefits of the "SP theory of
intelligence", and some potential applications. The theory aims to simplify and
integrate ideas across artificial intelligence, mainstream computing, and human
perception and cognition, with information compression as a unifying theme. It
combines conceptual simplicity with descriptive and explanatory power across
several areas of computing and cognition. In the "SP machine" -- an expression
of the SP theory which is currently realized in the form of a computer model --
there is potential for an overall simplification of computing systems,
including software. The SP theory promises deeper insights and better solutions
in several areas of application including, most notably, unsupervised learning,
natural language processing, autonomous robots, computer vision, intelligent
databases, software engineering, information compression, medical diagnosis and
big data. There is also potential in areas such as the semantic web,
bioinformatics, structuring of documents, the detection of computer viruses,
data fusion, new kinds of computer, and the development of scientific theories.
The theory promises seamless integration of structures and functions within and
between different areas of application. The potential value, worldwide, of
these benefits and applications is at least $190 billion each year. Further
development would be facilitated by the creation of a high-parallel,
open-source version of the SP machine, available to researchers everywhere.
A Deep Neural Network for Finger Counting and Numerosity Estimation
In this paper, we present neuro-robotic models based on a deep artificial neural network capable of generating finger counting positions and of estimating numbers. We first train the model in an unsupervised manner, treating each layer as a Restricted Boltzmann Machine or an autoencoder, and then train it further in a supervised way. This kind of pre-training is tested on our baseline model, and the two pre-training methods are compared. The network is then extended to produce finger counting positions, and the number-estimation performance of the extended model is evaluated. We test the hypothesis that the subitizing process can be captured by a single model that is also used for estimating higher numerosities. The results confirm the importance of unsupervised training in our enumeration task and show some similarities to human behaviour in the case of subitizing.
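As a rough illustration of the two-stage training described above, here is a minimal PyTorch sketch of greedy layer-wise autoencoder pretraining followed by a supervised stage. The layer sizes, the random stand-in data, and the ten-class head are assumptions for illustration only; the paper's actual architecture and its RBM variant are not reproduced here.

    import torch
    import torch.nn as nn

    # Assumed dimensions for illustration: 784-dimensional inputs, two
    # hidden layers, ten output classes (not the paper's actual sizes).
    layer_sizes = [784, 256, 64]

    # Greedy layer-wise pretraining: each layer is trained as a shallow
    # autoencoder on the codes produced by the layers below it.
    x = torch.rand(512, layer_sizes[0])        # stand-in for unlabeled data
    encoders = []
    for d_in, d_out in zip(layer_sizes, layer_sizes[1:]):
        enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
        dec = nn.Linear(d_out, d_in)
        opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
        for _ in range(100):
            opt.zero_grad()
            loss = nn.functional.mse_loss(dec(enc(x)), x)  # reconstruct input
            loss.backward()
            opt.step()
        encoders.append(enc)
        x = enc(x).detach()                    # codes feed the next layer

    # Supervised stage: stack the pretrained encoders, add a classifier
    # head, and fine-tune on labeled examples (training loop omitted).
    model = nn.Sequential(*encoders, nn.Linear(layer_sizes[-1], 10))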
Cognition-Based Networks: A New Perspective on Network Optimization Using Learning and Distributed Intelligence
IEEE Access, Volume 3, 2015, Article number 7217798, Pages 1512-1530 (Open Access)
Zorzi, M. (a), Zanella, A. (a), Testolin, A. (b), De Filippo De Grazia, M. (b), Zorzi, M. (b, c)
(a) Department of Information Engineering, University of Padua, Padua, Italy
(b) Department of General Psychology, University of Padua, Padua, Italy
(c) IRCCS San Camillo Foundation, Venice-Lido, Italy
In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach is built around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with the past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergistic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.
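The abstract keeps the learning machinery at a high level. As one hedged sketch of the kind of component COBANETS envisions, the following PyTorch snippet trains a small autoencoder (standing in for the paper's probabilistic generative models, which it does not reproduce) on hypothetical per-link telemetry, so that poorly reconstructed states can flag conditions worth reconfiguring. All feature names and thresholds below are assumptions.

    import torch
    import torch.nn as nn

    # Hypothetical per-link telemetry vector: [load, delay, jitter, loss, queue].
    N_FEATURES = 5

    # A small autoencoder stands in for the paper's generative models: it
    # learns a compact representation of observed network states.
    encoder = nn.Sequential(nn.Linear(N_FEATURES, 3), nn.ReLU(), nn.Linear(3, 2))
    decoder = nn.Sequential(nn.Linear(2, 3), nn.ReLU(), nn.Linear(3, N_FEATURES))
    opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

    telemetry = torch.rand(1000, N_FEATURES)   # stand-in for collected data
    for _ in range(200):                       # unsupervised training
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(telemetry)), telemetry)
        loss.backward()
        opt.step()

    def is_unfamiliar(state: torch.Tensor, threshold: float = 0.05) -> bool:
        # Flag network states the model reconstructs poorly; in COBANETS
        # such states could trigger reconfiguration via the virtualization
        # layer. The threshold is an arbitrary assumption.
        with torch.no_grad():
            err = nn.functional.mse_loss(decoder(encoder(state)), state)
        return err.item() > threshold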
Big data and the SP theory of intelligence
This article is about how the "SP theory of intelligence" and its realisation
in the "SP machine" may, with advantage, be applied to the management and
analysis of big data. The SP system -- introduced in the article and fully
described elsewhere -- may help to overcome the problem of variety in big data:
it has potential as "a universal framework for the representation and
processing of diverse kinds of knowledge" (UFK), helping to reduce the
diversity of formalisms and formats for knowledge and the different ways in
which they are processed. It has strengths in the unsupervised learning or
discovery of structure in data, in pattern recognition, in the parsing and
production of natural language, in several kinds of reasoning, and more. It
lends itself to the analysis of streaming data, helping to overcome the problem
of velocity in big data. Central in the workings of the system is lossless
compression of information: making big data smaller and reducing problems of
storage and management. There is potential for substantial economies in the
transmission of data, for big cuts in the use of energy in computing, for
faster processing, and for smaller and lighter computers. The system provides a
handle on the problem of veracity in big data, with potential to assist in the
management of errors and uncertainties in data. It lends itself to the
visualisation of knowledge structures and inferential processes. A
high-parallel, open-source version of the SP machine would provide a means for
researchers everywhere to explore what can be done with the system and to
create new versions of it.
Comment: Accepted for publication in IEEE Access.
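To make the compression claim tangible, here is a toy Python illustration using zlib on a redundant record stream. It is not the SP machine's multiple-alignment algorithm, only a demonstration that lossless compression shrinks repetitive data while keeping it exactly recoverable.

    import zlib

    # A highly redundant record stream stands in for repetitive big data.
    record = b"sensor=42,status=OK;" * 10_000

    packed = zlib.compress(record)
    print(len(record), "->", len(packed), "bytes")  # dramatic size reduction

    # Lossless: the original is recovered exactly, bit for bit.
    assert zlib.decompress(packed) == record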
Predicting Category Intuitiveness With the Rational Model, the Simplicity Model, and the Generalized Context Model
Naïve observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has historically been considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported models of supervised categorization, the generalized context model (GCM). Considering different category assignments for a set of instances, the authors asked how well the GCM can predict the classification of each instance on the basis of all the other instances. The category assignment that results in the smallest prediction error is interpreted as the most intuitive for the GCM; the authors refer to this way of applying the GCM as “unsupervised GCM.” The authors systematically compared predictions of category intuitiveness from the unsupervised GCM and two models of unsupervised categorization: the simplicity model and the rational model. The unsupervised GCM compared favorably with the simplicity model and the rational model. This success of the unsupervised GCM illustrates that the distinction between supervised and unsupervised categorization may need to be reconsidered. However, no model emerged as clearly superior, indicating that there is more work to be done in understanding and modeling category intuitiveness.
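A minimal sketch of the "unsupervised GCM" procedure described above, in Python with NumPy. It assumes an exponential similarity function with a single scaling parameter c and omits the full model's attention weights and response biases: each instance is classified from all the others, and the candidate partition with the smallest leave-one-out error counts as the most intuitive.

    import numpy as np

    def gcm_loo_error(X, labels, c=1.0):
        # Leave-one-out prediction error of a basic GCM for one candidate
        # partition: similarity is exp(-c * distance), and each item's
        # predicted probability of its own category is its summed
        # similarity to category mates over its similarity to all items.
        dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        sim = np.exp(-c * dist)
        np.fill_diagonal(sim, 0.0)            # the item itself is left out
        err = 0.0
        for i in range(len(X)):
            same = sim[i, labels == labels[i]].sum()
            err += 1.0 - same / sim[i].sum()
        return err / len(X)

    # Two well-separated toy clusters: the "intuitive" two-group partition
    # should yield a lower error than an arbitrary relabelling.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (10, 2)), rng.normal(5, 1, (10, 2))])
    intuitive = np.array([0] * 10 + [1] * 10)
    arbitrary = rng.permutation(intuitive)
    print(gcm_loo_error(X, intuitive), gcm_loo_error(X, arbitrary))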
