8 research outputs found

    Analogical Mapping with Sparse Distributed Memory: A Simple Model that Learns to Generalize from Examples

    We present a computational model for the analogical mapping of compositional structures that combines two existing ideas known as holistic mapping vectors and sparse distributed memory. The model enables integration of structural and semantic constraints when learning mappings of the type x_i → y_i and computing analogies x_j → y_j for novel inputs x_j. The model has a one-shot learning process, is randomly initialized, and has three exogenous parameters: the dimensionality D of representations, the memory size S, and the probability χ for activation of the memory. After learning three examples, the model generalizes correctly to novel examples. We find minima in the probability of generalization error for certain values of χ, S, and the number of different mapping examples learned. These results indicate that the optimal size of the memory scales with the number of different mapping examples learned and that the sparseness of the memory is important. The optimal dimensionality of binary representations is of the order 10^4, which is consistent with a known analytical estimate and with the synapse count of most cortical neurons. We demonstrate that the model can learn analogical mappings of generic two-place relationships, and we calculate the error probabilities for recall and generalization.
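
    The holistic-mapping-vector idea that the model combines with sparse distributed memory can be illustrated in a few lines. The following is a minimal sketch in the binary spatter-code style, assuming XOR binding and majority-vote bundling; the memory stage of the actual model is omitted, and all names and parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    D = 10_000  # dimensionality of the binary representations; the paper finds ~10^4 optimal

    def random_vec():
        # A random dense binary vector: the atom of a binary spatter code.
        return rng.integers(0, 2, D, dtype=np.uint8)

    def bind(a, b):
        # XOR binding: invertible, since bind(bind(a, b), b) == a.
        return a ^ b

    def bundle(vectors):
        # Bitwise majority vote over a list of binary vectors; ties broken at random.
        s = np.sum(vectors, axis=0)
        out = (2 * s > len(vectors)).astype(np.uint8)
        ties = 2 * s == len(vectors)
        out[ties] = rng.integers(0, 2, int(ties.sum()), dtype=np.uint8)
        return out

    # Learn one holistic mapping vector from three example pairs x_i -> y_i.
    xs = [random_vec() for _ in range(3)]
    ys = [random_vec() for _ in range(3)]
    M = bundle([bind(x, y) for x, y in zip(xs, ys)])

    # Applying M to a learned input recovers a noisy copy of its target:
    # the bit-error rate is ~0.25 here, far from the ~0.5 of an unrelated vector.
    y_hat = bind(M, xs[0])
    print("errors vs y_0:   ", np.mean(y_hat != ys[0]))
    print("errors vs random:", np.mean(y_hat != random_vec()))
    ```

    With unrelated random pairs the recalled vector is a noisy but recognizable copy of the stored target; in the full model, the sparse distributed memory is what allows several distinct mappings to be stored and generalized coherently.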

    Incremental dimension reduction of tensors with random index

    We present an incremental, scalable and efficient dimension reduction technique for tensors that is based on sparse random linear coding. Data is stored in a compactified representation with fixed size, which makes memory requirements low and predictable. Component encoding and decoding are performed on-line without computationally expensive re-analysis of the data set. The range of tensor indices can be extended dynamically without modifying the component representation. This idea originates from a mathematical model of semantic memory and a method known as random indexing in natural language processing. We generalize the random-indexing algorithm to tensors and present signal-to-noise-ratio simulations for representations of vectors and matrices. We also present a mathematical analysis of the approximate orthogonality of high-dimensional ternary vectors, which is a property that underpins this and other similar random-coding approaches to dimension reduction. To further demonstrate the properties of random indexing, we present results of a synonym identification task. The method presented here has some similarities with random projection and Tucker decomposition, but it performs well only at high dimensionality (n > 10^3). Random indexing is useful for a range of complex practical problems, e.g., in natural language processing, data mining, pattern recognition, event detection, graph searching and search engines. Prototype software is provided. It supports encoding and decoding of tensors of order >= 1 in a unified framework, i.e., vectors, matrices and higher-order tensors.
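
    To make the encoding and decoding steps concrete, here is a minimal sketch of ordinary one-way random indexing, assuming sparse ternary index vectors and inner-product decoding; variable names and parameter values are illustrative and are not taken from the provided prototype software.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 2_000  # reduced dimension; the paper reports good behavior only for n > 10^3
    K = 10     # nonzero entries per ternary index vector (K/2 are +1, K/2 are -1)

    def index_vector():
        # Sparse ternary index vector; distinct vectors are approximately orthogonal.
        v = np.zeros(N, dtype=np.int64)
        pos = rng.choice(N, K, replace=False)
        v[pos[: K // 2]] = 1
        v[pos[K // 2:]] = -1
        return v

    index = {}
    def idx(word):
        # Index vectors are created on demand, so the vocabulary can grow on-line.
        return index.setdefault(word, index_vector())

    # Incremental encoding: a fixed-size context vector accumulates index vectors.
    context = np.zeros(N, dtype=np.int64)
    for word in ["sparse", "random", "coding"]:
        context += idx(word)

    # Approximate decoding by inner product: crosstalk from the other index
    # vectors is the noise term behind the signal-to-noise-ratio analysis.
    print(context @ idx("random") / K)  # ~1: the word was encoded
    print(context @ idx("tensor") / K)  # ~0: the word was not encoded
    ```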

    Simple principles of cognitive computation with distributed representations

    Brains and computers represent and process sensory information in different ways. Bridging that gap is essential for managing and exploiting the deluge of unprocessed and complex data in modern information systems. The development of brain-like computers that learn from experience and process information in a non-numeric cognitive way will open up new possibilities in the design and operation of both sensor and information communication systems.

    This thesis presents a set of simple computational principles with cognitive qualities, which can enable computers to learn interesting relationships in large amounts of data streaming from complex and changing real-world environments. More specifically, this work focuses on the construction of a computational model for analogical mapping and the development of a method for semantic analysis with high-dimensional arrays.

    A key function of cognitive systems is the ability to make analogies. A computational model of analogical mapping that learns to generalize from experience is presented in this thesis. This model is based on high-dimensional random distributed representations and a sparse distributed associative memory. The model has a one-shot learning process and an ability to recall distinct mappings. After learning a few similar mapping examples, the model generalizes and performs analogical mapping of novel inputs. As a major improvement over related models, the proposed model uses associative memory to learn multiple analogical mappings in a coherent way.

    Random Indexing (RI) is a brain-inspired dimension reduction method that was developed for natural language processing to identify semantic relationships in text. A generalized mathematical formulation of RI is presented, which enables N-way Random Indexing (NRI) of multidimensional arrays. NRI is an approximate, incremental, scalable, and lightweight dimension reduction method for large non-sparse arrays. In addition, it provides low and predictable storage requirements, and it enables the range of array indices to be extended without modification of the data representation. Numerical simulations of two-way and ordinary one-way RI are presented that illustrate when the approach is feasible. In conclusion, it is suggested that NRI can be used as a tool to manage and exploit Big Data, for instance in data mining, information retrieval, social network analysis, and other machine learning applications.
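
    The N-way generalization can be sketched for the two-way (matrix) case, as shown below. This is a plausible illustration in the spirit of NRI, assuming that each mode gets its own set of sparse ternary index vectors, that elements are encoded as outer products, and that decoding projects back with the same index vectors; the exact NRI formulation in the appended paper may differ in detail, and all names and parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, K = 1_000, 8  # per-mode reduced dimension and ternary sparsity (illustrative)

    def index_vector():
        v = np.zeros(N)
        pos = rng.choice(N, K, replace=False)
        v[pos[: K // 2]] = 1.0
        v[pos[K // 2:]] = -1.0
        return v

    rows, cols = {}, {}
    def r(i): return rows.setdefault(i, index_vector())
    def c(j): return cols.setdefault(j, index_vector())

    # Fixed-size representation B of a conceptually unbounded matrix A.
    B = np.zeros((N, N))

    # Incremental encoding: each element is added as an outer product of the
    # two mode index vectors. Note the second row index (10**6) far exceeds N;
    # the index range is extended without changing the representation.
    for i, j, a_ij in [(3, 7, 2.0), (10**6, 5, -1.5)]:
        B += a_ij * np.outer(r(i), c(j))

    # Approximate decoding: project back and normalize by the squared norm K^2.
    print(r(3) @ B @ c(7) / K**2)      # ~ 2.0
    print(r(10**6) @ B @ c(5) / K**2)  # ~ -1.5
    print(r(9) @ B @ c(9) / K**2)      # ~ 0.0 (element never stored)
    ```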

    Ubiquitous Cognitive Computing: A Vector Symbolic Approach

    A wide range of physical things are currently being integrated with the infrastructure of cyberspace in a process that is creating the so-called Internet of Things. It is expected that Internet-connected devices will vastly outnumber people on the planet in the near future. Such devices need to be easily deployed and integrated, otherwise the resulting systems will be too costly to configure and maintain. This is challenging to accomplish using conventional technology, especially when dealing with complex or heterogeneous systems consisting of diverse components that implement functionality and standards in different ways. In addition, artificial systems that interact with humans, the environment, and one another need to deal with complex and imprecise information, which is difficult to represent in a flexible and standardized manner using conventional methods.

    This thesis investigates the use of cognitive computing principles that offer new ways to represent information and to design such devices and systems. The core idea underpinning the work presented herein is that functioning systems can potentially emerge autonomously by learning from user interactions and the environment, provided that each component of the system conforms to a set of general information-coding and communication rules.

    The proposed learning approach uses vector-based representations of information, which are common in models of cognition and semantic spaces. Vector symbolic architectures (VSAs) are a class of biology-inspired models that represent and manipulate structured representations of information, which can be used to model high-level cognitive processes such as analogy-making. Analogy-making is a central element of cognition that enables animals to identify and manage new information by generalizing past experiences, possibly from a few learned examples.

    The work presented herein is based on a VSA and a binary associative memory model known as sparse distributed memory. The thesis outlines a learning architecture for the automated configuration and interoperation of devices operating in heterogeneous and ubiquitous environments. To this end, the sparse distributed memory model is extended with a VSA-based analogy-making mechanism that enables generalization from a few learned examples, thereby facilitating rapid learning. The thesis also presents a generalization of random indexing, which is an incremental and lightweight feature extraction method for streaming data that is commonly used to generate vector representations of semantic spaces.

    The impact of this thesis is twofold. First, the appended papers extend previous theoretical and empirical work on vector-based cognitive models, in particular for analogy-making and learning. Second, a new approach for designing the next generation of ubiquitous cognitive systems is outlined, which in principle can enable heterogeneous devices and systems to autonomously learn how to interoperate.

    Analogical mapping with sparse distributed memory: a simple model that learns to generalize from examples

    We present a computational model for the analogical mapping of compositional structures that combines two existing ideas known as holistic mapping vectors and sparse distributed memory. The model enables integration of structural and semantic constraints when learning mappings of the type x_i → y_i and computing analogies x_j → y_j for novel inputs x_j. The model has a one-shot learning process, is randomly initialized and has three exogenous parameters: the dimensionality D of representations, the memory size S and the probability χ for activation of the memory. After learning three examples the model generalizes correctly to novel examples. We find minima in the probability of generalization error for certain values of χ, S and the number of different mapping examples learned. These results indicate that the optimal size of the memory scales with the number of different mapping examples learned and that the sparseness of the memory is important. The optimal dimensionality of binary representations is of the order 10^4, which is consistent with a known analytical estimate and the synapse count for most cortical neurons. We demonstrate that the model can learn analogical mappings of generic two-place relationships and we calculate the error probabilities for recall and generalization.

    Vector space architecture for emergent interoperability of systems by learning from demonstration

    The rapid integration of physical systems with cyberspace infrastructure, the so-called Internet of Things, is likely to have a significant effect on how people interact with the physical environment and design information and communication systems. Internet-connected systems are expected to vastly outnumber people on the planet in the near future, leading to grand challenges in software engineering and automation in application domains involving complex and evolving systems. Several decades of artificial intelligence research suggest that conventional approaches to making such systems interoperable using handcrafted "semantic" descriptions of services and information are difficult to apply. In this paper we outline a bioinspired learning approach to creating interoperable systems, which does not require handcrafted semantic descriptions and rules. Instead, the idea is that a functioning system (of systems) can emerge from an initial pseudorandom state through learning from examples, provided that each component conforms to a set of information coding rules. We combine a binary vector symbolic architecture (VSA) with an associative memory known as sparse distributed memory (SDM) to model context-dependent prediction by learning from examples. We present simulation results demonstrating that the proposed architecture can enable system interoperability by learning, for example by human demonstration.
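
    For concreteness, here is a minimal sparse distributed memory sketch, assuming Kanerva-style hard locations with Hamming-radius activation and counter-based storage; the parameter values are toy-scale and illustrative, and the VSA layer of the proposed architecture is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    D, S = 256, 1_000  # word length and memory size (toy scale, far below the papers')
    RADIUS = 111       # Hamming activation radius; a smaller radius means sparser activation

    addresses = rng.integers(0, 2, (S, D), dtype=np.uint8)  # fixed random hard locations
    counters = np.zeros((S, D), dtype=np.int64)

    def active(addr):
        # Boolean mask of hard locations within RADIUS of the query address.
        return np.sum(addresses != addr, axis=1) <= RADIUS

    def write(addr, word):
        # Add the word, recoded as +1/-1, to the counters of every active location.
        counters[active(addr)] += 2 * word.astype(np.int64) - 1

    def read(addr):
        # Sum counters over active locations and threshold at zero.
        return (counters[active(addr)].sum(axis=0) > 0).astype(np.uint8)

    # Store one address -> word association, then recall it from a noisy address.
    a = rng.integers(0, 2, D, dtype=np.uint8)
    w = rng.integers(0, 2, D, dtype=np.uint8)
    write(a, w)

    noisy = a.copy()
    noisy[rng.choice(D, 10, replace=False)] ^= 1  # flip 10 of the 256 address bits
    print("bit errors:", np.mean(read(noisy) != w))  # ~0: recall is noise tolerant
    ```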

    Analogical mapping and inference with binary spatter codes and sparse distributed memory

    Analogy-making is a key function of human cognition. Therefore, the development of computational models of analogy that automatically learn from examples can lead to significant advances in cognitive systems. Analogies require complex, relational representations of learned structures, which is challenging for both symbolic and neurally inspired models. Vector symbolic architectures (VSAs) are a class of connectionist models for the representation and manipulation of compositional structures, which can be used to model analogy. We study a novel VSA network for the analogical mapping of compositional structures, which integrates an associative memory known as sparse distributed memory (SDM). The SDM enables non-commutative binding of compositional structures, which makes it possible to predict novel patterns in sequences. To demonstrate this property we apply the network to a commonly used intelligence test called Raven’s Progressive Matrices. We present results of simulation experiments for the Raven’s task and calculate the probability of prediction error at the 95% confidence level. We find that non-commutative binding requires sparse activation of the SDM and that 10–20% concept-specific activation of neurons is optimal. The optimal dimensionality of the binary distributed representations of the VSA is of the order 10^4, which is comparable with earlier results and the average synapse count of neurons in the cerebral cortex.
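
    The paper obtains non-commutative binding through the SDM itself. As a point of contrast, the sketch below shows a standard device from the VSA literature for order-sensitive binding: permuting one operand before XOR. This is a swapped-in alternative, not the paper's SDM mechanism, and all names and parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    D = 10_000
    perm = rng.permutation(D)  # one fixed random permutation encodes "comes first"

    def bind_seq(a, b):
        # Order-sensitive binding: permute the first operand, then XOR.
        # Unlike plain XOR binding, bind_seq(a, b) != bind_seq(b, a).
        return a[perm] ^ b

    a = rng.integers(0, 2, D, dtype=np.uint8)
    b = rng.integers(0, 2, D, dtype=np.uint8)

    ab, ba = bind_seq(a, b), bind_seq(b, a)
    print(np.mean(ab != ba))  # ~0.5: the two orders give unrelated codes

    # XOR stays invertible, so the successor is recoverable from the pair code:
    print(bool(np.array_equal(ab ^ a[perm], b)))  # True
    ```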

    What is an Open IoT Platform? Insights from a Systematic Mapping Study

    Today, the Internet of Things (IoT) is mainly associated with vertically integrated systems that are often closed and fragmented in their applicability. To build a better IoT ecosystem, "open IoT platform" has become a popular term in recent years. However, the term is usually used intuitively, without clarifying the openness aspects of the platforms. The goal of this paper is to characterize the openness types of IoT platforms and to investigate what makes them open. We conducted a systematic mapping study by retrieving data from 718 papers; after applying the inclusion and exclusion criteria, 221 papers were selected for review. We discovered 46 IoT platforms that have been characterized as open, whereas 25 platforms are referred to as open by some studies rather than by the platforms themselves. We found that the most widely accepted and used open IoT platforms are NodeMCU and ThingSpeak, which together hold a share of more than 70% of the declared open IoT platforms in the selected papers. The openness of an IoT platform is interpreted in terms of different openness types. Our results show that the most common openness type encountered in open IoT platforms is open source, but open standards, open APIs, open data, and open layers also appear in the literature. Finally, we propose a new perspective on how to define openness in the context of IoT platforms by providing several insights from different stakeholder viewpoints.