
    Emergent intentionality in perception-action subsumption hierarchies

    A cognitively-autonomous artificial agent may be defined as one able to modify both its external world-model and the framework by which it represents the world, requiring two simultaneous optimization objectives. This presents deep epistemological issues centered on the question of how a framework for representation (as opposed to the entities it represents) may be objectively validated. In this summary paper, formalizing previous work in this field, it is argued that subsumptive perception-action learning has the capacity to resolve these issues by (a) building the perceptual hierarchy from the bottom up so as to ground all proposed representations and (b) maintaining a bijective coupling between proposed percepts and projected action possibilities to ensure empirical falsifiability of these grounded representations. In doing so, we will show that such subsumptive perception-action learners intrinsically incorporate a model for how intentionality emerges from randomized exploratory activity in the form of 'motor babbling'. Moreover, such a model of intentionality also naturally translates into a model for human-computer interfacing that makes minimal assumptions as to cognitive states.

    Revisiting direct neuralisation of first-order logic

    There is a long history of direct translation of propositional Horn-clause logic programs into neural networks. The possibility of translating first-order logical syntax in the same way has been largely overlooked, perhaps due to a "propositional fixation"! We briefly revisit the possibility and advantages of translating existentially and universally quantified clauses into a neural form that follows the first-order syntax in a natural way.
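The propositional case the abstract builds on can be made concrete with a toy sketch (the encoding below is a common textbook construction, not necessarily the one used in the paper): each Horn clause becomes a threshold ("AND") unit over its body atoms, each atom acts as an "OR" unit over the clauses that derive it, and iterating the network to a fixpoint computes the program's least model.

```python
# Minimal sketch of the classical propositional Horn-clause-to-network
# translation. Clause and atom names are illustrative only.

def step(clauses, truth):
    """One forward pass: each clause (head, body) is an AND unit with
    threshold len(body); the head atom is an OR unit over its clauses."""
    new = dict(truth)
    for head, body in clauses:
        if sum(truth[b] for b in body) >= len(body):  # AND unit fires
            new[head] = 1                             # OR unit fires
    return new

def fixpoint(clauses, atoms):
    """Iterate the network until the truth assignment stabilizes."""
    truth = {a: 0 for a in atoms}  # facts are clauses with empty bodies
    while True:
        nxt = step(clauses, truth)
        if nxt == truth:
            return truth
        truth = nxt

# Example program:  p :- q, r.   q.   r.
clauses = [("p", ["q", "r"]), ("q", []), ("r", [])]
print(fixpoint(clauses, ["p", "q", "r"]))  # all three atoms derived true
```

The fixpoint iteration mirrors the immediate-consequence operator of the logic program, which is what makes the translation semantics-preserving in the propositional case.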

    A comprehensive classification of deep learning libraries

    Deep Learning (DL) networks are composed of multiple processing layers that learn data representations with multiple levels of abstraction. In recent years, DL networks have significantly improved the state-of-the-art across different domains, including speech processing, text mining, pattern recognition, object detection, robotics and big data analytics. Generally, a researcher or practitioner who is planning to use DL networks for the first time faces difficulties in selecting suitable software tools. The present article provides a comprehensive list and taxonomy of current programming languages and software tools that can be utilized for the implementation of DL networks. The motivation of this article is hence to create awareness among researchers, especially beginners, regarding the various languages and interfaces that are available to implement deep learning, and to provide a simplified ontological basis for selecting between them.

    A kernel-based framework for medical big-data analytics

    The recent trend towards standardization of Electronic Health Records (EHRs) represents a significant opportunity and challenge for medical big-data analytics. The challenge typically arises from the nature of the data, which may be heterogeneous, sparse, very high-dimensional, incomplete and inaccurate. Of these, standard pattern recognition methods can typically address issues of high-dimensionality, sparsity and inaccuracy. The remaining issues of incompleteness and heterogeneity, however, are problematic; data can be as diverse as handwritten notes, blood-pressure readings and MR scans, and typically very little of this data will be co-present for each patient at any given time interval. We therefore advocate a kernel-based framework as being most appropriate for handling these issues, using the neutral point substitution method to accommodate missing inter-modal data. For pre-processing of image-based MR data we advocate a Deep Learning solution for contextual areal segmentation, with edit-distance based kernel measurement then used to characterize relevant morphology.

    A generative adversarial network for single and multi-hop distributional knowledge base completion

    Knowledge bases (KBs) inherently lack reasoning ability, limiting their effectiveness for tasks such as question-answering and query expansion. Machine learning is hence commonly employed for representation learning in order to learn semantic features useful for generalization. Most existing methods utilize discriminative models that require both positive and negative samples to learn a decision boundary. KBs, by contrast, contain only positive samples, necessitating that negative samples are generated by replacing the head/tail of predicates with randomly-chosen entities. They are thus frequently easily discriminable from positive samples, which can prevent learning of sufficiently robust classifiers. Generative models, however, do not require negative samples to learn the distribution of positive samples; stimulated by recent developments in Generative Adversarial Networks (GANs), we propose a novel framework, Knowledge Completion GANs (KCGANs), for competitively training generative link prediction models against discriminative belief prediction models. KCGAN thus invokes a game between generator-network G and discriminator-network D in which G aims to understand underlying KB structure by learning to perform link prediction while D tries to gain knowledge about the KB by learning predicate/triplet classification. Two key challenges are addressed: 1) classical GAN architectures' inability to easily generate samples over discrete entities; 2) the inefficiency of softmax for learning distributions over large sets of entities. As a step toward full first-order logical reasoning we further extend KCGAN to learn multi-hop logical entailment relations between entities by enabling G to compose a multi-hop relational path between entities and D to discriminate between real and fake paths. KCGAN is tested on the benchmark WordNet and FreeBase datasets and evaluated on link prediction and belief prediction tasks using MRR and HIT@10, achieving best-in-class performance.
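The negative-sampling scheme the abstract critiques can be sketched in a few lines (entity and relation names below are illustrative, not drawn from WordNet or FreeBase): a true triple is corrupted by replacing its head or tail with a random entity, and the result is rejected if it happens to be a known positive. Because the random entity is usually semantically unrelated, such negatives are often trivially discriminable, which is the weakness KCGAN's generative approach is designed to avoid.

```python
import random

# Sketch of head/tail corruption for KB negative sampling.
# Entity/relation names are hypothetical examples.

def corrupt(triple, entities, positives, rng=random):
    """Return a negative triple by corrupting the head or tail."""
    head, rel, tail = triple
    while True:
        e = rng.choice(entities)
        # Corrupt head or tail with equal probability.
        neg = (e, rel, tail) if rng.random() < 0.5 else (head, rel, e)
        if neg not in positives:  # reject accidental positives
            return neg

entities = ["paris", "france", "berlin", "germany"]
positives = {("paris", "capital_of", "france"),
             ("berlin", "capital_of", "germany")}
neg = corrupt(("paris", "capital_of", "france"), entities, positives)
assert neg not in positives
```

Note that the rejection loop only checks the known positives; corrupted triples that are true in the world but absent from the KB will still be labeled negative, another source of noise for discriminative training.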

    Quantum Bootstrap Aggregation

    We set out a strategy for quantizing attribute bootstrap aggregation to enable variance-resilient quantum machine learning. To do so, we utilise the linear decomposability of decision boundary parameters in the Rebentrost et al. Support Vector Machine to guarantee that stochastic measurement of the output quantum state will give rise to an ensemble decision without destroying the superposition over projective feature subsets induced within the chosen SVM implementation. We achieve a linear performance advantage, O(d), in addition to the existing O(log(n)) advantages of quantization as applied to Support Vector Machines. The approach extends to any form of quantum learning giving rise to linear decision boundaries.

    Kernel combination via debiased object correspondence analysis

    This paper addresses the problem of combining multi-modal kernels in situations in which object correspondence information is unavailable between modalities, for instance, where missing feature values exist, or when using proprietary databases in multi-modal biometrics. The method thus seeks to recover inter-modality kernel information so as to enable classifiers to be built within a composite embedding space. This is achieved through a principled group-wise identification of objects within differing modal kernel matrices in order to form a composite kernel matrix that retains the full freedom of linear kernel combination existing in multiple kernel learning. The underlying principle is derived from the notion of tomographic reconstruction, which has been applied successfully in conventional pattern recognition. In setting out this method, we aim to improve upon object-correspondence insensitive methods, such as kernel matrix combination via the Cartesian product of object sets, to which the method defaults in the case of no discovered pairwise object identifications. We benchmark the method against the augmented kernel method, an order-insensitive approach derived from the direct sum of constituent kernel matrices, and also against straightforward additive kernel combination where the correspondence information is given a priori. We find that the proposed method gives rise to substantial performance improvements.
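The two baselines named in the abstract can be illustrated on synthetic data (the toy matrices below are ours, not the paper's benchmarks): additive combination sums per-modality kernel matrices and therefore requires object correspondence, while the augmented kernel takes the direct sum of the constituent matrices, treating objects from different modalities as distinct.

```python
import numpy as np

# Two toy modal kernel matrices over the same five objects (synthetic).
rng = np.random.default_rng(0)
X1 = rng.standard_normal((5, 3))   # modality 1 features
X2 = rng.standard_normal((5, 4))   # modality 2 features
K1, K2 = X1 @ X1.T, X2 @ X2.T      # linear kernel per modality

# Additive combination: requires knowing which rows correspond.
K_add = 0.5 * K1 + 0.5 * K2

# Augmented kernel: direct sum, order-insensitive, no correspondence needed.
K_aug = np.block([[K1, np.zeros((5, 5))],
                  [np.zeros((5, 5)), K2]])

# Both remain valid (positive semi-definite) kernels.
assert np.all(np.linalg.eigvalsh(K_add) > -1e-9)
assert np.all(np.linalg.eigvalsh(K_aug) > -1e-9)
```

The direct-sum form doubles the effective object set, which is precisely the Cartesian-product-style default the proposed method falls back to when no pairwise identifications are discovered.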

    Representational fluidity in embodied (artificial) cognition

    Theories of embodied cognition agree that the body plays some role in human cognition, but disagree on the precise nature of this role. While it is (together with the environment) fundamentally ingrained in the so-called 4E (or multi-E) cognition stance, there also exist interpretations wherein the body is merely an input/output interface for cognitive processes that are entirely computational. In the present paper, we show that even if one takes such a strong computationalist position, the role of the body must be more than an interface to the world. To achieve human cognition, the computational mechanisms of a cognitive agent must be capable not only of appropriate reasoning over a given set of symbolic representations; they must in addition be capable of updating the representational framework itself (leading to the titular representational fluidity). We demonstrate this by considering the necessary properties that an artificial agent with these abilities needs to possess. The core of the argument is that these updates must be falsifiable in the Popperian sense while simultaneously directing representational shifts in a direction that benefits the agent. We show that this is achieved by the progressive, bottom-up symbolic abstraction of low-level sensorimotor connections followed by top-down instantiation of testable perception-action hypotheses. We then discuss the fundamental limits of this representational updating capacity, concluding that only fully embodied learners exhibiting such a priori perception-action linkages are able to sufficiently ground spontaneously-generated symbolic representations and exhibit the full range of human cognitive capabilities. The present paper therefore has consequences both for the theoretical understanding of human cognition, and for the design of autonomous artificial agents.

    Edit-distance kernelization of NP theorem proving for polynomial-time machine learning of proof heuristics

    We outline a general strategy for the application of edit-distance based kernels to NP theorem proving in order to allow for polynomial-time machine learning of proof heuristics without the loss of sequential structural information associated with conventional feature-based machine learning. We also provide a short general introduction to logic and proof, considering a few important complexity results to set the scene and highlight the relevance of our findings.
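The basic building block here can be sketched concretely (the proof-trace strings below are toy stand-ins for whatever sequence encoding the authors actually use): compute the Levenshtein edit distance between two symbol sequences, then turn it into a similarity via a decaying map such as k(x, y) = exp(-γ·d(x, y)). Note that this family of edit-distance kernels is not guaranteed positive semi-definite in general, which is a standard caveat when plugging them into kernel machines.

```python
import math
from functools import lru_cache

# Sketch: edit-distance similarity between sequences (toy encoding).

def edit_distance(x, y):
    """Levenshtein distance via memoized recursion."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0:
            return j
        if j == 0:
            return i
        return min(d(i - 1, j) + 1,                      # deletion
                   d(i, j - 1) + 1,                      # insertion
                   d(i - 1, j - 1) + (x[i-1] != y[j-1])) # substitution
    return d(len(x), len(y))

def edit_kernel(x, y, gamma=0.5):
    """Similarity in (0, 1]; identical sequences map to 1."""
    return math.exp(-gamma * edit_distance(x, y))

print(edit_distance("kitten", "sitting"))  # 3
```

Because the distance is computed directly on the sequence, the ordering of proof steps contributes to the similarity, which is the structural information lost under flat feature-based encodings.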

    On the utility of dreaming: a general model for how learning in artificial agents can benefit from data hallucination

    We consider the benefits of dream mechanisms – that is, the ability to simulate new experiences based on past ones – in a machine learning context. Specifically, we are interested in learning for artificial agents that act in the world, and operationalize "dreaming" as a mechanism by which such an agent can use its own model of the learning environment to generate new hypotheses and training data. We first show that it is not necessarily a given that such a data-hallucination process is useful, since it can easily lead to a training set dominated by spurious imagined data until an ill-defined convergence point is reached. We then analyse a notably successful implementation of a machine learning-based dreaming mechanism by Ha and Schmidhuber (Ha, D., & Schmidhuber, J. (2018). World models. arXiv e-prints, arXiv:1803.10122). On that basis, we then develop a general framework by which an agent can generate simulated data to learn from in a manner that is beneficial to the agent. This, we argue, then forms a general method for an operationalized dream-like mechanism. We finish by demonstrating the general conditions under which such mechanisms can be useful in machine learning, wherein the implicit simulator inference and extrapolation involved in dreaming act without reinforcing inference error even when inference is incomplete.
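The failure mode the abstract warns about – imagined data swamping the training set – suggests one simple safeguard that can be sketched as follows (all function names here are hypothetical, not the paper's framework): fit the agent on real episodes plus model-generated "dreams", but cap the dreamed fraction so hallucinated data cannot dominate.

```python
import random

# Minimal sketch of a bounded dream-style training loop (names hypothetical).

def train(agent_update, world_model, real_episodes, dream_ratio=0.5,
          rng=random):
    """Update the agent on real episodes plus a capped number of
    model-generated ('dreamed') episodes; returns the dream count."""
    n_dreams = int(dream_ratio * len(real_episodes))  # cap on imagined data
    dreams = [world_model(rng.choice(real_episodes))
              for _ in range(n_dreams)]
    for episode in real_episodes + dreams:
        agent_update(episode)
    return len(dreams)

# Toy usage: a "world model" that perturbs a past reward trace.
episodes = [[1, 0, 1], [0, 0, 1]]
model = lambda ep: [r ^ 1 for r in ep]  # toy: flip each reward bit
seen = []
train(seen.append, model, episodes)
assert len(seen) == 3  # 2 real + 1 dreamed
```

The cap is the crude stand-in for the paper's more principled condition; the point of the sketch is only that some explicit bound on the real-to-imagined ratio is needed to keep the convergence point well-defined.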