
    Computational Capabilities of Analog and Evolving Neural Networks over Infinite Input Streams

    Analog and evolving recurrent neural networks are super-Turing powerful. Here, we consider analog and evolving neural nets over infinite input streams. We then characterize the topological complexity of their ω-languages as a function of the specific analog or evolving weights that they employ. As a consequence, two infinite hierarchies of classes of analog and evolving neural networks, based on the complexity of their underlying weights, can be derived. These results constitute an optimal refinement of the super-Turing expressive power of analog and evolving neural networks. They show that analog and evolving neural nets represent natural models for oracle-based infinite computation.
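
    A toy simulation can make the setting concrete. The sketch below is my own illustration, not the paper's construction: it iterates a recurrent network with real-valued ("analog") weights over an infinite 0/1 input stream, the kind of ω-word computation whose topological complexity the paper classifies. The weight values and the saturated-linear activation are assumptions, and ordinary floating-point weights only approximate genuinely analog ones.

```python
# Minimal sketch (my own illustration, not the paper's construction): a
# recurrent network with real-valued ("analog") weights consuming an infinite
# 0/1 input stream; an evolving network would instead change W at each step.
import numpy as np

def saturated_linear(x):
    # sigma(x) = 0 for x < 0, x for 0 <= x <= 1, 1 for x > 1
    return np.clip(x, 0.0, 1.0)

def run_on_stream(W, w_in, stream, steps=20):
    """Iterate x_{t+1} = sigma(W x_t + w_in * u_t) along the input stream."""
    x = np.zeros(W.shape[0])
    history = []
    for t, u in enumerate(stream):
        if t >= steps:                    # truncate the infinite run for printing
            break
        x = saturated_linear(W @ x + w_in * u)
        history.append(x.copy())
    return history

def alternating_bits():
    # a simple infinite input stream: 0, 1, 0, 1, ...
    bit = 0
    while True:
        yield bit
        bit ^= 1

rng = np.random.default_rng(0)
W = rng.normal(scale=0.3, size=(4, 4))    # hypothetical analog weight matrix
w_in = rng.normal(scale=0.3, size=4)
print(run_on_stream(W, w_in, alternating_bits())[-1])
```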

    Building a Neural Computer

    In the work of [Siegelmann 95] it was shown that Artificial Recursive Neural Networks have the same computing power as Turing machines. A Turing machine can be programmed in a proper high-level language - the language of partial recursive functions. In this paper we present the implementation of a compiler that directly translates high-level Turing machine programs into Artificial Recursive Neural Networks. The application contains a simulator that can be used to test the resulting networks. We also argue that experiments such as this compiler may offer clues about procedures for the automatic synthesis of Artificial Recursive Neural Networks from high-level descriptions.
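
    The flavour of such a compilation can be pictured with the stack-in-a-neuron encoding from [Siegelmann 95]: a binary stack is stored as a single rational number in [0, 1] and manipulated by affine maps composed with a saturated-linear activation. The sketch below is a minimal hand-written illustration of that encoding, not output of the compiler described in the paper.

```python
# Minimal sketch of the stack-in-a-neuron idea (an assumption-laden
# illustration, not the paper's compiler): a binary stack b1 b2 ... is encoded
# as q = sum_i (2*b_i + 1) / 4**i, and push/pop/top become affine maps
# composed with a saturated-linear activation.

def sigma(x):
    # saturated-linear activation: clamp to [0, 1]
    return max(0.0, min(1.0, x))

def push(q, bit):
    return q / 4.0 + (2 * bit + 1) / 4.0

def top(q):
    # 1 if the top digit encodes bit 1, else 0 (for nonempty stacks)
    return int(sigma(4 * q - 2))

def nonempty(q):
    return int(sigma(4 * q))

def pop(q):
    return 4 * q - (2 * top(q) + 1)

q = 0.0
for b in [1, 0, 1]:          # push 1, then 0, then 1
    q = push(q, b)
print(top(q), nonempty(q))   # -> 1 1
q = pop(q)
print(top(q))                # -> 0
```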

    The Machine as Data: A Computational View of Emergence and Definability

    Turing’s (Proceedings of the London Mathematical Society 42:230–265, 1936) paper on computable numbers has played its role in underpinning different perspectives on the world of information. On the one hand, it encourages a digital ontology, with a perceived flatness of computational structure comprehensively hosting causality at the physical level and beyond. On the other (the main point of Turing’s paper), it can give an insight into the way in which higher-order information arises and leads to loss of computational control—while demonstrating how control can be re-established, in special circumstances, via suitable type reductions. We examine the classical computational framework more closely than is usual, drawing out lessons for the wider application of information-theoretic approaches to characterizing the real world. The problem that arises across a range of contexts is characterizing the balance of power between the complexity of informational structure (with emergence, chaos, randomness and ‘big data’ prominently on the scene) and the means available (simulation, codes, statistical sampling, human intuition, semantic constructs) to bring this information back into the computational fold. We proceed via appropriate mathematical modelling to a more coherent view of the computational structure of information, relevant to a wide spectrum of areas of investigation.

    Spiking Neural P Systems: Stronger Normal Forms

    Spiking neural P systems are computing devices recently introduced as a bridge between spiking neural nets and membrane computing. Thanks to rapid research in this field, there already exists a series of both theoretical and application studies. In this paper we focus on normal forms of these systems that preserve their computational power. We study combinations of existing normal forms, showing that certain groups of them can be combined without loss of computational power, thus partially answering open problems stated in the literature. We also extend some of the already known normal forms for spiking neural P systems by considering determinism and the strong acceptance condition. Normal forms can speed up development and simplify future proofs in this area.
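
    For readers unfamiliar with the model, the sketch below simulates one ingredient of it: a single synchronized step of a small, deterministic spiking neural P system. The rule format and the three-neuron example are my own simplification (delays, forgetting rules and general regular expressions are omitted); it is not one of the paper's normal forms.

```python
# Toy sketch (my own simplification, not a normal form from the paper): one
# synchronized step of a deterministic spiking neural P system. Each neuron
# holds a number of spikes; a rule (required, consumed, produced) fires when
# the neuron holds exactly `required` spikes, consumes `consumed` of them, and
# sends `produced` spikes along every outgoing synapse.

def step(spikes, rules, synapses):
    consumed = {}
    emitted = {}
    for neuron, count in spikes.items():
        for required, used, produced in rules.get(neuron, []):
            if count == required:   # regular-expression condition reduced to equality
                consumed[neuron] = used
                emitted[neuron] = produced
                break               # determinism: at most one applicable rule
    new_spikes = dict(spikes)
    for neuron, used in consumed.items():
        new_spikes[neuron] -= used
    for neuron, produced in emitted.items():
        for target in synapses.get(neuron, []):
            new_spikes[target] = new_spikes.get(target, 0) + produced
    return new_spikes

# Hypothetical 3-neuron system: n1 and n2 feed the output neuron 'out'.
spikes = {"n1": 2, "n2": 1, "out": 0}
rules = {"n1": [(2, 2, 1)], "n2": [(1, 1, 1)], "out": [(2, 2, 1)]}
synapses = {"n1": ["out"], "n2": ["out"], "out": []}
for _ in range(2):
    spikes = step(spikes, rules, synapses)
print(spikes)
```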

    On the Bounds of Function Approximations

    Within machine learning, the subfield of Neural Architecture Search (NAS) has recently garnered research attention due to its ability to improve upon human-designed models. However, the computational requirements for finding an exact solution to this problem are often intractable, and the design of the search space still requires manual intervention. In this paper we attempt to establish a formalized framework from which we can better understand the computational bounds of NAS in relation to its search space. For this, we first reformulate the function approximation problem in terms of sequences of functions, calling it the Function Approximation (FA) problem; we then show that it is computationally infeasible to devise a procedure that solves FA for all functions to zero error, regardless of the search space. We also show that this error will be minimal if a specific class of functions is present in the search space. Subsequently, we show that machine learning as a mathematical problem is a solution strategy for FA, albeit not an effective one, and we describe a stronger version of this approach: the Approximate Architectural Search Problem (a-ASP), which is the mathematical equivalent of NAS. We leverage the framework from this paper and results from the literature to describe the conditions under which a-ASP can potentially solve FA as well as an exhaustive search, but in polynomial time.
    Comment: Accepted as a full paper at ICANN 2019. The final, authenticated publication will be available at https://doi.org/10.1007/978-3-030-30487-4_3
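
    The relation between a-ASP and exhaustive search can be pictured with a toy loop: enumerate a small space of candidate architectures, fit each one to a target function, and keep the candidate with the lowest approximation error. Everything in the sketch below (the search space, the tiny tanh MLP, the training loop) is an assumed illustration, not the paper's formalism.

```python
# Illustrative sketch only (the search space, the tiny tanh MLP and its
# training loop are my assumptions, not the paper's formalism): exhaustive
# search over a small set of candidate architectures, keeping the one that
# best approximates a target function.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * x)                               # target function to approximate

def train_mse(widths, steps=500, lr=0.05):
    """Train a small tanh MLP with plain gradient descent; return its final MSE."""
    sizes = [1, *widths, 1]
    params = [(rng.normal(scale=0.5, size=(a, b)), np.zeros(b))
              for a, b in zip(sizes[:-1], sizes[1:])]
    for _ in range(steps):
        # forward pass (tanh hidden layers, linear output)
        acts = [x]
        for i, (W, b) in enumerate(params):
            z = acts[-1] @ W + b
            acts.append(np.tanh(z) if i < len(params) - 1 else z)
        err = acts[-1] - y
        # backward pass and gradient-descent update
        grad = 2.0 * err / len(x)
        for i in reversed(range(len(params))):
            W, b = params[i]
            gW, gb = acts[i].T @ grad, grad.sum(axis=0)
            if i > 0:
                grad = (grad @ W.T) * (1.0 - acts[i] ** 2)
            params[i] = (W - lr * gW, b - lr * gb)
    return float(np.mean(err ** 2))

# "Architectures" here are just tuples of hidden-layer widths.
search_space = [(2,), (4,), (8,)] + list(product((2, 4), repeat=2))
best = min(search_space, key=train_mse)
print("best architecture found:", best)
```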