
    Mathematical Foundations of Consciousness

    We employ the Zermelo-Fraenkel Axioms, which characterize sets as mathematical primitives. The Anti-foundation Axiom plays a significant role in our development since, among its other features, its replacement of the Axiom of Foundation in the Zermelo-Fraenkel Axioms motivates Platonic interpretations. These interpretations also depend on allied notions for sets such as pictures, graphs, decorations, labelings and the various mappings we use. A syntax and semantics of operators acting on sets is developed. These features enable the construction of a theory of non-well-founded sets that we use to frame mathematical foundations of consciousness. To do this we introduce a supplementary axiomatic system that characterizes experience and consciousness as primitives. The new axioms proceed through the characterization of so-called consciousness operators. The Russell operator plays a central role and is shown to be one example of a consciousness operator. Neural networks supply striking examples of non-well-founded graphs whose decorations generate associated sets, each with a Platonic aspect. Employing our foundations, we show how the supervening of consciousness on its neural correlates in the brain enables the framing of a theory of consciousness by applying appropriate consciousness operators to the generated sets in question.
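
    For orientation, a minimal sketch of the Russell operator as it is standardly defined on sets (the operator the abstract singles out as one example of a consciousness operator); the notation is our own assumption and may differ from the paper's conventions:

        % Russell operator: from a set a, select the members that are not
        % members of themselves (standard formulation; notation assumed).
        \[
          \mathcal{R}(a) \;=\; \{\, x \in a : x \notin x \,\}.
        \]
        % Under the Axiom of Foundation no set satisfies x \in x, so
        % \mathcal{R}(a) = a for every set a; the Anti-foundation Axiom
        % admits non-well-founded sets such as \Omega = \{\Omega\}, on
        % which the operator acts non-trivially (\mathcal{R}(\Omega) = \emptyset).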

    Geometric deep learning and equivariant neural networks

    We survey the mathematical foundations of geometric deep learning, focusing on group equivariant and gauge equivariant neural networks. We develop gauge equivariant convolutional neural networks on arbitrary manifolds M using principal bundles with structure group K and equivariant maps between sections of associated vector bundles. We also discuss group equivariant neural networks for homogeneous spaces M = G/K, which are instead equivariant with respect to the global symmetry G on M. Group equivariant layers can be interpreted as intertwiners between induced representations of G, and we show their relation to gauge equivariant convolutional layers. We analyze several applications of this formalism, including semantic segmentation and object detection networks. We also discuss the case of spherical networks in great detail, corresponding to the case M = S² = SO(3)/SO(2). Here we emphasize the use of Fourier analysis involving Wigner matrices, spherical harmonics and Clebsch–Gordan coefficients for G = SO(3), illustrating the power of representation theory for deep learning.
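
    As a pointer to the structure surveyed here, a generic textbook form of the group convolution underlying group equivariant layers (our own summary under standard conventions, not necessarily the paper's exact formulation):

        % Group convolution of a signal f on G with a filter \psi, and its
        % equivariance under left translation L_u (standard definitions).
        \[
          (f \star \psi)(g) = \int_{G} f(h)\,\psi(h^{-1}g)\,\mathrm{d}h,
          \qquad
          (L_u f) \star \psi = L_u (f \star \psi),
          \quad
          (L_u f)(g) := f(u^{-1}g).
        \]
        % For M = S^2 with G = SO(3), this integral is typically evaluated
        % in the Fourier domain, where it becomes block-wise matrix
        % multiplication of Wigner-D coefficients.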

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.

    Models of Cognition: Neurological possibility does not indicate neurological plausibility

    Many activities in Cognitive Science involve complex computer models and simulations of both theoretical and real entities. Artificial Intelligence, and the study of artificial neural nets in particular, are seen as major contributors in the quest for understanding the human mind. Computational models serve as objects of experimentation, and results from these virtual experiments are tacitly included in the framework of empirical science. Cognitive functions, like learning to speak or discovering syntactic structures in language, have been modeled, and these models are the basis for many claims about human cognitive capacities. Artificial neural nets (ANNs) have had some successes in the field of Artificial Intelligence, but the results from experiments with simple ANNs may have little value in explaining cognitive functions. The problem seems to lie in relating cognitive concepts that belong to the 'top-down' approach to models grounded in the 'bottom-up' connectionist methodology. Merging the two fundamentally different paradigms within a single model can obfuscate what is really being modeled. When the tools (simple artificial neural networks) are mismatched with the problems (explaining aspects of higher cognitive functions), models with little value in terms of explaining functions of the human mind are produced. The ability to learn functions from data points makes ANNs very attractive analytical tools. These tools can be developed into valuable models if the data are adequate and a meaningful interpretation of the data is possible. The problem is that, with appropriate data and labels that fit the desired level of description, almost any function can be modeled. It is my argument that small networks offer a universal framework for modeling any conceivable cognitive theory, so that neurological possibility can be demonstrated easily with relatively simple models. However, a model demonstrating the possibility of implementing a cognitive function using a distributed methodology does not necessarily add support to any claim or assumption that the cognitive function in question is neurologically plausible.
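
    To make the "almost any function can be modeled" point concrete, here is a minimal sketch (not from the paper; all names and hyperparameters are illustrative assumptions) of a tiny one-hidden-layer network fit by gradient descent to a handful of arbitrarily labeled points:

        # Minimal sketch (illustrative, not from the paper): a small MLP fit to
        # arbitrary labels, showing how easily a simple network interpolates data.
        import numpy as np

        rng = np.random.default_rng(0)

        # Arbitrary data: 8 one-dimensional inputs with random labels.
        X = np.linspace(-1.0, 1.0, 8).reshape(-1, 1)
        y = rng.uniform(-1.0, 1.0, size=(8, 1))          # any labels will do

        # One hidden layer of tanh units; enough capacity to interpolate 8 points.
        W1 = rng.normal(scale=1.0, size=(1, 32))
        b1 = np.zeros((1, 32))
        W2 = rng.normal(scale=1.0, size=(32, 1))
        b2 = np.zeros((1, 1))

        lr = 0.1
        for step in range(5000):
            # Forward pass.
            h = np.tanh(X @ W1 + b1)                      # (8, 32)
            pred = h @ W2 + b2                            # (8, 1)
            err = pred - y

            # Backward pass (mean squared error).
            grad_pred = 2.0 * err / len(X)
            grad_W2 = h.T @ grad_pred
            grad_b2 = grad_pred.sum(axis=0, keepdims=True)
            grad_h = grad_pred @ W2.T
            grad_pre = grad_h * (1.0 - h ** 2)            # derivative of tanh
            grad_W1 = X.T @ grad_pre
            grad_b1 = grad_pre.sum(axis=0, keepdims=True)

            # Gradient descent update.
            W1 -= lr * grad_W1; b1 -= lr * grad_b1
            W2 -= lr * grad_W2; b2 -= lr * grad_b2

        print("final MSE:", float(np.mean(err ** 2)))     # typically close to zero

    With a few dozen hidden units the network interpolates the eight arbitrary labels almost exactly, which illustrates why such a fit, by itself, says little about the neurological plausibility of the modeled function.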

    What is Computational Intelligence and where is it going?

    What is Computational Intelligence (CI) and what are its relations with Artificial Intelligence (AI)? A brief survey of the scope of CI journals and books with "computational intelligence" in their title shows that at present it is an umbrella for three core technologies (neural, fuzzy and evolutionary), their applications, and selected fashionable pattern recognition methods. At present CI has no comprehensive foundations and is more a bag of tricks than a solid branch of science. A change of focus from methods to challenging problems is advocated, with CI defined as the part of computer and engineering sciences devoted to the solution of non-algorithmizable problems. In this view AI is a part of CI focused on problems related to higher cognitive functions, while the rest of the CI community works on problems related to perception and control, or lower cognitive functions. Grand challenges on both sides of this spectrum are addressed.