
    EMERGING THE EMERGENCE SOCIOLOGY: The Philosophical Framework of Agent-Based Social Studies

    Structuration theory, originally proposed by Anthony Giddens, and its later refinements have tried to resolve an epistemological dilemma in the social sciences and humanities: social scientists apparently have to choose between being too sociological or too psychological, a tension already noted long ago in the work of the classical sociologist Emile Durkheim. The spread of computational technology has been followed by the use of models for constructing bottom-up theories, an approach known as agent-based modeling. This paper gives a philosophical perspective on agent-based social science as a sociology that copes with the emergent factors arising in sociological analysis. The framework uses an artificial neural network model to show how emergent phenomena arise from a complex system; since society has self-organizing (autopoietic) properties, Kohonen's self-organizing map is used in the paper. Simulation examples make it evident that emergent phenomena in social systems are seen by the sociologist apart from the qualitative framework of atomistic sociology. The paper concludes that an emergence sociology is needed to sharpen sociological analysis.
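    As an illustration of the kind of model the abstract refers to, the sketch below implements a minimal Kohonen self-organizing map in NumPy; the grid size, learning-rate schedule, and toy "agent attribute" vectors are illustrative assumptions, not details taken from the paper.

```python
# Minimal Kohonen SOM sketch (assumes only NumPy); all sizes and schedules are illustrative.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 3
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def train_som(weights, data, epochs=20, lr0=0.5, sigma0=3.0):
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood radius
        for x in data:
            # best-matching unit: grid node whose weight vector is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood on the grid, centered at the BMU
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

agents = rng.random((200, dim))   # toy population of agent attribute vectors
train_som(weights, agents)
```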

    Cognitive Deficit of Deep Learning in Numerosity

    Subitizing, or the sense of small natural numbers, is an innate cognitive function of humans and primates; it responds to visual stimuli prior to the development of any symbolic skills, language or arithmetic. Given the successes of deep learning (DL) in tasks of visual intelligence, and given the primitivity of number sense, a tantalizing question is whether DL can comprehend numbers and perform subitizing. But somewhat disappointingly, extensive experiments in the style of cognitive psychology demonstrate that examples-driven, black-box DL cannot see through superficial variations in visual representations and distill the abstract notion of natural number, a task that children perform with high accuracy and confidence. The failure is apparently due to the learning method, not the CNN computational machinery itself. A recurrent neural network capable of subitizing does exist, which we construct by encoding a mechanism of mathematical morphology into the CNN convolutional kernels. Using subitizing as a test bed, we also investigate ways to aid black-box DL with cognitive priors derived from human insight. Our findings are mixed and interesting, pointing both to the cognitive deficit of pure DL and to some measured success in boosting DL with predetermined cognitive implements. This case study of DL in cognitive computing is meaningful because visual numerosity represents a minimum level of human intelligence.
    Comment: Accepted for presentation at the AAAI-1
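    The following is a hedged sketch, not the paper's recurrent construction: it generates dot-pattern stimuli whose superficial appearance varies while the count stays fixed, and counts them with a classical morphological operation (connected-component labeling via SciPy), illustrating why numerosity is a structural rather than textural property. All sizes and parameters are assumptions.

```python
# Illustrative stimuli + a classical morphological counter; not the paper's network.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def dot_image(n_dots, side=64, radius_range=(2, 6)):
    """Render n_dots non-overlapping filled circles at random positions and sizes."""
    img = np.zeros((side, side), dtype=np.uint8)
    yy, xx = np.mgrid[:side, :side]
    placed = []
    while len(placed) < n_dots:
        r = int(rng.integers(*radius_range))
        cy, cx = rng.integers(r, side - r, size=2)
        if all((cy - y) ** 2 + (cx - x) ** 2 > (r + pr + 1) ** 2 for y, x, pr in placed):
            img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1
            placed.append((cy, cx, r))
    return img

def count_by_morphology(img):
    """Connected-component labeling: count blobs regardless of size or position."""
    _, n = ndimage.label(img)
    return n

images = [dot_image(n) for n in rng.integers(1, 5, size=10)]
print([count_by_morphology(im) for im in images])
```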

    Backpropagation training in adaptive quantum networks

    We introduce a robust, error-tolerant adaptive training algorithm for generalized learning paradigms in high-dimensional superposed quantum networks, or adaptive quantum networks. The formalized procedure applies standard backpropagation training across a coherent ensemble of discrete topological configurations of individual neural networks, each of which is formally merged into an appropriate linear superposition within a predefined, decoherence-free subspace. Quantum parallelism facilitates simultaneous training and revision of the system within this coherent state space, resulting in accelerated convergence to a stable network attractor under subsequent iteration of the implemented backpropagation algorithm. Parallel evolution of linearly superposed networks incorporating backpropagation training provides quantitative, numerical indications for optimization of both single-neuron activation functions and optimal reconfiguration of whole-network quantum structure.
    Comment: Talk presented at "Quantum Structures - 2008", Gdansk, Poland
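    A purely classical sketch of the training backbone described above, with no quantum coherence: standard backpropagation is applied independently to an ensemble of small networks whose differing topologies are emulated by fixed binary connectivity masks, and their outputs are combined by a weighted average as a crude stand-in for the linear superposition. Every size, mask, and the toy task are illustrative assumptions.

```python
# Classical analogue only: backprop over an ensemble of masked-topology networks.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X.sum(axis=1, keepdims=True) > 2.0).astype(float)   # toy binary target

def make_net(hidden=8):
    return {"W1": rng.normal(0, 0.5, (4, hidden)),
            "M1": (rng.random((4, hidden)) > 0.3).astype(float),  # fixed topology mask
            "W2": rng.normal(0, 0.5, (hidden, 1))}

def forward(net, X):
    h = np.tanh(X @ (net["W1"] * net["M1"]))
    return h, 1.0 / (1.0 + np.exp(-h @ net["W2"]))

def backprop_step(net, X, y, lr=0.1):
    h, p = forward(net, X)
    d_out = (p - y) / len(X)                       # gradient of cross-entropy w.r.t. logits
    grad_W2 = h.T @ d_out
    d_h = (d_out @ net["W2"].T) * (1.0 - h ** 2)   # backprop through tanh
    net["W2"] -= lr * grad_W2
    net["W1"] -= lr * (X.T @ d_h) * net["M1"]      # keep pruned connections pruned

ensemble = [make_net() for _ in range(5)]
amps = np.ones(5) / 5                              # equal mixing weights ("amplitudes")
for _ in range(500):
    for net in ensemble:
        backprop_step(net, X, y)
p_mix = sum(a * forward(net, X)[1] for a, net in zip(amps, ensemble))
print("combined accuracy:", float(((p_mix > 0.5) == y).mean()))
```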

    Topological Gradient-based Competitive Learning

    Topological learning is a wide research area that aims to uncover the mutual spatial relationships between the elements of a set. Some of the most common and oldest approaches involve the use of unsupervised competitive neural networks. However, these methods are not based on gradient optimization, which has been proven to provide striking results in feature extraction, also in unsupervised learning. Unfortunately, by focusing mostly on algorithmic efficiency and accuracy, deep clustering techniques are composed of overly complex feature extractors while using trivial algorithms in their top layer. The aim of this work is to present a novel, comprehensive theory that aspires to bridge competitive learning with gradient-based learning, thus allowing the use of extremely powerful deep neural networks for feature extraction and projection, combined with the remarkable flexibility and expressiveness of competitive learning. In this paper we fully demonstrate the theoretical equivalence of two novel gradient-based competitive layers. Preliminary experiments show how the dual approach, trained on the transpose of the input matrix, i.e. Xᵀ, leads to a faster convergence rate and higher training accuracy in both low- and high-dimensional scenarios.
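    A minimal sketch of the general idea of gradient-based competitive learning, not the paper's two layers: winner-take-all prototypes are treated as ordinary parameters and updated by gradient descent on the quantization loss, and the "dual" variant simply runs the same procedure on the transposed data matrix Xᵀ, clustering features instead of samples. The cluster count, learning rate, and data below are assumptions.

```python
# Gradient-descent competitive learning sketch; the dual run clusters columns of X.
import numpy as np

rng = np.random.default_rng(0)

def competitive_fit(X, k=3, lr=0.05, epochs=200):
    """Winner-take-all prototypes trained by gradient descent on the quantization loss."""
    protos = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(epochs):
        d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=-1)  # (n, k) distances
        win = d.argmin(axis=1)                                           # winning prototype per sample
        for j in range(k):
            members = X[win == j]
            if len(members):
                # gradient of 0.5*||x - p_j||^2 averaged over winners is (p_j - mean(x));
                # stepping against it pulls each prototype toward the samples it wins
                protos[j] -= lr * (protos[j] - members).mean(axis=0)
    return protos

X = rng.normal(size=(300, 5))
primal_protos = competitive_fit(X)     # clusters the 300 samples
dual_protos = competitive_fit(X.T)     # same procedure on Xᵀ clusters the 5 features
print(primal_protos.shape, dual_protos.shape)   # (3, 5) and (3, 300)
```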