

    Modeling belief systems with scale-free networks
    Miklós Antal and László Balogh

    Abstract: The evolution of belief systems has always been a focus of cognitive research. In this paper we delineate a new model describing belief systems as networks of statements considered true. Testing the model, a small number of parameters enabled us to reproduce a variety of well-known mechanisms, ranging from opinion changes to the development of psychological problems. The self-organizing opinion structure showed a scale-free degree distribution. The novelty of our work lies in applying a convenient set of definitions that allowed us to depict opinion network dynamics in a highly favorable way, resulting in a scale-free belief network. As an additional benefit, we list several conjectural consequences in a number of areas related to thinking and reasoning.
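The scale-free degree distribution reported in this abstract is the classic signature of preferential attachment. A minimal, self-contained sketch (the growth rule and parameters here are illustrative, not the authors' actual belief-update model): new statements attach to existing ones with probability proportional to their current degree, and the resulting network develops a heavy-tailed degree distribution.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, m=2, seed=0):
    """Grow a network where each new node attaches to m existing nodes
    with probability proportional to their degree (Barabasi-Albert style).
    Returns a Counter mapping node -> degree."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' lists each endpoint once per incident edge, so uniform
    # sampling from it is exactly degree-proportional sampling.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return Counter(v for e in edges for v in e)

degrees = preferential_attachment(5000)
# Heavy tail: the maximum degree far exceeds the mean degree (about 2*m).
print(max(degrees.values()), sum(degrees.values()) / len(degrees))
```

A uniform-attachment variant (replacing the degree-proportional choice with a uniform one) produces an exponential rather than power-law tail, which is one quick way to see that the attachment rule, not mere growth, drives the scale-free structure.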

    Modeling Scalability of Distributed Machine Learning

    Present-day machine learning is computationally intensive and processes large amounts of data. It is implemented in a distributed fashion to address these scalability issues, with the work parallelized across a number of computing nodes. It is usually hard to estimate in advance how many nodes to use for a particular workload. We propose a simple framework for estimating the scalability of distributed machine learning algorithms. We measure scalability by means of the speedup an algorithm achieves with more nodes. We propose time complexity models for gradient descent and graphical model inference. We validate our models with experiments on deep learning training and belief propagation. This framework was used to study the scalability of machine learning algorithms in Apache Spark.
    Comment: 6 pages, 4 figures, appears at ICDE 201
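The speedup-based scalability estimate described above can be sketched with a toy time-complexity model (the cost constants and the linear communication term are illustrative assumptions, not the paper's fitted models): per-iteration computation splits evenly across nodes while communication overhead grows with node count, so speedup rises, peaks, and then degrades.

```python
import math

def predicted_time(n_nodes, t_compute, t_comm_per_node):
    """Toy per-iteration cost model: computation is perfectly divisible
    across nodes; communication cost grows linearly with node count.
    (Illustrative assumption, not a measured model.)"""
    return t_compute / n_nodes + t_comm_per_node * n_nodes

def speedup(n_nodes, t_compute, t_comm_per_node):
    """Speedup relative to running on a single node."""
    one = predicted_time(1, t_compute, t_comm_per_node)
    return one / predicted_time(n_nodes, t_compute, t_comm_per_node)

# Under this model the optimum is at sqrt(t_compute / t_comm_per_node):
# beyond that point communication dominates and speedup falls off.
t_compute, t_comm = 100.0, 0.1
best = math.sqrt(t_compute / t_comm)  # ~31.6 nodes
print(speedup(4, t_compute, t_comm),
      speedup(32, t_compute, t_comm),
      speedup(200, t_compute, t_comm))
```

The practical point matches the abstract: with a model like this fitted from a few small runs, one can estimate in advance how many nodes a workload can profitably use instead of discovering the knee of the curve empirically.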

    Deep learning systems as complex networks

    Thanks to the availability of large-scale digital datasets and massive amounts of computational power, deep learning algorithms can learn representations of data by exploiting multiple levels of abstraction. These machine learning methods have greatly improved the state of the art in many challenging cognitive tasks, such as visual object recognition, speech processing, natural language understanding, and automatic translation. In particular, one class of deep learning models, known as deep belief networks, can discover intricate statistical structure in large data sets in a completely unsupervised fashion, by learning a generative model of the data using Hebbian-like learning mechanisms. Although these self-organizing systems can be conveniently formalized within the framework of statistical mechanics, their internal functioning remains opaque, because their emergent dynamics cannot be solved analytically. In this article we propose to study deep belief networks using techniques commonly employed in the study of complex networks, in order to gain some insights into the structural and functional properties of the computational graph resulting from the learning process.
    Comment: 20 pages, 9 figures
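One common way to apply complex-network techniques to a learned model, in the spirit of this abstract, is to threshold a layer's weight matrix into a bipartite graph and examine the resulting degree sequence. A minimal sketch (random Gaussian weights stand in for a trained layer, and the threshold value is an illustrative choice, not the article's method):

```python
import random

def weight_matrix_to_degrees(W, threshold):
    """Turn a (visible x hidden) weight matrix into a bipartite graph by
    keeping only connections whose absolute weight exceeds the threshold,
    then return the degree of every unit (visible units first)."""
    n_v, n_h = len(W), len(W[0])
    deg = [0] * (n_v + n_h)
    for i in range(n_v):
        for j in range(n_h):
            if abs(W[i][j]) > threshold:
                deg[i] += 1          # visible-side endpoint
                deg[n_v + j] += 1    # hidden-side endpoint
    return deg

# Random weights as a placeholder for a trained layer (illustrative only).
rng = random.Random(1)
W = [[rng.gauss(0.0, 1.0) for _ in range(20)] for _ in range(30)]
deg = weight_matrix_to_degrees(W, threshold=1.5)
print(len(deg), sum(deg[:30]), sum(deg[30:]))
```

Sweeping the threshold and tracking how the degree distribution changes is one way to probe whether training concentrates strong connections on a few hub units, which is the kind of structural question complex-network analysis is designed to answer.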