
    Studying Parallel Evolutionary Algorithms: The Cellular Programming Case

    Parallel evolutionary algorithms, studied to some extent over the past few years, have proven empirically worthwhile, though a better understanding of their workings is still lacking. In this paper we concentrate on cellular (fine-grained) models, presenting a number of statistical measures at both the genotypic and phenotypic levels. We demonstrate the application and utility of these measures on a specific example, the cellular programming evolutionary algorithm, used to evolve solutions to a hard problem in the cellular-automata domain known as synchronization.
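
    The measures themselves are not defined in the abstract; the Python sketch below only illustrates the kind of grid-level statistic in question, with a hypothetical genotypic measure (mean Hamming distance between neighbouring genomes on a toroidal grid) and a phenotypic one (mean fitness). Neither is taken from the paper.

        # Generic grid-level statistics for a cellular (fine-grained) EA.
        # Purely illustrative of genotypic/phenotypic measures, not the
        # specific measures defined in the paper.

        import random

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def neighbour_diversity(grid):
            """Mean Hamming distance between adjacent genomes on a toroidal
            grid (a genotypic-level statistic)."""
            rows, cols = len(grid), len(grid[0])
            total, pairs = 0, 0
            for i in range(rows):
                for j in range(cols):
                    for di, dj in ((0, 1), (1, 0)):   # right and down neighbours
                        total += hamming(grid[i][j],
                                         grid[(i + di) % rows][(j + dj) % cols])
                        pairs += 1
            return total / pairs

        def mean_fitness(grid, fitness):
            """Average fitness over the grid (a phenotypic-level statistic)."""
            cells = [g for row in grid for g in row]
            return sum(fitness(g) for g in cells) / len(cells)

        if __name__ == "__main__":
            grid = [[[random.randint(0, 1) for _ in range(8)]
                     for _ in range(5)] for _ in range(5)]
            print("genotypic diversity:", neighbour_diversity(grid))
            print("mean fitness (ones-count):", mean_fitness(grid, sum))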

    Parity Problem With A Cellular Automaton Solution

    The parity of a bit string of length N is a global quantity that can be efficiently computed using a global counter in O(N) time. But is it possible to find the parity using cellular automata with a set of local rule tables, without any global counter? Here, we report a way to solve this problem using a number of r=1 binary, uniform, parallel and deterministic cellular automata applied in succession, for a total of O(N^2) time. Comment: RevTeX, 4 pages, final version accepted by Phys. Rev.
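
    For illustration, a minimal Python sketch of the two ingredients mentioned above: parity via a global counter in O(N), and one synchronous pass of an r=1 binary rule table over a circular configuration. The rule shown (elementary rule 150) is a placeholder, not one of the rules constructed in the paper.

        # Parity by global counter vs. one synchronous step of a radius-1
        # binary CA rule table. Rule 150 (XOR of the neighbourhood) is a
        # placeholder, not the sequence of rules built in the paper.

        def parity_with_counter(bits):
            """Global O(N) solution: count the 1s and take the result mod 2."""
            return sum(bits) % 2

        def ca_step(config, rule):
            """One parallel, deterministic update of a circular r=1 binary CA.
            `rule` maps each (left, centre, right) neighbourhood to a new bit."""
            n = len(config)
            return [rule[(config[(i - 1) % n], config[i], config[(i + 1) % n])]
                    for i in range(n)]

        # Elementary rule 150: new bit = left XOR centre XOR right.
        RULE_150 = {(l, c, r): l ^ c ^ r
                    for l in (0, 1) for c in (0, 1) for r in (0, 1)}

        if __name__ == "__main__":
            bits = [1, 0, 1, 1, 0, 0, 1]
            print("parity:", parity_with_counter(bits))   # -> 1 (odd number of 1s)
            print("one CA step:", ca_step(bits, RULE_150))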

    A Simple Cellular Automaton that Solves the Density and Ordering Problems

    Cellular automata (CA) are discrete, dynamical systems that perform computations in a distributed fashion on a spatially extended grid. The dynamical behavior of a CA may give rise to emergent computation, referring to the appearance of global information-processing capabilities that are not explicitly represented in the system's elementary components nor in their local interconnections. As such, CAs offer an austere yet versatile model for studying natural phenomena, as well as a powerful paradigm for attaining fine-grained, massively parallel computation. An example of such emergent computation is the use of a CA to determine the global density of bits in an initial state configuration. This problem, known as density classification, has been studied quite intensively over the past few years. In this short communication we describe two previous versions of the problem along with their CA solutions, and then go on to show that there exists yet a third version, which admits a simple solution.
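
    A minimal Python harness for the density-classification task as stated above: run a radius-r binary rule on a circular lattice and check whether it reaches the all-1s (or all-0s) fixed point exactly when 1s are (or are not) the initial majority. The local-majority rule is only a placeholder to exercise the harness; it is not one of the CA solutions discussed in the communication.

        # Test harness for density classification on a circular binary lattice:
        # run a radius-r CA for a fixed number of steps and check whether it
        # reaches the all-1s fixed point exactly when 1s were the initial
        # majority. The local-majority rule below is only a placeholder.

        def majority_rule(neigh):
            return 1 if sum(neigh) * 2 > len(neigh) else 0

        def ca_step(config, rule, r):
            n = len(config)
            return [rule(tuple(config[(i + d) % n] for d in range(-r, r + 1)))
                    for i in range(n)]

        def classifies_density(config, rule, r, max_steps=200):
            target = 1 if sum(config) * 2 > len(config) else 0
            for _ in range(max_steps):
                config = ca_step(config, rule, r)
                if len(set(config)) == 1:     # reached a homogeneous fixed point
                    return config[0] == target
            return False                       # never settled

        if __name__ == "__main__":
            import random
            cfg = [random.randint(0, 1) for _ in range(149)]   # odd lattice size
            print(classifies_density(cfg, majority_rule, r=3))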

    Pseudorandom number generation based on controllable cellular automata

    A novel type of cellular automata (CA), Controllable CA (CCA), is proposed in this paper. CCA are then applied to pseudorandom number generation. Randomness test results on CCA pseudorandom number generators (PRNGs) show that they are better than 1-d CA PRNGs and comparable to 2-d ones, yet they retain the structural simplicity of 1-d CA. We further develop several different types of CCA PRNGs. Comparing the randomness of the different CCA PRNGs, we find that their properties are determined by the actions of the controllable cells and their neighbors. These novel CCA may also be applied in other settings where structural non-uniformity or asymmetry is desired.
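
    The controllable-cell mechanism is not spelled out in the abstract; the Python sketch below is only one plausible toy reading, in which designated "controllable" cells switch between elementary rules 90 and 150 according to a control bit stream while the remaining cells evolve uniformly, and output bits are sampled from a fixed cell. It is not the CCA construction of the paper.

        # Toy 1-d CA pseudorandom bit generator with "controllable" cells.
        # An illustrative guess at the idea, not the paper's CCA: cells in
        # `controllable` switch between rule 90 and rule 150 according to a
        # control bit; the rest always use rule 90.

        def rule90(l, c, r):
            return l ^ r

        def rule150(l, c, r):
            return l ^ c ^ r

        def cca_prng_bits(seed_state, controllable, control_bits, n_bits, tap=0):
            state = list(seed_state)
            n = len(state)
            out = []
            for t in range(n_bits):
                ctrl = control_bits[t % len(control_bits)]
                new = []
                for i in range(n):
                    l, c, r = state[(i - 1) % n], state[i], state[(i + 1) % n]
                    if i in controllable and ctrl:
                        new.append(rule150(l, c, r))
                    else:
                        new.append(rule90(l, c, r))
                state = new
                out.append(state[tap])        # sample one cell per time step
            return out

        if __name__ == "__main__":
            bits = cca_prng_bits([1, 0, 0, 1, 0, 1, 1, 0], {2, 5}, [1, 0, 1, 1], 32)
            print("".join(map(str, bits)))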

    Non-deterministic density classification with diffusive probabilistic cellular automata

    We present a probabilistic cellular automaton (CA) with two absorbing states which performs classification of binary strings in a non-deterministic sense. In a system evolving under this CA rule, empty sites become occupied with a probability proportional to the number of occupied sites in the neighborhood, while occupied sites become empty with a probability proportional to the number of empty sites in the neighborhood. The probability that all sites eventually become occupied is equal to the density of occupied sites in the initial string. Comment: 4 pages, 4 figures
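
    The rule is described precisely enough for a minimal Python sketch; it assumes a nearest-neighbour (radius-1) neighbourhood and takes the transition probability to be the fraction of neighbours in the opposite state, neither of which is fixed by the abstract, and then estimates the absorption probability empirically for comparison with the initial density.

        # Sketch of a diffusive probabilistic CA on a circular binary lattice.
        # Assumptions not fixed by the abstract: radius-1 neighbourhood, and
        # flip probability equal to the *fraction* of neighbours in the
        # opposite state. All-0s and all-1s are absorbing under this rule.

        import random

        def step(config):
            n = len(config)
            new = []
            for i in range(n):
                neigh = (config[(i - 1) % n], config[(i + 1) % n])
                occupied = sum(neigh)
                if config[i] == 0:                       # empty -> occupied
                    new.append(1 if random.random() < occupied / 2 else 0)
                else:                                    # occupied -> empty
                    new.append(0 if random.random() < (2 - occupied) / 2 else 1)
            return new

        def absorbed_to_ones(config, max_steps=100000):
            for _ in range(max_steps):
                if sum(config) == 0:
                    return False
                if sum(config) == len(config):
                    return True
                config = step(config)
            raise RuntimeError("not absorbed within max_steps")

        if __name__ == "__main__":
            initial = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]     # initial density 0.3
            trials = 2000
            hits = sum(absorbed_to_ones(list(initial)) for _ in range(trials))
            print("estimated absorption probability:", hits / trials)  # ~0.3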

    On the Parity Problem in One-Dimensional Cellular Automata

    We consider the parity problem in one-dimensional, binary, circular cellular automata: if the initial configuration contains an odd number of 1s, the lattice should converge to all 1s; otherwise, it should converge to all 0s. It is easy to see that the problem is ill-defined for even-sized lattices: on such a lattice the all-1s configuration itself has an even number of 1s, so no configuration could ever converge to, and remain at, all 1s. We therefore consider only odd-sized lattices. We are interested in determining the minimal neighbourhood radius that makes the problem solvable for any initial configuration. On the one hand, we show that radius 2 is not sufficient, proving that no radius-2 rule can solve the parity problem from arbitrary initial configurations. On the other hand, we design a radius-4 rule that converges correctly for any initial configuration, and we formally prove its correctness. Whether there exists a radius-3 rule that solves the parity problem remains an open problem. Comment: In Proceedings AUTOMATA&JAC 2012, arXiv:1208.249
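
    A brute-force harness in the spirit of the stated problem: given any candidate radius-r rule table, verify exhaustively on a small odd lattice that every initial configuration converges to all 1s exactly when its parity is odd. No rule from the paper is encoded; the radius-1 XOR rule is included only to show the interface.

        # Exhaustive check, on a small odd circular lattice, of whether a
        # candidate radius-r binary rule table solves the parity problem:
        # every initial configuration must converge to all 1s iff it contains
        # an odd number of 1s. Illustrative harness only.

        from itertools import product

        def ca_step(config, rule, r):
            n = len(config)
            return tuple(rule[tuple(config[(i + d) % n] for d in range(-r, r + 1))]
                         for i in range(n))

        def solves_parity(rule, r, lattice_size, max_steps=500):
            for config in product((0, 1), repeat=lattice_size):
                target = sum(config) % 2          # 1 -> all 1s, 0 -> all 0s
                state = config
                for _ in range(max_steps):
                    state = ca_step(state, rule, r)
                    if len(set(state)) == 1:
                        break
                if len(set(state)) != 1 or state[0] != target:
                    return False
            return True

        if __name__ == "__main__":
            # Candidate: radius-1 XOR rule (it does NOT solve the problem;
            # it merely illustrates the interface of the checker).
            xor_rule = {nb: nb[0] ^ nb[1] ^ nb[2]
                        for nb in product((0, 1), repeat=3)}
            print(solves_parity(xor_rule, r=1, lattice_size=5))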

    Interaction of quasilocal harmonic modes and boson peak in glasses

    The direct proportionality relation between the boson peak maximum in glasses, ω_b, and the Ioffe-Regel crossover frequency for phonons, ω_d, is established. For several investigated materials ω_b = (1.5 ± 0.1) ω_d. At the frequency ω_d the mean free path l of the phonons becomes equal to their wavelength because of strong resonant scattering on quasilocal harmonic oscillators. Above this frequency phonons cease to exist. We prove that the established correlation between ω_b and ω_d holds in the general case and is a direct consequence of the bilinear coupling of quasilocal oscillators with the strain field. Comment: RevTeX, 4 pages, 1 figure
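
    Restated as equations (the sound velocity v is introduced here only to write the wavelength explicitly; it is not named in the abstract):

        % Ioffe-Regel crossover and the reported correlation
        l(\omega_d) \;=\; \lambda(\omega_d) \;=\; \frac{2\pi v}{\omega_d},
        \qquad
        \omega_b \;=\; (1.5 \pm 0.1)\,\omega_d .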

    Classy Ensemble: A Novel Ensemble Algorithm for Classification

    We present Classy Ensemble, a novel ensemble-generation algorithm for classification tasks, which aggregates models through a weighted combination of per-class accuracy. Tested over 153 machine-learning datasets, we demonstrate that Classy Ensemble outperforms two other well-known aggregation algorithms, order-based pruning and clustering-based pruning, as well as the recently introduced lexigarden ensemble generator. We then present three enhancements: 1) Classy Cluster Ensemble, which combines Classy Ensemble and cluster-based pruning; 2) deep-learning experiments, showing the merits of Classy Ensemble over four image datasets: Fashion MNIST, CIFAR10, CIFAR100, and ImageNet; and 3) Classy Evolutionary Ensemble, wherein an evolutionary algorithm is used to select the set of models from which Classy Ensemble picks. This latter enhancement, combining learning and evolution, yielded improved performance on the hardest dataset.
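
    The abstract says only that models are aggregated "through a weighted combination of per-class accuracy"; the Python sketch below is one plausible reading of that phrase, in which each member's vote for a class is weighted by its validation accuracy on that class. It is not the Classy Ensemble algorithm itself.

        # One plausible reading of "weighted combination of per-class
        # accuracy" (illustrative only, not the Classy Ensemble algorithm):
        # each model's vote for a class is weighted by that model's
        # validation accuracy on that class; the ensemble takes the argmax.

        import numpy as np

        def per_class_accuracy(y_true, y_pred, n_classes):
            acc = np.zeros(n_classes)
            for c in range(n_classes):
                mask = (y_true == c)
                acc[c] = (y_pred[mask] == c).mean() if mask.any() else 0.0
            return acc

        def ensemble_predict(models, weights, X, n_classes):
            # weights[m]: per-class accuracy vector of model m (validation)
            scores = np.zeros((len(X), n_classes))
            for model, w in zip(models, weights):
                pred = model.predict(X)
                for c in range(n_classes):
                    scores[pred == c, c] += w[c]
            return scores.argmax(axis=1)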

    High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms

    Hyperparameters in machine learning (ML) have received a fair amount of attention, and hyperparameter tuning has come to be regarded as an important step in the ML pipeline. But just how useful is such tuning? While smaller-scale experiments have been conducted before, here we carry out a large-scale investigation, specifically one involving 26 ML algorithms, 250 datasets (regression and both binary and multinomial classification), 6 score metrics, and 28,857,600 algorithm runs. Analyzing the results, we conclude that for many ML algorithms we should not expect considerable gains from hyperparameter tuning on average; however, there may be some datasets for which default hyperparameters perform poorly, and this is truer for some algorithms than for others. By defining a single hp_score value, which combines an algorithm's accumulated statistics, we are able to rank the 26 ML algorithms from those expected to gain the most from hyperparameter tuning to those expected to gain the least. We believe such a study may serve ML practitioners at large.
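
    As a small, self-contained illustration of the question posed (default versus tuned hyperparameters), the Python sketch below compares a scikit-learn classifier's cross-validated accuracy under default settings with a modest randomized search. It mirrors only the spirit of the study, not its 26-algorithm, 250-dataset protocol or its hp_score definition.

        # Default vs. tuned hyperparameters on a single dataset: a miniature
        # version of the question studied in the paper (not its actual
        # protocol, metrics, or hp_score definition).

        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import RandomizedSearchCV, cross_val_score

        X, y = load_breast_cancer(return_X_y=True)

        default_model = RandomForestClassifier(random_state=0)
        default_score = cross_val_score(default_model, X, y, cv=5).mean()

        search = RandomizedSearchCV(
            RandomForestClassifier(random_state=0),
            param_distributions={
                "n_estimators": [50, 100, 200, 400],
                "max_depth": [None, 4, 8, 16],
                "min_samples_leaf": [1, 2, 4, 8],
            },
            n_iter=20, cv=5, random_state=0,
        )
        search.fit(X, y)

        print(f"default CV accuracy: {default_score:.4f}")
        print(f"tuned   CV accuracy: {search.best_score_:.4f}")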