
    The B36/S125 "2x2" Life-Like Cellular Automaton

    The B36/S125 (or "2x2") cellular automaton is one that takes place on a 2D square lattice much like Conway's Game of Life. Although it exhibits high-level behaviour that is similar to Life, such as chaotic but eventually stable evolution and the existence of a natural diagonal glider, the individual objects that the rule contains generally look very different from their Life counterparts. In this article, a history of notable discoveries in the 2x2 rule is provided, and the fundamental patterns of the automaton are described. Some theoretical results are derived along the way, including a proof that the speed limits for diagonal and orthogonal spaceships in this rule are c/3 and c/2, respectively. A Margolus block cellular automaton that 2x2 emulates is investigated, and in particular a family of oscillators made up entirely of 2x2 blocks is analyzed and used to show that there exist oscillators with period 2^m(2^k - 1) for any integers m,k \geq 1. Comment: 18 pages, 19 figures
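
    As a rough illustration of the rule itself (not drawn from the article), a minimal NumPy sketch of one B36/S125 generation on a toroidal grid could look as follows; the grid size, random seed, and number of generations are arbitrary choices.

        import numpy as np

        def step_2x2(grid):
            """One generation of B36/S125 ("2x2") on a toroidal 0/1 grid."""
            # Sum the eight Moore neighbours of every cell via wrapped shifts.
            neighbours = sum(
                np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            born = (grid == 0) & np.isin(neighbours, (3, 6))         # B36
            survives = (grid == 1) & np.isin(neighbours, (1, 2, 5))  # S125
            return (born | survives).astype(grid.dtype)

        # Evolve a random soup for 100 generations.
        rng = np.random.default_rng(0)
        grid = rng.integers(0, 2, size=(64, 64))
        for _ in range(100):
            grid = step_2x2(grid)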

    On computational irreducibility and the predictability of complex physical systems

    Using elementary cellular automata (CA) as an example, we show how to coarse-grain CA in all classes of Wolfram's classification. We find that computationally irreducible (CIR) physical processes can be predictable and even computationally reducible at a coarse-grained level of description. The resulting coarse-grained CA which we construct emulate the large-scale behavior of the original systems without accounting for small-scale details. At least one of the CA that can be coarse-grained is irreducible and known to be a universal Turing machine. Comment: 4 pages, 2 figures, to be published in PR
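
    A hedged sketch of the commutation test behind this kind of coarse-graining: projecting a configuration after several fine-rule steps should agree with one coarse-rule step applied to the projected configuration. The supercell size, majority projection, and brute-force search below are illustrative assumptions, not the construction used in the paper; an empty result simply means this particular projection does not work for the chosen rule.

        import numpy as np

        def eca_step(cells, rule):
            """One step of an elementary CA (periodic boundary), rule in 0..255."""
            left, right = np.roll(cells, 1), np.roll(cells, -1)
            idx = 4 * left + 2 * cells + right
            table = np.array([(rule >> i) & 1 for i in range(8)], dtype=cells.dtype)
            return table[idx]

        def commutes(fine_rule, coarse_rule, project, block=3, trials=100, width=32):
            """Heuristically test whether `coarse_rule` emulates `block` steps of
            `fine_rule` after projecting supercells of size `block`."""
            rng = np.random.default_rng(1)
            for _ in range(trials):
                x = rng.integers(0, 2, size=width * block)
                fine = x.copy()
                for _ in range(block):
                    fine = eca_step(fine, fine_rule)
                lhs = project(fine.reshape(width, block))
                rhs = eca_step(project(x.reshape(width, block)), coarse_rule)
                if not np.array_equal(lhs, rhs):
                    return False
            return True

        # Example projection: a supercell is alive iff a majority of its cells are.
        majority = lambda blocks: (blocks.sum(axis=1) > blocks.shape[1] // 2).astype(int)

        fine_rule = 105  # any elementary rule could be tried here
        print([g for g in range(256) if commutes(fine_rule, g, majority)])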

    The rank of the semigroup of transformations stabilising a partition of a finite set

    Let \mathcal{P} be a partition of a finite set X. We say that a full transformation f: X \to X preserves (or stabilizes) the partition \mathcal{P} if for all P \in \mathcal{P} there exists Q \in \mathcal{P} such that Pf \subseteq Q. Let T(X,\mathcal{P}) denote the semigroup of all full transformations of X that preserve the partition \mathcal{P}. In 2005 Huisheng found an upper bound for the minimum size of the generating sets of T(X,\mathcal{P}) when \mathcal{P} is a partition in which all of the parts have the same size. In addition, Huisheng conjectured that his bound was exact. In 2009 the first and last authors used representation theory to completely solve Huisheng's conjecture. The goal of this paper is to solve the much more complex problem of finding the minimum size of the generating sets of T(X,\mathcal{P}) when \mathcal{P} is an arbitrary partition. Again we use representation theory to find the minimum number of elements needed to generate the wreath product of finitely many symmetric groups, and then use this result to solve the problem. The paper ends with a number of problems for experts in group and semigroup theory.
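
    For concreteness, here is a small Python check of the stabilising condition defined above; the set X, the partition, and the map f are toy data invented for illustration.

        from itertools import product

        def preserves(f, partition):
            """f (a dict on X) stabilises the partition if the image of every
            part is contained in some part."""
            return all(
                any({f[x] for x in P} <= Q for Q in partition)
                for P in partition
            )

        partition = [{0, 1, 2}, {3, 4}, {5}]          # parts of unequal size
        f = {0: 1, 1: 1, 2: 0, 3: 5, 4: 5, 5: 3}
        print(preserves(f, partition))                # True

        # Brute-force size of T(X, P) for this toy example (6**6 candidate maps).
        X = sorted(set().union(*partition))
        print(sum(preserves(dict(zip(X, images)), partition)
                  for images in product(X, repeat=len(X))))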

    Comptonization and the Spectra of Accretion-Powered X-Ray Pulsars

    Accretion-powered X-ray pulsars are among the most luminous X-ray sources in the Galaxy. However, despite decades of theoretical and observational work since their discovery, no satisfactory model for the formation of the observed X-ray spectra has emerged. In this paper, we report on a self-consistent calculation of the spectrum emerging from a pulsar accretion column that includes an explicit treatment of the bulk and thermal Comptonization occurring in the radiation-dominated shocks that form in the accretion flows. Using a rigorous eigenfunction expansion method, we obtain a closed-form expression for the Green's function describing the upscattering of monochromatic radiation injected into the column. The Green's function is convolved with bremsstrahlung, cyclotron, and blackbody source terms to calculate the emergent photon spectrum. We show that energization of photons in the shock naturally produces an X-ray spectrum with a relatively flat continuum and a high-energy exponential cutoff. Finally, we demonstrate that our model yields good agreement with the spectra of the bright pulsar Her X-1 and the low-luminosity pulsar X Per. Comment: 6 pages, 2 figures, to appear in "The Multicoloured Landscape of Compact Objects and their Explosive Progenitors" (Cefalu, Sicily, June 2006). Eds. L. Burderi et al. (New York: AIP)
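
    Schematically, the convolution described above takes the form (the variables and normalisation here are only indicative, not the paper's exact expressions):

        \Phi(E) = \int_0^\infty G(E, E_0) [ S_brem(E_0) + S_cyc(E_0) + S_bb(E_0) ] dE_0

    where G(E, E_0) is the closed-form Green's function for monochromatic injection at energy E_0 and the S terms are the bremsstrahlung, cyclotron, and blackbody seed spectra.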

    Choice mechanisms for past, temporally extended outcomes.

    Accurate retrospection is critical in many decision scenarios ranging from investment banking to hedonic psychology. A notoriously difficult case is to integrate previously perceived values over the duration of an experience. Failure in retrospective evaluation leads to suboptimal outcomes when previous experiences are under consideration for revisit. A biologically plausible mechanism underlying evaluation of temporally extended outcomes is leaky integration of evidence. The leaky integrator favours positive temporal contrasts, in turn leading to undue emphasis on recency. To investigate choice mechanisms underlying suboptimal outcomes based on retrospective evaluation, we used computational and behavioural techniques to model choice between perceived extended outcomes with different temporal profiles. Second-price auctions served to establish the perceived values of virtual coins offered sequentially to humans in a rapid monetary gambling task. Results show that lesser-valued options involving successive growth were systematically preferred to better options with declining temporal profiles. The disadvantageous inclination towards persistent growth was mitigated in some individuals in whom a longer time constant of the leaky integrator resulted in fewer violations of dominance. These results demonstrate how focusing on immediate gains is less beneficial than considering longer perspectives. This research was supported by the Wellcome Trust Grants 095495 and 093270 and European Research Council Advanced Grant ERC-2011-AdG 293549. This is the final version. It was first published by Royal Society Publishing at http://rspb.royalsocietypublishing.org/content/282/1810/20141766
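
    As a minimal sketch of the leaky-integration idea (the exponential-decay form, the time constants, and the toy outcome profiles below are illustrative assumptions, not the authors' fitted model): a short time constant favours the growing profile even though both profiles have the same total value, while a longer time constant narrows the gap.

        import numpy as np

        def leaky_integral(values, tau):
            """Leaky integration of a value sequence: each sample is added while
            the running total decays with time constant tau (in sample units)."""
            total = 0.0
            for v in values:
                total = total * np.exp(-1.0 / tau) + v
            return total

        declining = [5, 4, 3, 2, 1]   # same total value...
        growing   = [1, 2, 3, 4, 5]   # ...but a rising temporal profile

        for tau in (1.0, 20.0):
            d, g = leaky_integral(declining, tau), leaky_integral(growing, tau)
            print(f"tau={tau:>4}: declining={d:.2f}  growing={g:.2f}")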

    O(alpha_s^2) QCD corrections to the electroproduction of hadrons with high transverse momentum

    We compute the order alpha_s^2 corrections to the one-particle inclusive electroproduction cross section of hadrons with non-vanishing transverse momentum. We perform the full calculation analytically, and obtain the expression of the factorized (finite) cross section at this order. We compare our results with H1 data on forward production of pi^0, and discuss the phenomenological implications of the rather large higher-order contributions obtained in that case. Specifically, we analyze the sensitivity of the cross section to the factorization and renormalization scales, and to the input fragmentation functions, over the kinematical region covered by the data. We conclude that the data are well described by the O(alpha_s^2) predictions within the theoretical uncertainties and without the inclusion of any physics content beyond the DGLAP approach. Comment: 11 pages, LaTeX, 7 figures

    On the numerical analysis of triplet pair production cross-sections and the mean energy of produced particles for modelling electron-photon cascade in a soft photon field

    The double and single differential cross-sections with respect to positron and electron energies, as well as the total cross-section of triplet production in the laboratory frame, are calculated numerically in order to develop a Monte Carlo code for modelling electron-photon cascades in a soft photon field. To avoid the numerical integration irregularities of the integrands, which are inherent to problems of this type, we have used suitable substitutions in combination with the powerful Mathematica software package, allowing one to achieve reliable, higher-precision results. The results obtained for the total cross-section closely agree with others estimated analytically or by a different numerical approach. The results for the double and single differential cross-sections turn out to be somewhat different from some reported recently. The mean energy of the produced particles, as a function of the characteristic collisional parameter (the electron rest frame photon energy), is calculated and approximated by an analytical expression that revises other known approximations over a wide range of values of the argument. The primary-electron energy loss rate due to triplet pair production is shown to prevail over the inverse Compton scattering loss rate at several (\sim2) orders of magnitude higher interaction energy than that predicted formerly. Comment: 18 pages, 8 figures, 2 tables, LaTeX2e, Iopart.cls, Iopart12.clo, Iopams.st
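
    A toy illustration of the substitution trick used to tame such integrands; the integrand below is merely a stand-in with an integrable endpoint singularity, not one of the triplet-production cross sections.

        import numpy as np
        from scipy.integrate import quad

        f = lambda x: np.cos(x) / np.sqrt(x)     # singular at x = 0

        direct, direct_err = quad(f, 0.0, 1.0)   # quad copes, but with a larger error estimate

        # Substituting x = t**2 (dx = 2 t dt) removes the singularity:
        # the integrand becomes 2*cos(t**2), which is smooth on [0, 1].
        g = lambda t: 2.0 * np.cos(t**2)
        smooth, smooth_err = quad(g, 0.0, 1.0)

        print(direct, smooth)                    # both ~1.809; the transformed form converges more cleanly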

    On the Kauffman bracket skein module of the quaternionic manifold

    We use recoupling theory to study the Kauffman bracket skein module of the quaternionic manifold over Z[A,A^{-1}] localized by inverting all the cyclotomic polynomials. We prove that the skein module is spanned by five elements. Using the quantum invariants of these skein elements and the Z_2 homology of the manifold, we determine that they are linearly independent. Comment: corrected summation signs in figures 14, 15, 17; other minor changes

    Optimization of 2-d lattice cellular automata for pseudorandom number generation

    This paper proposes a generalized approach to 2-d CA PRNGs, the 2-d lattice CA PRNG, obtained by introducing vertical connections to arrays of 1-d CA. The structure of a 2-d lattice CA PRNG lies in between that of 1-d CA and 2-d CA grid PRNGs. The generalized approach offers more 2-d CA PRNG variations, and it is found that these can achieve better randomness than conventional 2-d CA grid PRNGs. In this paper, the structure and properties of 2-d lattice CA are explored by varying the number and location of the vertical connections, and by searching for 2-d array settings that give good randomness according to the Diehard test. To get the most out of 2-d lattice CA PRNGs, a genetic algorithm is employed to search for good neighborhood characteristics; with this evolutionary approach, the randomness quality of 2-d lattice CA PRNGs is optimized. A new metric, #rn, is introduced as a way of finding the 2-d lattice CA PRNG that needs the fewest cells to pass the Diehard test, and a cropping technique is then presented to further boost PRNG performance. Finally, the cost and efficiency of the 2-d lattice CA PRNG are compared with those of previously published CA PRNGs.
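
    A minimal sketch of the lattice idea, assuming rule-30 row updates and XOR vertical couplings at a few columns; the actual cell rules, connection placement, and output tapping in the paper may differ.

        import numpy as np

        def lattice_ca_step(state, vertical_cols):
            """Each row evolves as a 1-d rule-30 CA (periodic boundaries); cells in
            `vertical_cols` additionally XOR in the cell directly above them, giving
            the vertical connections of the 2-d lattice."""
            left = np.roll(state, 1, axis=1)
            right = np.roll(state, -1, axis=1)
            new = left ^ (state | right)                     # rule 30 on every row
            above = np.roll(state, 1, axis=0)                # rows wrap vertically
            new[:, vertical_cols] ^= above[:, vertical_cols]
            return new

        rng = np.random.default_rng(42)
        state = rng.integers(0, 2, size=(8, 16), dtype=np.uint8)
        bits = []
        for _ in range(1024):
            state = lattice_ca_step(state, vertical_cols=[3, 8, 12])
            bits.append(state[:, 0].copy())                  # tap one column per step
        stream = np.concatenate(bits)                        # bitstream for Diehard-style testing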