
    On the Complexity of Limit Sets of Cellular Automata Associated with Probability Measures

    We study the notion of limit sets of cellular automata associated with probability measures (mu-limit sets). This notion was introduced by P. Kurka and A. Maass. It is a refinement of the classical notion of omega-limit sets, dealing with the typical long-term behavior of cellular automata. It focuses on the words whose probability of appearance does not tend to 0 as time tends to infinity (the persistent words). In this paper, we give a characterisation of the persistent language for non-sensitive cellular automata associated with Bernoulli measures. We also study the computational complexity of these languages. We show that the persistent language can be non-recursive. Our main result is that the set of quasi-nilpotent cellular automata (those with a single configuration in their mu-limit set) is neither recursively enumerable nor co-recursively enumerable.
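The notion of a persistent word can be illustrated numerically: run a cellular automaton from a configuration drawn from a Bernoulli measure and track how often a given word appears as time grows. The sketch below is a hypothetical toy illustration, not code from the paper; it uses elementary rule 128, under which runs of 1s shrink, so the word 11 dies out and is not persistent.

```python
import random

def step(config, rule):
    """One synchronous step of an elementary (radius-1) CA on a cyclic
    configuration; `rule` is the Wolfram rule number."""
    n = len(config)
    return [
        (rule >> (4 * config[(i - 1) % n] + 2 * config[i] + config[(i + 1) % n])) & 1
        for i in range(n)
    ]

def word_frequency(config, word):
    """Fraction of (cyclic) positions where `word` occurs in `config`."""
    n, k = len(config), len(word)
    hits = sum(
        all(config[(i + j) % n] == word[j] for j in range(k)) for i in range(n)
    )
    return hits / n

# Under rule 128 (a cell stays 1 only if its whole neighbourhood is 1),
# runs of 1s lose one cell per step on each side, so starting from a
# Bernoulli(1/2) configuration the word 11 dies out: it is not persistent.
random.seed(0)
config = [random.randint(0, 1) for _ in range(2000)]
freqs = []
for _ in range(30):
    freqs.append(word_frequency(config, (1, 1)))
    config = step(config, 128)
print(freqs[0], freqs[-1])  # the frequency decays toward 0
```

Swapping in a surjective rule such as rule 90 instead keeps the frequency of 11 bounded away from zero, which is the behaviour of a persistent word.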

    Conjugacy of one-dimensional one-sided cellular automata is undecidable

    Two cellular automata are strongly conjugate if there exists a shift-commuting conjugacy between them. We prove that the following two sets of pairs (F, G) of one-dimensional one-sided cellular automata over a full shift are recursively inseparable: (i) pairs where F has strictly larger topological entropy than G, and (ii) pairs that are strongly conjugate and have zero topological entropy. Because there is no factor map from a lower entropy system to a higher entropy one, and there is no embedding of a higher entropy system into a lower entropy system, we also get as corollaries that the following decision problems are undecidable: given two one-dimensional one-sided cellular automata F and G over a full shift, are F and G conjugate? Is F a factor of G? Is F a subsystem of G? All of these are undecidable in both strong and weak variants (according to whether the homomorphism is required to commute with the shift or not, respectively). It also immediately follows that these results hold for one-dimensional two-sided cellular automata. Comment: 12 pages, 2 figures, accepted for SOFSEM 201

    Curvatons in the minimally supersymmetric standard model

    The curvaton is an effectively massless field whose energy density during inflation is negligible but which later becomes dominant. This is a novel mechanism for generating the scale-invariant perturbations. I discuss the possibility that the curvaton could be found among the fields of the minimally supersymmetric standard model (MSSM), which contains a number of flat directions along which the renormalizable potential vanishes. The requirements of late domination and the absence of damping of the perturbations pick out essentially a unique candidate for the MSSM curvaton. One must also require that inflation takes place in a hidden sector. If the inflaton energy density can be radiated into extra dimensions, many constraints can be relaxed, and the simplest flat direction, consisting of the Higgses H_u and H_d, would provide a working example of an MSSM curvaton. Comment: 16 pages, 1 figure

    The Loss of A True Love That Never Can Return: Travels of A Ballad

    The broadside ballad "Sweet William" or "The Sailor Boy" is a plaintive story of love and loss which has travelled far over two hundred years. Because "Sweet William" is both a common tale and a tale of common people, its appeal is tested with each retelling or re-singing. Today, one might learn this ballad through a number of media, ranging from informal one-to-one transmission to printed and recorded sources to cyberspace. This paper considers the shifting ecology of ballad transmission, using a far-flung and living song as a lens.

    Ecophysiology of two benthic amphipod species from the northern Baltic Sea


    Optimization of utilization of test resources

    Abstract. Limited testing resources are one of the most fundamental challenges in testing. Testing of complex systems requires very large numbers of test cases to provide an adequate level of testing. Coverage is a popular metric for stating the level of testing. However, coverage alone is not always a good measure of the level of testing, for two reasons. First, it does not provide information about how efficiently the testing resources were spent. Second, coverage does not convey how close the testing is to optimal resource utilization. This thesis proposes a way to measure the level of test resource utilization, and a way to estimate the distance from optimal resource utilization. In this thesis a set of efficiency and performance metrics is defined to measure the utilization of testing resources. The defined metrics consider the achieved coverage with respect to the spent testing resources and the complexity of the tested system. Based on the defined metrics, an approximation formula for the maximum efficiency as a function of available testing resources is derived. A method to simplify complex equations by considering the states of the equation is proposed. The defined metrics and the proposed method are applied to a 3GPP equation, intended for a Long Term Evolution (LTE) device, to search for a subset that maximizes test resource utilization. The optimization of the utilization of test resources is viewed as a set cover problem, which is attempted to be solved with various algorithms: a brute-force algorithm, the classical Greedy Algorithm (GA), and a few of their variants and combinations. The performance of the algorithms is studied and compared. Performance results are presented, and the best results are compared with the approximated maximum. It was observed that no single algorithm suits all scenarios; rather, the choice of algorithm depends on the resources available.
Brute-force-based algorithms should be selected when resources are scarce, and GA-based algorithms when resources are plentiful. Based on the results, the utilization of the test resources was maximized with a moderate number of test resources.
Testiresurssien käytön optimointi. Abstract (translated from Finnish). Limited testing resources are one of the most central challenges in testing. Testing complex systems often means a very large number of tests in order to achieve an adequate level of testing. Coverage is the traditional way of measuring the level of testing: it expresses the absolute level of testing as the ratio of tested to untested parts. Coverage alone is not the best way to describe the level of testing, for two reasons. Coverage does not express how efficiently the testing resources were used, nor does it tell how close the testing was to the optimal use of resources. This thesis presents an alternative way to measure the level of testing, as well as a means of estimating how close the testing is to optimal. In this work, a set of metrics is defined for measuring how efficiently testing resources are utilized. The metrics take into account the achieved coverage relative to the resources spent, as well as the complexity of the system under test. Based on the defined metrics, an approximation formula is derived that expresses the maximum attainable efficiency as a function of the amount of resources. A method for simplifying complex equations by considering the states of an equation is proposed. The defined metrics and the proposed method are applied to a 3GPP formula intended for a Long Term Evolution (LTE) device, with the aim of finding a test set that optimizes the use of testing resources. The optimization of testing resources is treated as a set cover problem, which is attempted to be solved with several algorithms, such as a brute-force search algorithm and a greedy algorithm, as well as a few variants and combinations of the two.
The results of the algorithms are presented and compared. The best results are compared against the approximated maximum efficiency. It is observed that no single algorithm suits every situation; the best algorithm depends on the amount of resources available. The brute-force algorithm achieves the best result for small amounts of resources, the greedy algorithm for large ones. Based on the results, the best utilization of testing resources is achieved with a moderate amount of resources.
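The set cover formulation behind the thesis can be sketched in a few lines; the instance and function names below are illustrative, not taken from the thesis. The greedy algorithm picks, at each step, the subset covering the most still-uncovered elements, while the brute-force search enumerates combinations by increasing size and so is only feasible for small instances, which is consistent with the observation that brute force suits scarce resources and greedy suits plentiful ones.

```python
from itertools import combinations

def greedy_cover(universe, subsets):
    """Classical greedy set cover: repeatedly pick the subset that covers
    the most still-uncovered elements (an ln(n)-approximation)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= best
    return chosen

def brute_force_cover(universe, subsets):
    """Exhaustive minimum cover: tries all combinations by increasing size,
    so it is exponential in len(subsets) and viable only for small instances."""
    universe = set(universe)
    for k in range(1, len(subsets) + 1):
        for combo in combinations(subsets, k):
            if set().union(*combo) >= universe:
                return list(combo)
    return None

# An instance where greedy is suboptimal: it grabs the big set first and
# then needs two more, while the optimum covers everything with two sets.
universe = [1, 2, 3, 4, 5, 6]
subsets = [frozenset({1, 2, 3, 4}), frozenset({1, 3, 5}), frozenset({2, 4, 6})]
print(len(greedy_cover(universe, subsets)))       # 3
print(len(brute_force_cover(universe, subsets)))  # 2
```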

    Energy loss in a fluctuating hydrodynamical background

    Recently it has become apparent that event-by-event fluctuations in the initial state of hydrodynamical modelling of ultrarelativistic heavy-ion collisions are crucial for understanding the full centrality dependence of the elliptic flow coefficient v_2. In particular, in central collisions the density fluctuations play a major role in generating the spatial eccentricity of the initial state. This raises the question of to what degree high-p_T physics, in particular leading-parton energy loss, which takes place in the background of an evolving medium, is sensitive to the presence of event-by-event density fluctuations in the background. In this work, we report results for the effects of fluctuations on the nuclear modification factor R_AA in both central and noncentral sqrt(s_NN) = 200 GeV Au+Au collisions at RHIC. Two different types of energy-loss models, a radiative and an elastic one, are considered. In particular, we study the dependence of the results on the assumed spatial size of the density fluctuations, and discuss the angular modulation of R_AA with respect to the event plane. Comment: 9 pages, 9 figures
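For context, the nuclear modification factor is conventionally defined (this is the standard definition, not a formula specific to this work) as the yield in nucleus-nucleus collisions divided by the binary-collision-scaled yield in proton-proton collisions:

```latex
R_{AA}(p_T) = \frac{\mathrm{d}N^{AA}/\mathrm{d}p_T}
                   {\langle N_{\mathrm{coll}} \rangle \, \mathrm{d}N^{pp}/\mathrm{d}p_T}
```

A value of R_AA below 1 at high p_T signals suppression of hard partons by the medium, which is why R_AA serves as the observable for energy loss here.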

    Aperiodic tilings and entropy

    In this paper we present a construction of the Kari-Culik aperiodic tile set, the smallest known to date. With the help of this construction, we prove that this tile set has positive entropy. We also explain why this result was not expected.
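For context, a Wang tile set has positive entropy when the number of valid n x n patterns grows like 2^{cn^2} for some c > 0. The sketch below counts valid rectangles by backtracking over two toy tile sets (purely illustrative; the actual Kari-Culik tiles are far more intricate): one whose pattern count grows exponentially in n^2 (positive entropy) and one that grows only exponentially in n (zero entropy).

```python
def count_patterns(tiles, rows, cols):
    """Count valid rows x cols rectangles of Wang tiles by backtracking.

    A tile is a tuple (name, north, east, south, west); two tiles may abut
    only when the touching edge colours match.
    """
    grid = [[None] * cols for _ in range(rows)]

    def fits(tile, r, c):
        _, north, _, _, west = tile
        # west edge must match the east edge of the left neighbour
        if c > 0 and grid[r][c - 1][2] != west:
            return False
        # north edge must match the south edge of the tile above
        if r > 0 and grid[r - 1][c][3] != north:
            return False
        return True

    def rec(pos):
        if pos == rows * cols:
            return 1
        r, c = divmod(pos, cols)
        total = 0
        for t in tiles:
            if fits(t, r, c):
                grid[r][c] = t
                total += rec(pos + 1)
                grid[r][c] = None
        return total

    return rec(0)

# Two distinct tiles with identical edge colours: every cell is a free
# choice, so the number of n x n patterns is 2**(n*n) -- positive entropy.
free = [("a", 0, 0, 0, 0), ("b", 0, 0, 0, 0)]
counts = [count_patterns(free, n, n) for n in (1, 2, 3)]

# Tiles forcing vertical alternation: only the top tile of each column is
# a free choice, so there are just 2**n patterns -- zero entropy.
alternating = [("c", 0, 0, 1, 0), ("d", 1, 0, 0, 0)]
counts2 = [count_patterns(alternating, n, n) for n in (1, 2, 3)]
print(counts, counts2)  # [2, 16, 512] [2, 4, 8]
```

The surprise noted in the abstract is that aperiodicity is often associated with rigid, low-complexity structure, whereas positive entropy means the pattern count grows at the maximal exponential-in-area rate.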