
    Local Causal States and Discrete Coherent Structures

    Coherent structures form spontaneously in nonlinear spatiotemporal systems and are found at all spatial scales in natural phenomena, from laboratory hydrodynamic flows and chemical reactions to ocean, atmosphere, and planetary climate dynamics. Phenomenologically, they appear as key components that organize the macroscopic behaviors in such systems. Despite a century of effort, they have eluded rigorous analysis and empirical prediction, with progress being made only recently. As a step in this direction, we present a formal theory of coherent structures in fully-discrete dynamical field theories. It builds on the notion of structure introduced by computational mechanics, generalizing it to a local spatiotemporal setting. The analysis' main tool employs the local causal states, which are used to uncover a system's hidden spatiotemporal symmetries and which identify coherent structures as spatially-localized deviations from those symmetries. The approach is behavior-driven in the sense that it does not rely on directly analyzing spatiotemporal equations of motion; rather, it considers only the spatiotemporal fields a system generates. As such, it offers an unsupervised approach to discovering and describing coherent structures. We illustrate the approach by analyzing coherent structures generated by elementary cellular automata, comparing the results with an earlier, dynamic-invariant-set approach that decomposes fields into domains, particles, and particle interactions. Comment: 27 pages, 10 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/dcs.ht
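As a concrete illustration of the kind of data this analysis consumes, here is a minimal sketch (not the paper's code; the rule number, lattice width, and initial condition are illustrative choices) that generates the spacetime field of an elementary cellular automaton:

```python
# Minimal sketch: generate the spacetime field of an elementary cellular
# automaton (ECA), the raw behavior-driven input the local-causal-state
# analysis works from. Rule 110 and the sizes below are illustrative.

def eca_step(row, rule):
    """Apply an ECA rule (Wolfram numbering) to one row, periodic boundaries."""
    n = len(row)
    out = []
    for i in range(n):
        # The (left, self, right) neighborhood forms a 3-bit index into the rule.
        idx = (row[(i - 1) % n] << 2) | (row[i] << 1) | row[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def spacetime_field(rule, initial, steps):
    """Return the list of rows produced by iterating the rule."""
    field = [list(initial)]
    for _ in range(steps):
        field.append(eca_step(field[-1], rule))
    return field

# Example: rule 110 from a single 1 in a field of width 11.
init = [0] * 11
init[5] = 1
field = spacetime_field(110, init, 4)
```

The resulting `field` (a list of rows, time running downward) is the kind of spatiotemporal data from which hidden symmetries and their localized deviations can then be inferred.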

    Computation in Finitary Stochastic and Quantum Processes

    We introduce stochastic and quantum finite-state transducers as computation-theoretic models of classical stochastic and quantum finitary processes. Formal process languages, representing the distribution over a process's behaviors, are recognized and generated by suitable specializations. We characterize and compare deterministic and nondeterministic versions, summarizing their relative computational power in a hierarchy of finitary process languages. Quantum finite-state transducers and generators are a first step toward a computation-theoretic analysis of individual, repeatedly measured quantum dynamical systems. They are explored via several physical systems, including an iterated beam splitter, an atom in a magnetic field, and atoms in an ion trap, a special case of which implements the Deutsch quantum algorithm. We show that these systems' behaviors, and so their information-processing capacity, depend sensitively on the measurement protocol. Comment: 25 pages, 16 figures, 1 table; http://cse.ucdavis.edu/~cmg; numerous corrections and update
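The sensitivity to the measurement protocol can be made concrete with a toy two-state model of an iterated beam splitter. This is a hedged sketch, not the paper's construction: modeling the 50/50 splitter as a Hadamard unitary is my own illustrative choice.

```python
# Toy iterated beam splitter: a two-state quantum system whose observed
# process depends on when it is measured. The 50/50 splitter is modeled
# by a Hadamard unitary (an illustrative choice, not the paper's setup).
import math

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(u, state):
    """Apply a 2x2 unitary to a 2-component state vector."""
    return [u[0][0] * state[0] + u[0][1] * state[1],
            u[1][0] * state[0] + u[1][1] * state[1]]

def outcome_probs(state):
    """Born-rule probabilities of measuring outcome 0 or 1."""
    return [abs(state[0]) ** 2, abs(state[1]) ** 2]

# Protocol 1: measure after every pass through the splitter.
p_every = outcome_probs(apply(H, [1.0, 0.0]))

# Protocol 2: two passes between measurements. The amplitudes interfere,
# and since the Hadamard applied twice is the identity, the outcome is certain.
p_skip = outcome_probs(apply(H, apply(H, [1.0, 0.0])))
```

Under protocol 1 the outcomes look like fair coin flips; under protocol 2 they are deterministic, illustrating how the measurement protocol reshapes the observed process and its apparent information-processing capacity.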

    Density Classification Quality of the Traffic-majority Rules

    The density classification task is a famous problem in the theory of cellular automata. It is unsolvable for deterministic automata, but recently solutions for stochastic cellular automata have been found. One of them is a set of stochastic transition rules depending on a parameter η, the traffic-majority rules. Here I derive a simplified model for these cellular automata. It is valid for a subset of the initial configurations and uses random walks and generating functions. I compare its prediction with computer simulations and show that it expresses recognition quality and time correctly for a large range of η values. Comment: 40 pages, 9 figures. Accepted by the Journal of Cellular Automata. (Some typos corrected; the numbers for theorems, lemmas and definitions have changed with respect to version 1.)
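As I read the abstract, the traffic-majority scheme mixes the traffic rule (ECA 184) and the majority rule (ECA 232) stochastically per cell. The sketch below is illustrative only: whether traffic is applied with probability η or 1-η, and the parameter values, lattice size, and halting criterion, are my own assumptions, not the paper's.

```python
# Illustrative traffic-majority density classifier: each cell applies the
# traffic rule (ECA 184) with probability eta and the majority rule
# (ECA 232) otherwise. All parameter choices here are hypothetical.
import random

def local_rule(rule, left, center, right):
    """Look up one cell's next state in a Wolfram-numbered ECA rule table."""
    return (rule >> ((left << 2) | (center << 1) | right)) & 1

def traffic_majority_step(row, eta, rng):
    """One synchronous update with periodic boundaries."""
    n = len(row)
    return [local_rule(184 if rng.random() < eta else 232,
                       row[(i - 1) % n], row[i], row[(i + 1) % n])
            for i in range(n)]

def classify_density(row, eta=0.1, max_steps=500, seed=0):
    """Return 1 (resp. 0) if the lattice reaches all-ones (all-zeros),
    or None if no consensus is reached within max_steps."""
    rng = random.Random(seed)
    for _ in range(max_steps):
        if all(c == 1 for c in row):
            return 1
        if all(c == 0 for c in row):
            return 0
        row = traffic_majority_step(row, eta, rng)
    return None
```

The classification task itself is to reach the all-ones fixed point when the initial density of 1s exceeds one half, and the all-zeros fixed point otherwise; the recognition quality studied in the paper is the probability of reaching the correct consensus.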

    Automatic Filters for the Detection of Coherent Structure in Spatiotemporal Systems

    Most current methods for identifying coherent structures in spatially-extended systems rely on prior information about the form which those structures take. Here we present two new approaches to automatically filter the changing configurations of spatial dynamical systems and extract coherent structures. One, local sensitivity filtering, is a modification of the local Lyapunov exponent approach suitable to cellular automata and other discrete spatial systems. The other, local statistical complexity filtering, calculates the amount of information needed for optimal prediction of the system's behavior in the vicinity of a given point. By examining the changing spatiotemporal distributions of these quantities, we can find the coherent structures in a variety of pattern-forming cellular automata, without needing to guess or postulate the form of that structure. We apply both filters to elementary and cyclical cellular automata (ECA and CCA) and find that they readily identify particles, domains and other more complicated structures. We compare the results from ECA with earlier ones based upon the theory of formal languages, and the results from CCA with a more traditional approach based on an order parameter and free energy. While sensitivity and statistical complexity are equally adept at uncovering structure, they are based on different system properties (dynamical and probabilistic, respectively), and provide complementary information. Comment: 16 pages, 21 figures. Figures considerably compressed to fit arxiv requirements; write first author for higher-resolution version
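The local-sensitivity idea can be sketched for a deterministic CA: perturb a single site, evolve both configurations, and record how far the disturbance spreads. This is a rough illustration only; the rule choice and the simple Hamming-distance summary are my own simplifications of the paper's per-site, light-cone-based filter.

```python
# Rough sketch of local sensitivity for a deterministic cellular automaton:
# flip one site and track the size of the resulting difference pattern.
# (The paper's filter is defined per site over past light cones; this
# global Hamming-distance summary is a simplification.)

def eca_step(row, rule):
    """One synchronous ECA update (Wolfram numbering), periodic boundaries."""
    n = len(row)
    return [(rule >> ((row[(i - 1) % n] << 2) | (row[i] << 1)
                      | row[(i + 1) % n])) & 1 for i in range(n)]

def perturbation_spread(rule, row, site, steps):
    """Sizes of the difference pattern after flipping one site."""
    a, b = list(row), list(row)
    b[site] ^= 1  # single-site perturbation
    sizes = []
    for _ in range(steps):
        a, b = eca_step(a, rule), eca_step(b, rule)
        sizes.append(sum(x != y for x, y in zip(a, b)))
    return sizes
```

For an additive rule such as ECA 90 (each cell is the XOR of its neighbors), the difference pattern evolves independently of the background, so the spread from a single flip traces out the familiar Sierpinski-like growth.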

    From Models to Simulations

    This book analyses the impact computerization has had on contemporary science and explains the origins, technical nature and epistemological consequences of the current decisive interplay between technology and science: an intertwining of formalism, computation, data acquisition and visualization that has led to the spread of simulation models since the 1950s. Using historical, comparative and interpretative case studies from a range of disciplines, with a particular emphasis on the case of plant studies, the author shows how and why computers, data-treatment devices and programming languages have occasioned a gradual but irresistible and massive shift from mathematical models to computer simulations.

    Multiple verification in computational modeling of bone pathologies

    We introduce a model-checking approach to diagnosing the emergence of bone pathologies. The implementation of a new model of bone remodeling in PRISM has led to an interesting characterization of osteoporosis as a defective bone-remodeling dynamics relative to other bone pathologies. Our approach allows us to derive three types of model-checking-based diagnostic estimators. The first focuses on the level of bone mineral density, which is currently used in medical practice. In addition, we introduce a novel estimator that uses the full patient clinical record, here simulated using the modeling framework. This estimator detects rapid (on the order of months) negative changes in bone mineral density. Independently of the actual bone mineral density, when the decrease occurs rapidly it is important to alert the patient and monitor him/her more closely to detect the onset of other bone co-morbidities. A third estimator takes into account the variance of the bone density, which could support the investigation of metabolic syndromes, diabetes and cancer. Our implementation could make use of different logical combinations of these statistical estimators and could incorporate other biomarkers for other systemic co-morbidities (for example diabetes and thalassemia). The combination of stochastic modeling with formal methods motivates a new diagnostic framework for complex pathologies. In particular, our approach takes into consideration important properties of biosystems such as multiscale organization and self-adaptiveness. The multi-diagnosis could be further expanded, inching towards the complexity of human diseases. Finally, we briefly introduce self-adaptiveness in formal methods, a key property in the regulative mechanisms of biological systems that is well known in other mathematical and engineering areas. Comment: In Proceedings CompMod 2011, arXiv:1109.104
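The second estimator, which flags rapid drops in bone mineral density regardless of the absolute level, can be illustrated with a plain sliding-window check. This is not the paper's PRISM model: the window length, threshold, and the `rapid_decline_alarm` helper are hypothetical choices for illustration.

```python
# Toy illustration (not the paper's PRISM model) of a diagnostic estimator
# that flags rapid negative changes in bone mineral density (BMD) over a
# patient record. Window length and threshold are hypothetical parameters.

def rapid_decline_alarm(bmd_series, window=3, drop_threshold=0.05):
    """Return indices (e.g. months) where BMD fell by more than the given
    fraction relative to its value `window` samples earlier."""
    alarms = []
    for t in range(window, len(bmd_series)):
        earlier = bmd_series[t - window]
        if earlier > 0 and (earlier - bmd_series[t]) / earlier > drop_threshold:
            alarms.append(t)
    return alarms

# A stable record raises no alarm; a sudden drop does, even though the
# absolute BMD level is still within a "normal" range.
stable = [1.00, 0.99, 1.00, 0.99, 1.00, 0.99]
sudden = [1.00, 1.00, 1.00, 0.97, 0.92, 0.90]
```

In the paper's setting the record is produced by the stochastic remodeling model and the property is checked formally; this sketch only conveys the shape of the estimator, which reacts to the rate of change rather than the BMD level itself.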

    L’INTELLECT INCARNÉ: Sur les interprétations computationnelles, évolutives et philosophiques de la connaissance

    Modern cognitive science cannot be understood without recent developments in computer science, artificial intelligence (AI), robotics, neuroscience, biology, linguistics, and psychology. Classic analytic philosophy as well as traditional AI assumed that all kinds of knowledge must be explicitly represented by formal or programming languages. This assumption contradicts recent insights into the biology of evolution and the developmental psychology of the human organism. Most of our knowledge is implicit and unconscious. It is not formally represented, but embodied knowledge, which is learnt by doing and understood by bodily interaction with ecological niches and social environments. That is true not only for low-level skills, but even for high-level domains of categorization, language, and abstract thinking. Embodied cognitive science, AI, and robotics try to build the embodied mind in an artificial evolution. From a philosophical point of view, it is amazing that the new ideas of embodied mind and robotics have deep roots in 20th-century philosophy.