
    Toward a formal theory for computing machines made out of whatever physics offers: extended version

    Approaching limitations of digital computing technologies have spurred research in neuromorphic and other unconventional approaches to computing. Here we argue that if we want to systematically engineer computing systems that are based on unconventional physical effects, we need guidance from a formal theory that is different from the symbolic-algorithmic theory of today's computer science textbooks. We propose a general strategy for developing such a theory, and within that general view, a specific approach that we call "fluent computing". In contrast to Turing, who modeled computing processes from a top-down perspective as symbolic reasoning, we adopt the scientific paradigm of physics and model physical computing systems bottom-up by formalizing what can ultimately be measured in any physical substrate. This leads to an understanding of computing as the structuring of processes, while classical models of computing systems describe the processing of structures. Comment: 76 pages. This is an extended version of a perspective article with the same title that will appear in Nature Communications soon after this manuscript goes public on arXiv.

    Physics of traffic gridlock in a city: a study of the spreading of traffic jams on urban street networks

    Abstract. Traffic congestion has profound and varied impacts on modern society, yet characterizing, at the city scale, the transition that gives rise to congestion remains an elusive task. The challenge lies in understanding the interplay between network topology and spatial dynamics in this traffic phenomenon. In this thesis we combine cellular automata modelling with analysis tools from statistical physics to study the emergence of congestion at the road (street), grid (neighbourhood) and network (city) levels. At the street level, we show for at least two traffic cellular automata that a simple Monte Carlo exploration of the driving rules reproduces the fundamental diagram of a single road segment. Next, by applying tools of percolation theory, we unveil the underlying mechanism of the jamming process in the Biham-Middleton-Levine (BML) model, a paradigmatic model of car traffic, on both square and honeycomb grids, solving a decade-old puzzle on the origin of the intermediate states of this model on square grids and pointing out the relevance of both asymmetry and the underlying grid to the model's behaviour. Finally, we use origin-destination matrices obtained from mobile phone data to simulate traffic, car by car, on the detailed road networks of five large cities: Rio de Janeiro, Boston, the San Francisco Bay Area, Porto and Lisbon. At this network level, we find that the characteristic recovery time the system takes to unload is proportional to the fraction of road infrastructure in use and to the mean travel time over all trips. In addition, we study the emergence of congestion as the number of cars increases while trip distributions and street capacities are kept unchanged. These findings strongly support the notion that the transition to urban traffic gridlock resembles the directed percolation universality class and can be approached within the framework of non-equilibrium phase transitions. Our work illustrates the power of combining a computational description at the level of each car with the solid theoretical framework of statistical physics to analyze the origins and behaviour of vehicular traffic congestion.
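
    The thesis does not reproduce its simulation code in this abstract, so the snippet below is a purely illustrative sketch of how a stochastic traffic cellular automaton yields the fundamental diagram of a single road segment. It implements the classic Nagel-Schreckenberg rule on a periodic single-lane road; the specific automata studied in the thesis are not named here, and all parameter values below are assumptions.

```python
import numpy as np

def nasch_flow(density, length=1000, vmax=5, p_brake=0.3, steps=2000, warmup=500, seed=0):
    """Mean flow of a single-lane Nagel-Schreckenberg road at a given car density."""
    rng = np.random.default_rng(seed)
    n_cars = max(1, int(density * length))
    pos = np.sort(rng.choice(length, size=n_cars, replace=False))  # car positions on a ring
    vel = np.zeros(n_cars, dtype=int)
    flow = 0.0
    for t in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % length     # empty cells ahead of each car
        vel = np.minimum(vel + 1, vmax)                  # acceleration
        vel = np.minimum(vel, gaps)                      # braking to avoid collisions
        vel = np.where(rng.random(n_cars) < p_brake,     # random slowdowns
                       np.maximum(vel - 1, 0), vel)
        pos = (pos + vel) % length                       # movement on the periodic road
        if t >= warmup:
            flow += vel.mean() * density                 # flow q = density * mean speed
    return flow / (steps - warmup)

# Fundamental diagram: flow as a function of density
for rho in np.linspace(0.02, 0.95, 20):
    print(f"density={rho:.2f}  flow={nasch_flow(rho):.3f}")
```

    Sweeping the density and recording the flow traces out the characteristic rising (free-flow) and falling (congested) branches of the fundamental diagram mentioned in the abstract.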

    Marker and source-marker reprogramming of Most Permissive Boolean networks and ensembles with BoNesis

    Boolean networks (BNs) are discrete dynamical systems with applications to the modeling of cellular behaviors. In this paper, we demonstrate how the software BoNesis can be employed to exhaustively identify combinations of perturbations which enforce properties on the fixed points and attractors of a BN. We consider marker properties, which specify that some components are fixed to a specific value. We study four variants of the marker reprogramming problem: the reprogramming of fixed points, of minimal trap spaces, and of fixed points and minimal trap spaces reachable from a given initial configuration under the most permissive update mode. The perturbations consist in fixing a set of components to constant values; they can destroy existing attractors and create new ones. In each case, we give an upper bound on the theoretical computational complexity and provide an implementation of the resolution using the BoNesis Python framework. Finally, we lift the reprogramming problems to ensembles of BNs, as supported by BoNesis, bringing insight into possible and universal reprogramming strategies. This paper can be executed and modified interactively. Comment: Notebook available at https://nbviewer.org/github/bnediction/reprogramming-with-bonesis/blob/release/paper.ipyn
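
    The BoNesis API itself is not shown in this abstract, so the sketch below uses plain Python on a hypothetical three-node network (not one from the paper) to illustrate what marker reprogramming of fixed points means: a perturbation pins some components to constant values, and we ask which pinnings force every remaining fixed point to satisfy the marker.

```python
from itertools import product

# Toy Boolean network: each component's update function reads the full state.
# Illustrative example only, not a network from the paper.
rules = {
    "A": lambda s: s["C"],
    "B": lambda s: s["A"] and s["C"],
    "C": lambda s: s["A"] or s["C"],
}
nodes = list(rules)

def fixed_points(rules, pinned):
    """Enumerate states fixed under the rules, with 'pinned' components
    forced to a constant value (the perturbation)."""
    fps = []
    for bits in product([0, 1], repeat=len(nodes)):
        state = dict(zip(nodes, bits))
        if any(state[n] != v for n, v in pinned.items()):
            continue  # state inconsistent with the perturbation
        nxt = {n: (pinned[n] if n in pinned else int(rules[n](state))) for n in nodes}
        if nxt == state:
            fps.append(state)
    return fps

# Marker property: every fixed point must have B = 1.
marker = {"B": 1}

# Search single-component perturbations that enforce the marker on all fixed points.
for node in nodes:
    for value in (0, 1):
        fps = fixed_points(rules, {node: value})
        if fps and all(fp[k] == v for fp in fps for k, v in marker.items()):
            print(f"Pinning {node}={value} enforces the marker; fixed points: {fps}")
```

    On this toy network the search reports both the trivial solution (pinning the marker component itself) and non-trivial single-node pinnings; BoNesis addresses the same kind of question declaratively, including for minimal trap spaces and most-permissive reachability, at scales where brute-force enumeration is infeasible.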

    Lattice-gas cellular automata for the analysis of cancer invasion

    Cancer cells display characteristic traits acquired in a step-wise manner during carcinogenesis. Some of these traits are autonomous growth, induction of angiogenesis, invasion and metastasis. In this thesis, the focus is on one of the late stages of tumor progression, tumor invasion. Tumor invasion emerges from the combined effect of tumor cell-cell and cell-microenvironment interactions, which can be studied with the help of mathematical analysis. Cellular automata (CA) can be viewed as simple models of self-organizing complex systems in which collective behavior can emerge out of an ensemble of many interacting "simple" components. In particular, we focus on an important class of CA, the so-called lattice-gas cellular automata (LGCA). In contrast to traditional CA, LGCA provide a straightforward and intuitive implementation of particle transport and interactions. Additionally, the structure of LGCA facilitates the mathematical analysis of their behavior. Here, the principal tools for the mathematical analysis of LGCA are the mean-field approximation and the corresponding lattice Boltzmann equation. The main objective of this thesis is to investigate important aspects of tumor invasion under the microscope of mathematical modeling and analysis:
    Impact of the tumor environment: We introduce an LGCA as a microscopic model of tumor cell migration, together with a mathematical description of different tumor environments. We study the impact of various tumor environments (such as the extracellular matrix) on tumor cell migration by estimating the tumor cell dispersion speed for a given environment.
    Effect of tumor cell proliferation and migration: We study the effect of tumor cell proliferation and migration on the tumor's invasive behavior by developing a simplified LGCA model of tumor growth. In particular, we derive the corresponding macroscopic dynamics and calculate the tumor's invasion speed in terms of the tumor cell proliferation and migration rates. Moreover, we calculate the width of the invasive zone, where the majority of mitotic activity is concentrated, and find it to be proportional to the invasion speed.
    Mechanisms of the emergence of tumor invasion: We investigate the mechanisms behind the emergence of tumor invasion in the course of cancer progression. We conclude that the response of a microscopic intracellular mechanism (the migration/proliferation dichotomy) to oxygen shortage, i.e. hypoxia, may be responsible for the transition from a benign (proliferative) to a malignant (invasive) tumor.
    Computing in vivo tumor invasion: Finally, we propose an evolutionary algorithm that estimates the parameters of a tumor growth LGCA model from time series of patient medical data (in particular Magnetic Resonance and Diffusion Tensor Imaging data). These parameters may make it possible to reproduce clinically relevant tumor growth scenarios for a specific patient, providing a prediction of tumor growth at a later stage.
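
    As a minimal illustration of the LGCA scheme the thesis builds on (per-node velocity channels with an exclusion principle, a stochastic interaction/reorientation step, then deterministic propagation), here is a one-dimensional sketch with a simple proliferation rule. The lattice size, number of channels, proliferation probability and boundary handling are illustrative assumptions, not the thesis model.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 200          # number of lattice nodes (illustrative)
CHANNELS = 3     # velocity channels: 0 = move left, 1 = rest, 2 = move right
P_PROLIF = 0.05  # probability that an occupied node gains one extra cell

# occupation[x, c] = 1 if channel c at node x carries a cell (exclusion principle)
occupation = np.zeros((L, CHANNELS), dtype=int)
occupation[:5, :] = 1  # seed a small, fully occupied tumor at the left boundary

def step(occ):
    new = np.zeros_like(occ)
    for x in range(L):
        cells = occ[x].sum()
        # proliferation: with some probability, add one cell if a channel is free
        if cells and cells < CHANNELS and rng.random() < P_PROLIF:
            cells += 1
        # interaction (reorientation): redistribute cells randomly over the channels
        chans = rng.choice(CHANNELS, size=cells, replace=False)
        state = np.zeros(CHANNELS, dtype=int)
        state[chans] = 1
        # propagation: left-movers to x-1, right-movers to x+1, resting cells stay;
        # cells stepping off the lattice are discarded for simplicity
        if state[0] and x > 0:
            new[x - 1, 0] = 1
        if state[1]:
            new[x, 1] = 1
        if state[2] and x < L - 1:
            new[x + 1, 2] = 1
    return new

for t in range(400):
    occupation = step(occupation)

front = np.max(np.nonzero(occupation.sum(axis=1))[0])
print("invasion front after 400 steps:", front)
```

    Tracking the rightmost occupied node over time gives a crude invasion front whose speed can be compared against mean-field predictions, in the spirit of the analysis described above.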

    Homo deceptus: How language creates its own reality

    Homo deceptus is a book that brings together new ideas on language, consciousness and physics into a comprehensive theory that unifies science and philosophy in a different kind of Theory of Everything. The subject of how we are to make sense of the world is addressed in a structured and ordered manner, which starts with a recognition that scientific truths are constructed within a linguistic framework. The author argues that an epistemic foundation of natural language must be understood before laying claim to any notion of reality. This foundation begins with Ludwig Wittgenstein’s Tractatus Logico-Philosophicus and the relationship of language to formal logic. Ultimately, we arrive at an answer to the question of why people believe the things they do. This is effectively a modification of Alfred Tarski’s semantic theory of truth. The second major issue addressed is the ‘dreaded’ Hard Problem of Consciousness as first stated by David Chalmers in 1995. The solution is found in the unification of consciousness, information theory and notions of physicalism. The physical world is shown to be an isomorphic representation of the phenomenological conscious experience. New concepts in understanding how language operates help to explain why this relationship has been so difficult to appreciate. The inclusion of concepts from information theory shows how a digital mechanics resolves heretofore conflicting theories in physics, cognitive science and linguistics. Scientific orthodoxy is supported, but viewed in a different light. Mainstream science is not challenged, but findings are interpreted in a manner that unifies consciousness without contradiction. Digital mechanics and formal systems of logic play central roles in combining language, consciousness and the physical world into a unified theory where all can be understood within a single consistent framework.