
    Brain Computations and Connectivity [2nd edition]

    Get PDF
    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. To understand this, it is essential to know what is computed by different brain systems and how those computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better, in health and in disease. This understanding can potentially be applied to the treatment of brain disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. The book is pioneering in taking this approach to brain function, considering both what is computed by many of our brain systems and how it is computed, and it updates the earlier book, Rolls (2021) Brain Computations: What and How (Oxford University Press), with much new evidence including the connectivity of the human brain. Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they come from neuroscience, from medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.

    Evolution from the ground up with Amee – From basic concepts to explorative modeling

    Get PDF
    Evolutionary theory has been the foundation of biological research for about a century, yet over the past few decades new discoveries and theoretical advances have rapidly transformed our understanding of the evolutionary process. Foremost among them are evolutionary developmental biology, epigenetic inheritance, and various forms of evolutionarily relevant phenotypic plasticity, as well as cultural evolution, which ultimately led to the conceptualization of an extended evolutionary synthesis. Starting from abstract principles rooted in complexity theory, this thesis aims to provide a unified conceptual understanding of any kind of evolution, biological or otherwise. This is used in the second part to develop Amee, an agent-based model that unifies development, niche construction, and phenotypic plasticity with natural selection based on a simulated ecology. Amee is implemented in Utopia, which allows performant, integrated implementation and simulation of arbitrary agent-based models. A phenomenological overview of Amee's capabilities is provided, ranging from the evolution of ecospecies down to the evolution of metabolic networks and up to beyond-species-level biological organization, all of which emerges autonomously from the basic dynamics. The interaction of development, plasticity, and niche construction is investigated, and it is shown that while the expected natural phenomena can in principle arise, the accessible simulation time and system size are too small to produce natural evo-devo phenomena and structures. Amee can thus be used to simulate the evolution of a wide variety of processes.
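
    As a rough illustration of the ingredients Amee combines, the following toy loop couples fitness-proportional selection with phenotypic plasticity and niche construction. It is a minimal Python sketch, not Amee or Utopia code; every name and parameter in it (Agent, fitness, the mixing weights) is a hypothetical choice for illustration.

        import random

        # Toy evolutionary loop combining selection, phenotypic plasticity,
        # and niche construction. Hypothetical illustration only; not the
        # Amee/Utopia implementation.

        class Agent:
            def __init__(self, genotype):
                self.genotype = genotype  # heritable trait in [0, 1]

            def phenotype(self, env):
                # Plasticity: the expressed trait shifts toward the current
                # environmental state instead of being fixed by the genotype.
                return 0.7 * self.genotype + 0.3 * env

        def fitness(phen, env):
            # Agents do best when their phenotype matches the environment;
            # the small floor keeps the selection weights strictly positive.
            return max(1e-6, 1.0 - abs(phen - env))

        def step(population, env, mut_sd=0.02):
            phens = [a.phenotype(env) for a in population]
            # Niche construction: the population's mean phenotype feeds
            # back into the shared environment.
            env = 0.9 * env + 0.1 * sum(phens) / len(phens)
            # Fitness-proportional selection with Gaussian mutation.
            weights = [fitness(p, env) for p in phens]
            parents = random.choices(population, weights=weights,
                                     k=len(population))
            children = [Agent(min(1.0, max(0.0, p.genotype
                                           + random.gauss(0.0, mut_sd))))
                        for p in parents]
            return children, env

        population = [Agent(random.random()) for _ in range(100)]
        env = 0.2
        for _ in range(500):
            population, env = step(population, env)
        mean_g = sum(a.genotype for a in population) / len(population)
        print(f"env={env:.3f}, mean genotype={mean_g:.3f}")

    Even this toy version exhibits the feedback loop the thesis studies: the environment against which agents are selected is itself shifted by the evolving population.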

    Layered Cellular Automata

    Full text link
    Layered Cellular Automata (LCA) extends the concept of traditional cellular automata (CA) to model complex systems and phenomena. In LCA, each cell's next state is determined by the interaction of two layers of computation, allowing for more dynamic and realistic simulations. This thesis explores the design, dynamics, and applications of LCA, with a focus on its potential in pattern recognition and classification. The research begins by introducing the limitations of traditional CA in capturing the complexity of real-world systems. It then presents the concept of LCA, where layer 0 corresponds to a predefined model, and layer 1 represents the proposed model with additional influence. The interlayer rules, denoted as f and g, enable interactions not only from adjacent neighboring cells but also from some far-away neighboring cells, capturing long-range dependencies. The thesis explores various LCA models, including those based on averaging, maximization, minimization, and modified ECA neighborhoods. Additionally, the implementation of LCA on the 2-D cellular automaton Game of Life is discussed, showcasing intriguing patterns and behaviors. Through extensive experiments, the dynamics of different LCA models are analyzed, revealing their sensitivity to rule changes and block size variations. Convergent LCAs, which converge to fixed points from any initial configuration, are identified and used to design a two-class pattern classifier. Comparative evaluations demonstrate the competitive performance of the LCA-based classifier against existing algorithms. Theoretical analysis of LCA properties contributes to a deeper understanding of its computational capabilities and behaviors. The research also suggests potential future directions, such as exploring advanced LCA models, higher-dimensional simulations, and hybrid approaches integrating LCA with other computational models. Comment: This thesis represents the culmination of my M.Tech research, conducted under the guidance of Dr. Sukanta Das, Associate Professor at the Department of Information Technology, Indian Institute of Engineering Science and Technology, Shibpur, West Bengal, India. arXiv admin note: substantial text overlap with arXiv:2210.13971 by other authors
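
    To make the two-layer idea concrete, the sketch below implements a minimal one-dimensional layered CA in Python: layer 0 evolves under a standard elementary CA (ECA) rule, and layer 1's update reads both its own neighborhood and the co-located layer-0 cell. The XOR interlayer coupling and the rule numbers (90 and 110) are illustrative assumptions, not the interlayer rules f and g analyzed in the thesis.

        # Minimal 1-D layered CA: layer 0 is a predefined ECA; layer 1 is an
        # ECA whose update is additionally perturbed by layer 0.

        def eca_step(row, rule):
            """One synchronous ECA step with periodic boundaries."""
            n = len(row)
            return [(rule >> (row[i - 1] << 2 | row[i] << 1
                              | row[(i + 1) % n])) & 1
                    for i in range(n)]

        def lca_step(layer0, layer1, rule0, rule1):
            new0 = eca_step(layer0, rule0)
            n = len(layer1)
            new1 = []
            for i in range(n):
                idx = layer1[i - 1] << 2 | layer1[i] << 1 | layer1[(i + 1) % n]
                base = (rule1 >> idx) & 1
                # Illustrative interlayer rule: XOR the ordinary ECA update
                # with the co-located layer-0 cell.
                new1.append(base ^ layer0[i])
            return new0, new1

        layer0 = [0] * 31; layer0[15] = 1   # single seed in layer 0
        layer1 = [0] * 31; layer1[10] = 1
        for _ in range(15):
            layer0, layer1 = lca_step(layer0, layer1, rule0=90, rule1=110)
            print("".join(".#"[v] for v in layer1))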

    Toward a formal theory for computing machines made out of whatever physics offers: extended version

    Full text link
    Approaching limitations of digital computing technologies have spurred research in neuromorphic and other unconventional approaches to computing. Here we argue that if we want to systematically engineer computing systems that are based on unconventional physical effects, we need guidance from a formal theory that is different from the symbolic-algorithmic theory of today's computer science textbooks. We propose a general strategy for developing such a theory, and within that general view, a specific approach that we call "fluent computing". In contrast to Turing, who modeled computing processes from a top-down perspective as symbolic reasoning, we adopt the scientific paradigm of physics and model physical computing systems bottom-up by formalizing what can ultimately be measured in any physical substrate. This leads to an understanding of computing as the structuring of processes, while classical models of computing systems describe the processing of structures. Comment: 76 pages. This is an extended version of a perspective article with the same title that will appear in Nature Communications soon after this manuscript goes public on arXiv.

    Activity Report 2022

    Get PDF

    2001 July, University of Memphis bulletin

    Get PDF
    Vol. 88 of the University of Memphis bulletin containing the graduate catalog for 2001-2003. https://digitalcommons.memphis.edu/speccoll-ua-pub-bulletins/1189/thumbnail.jp

    1996 July, University of Memphis bulletin

    Get PDF
    Vol. 85, No. 4 of the University of Memphis bulletin containing the graduate catalog for 1996-97, 1996 July. https://digitalcommons.memphis.edu/speccoll-ua-pub-bulletins/1183/thumbnail.jp

    2001-2003, University of Memphis bulletin

    Get PDF
    University of Memphis bulletin containing the graduate catalog for 2001-2003. https://digitalcommons.memphis.edu/speccoll-ua-pub-bulletins/1423/thumbnail.jp

    On the connection of probabilistic model checking, planning, and learning for system verification

    Get PDF
    This thesis presents approaches using techniques from the model checking, planning, and learning communities to make systems more reliable and perspicuous. First, two heuristic search and dynamic programming algorithms are adapted to check extremal reachability probabilities, expected accumulated rewards, and their bounded versions on general Markov decision processes (MDPs), considerably enlarging the problem space these algorithms can solve. Correctness and optimality proofs for the adapted algorithms are given, and a comprehensive case study on established benchmarks shows that the implementation, called Modysh, is competitive with state-of-the-art model checkers and even outperforms them on very large state spaces. Second, Deep Statistical Model Checking (DSMC) is introduced, usable for quality assessment and learning-pipeline analysis of systems incorporating trained decision-making agents, such as neural networks (NNs). The idea of DSMC is to use statistical model checking to assess NNs that resolve nondeterminism in systems modeled as MDPs. The versatility of DSMC is exemplified in a number of case studies on Racetrack, an MDP benchmark designed for this purpose that flexibly models the autonomous driving challenge. A comprehensive scalability study demonstrates that DSMC is a lightweight technique for tackling the complexity of NN analysis in combination with the state space explosion problem.
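
    The first kind of quantity the thesis checks, the maximal probability of reaching a goal set in an MDP, can be computed on small examples by plain value iteration, as in the Python sketch below. The tiny MDP and all names in it are invented for illustration; Modysh itself uses heuristic search and dynamic programming variants rather than this full-sweep scheme.

        # Value iteration for maximal reachability probability on a toy MDP.
        # Representation: state -> action -> list of (probability, successor).
        mdp = {
            "s0": {"a": [(0.5, "s1"), (0.5, "s2")], "b": [(1.0, "s2")]},
            "s1": {"a": [(1.0, "goal")]},
            "s2": {"a": [(0.9, "s0"), (0.1, "trap")]},
            "goal": {}, "trap": {},
        }
        goal = {"goal"}

        def max_reach_prob(mdp, goal, eps=1e-9):
            # Start from 0 for non-goal states so the iteration converges
            # from below to the least fixed point.
            v = {s: 1.0 if s in goal else 0.0 for s in mdp}
            while True:
                delta = 0.0
                for s, actions in mdp.items():
                    if s in goal or not actions:
                        continue  # goal/absorbing states keep their value
                    best = max(sum(p * v[t] for p, t in dist)
                               for dist in actions.values())
                    delta = max(delta, abs(best - v[s]))
                    v[s] = best
                if delta < eps:
                    return v

        print(max_reach_prob(mdp, goal)["s0"])  # ~0.909, always picking "a"

    Statistical model checking as used in DSMC replaces this exact recursion with Monte Carlo estimates of such probabilities under a fixed NN policy.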