90 research outputs found

    El mundo de las ciencias de la complejidad

    Get PDF
    The situation is truly exciting. While in the so-called real world – referring to domains such as politics, the economy, and military and social conflicts, for example – the natural perception – say, that of the media and public opinion – is that the country and the world are in difficult, at times dramatic, and often tragic conditions, in the field of the progress of knowledge we are witnessing a magnificent vitality. This vitality is expressed in cutting-edge science and, notably, in the sciences of complexity. While normal science – to return to Kuhn's expression – is literally on the defensive in numerous fields, topics, and problems – on the defensive, that is, with respect to the course of events and the dynamics of the contemporary world – in the study of complex adaptive systems we are witnessing a vitality that is practically unknown to the mainstream of academics (whatever the level at which they work), of scientists, and of administrators of education, science, and technology (for example, rectors, vice-rectors, deans, department chairs, decision-makers, politicians, and government officials). The mainstream of knowledge is unaware of a circumstance, a process, a dynamic that is well known to those who actively work and do research in the field of the sciences of complexity.

    A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning

    Full text link
    Reservoir computing (RC), first applied to temporal signal processing, is a recurrent neural network in which neurons are randomly connected. Once initialized, the connection strengths remain unchanged. Such a simple structure turns RC into a non-linear dynamical system that maps low-dimensional inputs into a high-dimensional space. The model's rich dynamics, linear separability, and memory capacity then enable a simple linear readout to generate adequate responses for various applications. RC spans areas far beyond machine learning, since it has been shown that the complex dynamics can be realized in various physical hardware implementations and biological devices. This yields greater flexibility and shorter computation time. Moreover, the neuronal responses triggered by the model's dynamics shed light on understanding brain mechanisms that also exploit similar dynamical processes. While the literature on RC is vast and fragmented, here we conduct a unified review of RC's recent developments from machine learning to physics, biology, and neuroscience. We first review the early RC models, and then survey the state-of-the-art models and their applications. We further introduce studies on modeling the brain's mechanisms by RC. Finally, we offer new perspectives on RC development, including reservoir design, coding frameworks unification, physical RC implementations, and interaction between RC, cognitive neuroscience and evolution. Comment: 51 pages, 19 figures, IEEE Access.
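    The RC recipe the abstract describes – a fixed random recurrent network plus a trained linear readout – can be sketched as a minimal echo state network. This is an illustrative toy (all names and parameter values are our own, not the survey's), shown on one-step prediction of a sine wave:

```python
import numpy as np

# Minimal echo-state-network sketch: a fixed random reservoir maps a 1-D
# input into a 100-D state; only the linear readout is trained (ridge regression).
rng = np.random.default_rng(0)
n_res, n_steps = 100, 500

# Fixed random reservoir weights, rescaled to spectral radius 0.9 (< 1,
# a common heuristic for the echo state property)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

u = np.sin(0.2 * np.arange(n_steps + 1))    # toy input signal
x = np.zeros(n_res)
states = np.zeros((n_steps, n_res))
for t in range(n_steps):
    x = np.tanh(W @ x + W_in * u[t])        # reservoir update (never trained)
    states[t] = x

# Train only the linear readout to predict the next input value,
# discarding an initial washout period
washout, ridge = 50, 1e-6
X, y = states[washout:], u[washout + 1:n_steps + 1]
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

    Only `W_out` is ever fitted; the reservoir's role is purely to supply a rich, fixed nonlinear feature map, which is what makes physical and biological substrates usable as reservoirs.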

    Highly Parallel Geometric Characterization and Visualization of Volumetric Data Sets

    Get PDF
    Volumetric 3D data sets are being generated in many different application areas. Some examples are CAT scans and MRI data, 3D models of protein molecules represented by implicit surfaces, multi-dimensional numeric simulations of plasma turbulence, and stacks of confocal microscopy images of cells. The size of these data sets has been increasing, requiring the speed of analysis and visualization techniques to also increase to keep up. Recent advances in processor technology have stopped increasing clock speed and instead begun increasing parallelism, resulting in multi-core CPUs and many-core GPUs. To take advantage of these new parallel architectures, algorithms must be explicitly written to exploit parallelism. In this thesis we describe several algorithms and techniques for volumetric data set analysis and visualization that are amenable to these modern parallel architectures. We first discuss modeling volumetric data with Gaussian Radial Basis Functions (RBFs). RBF representation of a data set has several advantages, including lossy compression, analytic differentiability, and analytic application of Gaussian blur. We also describe a parallel volume rendering algorithm that can create images of the data directly from the RBF representation. Next we discuss a parallel, stochastic algorithm for measuring the surface area of volumetric representations of molecules. The algorithm is suitable for implementation on a GPU and is also progressive, allowing it to return a rough answer almost immediately and refine the answer over time to the desired level of accuracy. After this we discuss the concept of Confluent Visualization, which allows the visualization of the interaction between a pair of volumetric data sets. The interaction is visualized through volume rendering, which is well suited to implementation on parallel architectures.
    Finally we discuss a parallel, stochastic algorithm for classifying stem cells as having been grown on a surface that induces differentiation or on a surface that does not. The algorithm takes as input 3D volumetric models of the cells generated from confocal microscopy. It builds on our algorithm for surface area measurement and, like that algorithm, is suitable for implementation on a GPU and is progressive.
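    The progressive, stochastic surface-area idea can be illustrated with a thin-shell Monte Carlo estimator on an implicit surface: sample points in a bounding box, count those within a small distance of the surface, and rescale. This toy uses a unit sphere (the thesis targets molecular surfaces on a GPU; the setup and all names here are our own assumptions):

```python
import numpy as np

# Thin-shell Monte Carlo surface-area estimate for an implicit surface.
# Points within eps of the surface occupy a shell of volume ~ area * 2*eps,
# so area ~ (hit fraction) * box_volume / (2 * eps).
rng = np.random.default_rng(1)
half, eps = 1.2, 0.01            # bounding half-width and shell half-thickness
box_volume = (2 * half) ** 3

def estimate_area(n_samples):
    pts = rng.uniform(-half, half, size=(n_samples, 3))
    dist = np.abs(np.linalg.norm(pts, axis=1) - 1.0)  # distance to unit sphere
    return np.mean(dist < eps) * box_volume / (2 * eps)

# Progressive refinement: more samples tighten the estimate toward 4*pi
for n in (1_000, 10_000, 200_000):
    print(n, estimate_area(n))
```

    Because each sample is independent, the estimator parallelizes trivially (one sample per GPU thread) and can report a running estimate at any time, which is exactly the progressive behavior the abstract describes.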

    Neuromorphic Engineering Editors' Pick 2021

    Get PDF
    This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers' strong community by recognizing highly deserving authors.

    Simulation and Design of Biological and Biologically-Motivated Computing Systems

    Get PDF
    In life science, there is a great need to understand biological systems for therapeutics, synthetic biology, and biomedical applications. However, the complex behaviors and dynamics of biological systems are hard to understand and design. Meanwhile, the design of traditional computer architectures faces challenges from power consumption, device reliability, and process variations. In recent years, the convergence of computer science, computer engineering, and life science has enabled new applications targeting the challenges from both engineering and biological fields. On one hand, computer modeling and simulation provide quantitative analysis and prediction of the functions and behaviors of biological systems, and further facilitate the design of synthetic biological systems. On the other hand, bio-inspired devices and systems are designed for real-world applications by mimicking biological functions and behaviors. This dissertation develops techniques for modeling and analyzing the dynamic behaviors of biologically realistic genetic circuits and brain models, and for the design of brain-inspired computing systems. The stability of genetic memory circuits is studied to understand their function for potential applications in synthetic biology. Based on electrical-equivalent models of biochemical reactions, simulation techniques widely used for electronic systems are applied to provide quantitative analysis capabilities. In particular, system-theoretical techniques are used to study the dynamic behaviors of genetic memory circuits, where the notion of a stability boundary is employed to characterize the bistability of such circuits. To facilitate simulation-based studies of physiological and pathological behaviors in brain disorders, we construct large-scale brain models with detailed cellular mechanisms.
    By developing dedicated numerical techniques for brain simulation, the simulation speed is greatly improved, such that dynamic simulation of large thalamocortical models with more than one million multi-compartment neurons and hundreds of synapses becomes feasible on commodity computer servers. Simulation of such a large model produces biologically meaningful results demonstrating the emergence of sigma and delta waves in the early and deep stages of sleep, and suggesting the underlying cellular mechanisms that may be responsible for the generation of absence seizures. Brain-inspired computing paradigms may offer promising solutions to many challenges facing the mainstream von Neumann computer architecture. To this end, we develop a biologically inspired learning system amenable to VLSI implementation. The proposed solution consists of a digitized liquid state machine (LSM) and a spike-based learning rule, providing a fully biologically inspired learning paradigm. The key design parameters of this liquid state machine are optimized to maximize learning performance while considering hardware implementation cost. When applied to speech recognition of isolated words using the TI46 speech corpus, the performance of the proposed LSM rivals several existing state-of-the-art techniques, including the Hidden Markov Model based recognizer Sphinx-4.
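    The bistability that the dissertation characterizes via stability boundaries can be illustrated with the classic two-gene toggle switch, the simplest genetic memory circuit: two genes mutually repress each other, so the system has two stable states separated by a boundary. This is a generic textbook sketch with illustrative parameters, not the dissertation's actual models:

```python
# Bistable genetic toggle switch (two mutually repressing genes),
# integrated with forward Euler. Initial conditions on opposite sides of
# the stability boundary settle into different stable states -- the circuit
# "remembers" which gene was dominant.
def simulate(u0, v0, alpha=10.0, n=2.0, dt=0.01, steps=5000):
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1 + v ** n) - u   # gene 1: repressed by gene 2, degraded
        dv = alpha / (1 + u ** n) - v   # gene 2: repressed by gene 1, degraded
        u, v = u + dt * du, v + dt * dv
    return u, v

u1, v1 = simulate(5.0, 1.0)   # starts with gene 1 dominant -> stays high
u2, v2 = simulate(1.0, 5.0)   # starts with gene 2 dominant -> mirror state
print(u1, v1)
print(u2, v2)
```

    The same state-space view underlies the dissertation's analysis: locating the boundary between the two basins of attraction tells you how large a perturbation the memory circuit can tolerate before flipping.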

    2022 roadmap on neuromorphic computing and engineering

    Full text link
    Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation computer technology is expected to solve problems at the exascale with 10^18 calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource by providing a concise yet comprehensive introduction to readers outside this field, for those who are just entering the field, as well as providing future perspectives for those who are well established in the neuromorphic computing community.

    Infiltration from the pedon to global grid scales: an overview and outlook for land surface modelling

    Get PDF
    Infiltration in soils is a key process that partitions precipitation at the land surface into surface runoff and water that enters the soil profile. We reviewed the basic principles of water infiltration in soils and analyzed approaches commonly used in Land Surface Models (LSMs) to quantify infiltration, as well as their numerical implementation and sensitivity to model parameters. We reviewed methods to upscale infiltration from the point to the field, hillslope, and grid-cell scale of LSMs. Despite the progress that has been made, upscaling of local-scale infiltration processes to the grid scale used in LSMs is still far from being treated rigorously. We still lack a consistent theoretical framework to predict the effective fluxes and parameters that control infiltration in LSMs. Our analysis shows that there is a large variety of approaches used to estimate soil hydraulic properties. Novel soil information at resolutions finer than the grid scale of LSMs may help to better quantify the subgrid variability of key infiltration parameters. Currently, only a few land surface models consider the impact of soil structure on soil hydraulic properties. Finally, we identified several processes not yet considered in LSMs that are known to strongly influence infiltration. In particular, the impact of soil structure on infiltration requires further research. In order to tackle the above challenges and integrate current knowledge on soil processes affecting infiltration into land surface models, we advocate a stronger exchange and scientific interaction between the soil and land surface modelling communities.
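    One classic point-scale description of the infiltration process the review discusses is the Green-Ampt model, in which infiltration capacity starts high and decays toward the saturated conductivity as the soil wets up. A minimal sketch with illustrative (sandy-loam-like) parameter values, not taken from the paper:

```python
# Green-Ampt infiltration capacity, integrated with forward Euler.
# f = Ks * (1 + psi * dtheta / F), where F is cumulative infiltration:
# capacity is large when the soil is dry (small F) and decays toward Ks.
Ks = 1.0        # saturated hydraulic conductivity [cm/h]
psi = 10.0      # wetting-front suction head [cm]
dtheta = 0.3    # moisture deficit (porosity minus initial water content) [-]
dt = 0.001      # time step [h]

F = 0.1                                  # small initial cumulative depth [cm]
rates, cumulative = [], []
for _ in range(int(2.0 / dt)):           # simulate 2 hours of ponded conditions
    f = Ks * (1 + psi * dtheta / F)      # infiltration capacity [cm/h]
    F += f * dt
    rates.append(f)
    cumulative.append(F)

print(rates[0], rates[-1], cumulative[-1])
```

    In an LSM grid cell, precipitation below this capacity infiltrates and the excess becomes surface runoff; the upscaling problem the review highlights is that `Ks`, `psi`, and `dtheta` vary strongly within a grid cell, so no single effective parameter set reproduces the aggregate flux.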