
    Hebbian Wiring Plasticity Generates Efficient Network Structures for Robust Inference with Synaptic Weight Plasticity

    In the adult mammalian cortex, a small fraction of spines is created and eliminated every day, and the resultant synaptic connection structure is highly nonrandom, even in local circuits. However, it remains unknown whether a particular synaptic connection structure is functionally advantageous in local circuits, and why the creation and elimination of synaptic connections are necessary in addition to rich synaptic weight plasticity. To answer these questions, we studied an inference task model through theoretical and numerical analyses. We demonstrate that a robustly beneficial network structure naturally emerges by combining Hebbian-type synaptic weight plasticity and wiring plasticity. Especially in a sparsely connected network, wiring plasticity achieves reliable computation by enabling efficient information transmission. Furthermore, the proposed rule reproduces experimentally observed correlations between spine dynamics and task performance.
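    The interplay of weight and wiring plasticity described above can be illustrated in a few lines. The following is a minimal sketch, not the authors' model: it assumes a simple linear rate network in which Hebbian updates strengthen co-active synapses, weak synapses are pruned, and new synapses are created at random vacant sites; the learning rate, thresholds, and creation probability are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 50, 20
eta, prune_thresh, create_prob = 0.05, 0.01, 0.002

# Sparse connectivity mask and weights (hypothetical initialization).
conn = rng.random((n_post, n_pre)) < 0.1
w = np.where(conn, rng.random((n_post, n_pre)) * 0.1, 0.0)

for step in range(1000):
    x = rng.random(n_pre)            # presynaptic activity
    y = w @ x                        # postsynaptic activity (linear rate model)

    # Hebbian weight plasticity: strengthen co-active pairs, then normalize.
    w += eta * np.outer(y, x) * conn
    w /= max(w.max(), 1.0)

    # Wiring plasticity: eliminate weak synapses...
    weak = conn & (w < prune_thresh)
    conn &= ~weak
    w[weak] = 0.0

    # ...and create new synapses at random vacant sites.
    new = (~conn) & (rng.random((n_post, n_pre)) < create_prob)
    conn |= new
    w[new] = prune_thresh            # nascent synapses start weak

print("connection density:", conn.mean())
```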

    Synthetic in vitro transcriptional oscillators

    The construction of synthetic biochemical circuits from simple components illuminates how complex behaviors can arise in chemistry and builds a foundation for future biological technologies. A simplified analog of genetic regulatory networks, in vitro transcriptional circuits, provides a modular platform for the systematic construction of arbitrary circuits and requires only two essential enzymes, bacteriophage T7 RNA polymerase and Escherichia coli ribonuclease H, to produce and degrade RNA signals. In this study, we design and experimentally demonstrate three transcriptional oscillators in vitro. First, a negative feedback oscillator comprising two switches, regulated by excitatory and inhibitory RNA signals, showed up to five complete cycles. To demonstrate modularity and to explore the design space further, a positive-feedback loop was added that modulates and extends the oscillatory regime. Finally, a three-switch ring oscillator was constructed and analyzed. Mathematical modeling guided the design process, identified experimental conditions likely to yield oscillations, and explained the system's robust response to interference by short degradation products. Synthetic transcriptional oscillators could prove valuable for systematic exploration of biochemical circuit design principles and for controlling nanoscale devices and orchestrating processes within artificial cells.
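    A feel for such oscillators can be gained from a toy ODE model. The sketch below is not the paper's transcriptional model; it integrates a generic three-node repression ring, in the spirit of the abstract's three-switch ring oscillator, with Hill-type repression and first-order degradation. The parameter values are illustrative, chosen so the symmetric fixed point is unstable and a limit cycle appears.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, n = 10.0, 3.0  # illustrative production strength and Hill coefficient

def ring(t, x):
    # Each node is repressed by its predecessor and degrades linearly.
    return [beta / (1.0 + x[i - 1] ** n) - x[i] for i in range(3)]

sol = solve_ivp(ring, (0.0, 60.0), [1.0, 1.5, 2.0], dense_output=True)
t = np.linspace(0.0, 60.0, 600)
x0 = sol.sol(t)[0]
print("min/max of node 0 after transient:", x0[300:].min(), x0[300:].max())
```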

    Multilevel Converters: An Enabling Technology for High-Power Applications

    Multilevel converters are considered today the state-of-the-art power-conversion systems for high-power, power-quality-demanding applications. This paper presents a tutorial on this technology, covering the operating principle, the different power circuit topologies, modulation methods, technical issues, and industry applications. Special attention is given to established technology already found in industry, with more in-depth and self-contained information, while recent advances and state-of-the-art contributions are addressed with useful references. This paper serves as an introduction to the subject for the unfamiliar reader, as well as an update or reference for academics and practicing engineers working in the field of industrial and power electronics. This work was supported by Ministerio de Ciencia y Tecnología (DPI2001-3089) and Ministerio de Educación y Ciencia (TEC2006-0386).
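    As one concrete example of the modulation methods such a tutorial covers, nearest-level modulation for an N-level converter simply rounds the scaled sinusoidal reference to the closest available voltage level. The sketch below is a generic illustration, not taken from the paper; the level count, modulation index, and frequency are arbitrary choices.

```python
import numpy as np

levels = 5                       # an N-level converter offers N discrete output levels
m, f = 0.9, 50.0                 # modulation index and fundamental frequency (Hz)
t = np.linspace(0.0, 0.02, 400)  # one fundamental period

ref = m * np.sin(2 * np.pi * f * t)  # normalized sinusoidal reference
step = 2.0 / (levels - 1)            # spacing between adjacent normalized levels
out = np.round(ref / step) * step    # nearest-level modulated output

print("distinct output levels:", sorted(set(np.round(out, 3))))
```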

    Session 5: Development, Neuroscience and Evolutionary Psychology

    Proceedings of the Pittsburgh Workshop in History and Philosophy of Biology, Center for Philosophy of Science, University of Pittsburgh, March 23-24, 2001. Session 5: Development, Neuroscience and Evolutionary Psychology.

    Computing with bacterial constituents, cells and populations: from bioputing to bactoputing

    The relevance of biological materials and processes to computing, alias bioputing, has been explored for decades. These materials include DNA, RNA and proteins, while the processes include transcription, translation, signal transduction and regulation. Recently, the use of bacteria themselves as living computers has been explored, but this use generally falls within the classical paradigm of computing. Computer scientists, however, have a variety of problems to which they seek solutions, while microbiologists are gaining new insights into the problems bacteria are solving and how they are solving them. Here, we envisage that bacteria might be used for new sorts of computing. These could be based on the capacity of bacteria to grow, move and adapt to a myriad of different, fickle environments, both as individuals and as populations of bacteria plus bacteriophage. New principles might be based on the way that bacteria explore phenotype space via hyperstructure dynamics and the fundamental nature of the cell cycle. This computing might even extend to developing a high-level language appropriate to using populations of bacteria and bacteriophage. Here, we offer a speculative tour of what we term bactoputing, namely the use of the natural behaviour of bacteria for calculating.

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant Nos. NSF/EIA-0130708 and PHY-0414174; NIH Grant Nos. 1 R01 NS50945 and NS40110; MEC BFI2003-07276; and Fundación BBVA.
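    The kind of low-dimensional dynamical modeling this review surveys can be illustrated with the classic FitzHugh-Nagumo model, a standard minimal model of neural rhythmic (spiking) behavior that is not specific to this review; the parameter values below are conventional textbook choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, tau, I = 0.7, 0.8, 12.5, 0.5   # standard FitzHugh-Nagumo parameters

def fhn(t, y):
    v, w = y                          # membrane potential and recovery variable
    return [v - v**3 / 3 - w + I, (v + a - b * w) / tau]

sol = solve_ivp(fhn, (0, 200), [0.0, 0.0], max_step=0.1)
v = sol.y[0]
# Peak-to-peak amplitude over the second half indicates sustained oscillation.
print("oscillation amplitude:", np.ptp(v[len(v) // 2:]))
```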

    Selection, tinkering and emergence in complex networks: crossing the land of tinkering

    Complex biological networks have very different origins than technological ones. The latter involve extensive design and, as engineered structures, include a high level of optimization. The former involve (in principle) contingency and structural constraints, with new structures being incorporated through tinkering with previously evolved modules or units. However, the observation of the topological features of different biological nets suggests that nature may have a limited repertoire of "attractors" that essentially optimize communication under some basic constraints of cost and architecture, or that allow the biological nets to reach a high degree of homeostasis. Conversely, the topological features exhibited by some technology graphs indicate that tinkering and internal constraints play a key role, in spite of the "designed" nature of these structures. Previous scenarios suggested to explain the overall trends of evolution are re-analyzed in light of topological patterns.
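    One way to make such topological comparisons concrete is to measure communication efficiency against wiring cost on graphs of different origin. The sketch below is illustrative only and not from the paper: it contrasts a preferential-attachment ("tinkered") graph with a star ("designed") graph of similar size using networkx.

```python
import networkx as nx

n = 100
tinkered = nx.barabasi_albert_graph(n, 2, seed=1)  # growth + preferential attachment
designed = nx.star_graph(n - 1)                    # centralized, engineered extreme

for name, g in [("tinkered", tinkered), ("designed", designed)]:
    eff = nx.global_efficiency(g)   # average inverse shortest-path length
    cost = g.number_of_edges()      # crude wiring-cost proxy
    print(f"{name}: efficiency={eff:.3f}, edges={cost}")
```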

    Autonomously Reconfigurable Artificial Neural Network on a Chip

    Artificial neural network (ANN), an established bio-inspired computing paradigm, has proved very effective in a variety of real-world problems and particularly useful for various emerging biomedical applications using specialized ANN hardware. Unfortunately, these ANN-based systems are increasingly vulnerable to both transient and permanent faults due to unrelenting advances in CMOS technology scaling, which can sometimes be catastrophic. The considerable resource and energy consumption and the lack of dynamic adaptability make conventional fault-tolerant techniques unsuitable for future portable medical solutions. Inspired by the self-healing and self-recovery mechanisms of the human nervous system, this research seeks to address reliability issues of ANN-based hardware by proposing an Autonomously Reconfigurable Artificial Neural Network (ARANN) architectural framework. Leveraging the homogeneous structural characteristics of neural networks, ARANN is capable of adapting its structures and operations, both algorithmically and microarchitecturally, to react to unexpected neuron failures. Specifically, we propose three key techniques: Distributed ANN, Decoupled Virtual-to-Physical Neuron Mapping, and Dual-Layer Synchronization. Together these achieve cost-effective structural adaptation and ensure accurate system recovery. Moreover, an ARANN-enabled self-optimizing workflow is presented to adaptively explore a "Pareto-optimal" neural network structure for a given application, on the fly. Implemented and demonstrated on a Virtex-5 FPGA, ARANN can cover and adapt 93% of the chip area (neurons) with less than 1% chip overhead and O(n) reconfiguration latency. A detailed performance analysis has been completed based on various recovery scenarios.
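    The decoupling idea behind virtual-to-physical neuron mapping can be sketched in software, though ARANN itself realizes it in hardware. In the hypothetical sketch below, a mapping table redirects each virtual neuron to a physical one, so a failed physical neuron is swapped for a spare without touching the network description; all names and sizes are invented for illustration.

```python
# Hypothetical illustration of decoupled virtual-to-physical neuron mapping.
n_virtual, n_spare = 8, 2
physical = list(range(n_virtual + n_spare))  # physical neuron IDs on chip
v2p = {v: v for v in range(n_virtual)}       # identity mapping initially
spares = physical[n_virtual:]                # reserved spare neurons

def mark_failed(phys_id):
    """Remap any virtual neuron bound to a failed physical neuron onto a spare."""
    for v, p in v2p.items():
        if p == phys_id:
            v2p[v] = spares.pop(0)           # structural repair, network intact
            return v

remapped = mark_failed(3)                    # physical neuron 3 dies
print(f"virtual neuron {remapped} now runs on physical neuron {v2p[remapped]}")
```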

    Combined optimization algorithms applied to pattern classification

    Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high-quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification results. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive with results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth-versus-size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant-depth circuits. Our findings contribute to a) the field of machine learning, as the proposed method is applicable to training feedforward neural networks, and b) the field of circuit complexity, by proposing an upper bound on the number of hidden units sufficient to achieve a high classification rate. One of the major findings of our research is that the size of the network can be bounded by the input size of the problem, with an approximate upper bound of 8 + √(2^n)/n threshold gates being sufficient for a small error rate, where n := log |S_L| and S_L is the training set.
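    The core idea of coupling simulated annealing with perceptron-style classification can be sketched briefly. The following is a generic illustration under simple assumptions, not the LSA machine itself: simulated annealing proposes random perturbations of a linear classifier's weights and accepts them with the usual Metropolis rule, minimizing training error on a toy dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, roughly linearly separable data (illustrative only).
X = rng.normal(size=(200, 2))
y = np.sign(X @ np.array([1.5, -2.0]) + 0.3 + rng.normal(scale=0.2, size=200))
Xb = np.hstack([X, np.ones((200, 1))])       # append bias input

def error(w):
    return np.mean(np.sign(Xb @ w) != y)     # training error of the linear classifier

w = rng.normal(size=3)
T = 1.0
for step in range(5000):
    cand = w + rng.normal(scale=0.1, size=3)  # random weight perturbation
    delta = error(cand) - error(w)
    # Metropolis acceptance: always take improvements, sometimes take worse moves.
    if delta <= 0 or rng.random() < np.exp(-delta / T):
        w = cand
    T *= 0.999                                # geometric cooling schedule

print("final training error:", error(w))
```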