
    Classical Computational Models

    Neurobiology, Psychophysics, and Computational Models of Visual Attention

    The purpose of this workshop was to discuss both recent experimental findings and computational models of the neurobiological implementation of selective attention. Recent experimental results were presented in two of the four presentations given (C.E. Connor, Washington University and B.C. Motter, SUNY and V.A. Medical Center, Syracuse), while the other two talks were devoted to computational models (E. Niebur, Caltech, and B. Olshausen, Washington University)

    Computational Models of Adult Neurogenesis

    Experimental results in recent years have shown that adult neurogenesis is a significant phenomenon in the mammalian brain. Little is known, however, about the functional role played by the generation and destruction of neurons in the context of an adult brain. Here we propose two models in which new projection neurons are incorporated. We show that in both models, using the incorporation and removal of neurons as a computational tool, it is possible to achieve a higher computational efficiency than in purely static, synapse-learning-driven networks. We also discuss the implications for understanding the role of adult neurogenesis in specific brain areas. Comment: To appear in Physica A, 7 pages
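
    As a purely illustrative toy (not a reimplementation of either model proposed in the paper), the Python sketch below mimics neuronal turnover in a small random-projection network: the units with the weakest readout weights are removed and replaced by freshly initialised ones, and the resulting fit is compared with a single static random pool. All sizes and parameter values are invented.

    # Illustrative toy only: turnover of projection neurons (remove the least
    # useful units, add fresh random ones) versus a purely static pool.
    # This is not a reimplementation of the models proposed in the paper.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=(200, 3))        # inputs
    y = np.sin(x).sum(axis=1)                    # target function

    def readout_error(W):
        """Fit a linear readout on random-feature neurons tanh(x @ W)."""
        H = np.tanh(x @ W)
        w, *_ = np.linalg.lstsq(H, y, rcond=None)
        return np.mean((H @ w - y) ** 2), w

    W = rng.normal(size=(3, 20))                 # 20 projection neurons
    for generation in range(30):
        err, w = readout_error(W)
        # "Apoptosis": drop the 2 neurons with the weakest readout weights;
        # "neurogenesis": replace them with freshly initialised neurons.
        weakest = np.argsort(np.abs(w))[:2]
        W[:, weakest] = rng.normal(size=(3, 2))

    print("Error with neuronal turnover :", readout_error(W)[0])
    print("Error of a static random pool:", readout_error(rng.normal(size=(3, 20)))[0])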

    Computational Models for Transplant Biomarker Discovery.

    Translational medicine holds rich promise for improved diagnostics and drug discovery in transplantation research, a field in which unmet diagnostic and therapeutic needs persist. The recent advent of genomics and proteomics profiling, collectively termed "omics", provides new resources for developing novel biomarkers for clinical routine. Establishing such a marker system depends heavily on the appropriate application of computational algorithms and software, which are in turn based on mathematical theories and models. Understanding these theories helps in choosing appropriate algorithms and in making biomarker systems successful. Here, we review the key advances in theories and mathematical models relevant to transplant biomarker development. The advantages and limitations inherent in these models are discussed. The principles of the key computational approaches for efficiently selecting the best subset of biomarkers from high-dimensional omics data are highlighted. Prediction models are introduced, and the integration of data from multiple microarrays is discussed. Appreciating these key advances should help to accelerate the development of clinically reliable biomarker systems
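
    To make the flavour of such approaches concrete, the Python sketch below (not taken from the review) selects a candidate biomarker panel from simulated high-dimensional data with an L1-penalised classifier and evaluates it by cross-validation; the data set and all parameter values are invented for illustration.

    # Illustrative sketch only: selecting a biomarker panel from
    # high-dimensional "omics"-like data with an L1-penalised classifier.
    # The data are simulated; nothing here comes from the reviewed paper.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Simulated cohort: 100 samples, 5000 features, only a few informative.
    X, y = make_classification(n_samples=100, n_features=5000,
                               n_informative=10, random_state=0)

    # Stronger penalties (smaller C) select fewer candidate markers.
    selector = SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0))
    model = make_pipeline(StandardScaler(), selector,
                          LogisticRegression(max_iter=1000))

    # Cross-validation keeps the marker selection inside each fold,
    # avoiding the optimistic bias of selecting markers on the full data.
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print("Mean cross-validated AUC:", scores.mean())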

    Computational models for inferring biochemical networks

    Biochemical networks are of great practical importance. Understanding of the interactions of biological compounds in cells has been advanced by numerous bioinformatics projects, which have contributed a vast amount of biological information. The construction of biochemical systems (systems of chemical reactions), including both the topology and the kinetic constants of the chemical reactions, is NP-hard and is a well-studied systems biology problem. In this paper, we propose a hybrid architecture that combines genetic programming and simulated annealing in order to generate and optimize both the topology (the network) and the reaction rates of a biochemical system. Simulations and analysis of an artificial model and three real models (two models and the noisy version of one of them) show promising results for the proposed method. Funding: Romanian National Authority for Scientific Research, CNDI–UEFISCDI, Project No. PN-II-PT-PCCA-2011-3.2-0917
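
    As a rough illustration of one half of such a hybrid scheme, the Python sketch below uses simulated annealing to recover the rate constants of a fixed toy topology (A -> B -> C); the genetic-programming search over topologies described in the paper is not reproduced here, and all values are invented.

    # Illustrative sketch only: simulated annealing for the rate constants
    # of a fixed toy topology (A -> B -> C). The topology search (genetic
    # programming) used in the paper is not reproduced.
    import numpy as np
    from scipy.integrate import odeint

    def simulate(rates, t):
        k1, k2 = rates
        def rhs(y, _t):
            a, b, c = y
            return [-k1 * a, k1 * a - k2 * b, k2 * b]
        return odeint(rhs, [1.0, 0.0, 0.0], t)

    # Synthetic "measured" time series generated from known rates.
    t = np.linspace(0, 10, 50)
    target = simulate([0.8, 0.3], t)

    def cost(rates):
        return np.sum((simulate(rates, t) - target) ** 2)

    rng = np.random.default_rng(0)
    current = np.array([0.1, 0.1])
    current_cost = cost(current)
    temperature = 1.0
    for step in range(2000):
        candidate = np.abs(current + rng.normal(scale=0.05, size=2))
        candidate_cost = cost(candidate)
        # Accept improvements always, and worse moves with a probability
        # that shrinks as the temperature is lowered.
        if candidate_cost < current_cost or \
           rng.random() < np.exp((current_cost - candidate_cost) / temperature):
            current, current_cost = candidate, candidate_cost
        temperature *= 0.995

    print("Recovered rate constants:", current)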

    Developing reproducible and comprehensible computational models

    Quantitative predictions for complex scientific theories are often obtained by running simulations on computational models. For a theory to meet with widespread acceptance, it is important that the model be reproducible and comprehensible by independent researchers. However, the complexity of computational models can make the task of replication all but impossible. Previous authors have suggested that computer models should be developed using high-level specification languages or large amounts of documentation. We argue that neither suggestion is sufficient, as each deals with the prescriptive definition of the model and does not aid in generalising the use of the model to new contexts. Instead, we argue that a computational model should be released as three components: (a) a well-documented implementation; (b) a set of tests illustrating each of the key processes within the model; and (c) a set of canonical results, for reproducing the model’s predictions in important experiments. The included tests and experiments would provide the concrete exemplars required for easier comprehension of the model, as well as confirmation that independent implementations and later versions reproduce the theory’s canonical results
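
    A minimal sketch of how components (b) and (c) might look in practice, using pytest and a deliberately tiny model defined inline; the process, the experiment, and the canonical numbers are invented placeholders rather than results from any real published model.

    # Illustrative sketch of components (b) and (c) for a toy "model"
    # defined inline; the process, experiment, and canonical numbers are
    # invented placeholders, not taken from any published model.
    import math
    import pytest

    def decay_activation(activation, dt, tau):
        """A key model process: exponential decay of activation."""
        return activation * math.exp(-dt / tau)

    def run_memory_scanning_experiment(set_sizes=(1, 2, 3, 4)):
        """A stand-in 'experiment' returning predicted reaction times (ms)."""
        return {n: 400.0 + 38.0 * n for n in set_sizes}

    def test_decay_activation_one_time_constant():
        # (b) A unit test illustrating a single key process in isolation.
        assert decay_activation(1.0, dt=1.0, tau=1.0) == pytest.approx(math.exp(-1))

    def test_canonical_result_scanning_slope():
        # (c) A canonical result: later versions and independent
        # reimplementations should keep reproducing this prediction.
        rts = run_memory_scanning_experiment()
        slope = rts[4] - rts[3]
        assert slope == pytest.approx(38.0, abs=1.0)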