928 research outputs found

    On total variation approximations for random assemblies

    We prove a total variation approximation for the distribution of the component vector of a weakly logarithmic random assembly. The proof demonstrates an analytic approach based on a comparative analysis of the coefficients of two power series.
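The prototypical assembly is a uniform random permutation, whose component vector is its vector of cycle counts; the classical result in this area is that the counts of small cycles are close in total variation to independent Poisson(1/j) variables. The sketch below only illustrates that setting by Monte Carlo; it is not the paper's comparative power-series method, and all parameter values are arbitrary.

```python
import math
import random
from collections import Counter

def poisson_sample(lam, rng):
    """Sample Poisson(lam) by Knuth's multiplication method (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cycle_counts(n, b, rng):
    """Counts of cycles of length 1..b in a uniform random permutation of n elements."""
    perm = list(range(n))
    rng.shuffle(perm)
    seen = [False] * n
    counts = [0] * (b + 1)
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            if length <= b:
                counts[length] += 1
    return tuple(counts[1:])

def empirical_tv(n, b, trials, rng):
    """Crude TV-distance estimate between the small-cycle count vector of a
    random n-permutation and independent Poisson(1/j) counts, via histograms."""
    perm_hist = Counter(cycle_counts(n, b, rng) for _ in range(trials))
    pois_hist = Counter(
        tuple(poisson_sample(1.0 / j, rng) for j in range(1, b + 1))
        for _ in range(trials)
    )
    support = set(perm_hist) | set(pois_hist)
    return 0.5 * sum(abs(perm_hist[k] - pois_hist[k]) / trials for k in support)

rng = random.Random(0)
print(empirical_tv(n=200, b=3, trials=5000, rng=rng))  # small: the two laws are close
```

The estimate is upward-biased by sampling noise, but for n much larger than b it stays close to zero, consistent with the Poisson approximation.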

    Elasticity and disorder in irreversible deformation of materials


    Synchronization and Noise: A Mechanism for Regularization in Neural Systems

    To learn and reason in the presence of uncertainty, the brain must be capable of imposing some form of regularization. Here we suggest, through theoretical and computational arguments, that the combination of noise with synchronization provides a plausible mechanism for regularization in the nervous system. The functional role of regularization is considered in a general context in which coupled computational systems receive inputs corrupted by correlated noise. Noise on the inputs is shown to impose regularization, and when synchronization upstream induces time-varying correlations across noise variables, the degree of regularization can be calibrated over time. The proposed mechanism is explored first in the context of a simple associative learning problem, and then in the context of a hierarchical sensory coding task. The resulting qualitative behavior coincides with experimental data from visual cortex. Comment: 32 pages, 7 figures; under review.
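The claim that input noise imposes regularization has a textbook special case: for a linear model, least squares trained on inputs corrupted by Gaussian noise of variance sigma^2 is known to approximate ridge (L2) regression with penalty n·sigma^2. The sketch below demonstrates only that classical 1-D illustration, not the paper's synchronization mechanism; all data and parameter values are made up for the demo.

```python
import random

rng = random.Random(1)

# Toy 1-D regression data: y = 2*x plus a little observation noise.
n = 50
xs = [rng.uniform(-1, 1) for _ in range(n)]
ys = [2.0 * x + 0.1 * rng.gauss(0, 1) for x in xs]

sigma = 0.5  # standard deviation of the noise injected on the inputs

def ols_slope(pairs):
    """Least-squares slope through the origin."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, _ in pairs)
    return num / den

# Ordinary least squares on the clean inputs.
w_ols = ols_slope(list(zip(xs, ys)))

# Least squares pooled over many noisy replicas of the inputs: each replica
# adds fresh Gaussian noise to x, mimicking training on noise-corrupted inputs.
replicas = []
for _ in range(4000):
    replicas.extend((x + sigma * rng.gauss(0, 1), y) for x, y in zip(xs, ys))
w_noisy = ols_slope(replicas)

# Closed-form ridge slope with penalty lambda = n * sigma^2, which is what
# input-noise training approximates for linear least squares.
lam = n * sigma * sigma
w_ridge = sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

print(w_ols, w_noisy, w_ridge)  # w_noisy lands near w_ridge, shrunk below w_ols
```

The noisy-input estimate converges to the ridge solution, which is the sense in which input noise "imposes regularization" in the simplest linear setting.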

    The Interplay of Architecture and Correlated Variability in Neuronal Networks

    This much is certain: neurons are coupled, and they exhibit covariations in their output. How strong each effect is, however, has no single answer. The strength of neuronal correlations, in particular, has been a subject of hot debate within the neuroscience community over the past decade, as advancing recording techniques have made available a wealth of new, sometimes seemingly conflicting, datasets. The impact of connectivity and the resulting correlations on the ability of animals to perform necessary tasks is even less well understood. To answer questions in these areas, novel approaches must be developed.

    This work focuses on three somewhat distinct, but inseparably coupled, crucial avenues of research within the broader field of computational neuroscience. First, there is a need for tools which can be applied, both by experimentalists and theorists, to understand how networks transform their inputs. In turn, these tools will allow neuroscientists to tease apart the structure which underlies network activity. The Generalized Thinning and Shift framework, presented in Chapter 4, addresses this need. Next, taking for granted a general understanding of network architecture as well as some grasp of the behavior of its individual units, we must be able to invert the activity-to-structure relationship, and understand instead how network structure determines dynamics. We achieve this in Chapters 5 through 7, where we present an application of linear response theory yielding an explicit approximation of correlations in integrate-and-fire neuronal networks. This approximation reveals the explicit relationship between correlations, structure, and marginal dynamics. Finally, we must strive to understand the functional impact of network dynamics and architecture on the tasks that a neural network performs. This need motivates our analysis of a biophysically detailed model of the blow fly visual system in Chapter 8. Our hope is that the work presented here represents significant advances in multiple directions within the field of computational neuroscience.
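The thinning idea behind frameworks of this kind has a simple baseline construction: two spike trains obtained by independently thinning a shared "mother" Poisson process have spike-count correlation equal to the thinning probability p. The sketch below verifies only this standard construction; it is not the dissertation's Generalized Thinning and Shift framework, and the rates and window below are arbitrary.

```python
import random

def thinned_pair_counts(rate_mother, p, T, trials, rng):
    """For each trial, draw a mother Poisson spike train on a window of length
    T, then thin it independently into two daughter trains that each keep every
    mother spike with probability p. The shared mother makes the counts correlated."""
    pairs = []
    for _ in range(trials):
        # Mother spike count via exponential inter-spike intervals.
        t, n_mother = 0.0, 0
        while True:
            t += rng.expovariate(rate_mother)
            if t > T:
                break
            n_mother += 1
        n1 = sum(1 for _ in range(n_mother) if rng.random() < p)
        n2 = sum(1 for _ in range(n_mother) if rng.random() < p)
        pairs.append((n1, n2))
    return pairs

def corr(pairs):
    """Pearson correlation of the paired spike counts."""
    n = len(pairs)
    m1 = sum(a for a, _ in pairs) / n
    m2 = sum(b for _, b in pairs) / n
    cov = sum((a - m1) * (b - m2) for a, b in pairs) / n
    v1 = sum((a - m1) ** 2 for a, _ in pairs) / n
    v2 = sum((b - m2) ** 2 for _, b in pairs) / n
    return cov / (v1 * v2) ** 0.5

rng = random.Random(2)
pairs = thinned_pair_counts(rate_mother=20.0, p=0.3, T=1.0, trials=20000, rng=rng)
print(corr(pairs))  # spike-count correlation is close to the thinning probability p = 0.3
```

A short calculation confirms this: Cov(N1, N2) = p^2 * lambda*T and Var(N_i) = p * lambda*T, so the correlation is exactly p.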

    Probing DNA-Induced Colloidal Interactions and Dynamics with Scanning-Line Optical Tweezers

    A promising route to forming novel nanoparticle-based materials is directed self-assembly, where the interactions among multiple species of suspended particles are intentionally designed to favor the self-assembly of a specific cluster arrangement or nanostructure. DNA provides a natural tool for directed particle assembly because DNA double helix formation is chemically specific — particles with short single-stranded DNA grafted on their surfaces will be bridged together only if those strands have complementary base sequences. Moreover, the temperature-dependent stability of such DNA bridges allows the resulting attraction to be modulated from negligibly weak to effectively irreversible over a convenient range of temperatures. Surprisingly, existing models for DNA-induced particle interactions are typically in error by more than an order of magnitude, which has hindered efforts to design complex temperature, sequence and time-dependent interactions needed for the most interesting applications. Here we report the first spatially resolved measurements of DNA-induced interactions between pairs of polystyrene microspheres at binding strengths comparable to those used in self-assembly experiments. The pair-interaction energies measured with our optical tweezers instrument can be modeled quantitatively with a conceptually straightforward and numerically tractable model, boding well for their application to direct self-assembly. In addition to understanding the equilibrium interactions between DNA-labeled particles, it is also important to consider the dynamics with which they bind to and unbind from one another. Here we demonstrate for the first time that carefully designed systems of DNA-functionalized particles exhibit effectively diffusion-limited binding, suggesting that these interactions are suitable to direct efficient self-assembly. 
    We systematically explore the transition from diffusion-limited to reaction-limited binding by decreasing the DNA labeling density, and develop a simple dynamic model that is able to reproduce some of the anomalous kinetics observed in multivalent binding processes. Specifically, we find that when compounded, static disorder in the melting rate of single DNA duplexes gives rise to highly non-exponential lifetime distributions in multivalent binding. Together, our findings motivate a nanomaterial design approach where novel functional structures can be found computationally and then reliably realized in experiment.
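The generic mechanism invoked here, static disorder in unbinding rates producing non-exponential lifetime distributions, can be seen even at the single-bond level: mixing exponential lifetimes over a broad rate distribution yields a survival curve far heavier-tailed than a single exponential with the same mean lifetime. The sketch below assumes a lognormal rate distribution purely for illustration; it is not the paper's fitted model, and all parameters are arbitrary.

```python
import math
import random

def disordered_lifetimes(mu, sigma, trials, rng):
    """Static disorder: draw an unbinding rate from a lognormal distribution,
    then draw an exponential lifetime at that rate."""
    return [rng.expovariate(rng.lognormvariate(mu, sigma)) for _ in range(trials)]

def survival(samples, t):
    """Empirical survival probability P(lifetime > t)."""
    return sum(1 for s in samples if s > t) / len(samples)

rng = random.Random(3)
lifetimes = disordered_lifetimes(mu=0.0, sigma=1.5, trials=50000, rng=rng)

# Single exponential matched to the same mean lifetime, for comparison.
mean_rate = 1.0 / (sum(lifetimes) / len(lifetimes))

t = 20.0  # probe the tail, several mean lifetimes out
s_disordered = survival(lifetimes, t)
s_exponential = math.exp(-mean_rate * t)
print(s_disordered, s_exponential)  # the disordered mixture's tail is far heavier
```

Deep in the tail the rate mixture survives an order of magnitude more often than the matched exponential, which is the qualitative signature of static disorder; compounding such bonds in a multivalent contact only amplifies the effect.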

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007).
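A minimal example of the clock-driven strategy mentioned above is a fixed-step forward-Euler simulation of current-based leaky integrate-and-fire neurons: every neuron's membrane potential is updated on a global clock, and threshold crossings are detected once per step. This is an illustrative sketch with arbitrary parameters, not code from the review's benchmark suite.

```python
import random

rng = random.Random(4)

N = 100          # number of neurons
dt = 0.1         # time step (ms)
tau = 20.0       # membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
w = 0.05         # synaptic weight (voltage jump per spike, current-based)
p_connect = 0.1  # connection probability
drive = 1.2      # constant suprathreshold drive (in voltage units)

# Random directed connectivity: targets[i] lists the neurons i projects to.
targets = [[j for j in range(N) if j != i and rng.random() < p_connect]
           for i in range(N)]

v = [rng.uniform(v_rest, v_thresh) for _ in range(N)]
spikes = []  # recorded (time, neuron) pairs

t = 0.0
while t < 200.0:
    # Clock-driven update: forward-Euler step of dv/dt = (-(v - v_rest) + drive) / tau.
    incoming = [0.0] * N
    fired = []
    for i in range(N):
        v[i] += dt * (-(v[i] - v_rest) + drive) / tau
        if v[i] >= v_thresh:
            fired.append(i)
    # Reset the neurons that crossed threshold and queue their synaptic kicks.
    for i in fired:
        v[i] = v_reset
        spikes.append((t, i))
        for j in targets[i]:
            incoming[j] += w
    # Deliver current-based kicks at the step boundary (one-step delay).
    for j in range(N):
        v[j] += incoming[j]
    t += dt

print(len(spikes))  # every neuron fires repeatedly over the 200 ms window
```

An event-driven scheme would instead jump between analytically computed spike times; the review's point is that the clock-driven step size dt bounds the precision of spike timing, which matters when plasticity depends on exact spike times.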