How should prey animals respond to uncertain threats?
A prey animal surveying its environment must decide whether there is a
dangerous predator present or not. If there is, it may flee. Flight has an
associated cost, so the animal should not flee if there is no danger. However,
the prey animal cannot know the state of its environment with certainty, and is
thus bound to make some errors. We formulate a probabilistic automaton model of
a prey animal's life and use it to compute the optimal escape decision
strategy, subject to the animal's uncertainty. The uncertainty is a major
factor in determining the decision strategy: only in the presence of
uncertainty do economic factors (like mating opportunities lost due to flight)
influence the decision. We performed computer simulations and found that
\emph{in silico} populations of animals subject to predation evolve to display
the strategies predicted by our model, confirming our choice of objective
function for our analytic calculations. To the best of our knowledge, this is
the first theoretical study of escape decisions to incorporate the effects of
uncertainty, and to demonstrate the correctness of the objective function used
in the model.
Comment: 5 figures, 10 pages of tex
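The decision problem described above can be made concrete with a toy expected-cost comparison. This is only an illustrative sketch, not the paper's probabilistic-automaton model; all function names, parameters, and values below are hypothetical:

```python
# Toy flee/stay decision under uncertainty (illustration only, not the
# paper's model). The animal holds a probability estimate p_predator and
# compares the expected cost of staying against the fixed cost of flight.

def should_flee(p_predator: float, cost_death: float, cost_flight: float) -> bool:
    """Flee iff the expected cost of staying exceeds the cost of fleeing."""
    expected_cost_of_staying = p_predator * cost_death
    return expected_cost_of_staying > cost_flight

# Uncertainty sets a decision threshold p* = cost_flight / cost_death:
assert should_flee(0.5, cost_death=10.0, cost_flight=1.0)       # 5.0 > 1.0
assert not should_flee(0.05, cost_death=10.0, cost_flight=1.0)  # 0.5 < 1.0
```

In this simplified picture the flight cost (one stand-in for the abstract's "economic factors", such as lost mating opportunities) enters the decision only through the threshold on the uncertain probability estimate.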
A sparse coding model with synaptically local plasticity and spiking neurons can account for the diverse shapes of V1 simple cell receptive fields
Sparse coding algorithms trained on natural images can accurately predict the
features that excite visual cortical neurons, but it is not known whether such
codes can be learned using biologically realistic plasticity rules. We have
developed a biophysically motivated spiking network, relying solely on
synaptically local information, that can predict the full diversity of V1
simple cell receptive field shapes when trained on natural images. This
represents the first demonstration that sparse coding principles, operating
within the constraints imposed by cortical architecture, can successfully
reproduce these receptive fields. We further prove, mathematically, that
sparseness and decorrelation are the key ingredients that allow for
synaptically local plasticity rules to optimize a cooperative, linear
generative image model formed by the neural representation. Finally, we discuss
several interesting emergent properties of our network, with the intent of
bridging the gap between theoretical and experimental studies of visual cortex.
Comment: 33 pages, 6 figures. To appear in PLoS Computational Biology. Some of these data were presented by author JZ at the 2011 CoSyNe meeting in Salt Lake City
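To make the linear generative image model mentioned in the abstract concrete, here is a conventional sparse-coding solver (ISTA, iterative soft-thresholding) for the standard objective 0.5·||x − Da||² + λ||a||₁. This is textbook material shown only for orientation; it is explicitly not the paper's spiking, synaptically local network, and the dictionary and signal below are synthetic:

```python
import numpy as np

def ista(x, D, lam=0.05, n_iter=200):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary elements
a_true = np.zeros(32)
a_true[[3, 17]] = [1.0, -0.8]              # a 2-sparse ground-truth code
x = D @ a_true
a_hat = ista(x, D)
```

The paper's question is whether this same objective can be optimized when each synapse only sees locally available quantities, rather than the full gradient used here.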
The Physical Demands of NCAA Division I Women's College Soccer
Research into women's collegiate soccer is scarce, leaving gaps in the literature with little information available detailing the physical demands at different standards of play. Our purpose was to elucidate the physical demands of the Division I collegiate level and identify differences between playing positions. Twenty-three field players were observed during four competitive seasons using 10-Hz GPS units (Catapult Sports, Melbourne, Australia). Descriptive statistics and 95% confidence intervals were used to determine group and position-specific physical demands. Linear mixed modelling (LMM) was used to compare attacker, midfielder, and defender position groups. Total distance, high-speed distance, and sprint distance were 9486 ± 300 m, 1014 ± 118 m, and 428 ± 70 m, respectively. Furthermore, attackers were observed to cover the greatest distance at all speeds compared to midfielders and defenders. Our findings suggest that the physical demands of Division I women's soccer differ by position and appear lower compared to higher standards of play. Therefore, coaches and sports scientists responsible for the physical training of Division I collegiate players should consider the specific physical demands of the collegiate level and playing position when prescribing training, as well as in the development of their annual training programs
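The group-level statistics reported above (mean ± 95% CI for a per-match metric) can be sketched as follows. The match values are invented for illustration, and the normal-approximation interval is one common choice; the study's linear mixed models are not reproduced here:

```python
import numpy as np

def mean_ci95(values):
    """Mean and normal-approximation 95% CI for a list of per-match values."""
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    se = v.std(ddof=1) / np.sqrt(v.size)   # standard error of the mean
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical total-distance values (m) for one player across matches:
total_distance_m = [9100, 9650, 9480, 9820, 9300, 9590]
mean, (lo, hi) = mean_ci95(total_distance_m)
print(f"{mean:.0f} m (95% CI {lo:.0f}-{hi:.0f} m)")
```

With small samples per position group, an exact t-based interval would be wider than this normal approximation, which is why mixed models were the appropriate tool for the positional comparisons.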
Thermodynamic Computing
The hardware and software foundations laid in the first half of the 20th
Century enabled the computing technologies that have transformed the world, but
these foundations are now under siege. The current computing paradigm, which
underpins much of the standard of living we now enjoy, faces fundamental
limitations that are evident from several perspectives. In
terms of hardware, devices have become so small that we are struggling to
eliminate the effects of thermodynamic fluctuations, which are unavoidable at
the nanometer scale. In terms of software, our ability to imagine and program
effective computational abstractions and implementations is clearly challenged
in complex domains. In terms of systems, currently five percent of the power
generated in the US is used to run computing systems - this astonishing figure
is neither ecologically sustainable nor economically scalable. Economically,
the cost of building next-generation semiconductor fabrication plants has
soared past $10 billion. All of these difficulties - device scaling, software
complexity, adaptability, energy consumption, and fabrication economics -
indicate that the current computing paradigm has matured and that continued
improvements along this path will be limited. If technological progress is to
continue and corresponding social and economic benefits are to continue to
accrue, computing must become much more capable, energy efficient, and
affordable. We propose that progress in computing can continue under a united,
physically grounded, computational paradigm centered on thermodynamics. Herein
we propose a research agenda to extend these thermodynamic foundations into
complex, non-equilibrium, self-organizing systems and apply them holistically
to future computing systems that will harness nature's innate computational
capacity. We call this type of computing "Thermodynamic Computing" or TC.
Comment: A Computing Community Consortium (CCC) workshop report, 36 pages
MagneToRE: Mapping the 3-D Magnetic Structure of the Solar Wind Using a Large Constellation of Nanosatellites
Unlike the vast majority of astrophysical plasmas, the solar wind is accessible to spacecraft, which for decades have carried in-situ instruments for directly measuring its particles and fields. Though such measurements provide precise and detailed information, a single spacecraft on its own cannot disentangle spatial and temporal fluctuations. Even a modest constellation of in-situ spacecraft, though capable of characterizing fluctuations at one or more scales, cannot fully determine the plasma’s 3-D structure. We describe here a concept for a new mission, the Magnetic Topology Reconstruction Explorer (MagneToRE), that would comprise a large constellation of in-situ spacecraft and would, for the first time, enable 3-D maps to be reconstructed of the solar wind’s dynamic magnetic structure. Each of these nanosatellites would be based on the CubeSat form-factor and carry a compact fluxgate magnetometer. A larger spacecraft would deploy these smaller ones and also serve as their telemetry link to the ground and as a host for ancillary scientific instruments. Such an ambitious mission would be feasible under typical funding constraints thanks to advances in the miniaturization of spacecraft and instruments and breakthroughs in data science and machine learning
Time-Warp–Invariant Neuronal Processing
A biophysical mechanism acting in auditory neurons allows the brain to process the high variability of speaking rates in natural speech in a time-warp-invariant manner
Long-term modification of cortical synapses improves sensory perception
Synapses and receptive fields of the cerebral cortex are plastic. However, changes to specific inputs must be coordinated within neural networks to ensure that excitability and feature selectivity are appropriately configured for perception of the sensory environment. Long-lasting enhancements and decrements to rat primary auditory cortical excitatory synaptic strength were induced by pairing acoustic stimuli with activation of the nucleus basalis neuromodulatory system. Here we report that these synaptic modifications were approximately balanced across individual receptive fields, conserving mean excitation while reducing overall response variability. Decreased response variability should increase detection and recognition of near-threshold or previously imperceptible stimuli, as we found in behaving animals. Thus, modification of cortical inputs leads to wide-scale synaptic changes, which are related to improved sensory perception and enhanced behavioral performance
Sparse coding models can exhibit decreasing sparseness while learning sparse codes for natural images.
The sparse coding hypothesis has enjoyed much success in predicting response properties of simple cells in primary visual cortex (V1) based solely on the statistics of natural scenes. In typical sparse coding models, model neuron activities and receptive fields are optimized to accurately represent input stimuli using the least amount of neural activity. As these networks develop to represent a given class of stimulus, the receptive fields are refined so that they capture the most important stimulus features. Intuitively, this is expected to result in sparser network activity over time. Recent experiments, however, show that stimulus-evoked activity in ferret V1 becomes less sparse during development, presenting an apparent challenge to the sparse coding hypothesis. Here we demonstrate that some sparse coding models, such as those employing homeostatic mechanisms on neural firing rates, can exhibit decreasing sparseness during learning, while still achieving good agreement with mature V1 receptive field shapes and a reasonably sparse mature network state. We conclude that observed developmental trends do not rule out sparseness as a principle of neural coding per se: a mature network can perform sparse coding even if sparseness decreases somewhat during development. To make comparisons between model and physiological receptive fields, we introduce a new nonparametric method for comparing receptive field shapes using image registration techniques
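The abstract does not state which sparseness statistic is tracked during development; one standard choice, shown here purely for illustration, is the normalized Treves-Rolls measure, which quantifies how concentrated a vector of nonnegative firing rates is:

```python
import numpy as np

def treves_rolls_sparseness(r):
    """Normalized Treves-Rolls sparseness of nonnegative rates r.

    Returns values near 1 for a sparse code (few units active) and
    near 0 for a dense one (all units equally active)."""
    r = np.asarray(r, dtype=float)
    a = r.mean() ** 2 / (r ** 2).mean()          # activity ratio in (0, 1]
    return (1.0 - a) / (1.0 - 1.0 / r.size)      # normalize to [0, 1]

dense = np.ones(100)                  # every unit equally active
sparse = np.zeros(100)
sparse[0] = 1.0                       # a single active unit
print(treves_rolls_sparseness(dense))   # close to 0 (dense)
print(treves_rolls_sparseness(sparse))  # close to 1 (sparse)
```

A developmental trend of "decreasing sparseness" corresponds to this statistic falling over training epochs, even while the final value remains well above the dense baseline.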
On the Sparse Structure of Natural Sounds and Natural Images: Similarities, Differences, and Implications for Neural Coding
Sparse coding models of natural images and sounds have been able to predict several response properties of neurons in the visual and auditory systems. While the success of these models suggests that the structure they capture is universal across domains to some degree, it is not yet clear which aspects of this structure are universal and which vary across sensory modalities. To address this, we fit complete and highly overcomplete sparse coding models to natural images and spectrograms of speech and report on differences in the statistics learned by these models. We find several types of sparse features in natural images, which all appear in similar, approximately Laplace distributions, whereas the many types of sparse features in speech exhibit a broad range of sparse distributions, many of which are highly asymmetric. Moreover, individual sparse coding units tend to exhibit higher lifetime sparseness for overcomplete models trained on images compared to those trained on speech. Conversely, population sparseness tends to be greater for these networks trained on speech compared with sparse coding models of natural images. To illustrate the relevance of these findings to neural coding, we studied how they impact a biologically plausible sparse coding network's representations in each sensory modality. In particular, a sparse coding network with synaptically local plasticity rules learns different sparse features from speech data than are found by more conventional sparse coding algorithms, but the learned features are qualitatively the same for these models when trained on natural images
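Lifetime and population sparseness, the two quantities contrasted above, differ only in the axis along which a unit-by-stimulus response matrix is summarized. A minimal sketch, where the fraction-inactive statistic and the random thresholded matrix are illustrative assumptions rather than the paper's method:

```python
import numpy as np

def fraction_inactive(r, thresh=1e-6):
    """Fraction of entries in r with (near-)zero response."""
    r = np.asarray(r, dtype=float)
    return float(np.mean(np.abs(r) <= thresh))

rng = np.random.default_rng(1)
R = rng.standard_normal((50, 200))     # hypothetical (units x stimuli) responses
R[np.abs(R) < 1.5] = 0.0               # keep only large responses -> sparse matrix

# Lifetime sparseness: summarize each unit's row (across stimuli).
lifetime = [fraction_inactive(row) for row in R]
# Population sparseness: summarize each stimulus's column (across units).
population = [fraction_inactive(col) for col in R.T]
print(np.mean(lifetime), np.mean(population))
```

For a single full matrix the two means coincide (both equal the overall fraction of inactive entries); the abstract's contrast arises because different stimulus ensembles and models shift the two distributions in opposite directions.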