A new method reveals microtubule minus ends throughout the meiotic spindle
Anastral meiotic spindles are thought to be organized differently from astral mitotic spindles, but the field lacks the basic structural information required to describe and model them, including the location of microtubule-nucleating sites and minus ends. We measured the distributions of oriented microtubules in metaphase anastral spindles in Xenopus laevis extracts by fluorescence speckle microscopy and cross-correlation analysis. We localized plus ends by tubulin incorporation and combined this with the orientation data to infer the localization of minus ends. We found that minus ends are localized throughout the spindle, sparsely at the equator and at higher concentrations near the poles. Based on these data, we propose a model for maintenance of the metaphase steady state that depends on continuous nucleation of microtubules near chromatin, followed by sorting and outward transport of stabilized minus ends and, eventually, their loss near poles.
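The cross-correlation step in this kind of speckle analysis amounts to finding the displacement that best aligns speckle intensity patterns between successive frames. A minimal 1-D sketch of that idea, with a toy signal and function names that are purely illustrative (not the paper's actual pipeline):

```python
# Hedged sketch: estimating speckle displacement between two frames by
# brute-force 1-D cross-correlation. The signal and names are toy
# illustrations, not the original analysis code.

def cross_correlation_shift(frame_a, frame_b, max_shift):
    """Return the integer shift (in pixels) that best aligns frame_b to frame_a."""
    n = len(frame_a)
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = 0.0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                score += frame_a[i] * frame_b[j]
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# Toy speckle profile, shifted right by 3 pixels between frames:
frame1 = [0, 0, 1, 5, 1, 0, 0, 2, 6, 2, 0, 0]
frame2 = [0, 0, 0, 0, 0, 1, 5, 1, 0, 0, 2, 6]
print(cross_correlation_shift(frame1, frame2, 5))  # → 3
```

In practice this is done in 2-D on image patches (and with normalized correlation), but the peak-of-the-correlation logic is the same.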
The kinesin Eg5 drives poleward microtubule flux in Xenopus laevis egg extract spindles
Although mitotic and meiotic spindles maintain a steady-state length during metaphase, their antiparallel microtubules slide toward spindle poles at a constant rate. This "poleward flux" of microtubules occurs in many organisms and may provide part of the force for chromosome segregation. We use quantitative image analysis to examine the role of the kinesin Eg5 in poleward flux in metaphase Xenopus laevis egg extract spindles. Pharmacological inhibition of Eg5 results in a dose-responsive slowing of flux, and biochemical depletion of Eg5 significantly decreases the flux rate. Our results suggest that ensembles of nonprocessive Eg5 motors drive flux in metaphase Xenopus extract spindles.
H-1 Nuclear Magnetic Resonance Spin-Lattice Relaxation, C-13 Magic-Angle-Spinning Nuclear Magnetic Resonance Spectroscopy, Differential Scanning Calorimetry, and X-Ray Diffraction of Two Polymorphs of 2,6-Di-Tert-Butylnaphthalene
Polymorphism, the presence of structurally distinct solid phases of the same chemical species, affords a unique opportunity to evaluate the structural consequences of intermolecular forces. The study of two polymorphs of 2,6-di-tert-butylnaphthalene by single-crystal X-ray diffraction, differential scanning calorimetry (DSC), C-13 magic-angle-spinning (MAS) nuclear magnetic resonance (NMR) spectroscopy, and H-1 NMR spin-lattice relaxation provides a picture of the differences in structure and dynamics in these materials. The subtle differences in structure, observed with X-ray diffraction and chemical shifts, strikingly affect the dynamics, as reflected in the relaxation measurements. We analyze the dynamics in terms of both discrete sums and continuous distributions of Poisson processes.
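At the core of a spin-lattice relaxation measurement is extracting a characteristic time T1 from an exponential magnetization-recovery curve. The sketch below shows the simplest single-exponential case with synthetic data; the paper's actual analysis (discrete sums and continuous distributions of Poisson processes) is considerably more elaborate, and all names and numbers here are illustrative:

```python
import math

# Hedged sketch: recovering a spin-lattice relaxation time T1 from a
# saturation-recovery-style curve M(t) = M0 * (1 - exp(-t/T1)), by
# linearizing ln(1 - M/M0) = -t/T1 and fitting the slope. Synthetic data.

def fit_T1(times, magnetization, m0):
    """Least-squares fit of T1 from ln(1 - M/M0) = -t/T1 (slope through origin)."""
    ys = [math.log(1.0 - m / m0) for m in magnetization]
    slope = sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)
    return -1.0 / slope

T1_true, M0 = 2.5, 1.0  # seconds, arbitrary units (illustrative values)
ts = [0.5 * k for k in range(1, 10)]
ms = [M0 * (1.0 - math.exp(-t / T1_true)) for t in ts]
print(round(fit_T1(ts, ms, M0), 3))  # → 2.5
```

A distribution of correlation times would show up as systematic curvature in the linearized plot, which is what motivates fitting sums or continuous distributions of exponentials instead of a single rate.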
Natural images main params calculated values
A data file containing calculated values measuring quantities such as reconstruction performance and average neuronal activity for different points in the training sequence. These values were used directly to generate the figures in the manuscript.
Natural image very sparse main params calculated values
A data file containing calculated values measuring quantities such as reconstruction performance and average neuronal activity for different points in the training sequence. These values were used directly to generate Figure 10b in the manuscript.
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing-dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
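The "mirroring" idea can be illustrated in a few lines: if the feedback connection applies the feedforward STDP window with its time axis reversed, then a given pre/post spike pair produces identical weight changes at the feedforward and feedback synapses, keeping the two weights symmetric. The kernel shape and constants below are a generic illustration, not the paper's exact parameters:

```python
import math

# Hedged sketch of the mirrored-STDP idea. The STDP window here is the
# standard antisymmetric double exponential; the paper's actual rules and
# constants may differ.

def stdp(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Classic antisymmetric STDP window (dt = t_post - t_pre, in ms)."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)   # pre before post: potentiate
    return -a_minus * math.exp(dt / tau)      # post before pre: depress

def mirrored_stdp(dt, **kw):
    """Feedback rule: the feedforward window with its time axis reversed."""
    return stdp(-dt, **kw)

# For one spike pair, the feedforward synapse sees timing difference dt,
# while the feedback synapse (pre and post roles swapped) sees -dt. With
# the mirrored rule, the two updates coincide:
dt = 8.0
dw_ff = stdp(dt)            # update at the feedforward synapse
dw_fb = mirrored_stdp(-dt)  # update at the feedback synapse
print(dw_ff == dw_fb)  # → True
```

This symmetry is exactly the property autoencoder learning needs: the feedforward (encoding) and feedback (decoding) weights receive matched updates, so they can remain transposes of each other during training.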
MNIST main params calculated values
A data file containing calculated values measuring quantities such as reconstruction performance and average neuronal activity for different points in the training sequence. These values were used directly to generate the figures in the manuscript.
MNIST non-sparse params network over time
The data file in which network state is recorded throughout the simulation training period, for the MNIST parameters used for Figure 10c in the paper.
MNIST very sparse params calculated values
A data file containing calculated values measuring quantities such as reconstruction performance and average neuronal activity for different points in the training sequence. These values were used directly to generate Figure 10a in the manuscript.
Natural images very sparse params network over time
The data file in which network state is recorded throughout the simulation training period, for the very sparse natural-image parameters used for Figure 10b in the paper.