
    Building Social Movement Unionism: The Transformation of the American Labor Movement

    [Excerpt] In the United States, the renewed energy displayed by the labor movement is particularly promising. From organizing drives to strike victories to legislative campaigns, labor's renewed influence in the American political economy is clearly seen. A labor movement that was left for dead by many in the Reagan era has developed new leadership and innovative strategies for rank-and-file mobilization and political clout. In a global economy dominated to a large extent by American-based multinational corporations, the world needs a strong American labor movement. The goal of the new activists, young and old, who drive today's labor campaigns, is the rebirth of modernized, mobilized, powerful American unions. We suggest that innovations at the heart of the current revitalization are part of a broad shift away from traditional postwar unionism to a new social movement unionism. The transformation occurs in a weak institutional context in which experimentation and innovation are possible. Driving the change are two generations of activists: veterans of the social movements of the 1960s, now in leadership positions at the American Federation of Labor-Congress of Industrial Organizations (AFL-CIO) and in many member unions, and a new generation of campus and workplace activists.

    Mobility Measurements Probe Conformational Changes in Membrane Proteins due to Tension

    The function of membrane-embedded proteins such as ion channels depends crucially on their conformation. We demonstrate how conformational changes in asymmetric membrane proteins may be inferred from measurements of their diffusion. Such proteins cause local deformations in the membrane, which induce an extra hydrodynamic drag on the protein. Using membrane tension to control the magnitude of the deformations and hence the drag, measurements of diffusivity can be used to infer, via an elastic model of the protein, how conformation is changed by tension. Motivated by recent experimental results [Quemeneur et al., Proc. Natl. Acad. Sci. USA, 111, 5083 (2014)] we focus on KvAP, a voltage-gated potassium channel. The conformation of KvAP is found to change considerably due to tension, with its 'walls', where the protein meets the membrane, undergoing significant angular strains. The torsional stiffness is determined to be 26.8 kT at room temperature. This has implications for both the structure and function of such proteins in the environment of a tension-bearing membrane.
    Comment: Manuscript: 4 pages, 4 figures. Supplementary Material: 8 pages, 1 figure.
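The inference logic of the abstract can be illustrated with the classical Saffman-Delbruck mobility formula for a membrane-embedded cylinder, treating the deformation-induced drag as an additive term. This is a minimal sketch, not the paper's elastic model; all parameter values below are illustrative assumptions.

```python
import math

kT = 4.11e-21      # J, thermal energy at room temperature
mu_m = 1e-9        # membrane surface viscosity, Pa*s*m (illustrative)
mu_w = 1e-3        # bulk water viscosity, Pa*s
a = 4e-9           # protein radius, m (illustrative)
gamma_E = 0.5772   # Euler-Mascheroni constant

# Saffman-Delbruck drag for a cylinder in a membrane with water on
# both sides of the bilayer:
L_sd = mu_m / (2 * mu_w)                                   # screening length
zeta_sd = 4 * math.pi * mu_m / (math.log(L_sd / a) - gamma_E)
D_bare = kT / zeta_sd                                      # ~1 um^2/s scale

def extra_drag_from_D(D_measured):
    """Invert D = kT / (zeta_sd + zeta_extra): a measured diffusivity
    yields the deformation-induced drag, the quantity an elastic model
    would then relate to conformation."""
    return kT / D_measured - zeta_sd
```

Because drags add, a tension-dependent deformation lowers the measured diffusivity below `D_bare`, and the excess drag recovered by `extra_drag_from_D` is the observable that constrains conformation.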

    Learning complex tasks with probabilistic population codes

    Recent psychophysical experiments imply that the brain employs a neural representation of the uncertainty in sensory stimuli and that probabilistic computations are supported by the cortex. Several candidate neural codes for uncertainty have been posited including Probabilistic Population Codes (PPCs). PPCs support various versions of probabilistic inference and marginalisation in a neurally plausible manner. However, in order to establish whether PPCs can be of general use, three important limitations must be addressed. First, it is critical that PPCs support learning. For example, during cue combination, subjects are able to learn the uncertainties associated with the sensory cues as well as the prior distribution over the stimulus. However, previous modelling work with PPCs requires these parameters to be carefully set by hand. Second, PPCs must be able to support inference in non-linear models. Previous work has focused on linear models and it is not clear whether non-linear models can be implemented in a neurally plausible manner. Third, PPCs must be shown to scale to high-dimensional problems with many variables. This contribution addresses these three limitations of PPCs by establishing a connection with variational Expectation Maximisation (vEM). In particular, we show that the usual PPC update for cue combination can be interpreted as the E-Step of a vEM algorithm. The corresponding M-Step then automatically provides a method for learning the parameters of the model by adapting the connection strengths in the PPC network in an unsupervised manner. Using a version of sparse coding as an example, we show that the vEM interpretation of PPC can be extended to non-linear and multi-dimensional models and we show how the approach scales with the dimensionality of the problem. Our results provide a rigorous assessment of the ability of PPCs to capture the probabilistic computations performed in the cortex.
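The cue-combination update mentioned above can be sketched numerically: with Poisson spiking and Gaussian tuning curves, the decoded precision is proportional to total spike count, so simply adding two population responses multiplies the corresponding Gaussian likelihoods. This is a minimal illustration with hypothetical tuning parameters, not the paper's vEM network.

```python
import numpy as np

rng = np.random.default_rng(1)

prefs = np.linspace(-10, 10, 201)   # preferred stimuli (hypothetical)
sigma_t = 1.0                       # tuning-curve width

def responses(s, gain):
    """Poisson spike counts from Gaussian tuning curves; higher gain
    means a more reliable cue."""
    rates = gain * np.exp(-(s - prefs) ** 2 / (2 * sigma_t ** 2))
    return rng.poisson(rates)

def decode(r):
    """ML decode for this code: mean is the activity-weighted preferred
    stimulus; precision is proportional to the total spike count."""
    mean = (r * prefs).sum() / r.sum()
    precision = r.sum() / sigma_t ** 2
    return mean, precision

r1 = responses(0.0, gain=20.0)      # reliable cue
r2 = responses(0.5, gain=5.0)       # unreliable cue

m1, p1 = decode(r1)
m2, p2 = decode(r2)
m12, p12 = decode(r1 + r2)          # PPC combination: add activities

# Optimal Bayesian combination of the single-cue decodes:
m_opt = (p1 * m1 + p2 * m2) / (p1 + p2)
```

Adding the response vectors reproduces the precision-weighted optimal estimate exactly, which is the PPC update that the abstract identifies with the E-step of variational EM.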

    The Army of One (Sample): the Characteristics of Sampling-based Probabilistic Neural Representations

    There is growing evidence that humans and animals represent the uncertainty associated with sensory stimuli and utilize this uncertainty during planning and decision making in a statistically optimal way. Recently, a nonparametric framework for representing probabilistic information has been proposed whereby neural activity encodes samples from the distribution over external variables. Although such sample-based probabilistic representations have strong empirical and theoretical support, two major issues need to be clarified before they can be considered as viable candidate theories of cortical computation. First, in a fluctuating natural environment, can neural dynamics provide sufficient samples to accurately estimate a stimulus? Second, can such a code support accurate learning over biologically plausible time-scales? Although it is well known that sampling is statistically optimal if the number of samples is unlimited, biological constraints mean that estimation and learning in the cortex must be supported by a relatively small number of possibly dependent samples. We explored these issues in a cue combination task by comparing a neural circuit that employed a sampling-based representation to an optimal estimator. For static stimuli, we found that a single sample is sufficient to obtain an estimator with less than twice the optimal variance, and that performance improves with the inverse square root of the number of samples. For dynamic stimuli, with linear-Gaussian evolution, we found that the efficiency of the estimation improves significantly as temporal information stabilizes the estimate, and because sampling does not require a burn-in phase. Finally, we found that using a single sample, the dynamic model can accurately learn the parameters of the input neural populations up to a general scaling factor, which disappears for modest sample size. 
These results suggest that sample-based representations can support estimation and learning using a relatively small number of samples and are therefore highly feasible alternatives for performing probabilistic cortical computations.
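The static-stimulus result can be reproduced with a small Monte Carlo sketch: in a two-Gaussian-cue task, an estimator that averages N posterior samples has mean squared error about (1 + 1/N) times that of the optimal posterior mean, so a single sample lands just under twice the optimal variance. Cue parameters below are hypothetical; this is not the paper's neural circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_ratio(n_samples, n_trials=100_000):
    """MSE of an n_samples-average sampling estimator relative to the
    optimal posterior-mean estimator in a two-cue Gaussian task."""
    s_true = 1.0
    sigma1, sigma2 = 1.0, 1.5                  # cue noise (hypothetical)
    c1 = s_true + sigma1 * rng.standard_normal(n_trials)
    c2 = s_true + sigma2 * rng.standard_normal(n_trials)

    # Optimal combination under a flat prior: precision-weighted mean.
    p1, p2 = 1 / sigma1 ** 2, 1 / sigma2 ** 2
    post_mean = (p1 * c1 + p2 * c2) / (p1 + p2)
    post_var = 1.0 / (p1 + p2)

    # Sampling-based estimate: average of n_samples posterior draws.
    draws = post_mean[:, None] + np.sqrt(post_var) * rng.standard_normal(
        (n_trials, n_samples))
    est = draws.mean(axis=1)

    mse_opt = np.mean((post_mean - s_true) ** 2)   # approx. post_var
    return np.mean((est - s_true) ** 2) / mse_opt  # approx. 1 + 1/n
```

For `n_samples=1` the ratio is close to 2, matching the "less than twice the optimal variance" result, and it decays toward 1 as more samples are averaged.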