
    The unicity of types for depth-zero supercuspidal representations

    We establish the unicity of types for depth-zero supercuspidal representations of an arbitrary p-adic group G, showing that each depth-zero supercuspidal representation of G contains a unique conjugacy class of typical representations of maximal compact subgroups of G. As a corollary, we obtain an inertial Langlands correspondence for these representations, via the Langlands correspondence of DeBacker and Reeder. (23 pages; updated with minor revisions; to appear in Representation Theory.)

    How biased are maximum entropy models?

    Maximum entropy models have become popular statistical models in neuroscience and other areas of biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; that is, the true entropy of the data can be severely underestimated. Here we study the sampling properties of entropy estimates obtained from maximum entropy models. We show that if the data are generated by a distribution that lies in the model class, the bias equals the number of parameters divided by twice the number of observations. In practice, however, the true distribution is usually outside the model class, and we show here that this misspecification can lead to much larger bias. We provide a perturbative approximation of the maximal expected bias when the true model is out of the model class, and we illustrate our results using numerical simulations of an Ising model, i.e. the second-order maximum entropy distribution on binary data.
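    The in-class bias stated in the abstract (number of parameters over twice the number of observations) is easy to make concrete. The sketch below computes it for an Ising model, i.e. a second-order maximum entropy model on n binary units, which has n field parameters and n(n-1)/2 pairwise couplings; the particular numbers (10 units, 1000 observations) are hypothetical, not taken from the paper.

```python
# In-model-class bias of entropy estimates from a maximum entropy model:
#   bias ~= (number of parameters) / (2 * number of observations)
# Illustrated for an Ising (second-order) model on n binary units.
# The example numbers below are hypothetical, not from the paper.

def ising_num_params(n: int) -> int:
    """Fields h_i plus pairwise couplings J_ij of a second-order model."""
    return n + n * (n - 1) // 2

def in_class_entropy_bias(n_units: int, n_obs: int) -> float:
    """Bias of the entropy estimate when the true distribution is in the model class."""
    return ising_num_params(n_units) / (2.0 * n_obs)

if __name__ == "__main__":
    n, N = 10, 1000          # 10 binary units, 1000 observations (hypothetical)
    k = ising_num_params(n)  # 10 fields + 45 couplings = 55 parameters
    print(k, in_class_entropy_bias(n, N))  # prints: 55 0.0275
```

    The point of the formula is that the bias grows quadratically in the number of units but shrinks only linearly in the number of observations, which is why small data sets are problematic.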

    A balanced memory network

    A fundamental problem in neuroscience is understanding how working memory, the ability to store information at intermediate timescales of tens of seconds, is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, two questions remain unanswered. First, how do attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that had been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.
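    The attractor mechanism the abstract refers to can be illustrated with a much simpler model than the balanced spiking network analyzed in the paper. The sketch below uses a classical binary Hopfield network, the textbook example of attractor memory: stored patterns become fixed points of the dynamics, so a corrupted cue is cleaned up into the nearest memory. All parameters here (network size, number of patterns, noise level) are illustrative choices, not the paper's.

```python
import numpy as np

# Minimal attractor-memory illustration: a binary Hopfield network.
# This is NOT the balanced spiking network of the paper; it is the simplest
# model in which stored patterns are attractors of the dynamics.

rng = np.random.default_rng(0)
n, p = 200, 5                                  # neurons, stored memories (p << n)
patterns = rng.choice([-1, 1], size=(p, n))    # random +-1 memory patterns

# Hebbian weight matrix; zero diagonal (no self-connections)
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def recall(cue: np.ndarray, steps: int = 20) -> np.ndarray:
    """Synchronous sign-update dynamics, starting from a noisy cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1  # break ties deterministically
    return s

# Corrupt 10% of the first pattern and let the network clean it up.
cue = patterns[0].copy()
flip = rng.choice(n, size=n // 10, replace=False)
cue[flip] *= -1
out = recall(cue)
overlap = (out @ patterns[0]) / n  # overlap of 1.0 means perfect retrieval
print(overlap)
```

    Because the load p/n is far below the Hopfield capacity, the network recovers the stored pattern almost exactly. The paper's contribution is precisely that such capacity and retrieval results, long known for simple binary models like this one, carry over to balanced networks of spiking neurons with irregular firing.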

    Unicity of types for supercuspidal representations of p-adic SL2

    We consider the question of unicity of types on maximal compact subgroups for supercuspidal representations of SL2 over a nonarchimedean local field of odd residual characteristic. We introduce the notion of an archetype as the SL2-conjugacy class of a typical representation of a maximal compact subgroup, and go on to show that any archetype in SL2 is restricted from one in GL2. From this it follows that any archetype must be induced from a Bushnell-Kutzko type. Given a supercuspidal representation π of SL2(F), we also give an explicit description of the number of archetypes it admits, in terms of its ramification. Finally, we describe a relationship between archetypes for GL2 and SL2 in terms of L-packets, and deduce an inertial Langlands correspondence for SL2.