
    Irreducible Induced Representations of Fell Bundle C*-Algebras

    We give precise conditions under which irreducible representations associated to stability groups induce to irreducible representations of Fell bundle C*-algebras. This result generalizes an earlier result of Echterhoff and the second author. Because the Fell bundle construction subsumes most other examples of C*-algebras constructed from dynamical systems, our result percolates down to many different constructions, including the many flavors of groupoid crossed products. Comment: Minor changes suggested by the referee. The paper has been accepted for publication in Trans. Amer. Math. Soc.

    A Classic Morita Equivalence Result for Fell Bundle C*-algebras

    We show how to extend a classic Morita equivalence result of Green's to the C*-algebras of Fell bundles over transitive groupoids. Specifically, we show that if $p:\mathcal{B}\to G$ is a saturated Fell bundle over a transitive groupoid $G$ with stability group $H=G(u)$ at $u\in G^{(0)}$, then $C^*(G,\mathcal{B})$ is Morita equivalent to $C^*(H,\mathcal{C})$, where $\mathcal{C}=\mathcal{B}|_{H}$. As an application, we show that if $p:\mathcal{B}\to G$ is a Fell bundle over a group $G$ and if there is a continuous $G$-equivariant map $\sigma:\operatorname{Prim}A\to G/H$, where $A=B(e)$ is the C*-algebra of $\mathcal{B}$ and $H$ is a closed subgroup, then $C^*(G,\mathcal{B})$ is Morita equivalent to $C^*(H,\mathcal{C}^{I})$, where $\mathcal{C}^{I}$ is a Fell bundle over $H$ whose fibres are $A/I$-$A/I$-imprimitivity bimodules and $I=\bigcap\{P:\sigma(P)=eH\}$. Green's result is a special case of our application to bundles over groups. Comment: 10 pages. The paper has been slightly reorganized and reformatted to appear in Math. Scand.

    An Online Parallel and Distributed Algorithm for Recursive Estimation of Sparse Signals

    In this paper, we consider a recursive estimation problem for linear regression in which the signal to be estimated admits a sparse representation and measurement samples are only sequentially available. We propose a convergent parallel estimation scheme that consists in solving a sequence of $\ell_{1}$-regularized least-squares problems approximately. The proposed scheme is novel in three aspects: i) all elements of the unknown vector variable are updated in parallel at each time instance, and the convergence speed is much faster than that of state-of-the-art schemes which update the elements sequentially; ii) both the update direction and the stepsize of each element have simple closed-form expressions, so the algorithm is suitable for online (real-time) implementation; and iii) the stepsize is designed to accelerate the convergence, yet it does not suffer from the common difficulty of parameter tuning in the literature. Both centralized and distributed implementation schemes are discussed. The attractive features of the proposed algorithm are also demonstrated numerically. Comment: Part of this work has been presented at The Asilomar Conference on Signals, Systems, and Computers, Nov. 201
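    The abstract does not reproduce the update rule itself, so the following is only a minimal sketch of the generic ingredients it describes, assuming an ISTA-style parallel soft-thresholding step over running least-squares statistics; the class name OnlineSparseEstimator, the regularization weight lam and the crude Lipschitz bound are illustrative assumptions, not the paper's closed-form direction and stepsize.

    import numpy as np

    def soft_threshold(z, tau):
        # closed-form proximal map of the l1 norm
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    class OnlineSparseEstimator:
        # Toy recursive l1-regularized least-squares estimator: at each time
        # step the sufficient statistics R = sum_t a_t a_t^T and r = sum_t y_t a_t
        # are updated, and every coordinate of x is refreshed in parallel with
        # one soft-thresholding step (a generic ISTA-style update, not the
        # paper's specific rule).
        def __init__(self, dim, lam=0.1):
            self.R = np.zeros((dim, dim))
            self.r = np.zeros(dim)
            self.x = np.zeros(dim)
            self.lam = lam

        def update(self, a, y):
            self.R += np.outer(a, a)          # accumulate quadratic term
            self.r += y * a                   # accumulate linear term
            L = np.trace(self.R) + 1e-12      # crude Lipschitz bound
            grad = self.R @ self.x - self.r   # gradient of the least-squares loss
            self.x = soft_threshold(self.x - grad / L, self.lam / L)
            return self.x

    # usage sketch: est = OnlineSparseEstimator(dim=100); est.update(a_t, y_t) as samples arrive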

    The Dixmier-Douady Classes of Certain Groupoid C*-Algebras with Continuous Trace

    Given a locally compact abelian group $G$, we give an explicit formula for the Dixmier-Douady invariant of the $C^*$-algebra of the groupoid extension associated to a Čech 2-cocycle in the sheaf of germs of continuous $G$-valued functions. We then exploit the blow-up construction for groupoids to extend this to some more general central extensions of étale equivalence relations.

    Entropy, majorization and thermodynamics in general probabilistic theories

    In this note we lay some groundwork for the resource theory of thermodynamics in general probabilistic theories (GPTs). We consider theories satisfying a purely convex abstraction of the spectral decomposition of density matrices: that every state has a decomposition, with unique probabilities, into perfectly distinguishable pure states. The spectral entropy, and analogues using other Schur-concave functions, can be defined as the entropy of these probabilities. We describe additional conditions under which the outcome probabilities of a fine-grained measurement are majorized by those for a spectral measurement, and therefore the "spectral entropy" is the measurement entropy (and therefore concave). These conditions are (1) projectivity, which abstracts aspects of the Lüders-von Neumann projection postulate in quantum theory, in particular that every face of the state space is the positive part of the image of a certain kind of projection operator called a filter; and (2) symmetry of transition probabilities. The conjunction of these, as shown earlier by Araki, is equivalent to a strong geometric property of the unnormalized state cone known as perfection: that there is an inner product according to which every face of the cone, including the cone itself, is self-dual. Using some assumptions about the thermodynamic cost of certain processes that are partially motivated by our postulates, especially projectivity, we extend von Neumann's argument that the thermodynamic entropy of a quantum system is its spectral entropy to generalized probabilistic systems satisfying spectrality. Comment: In Proceedings QPL 2015, arXiv:1511.0118
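    As a concrete anchor for the two central notions above, here is a minimal classical sketch (plain probability vectors standing in for GPT states; all names are illustrative) of the Schur-concavity fact being used: if the spectral probabilities majorize a measurement's outcome probabilities, their entropy is no larger.

    import numpy as np

    def entropy(p):
        # Shannon entropy of a probability vector; applied to the unique
        # "spectral" probabilities of a state, this is the spectral entropy.
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def majorizes(p, q, tol=1e-12):
        # p majorizes q: sorted partial sums of p dominate those of q,
        # with equal totals (vectors padded with zeros to a common length).
        n = max(len(p), len(q))
        p = np.pad(np.sort(p)[::-1], (0, n - len(p)))
        q = np.pad(np.sort(q)[::-1], (0, n - len(q)))
        return np.all(np.cumsum(p) >= np.cumsum(q) - tol) and abs(p.sum() - q.sum()) < tol

    spectral = [0.7, 0.3]          # probabilities of a spectral measurement
    outcome = [0.4, 0.35, 0.25]    # outcome probabilities of a finer measurement
    assert majorizes(spectral, outcome)
    assert entropy(spectral) <= entropy(outcome)   # Schur-concavity of entropy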

    Sparse Probit Linear Mixed Model

    Linear Mixed Models (LMMs) are important tools in statistical genetics. When used for feature selection, they make it possible to find a sparse set of genetic traits that best predicts a continuous phenotype of interest, while simultaneously correcting for various confounding factors such as age, ethnicity and population structure. Formulated as models for linear regression, LMMs have been restricted to continuous phenotypes. We introduce the Sparse Probit Linear Mixed Model (Probit-LMM), in which we generalize the LMM modeling paradigm to binary phenotypes. A technical challenge is that the model no longer possesses a closed-form likelihood function. In this paper, we present a scalable approximate inference algorithm that lets us fit the model to high-dimensional data sets. We show on three real-world examples from different domains that, in the setup of binary labels, our algorithm leads to better prediction accuracies and also selects features which show less correlation with the confounding factors. Comment: Published version, 21 pages, 6 figures
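    Because the Probit-LMM likelihood has no closed form, the sketch below illustrates only the modeling idea, not the paper's inference algorithm: an l1-penalized probit liability model whose random effect, drawn from a kinship-style covariance K, is integrated out by crude Monte Carlo. All names, shapes and parameter values are assumptions made for illustration.

    import numpy as np
    from scipy.stats import norm

    def probit_lmm_objective(w, X, y, K, lam, sg2=0.5, n_mc=200, seed=0):
        # l1-penalized negative log-likelihood of a probit liability model
        #   y_i = 1[ x_i^T w + u_i + eps_i > 0 ],  u ~ N(0, sg2*K),
        # with the random effect u integrated out by Monte Carlo.
        rng = np.random.default_rng(seed)
        n = len(y)
        L = np.linalg.cholesky(sg2 * K + 1e-8 * np.eye(n))
        u = L @ rng.standard_normal((n, n_mc))     # draws of the random effect
        s = 2 * y - 1                              # labels recoded to {-1, +1}
        probs = norm.cdf(s[:, None] * ((X @ w)[:, None] + u))
        loglik = np.sum(np.log(probs.mean(axis=1) + 1e-12))
        return -loglik + lam * np.sum(np.abs(w))

    # toy check: the objective should prefer the generating weights over zero
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 20))
    w_true = np.zeros(20); w_true[:3] = 1.0
    y = (X @ w_true + rng.standard_normal(50) > 0).astype(float)
    K = np.eye(50)                                 # identity "kinship" for the toy example
    print(probit_lmm_objective(w_true, X, y, K, lam=1.0),
          probit_lmm_objective(np.zeros(20), X, y, K, lam=1.0))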