
    Approximate and pseudo-amenability of various classes of Banach algebras

    We continue the investigation of notions of approximate amenability that were introduced in work of the second and third authors. It is shown that every boundedly approximately contractible Banach algebra has a bounded approximate identity. Among our other results, it is shown that the Fourier algebra of the free group on two generators is not approximately amenable. Further examples are obtained of $\ell^1$-semigroup algebras which are approximately amenable but not amenable; using these, we show that bounded approximate amenability need not imply sequential approximate amenability. Results are also given for Segal subalgebras of $L^1(G)$, where $G$ is a locally compact group, and the algebras $PF_p(\Gamma)$ of $p$-pseudofunctions on a discrete group $\Gamma$ (of which the reduced $C^*$-algebra is a special case). Comment: 35 pages, revision of Jan '08 preprint. Abstract and MSC added; bibliography updated; slight tweaks to Section 4; and correction of a few typos. The final version is to appear in J. Funct. Anal.
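
    For orientation, the central notion can be stated as follows; this is the standard formulation from the literature (Ghahramani–Loy), not a quotation from the paper itself.

```latex
% Standard definition of approximate amenability, stated for context only.
A Banach algebra $A$ is \emph{approximately amenable} if, for every Banach
$A$-bimodule $X$, every continuous derivation $D \colon A \to X^{*}$ is
approximately inner; that is, there is a net $(\xi_{i})$ in $X^{*}$ with
\[
  D(a) \;=\; \lim_{i}\,\bigl(a \cdot \xi_{i} - \xi_{i} \cdot a\bigr)
  \qquad (a \in A),
\]
the limit being taken in norm for each fixed $a$.
```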

    Characterizing Jordan derivations of matrix rings through zero products

    Let $M_n$ be the ring of all $n \times n$ matrices over a unital ring $\mathcal{R}$, let $\mathcal{M}$ be a 2-torsion free unital $M_n$-bimodule and let $D \colon M_n \rightarrow \mathcal{M}$ be an additive map. We prove that if $D(\mathcal{A})\mathcal{B} + \mathcal{A}D(\mathcal{B}) + D(\mathcal{B})\mathcal{A} + \mathcal{B}D(\mathcal{A}) = 0$ whenever $\mathcal{A}, \mathcal{B} \in M_n$ are such that $\mathcal{A}\mathcal{B} = \mathcal{B}\mathcal{A} = 0$, then $D(\mathcal{A}) = \delta(\mathcal{A}) + \mathcal{A}D(\mathbf{1})$, where $\delta \colon M_n \rightarrow \mathcal{M}$ is a derivation and $D(\mathbf{1})$ lies in the centre of $\mathcal{M}$. It is also shown that $D$ is a generalized derivation if and only if $D(\mathcal{A})\mathcal{B} + \mathcal{A}D(\mathcal{B}) + D(\mathcal{B})\mathcal{A} + \mathcal{B}D(\mathcal{A}) - \mathcal{A}D(\mathbf{1})\mathcal{B} - \mathcal{B}D(\mathbf{1})\mathcal{A} = 0$ whenever $\mathcal{A}\mathcal{B} = \mathcal{B}\mathcal{A} = 0$. We apply these results to show that any (generalized) Jordan derivation from $M_n$ into a 2-torsion free $M_n$-bimodule (not necessarily unital) is a (generalized) derivation. Also, we show that if $\varphi \colon M_n \rightarrow M_n$ is an additive map satisfying $\varphi(\mathcal{A}\mathcal{B} + \mathcal{B}\mathcal{A}) = \mathcal{A}\varphi(\mathcal{B}) + \varphi(\mathcal{B})\mathcal{A}$ for all $\mathcal{A}, \mathcal{B} \in M_n$, then $\varphi(\mathcal{A}) = \mathcal{A}\varphi(\mathbf{1})$ for all $\mathcal{A} \in M_n$, where $\varphi(\mathbf{1})$ lies in the centre of $M_n$. By applying this result we obtain that every Jordan derivation of the trivial extension of $M_n$ by $M_n$ is a derivation. Comment: To appear in Mathematica Slovaca.
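
    One direction of the characterization is easy to see: any derivation satisfies the zero-products condition, since $\mathcal{A}\mathcal{B} = \mathcal{B}\mathcal{A} = 0$ gives $D(\mathcal{A})\mathcal{B} + \mathcal{A}D(\mathcal{B}) = D(\mathcal{A}\mathcal{B}) = 0$ and likewise for the other two terms. The sketch below (my own illustration, not taken from the paper) checks this numerically for an inner derivation $D(X) = TX - XT$ on $2 \times 2$ real matrices; the particular matrices T, A, B are arbitrary choices made for the example.

```python
import numpy as np

# Inner derivation D(X) = T X - X T on 2x2 real matrices.
# T is an arbitrary fixed matrix chosen for this illustration.
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])

def D(X):
    """Inner derivation induced by T."""
    return T @ X - X @ T

# A and B are the matrix units E_11 and E_22, so AB = BA = 0.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])
assert np.allclose(A @ B, 0) and np.allclose(B @ A, 0)

# The condition studied in the abstract:
# D(A)B + A D(B) + D(B)A + B D(A) vanishes whenever AB = BA = 0.
lhs = D(A) @ B + A @ D(B) + D(B) @ A + B @ D(A)
print(np.allclose(lhs, 0))  # True
```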

    2n-Weak module amenability of semigroup algebras

    Let $S$ be an inverse semigroup with set of idempotents $E$. We prove that the semigroup algebra $\ell^{1}(S)$ is always $2n$-weakly module amenable as an $\ell^{1}(E)$-module, for any $n \in \mathbb{N}$, where $E$ acts on $S$ trivially from the left and by multiplication from the right. Comment: arXiv admin note: text overlap with arXiv:1207.4514 by other authors.
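
    To make the setting concrete: a standard example of an inverse semigroup is the symmetric inverse monoid of partial bijections of a finite set, whose idempotents are exactly the partial identity maps. The short Python sketch below (my own illustration, not from the paper; the helper names are hypothetical) enumerates the 7 partial bijections of {0, 1} and confirms this.

```python
from itertools import combinations, permutations

# Partial bijections on {0, 1}, each represented as a dict (domain -> image).
points = (0, 1)
elements = []
for k in range(len(points) + 1):
    for dom in combinations(points, k):
        for img in permutations(points, k):
            elements.append(dict(zip(dom, img)))  # the 7 elements of I_2

def compose(f, g):
    """(f o g)(x) = f(g(x)), defined wherever g(x) lies in the domain of f."""
    return {x: f[g[x]] for x in g if g[x] in f}

# Idempotents satisfy e o e = e; here they are exactly the partial identities
# (restrictions of the identity map), which form the set E of the abstract.
idempotents = [e for e in elements if compose(e, e) == e]
print(idempotents)
print(all(all(x == y for x, y in e.items()) for e in idempotents))  # True
```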

    A Theoretically Grounded Application of Dropout in Recurrent Neural Networks

    Full text link
    Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational-inference-based dropout technique to LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques and, to the best of our knowledge, improves on the single-model state of the art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning. Comment: Added clarifications; published in NIPS 2016.
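
    The mechanical idea behind this variational dropout scheme is that the dropout masks are sampled once per sequence and reused at every time step, for both the inputs and the recurrent connections, instead of being resampled at each step. The sketch below is my own simplified illustration (a plain tanh RNN cell rather than the LSTM/GRU used in the paper; function name and shapes are arbitrary), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def variational_dropout_rnn(xs, Wx, Wh, b, p_drop=0.25):
    """Run a tanh RNN over a sequence, reusing one dropout mask per sequence.

    xs: (T, input_dim); Wx: (hidden, input_dim); Wh: (hidden, hidden).
    The input mask and the recurrent mask are each sampled ONCE and applied at
    every time step -- the tied-mask ("variational") scheme, as opposed to
    standard dropout, which would resample the masks at each step.
    """
    hidden = Wh.shape[0]
    keep = 1.0 - p_drop
    # One mask per sequence (inverted-dropout scaling by 1/keep).
    mask_x = rng.binomial(1, keep, size=xs.shape[1]) / keep
    mask_h = rng.binomial(1, keep, size=hidden) / keep

    h = np.zeros(hidden)
    for x in xs:
        h = np.tanh(Wx @ (x * mask_x) + Wh @ (h * mask_h) + b)
    return h

# Toy usage with random weights (shapes chosen arbitrarily for illustration).
T, d_in, d_h = 5, 3, 4
xs = rng.normal(size=(T, d_in))
Wx = rng.normal(size=(d_h, d_in)) * 0.1
Wh = rng.normal(size=(d_h, d_h)) * 0.1
b = np.zeros(d_h)
print(variational_dropout_rnn(xs, Wx, Wh, b))
```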