111 research outputs found

    Classification of $p$-groups via their 2-nilpotent multipliers

    Full text link
    For a $p$-group $G$ of order $p^n$, it is known that the order of the 2-nilpotent multiplier is $|\mathcal{M}^{(2)}(G)|=p^{\frac{1}{2}n(n-1)(n-2)+3-s_2(G)}$ for an integer $s_2(G)$. In this article, we characterize all non-abelian $p$-groups satisfying $s_2(G)\in\{1,2,3\}$.
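    The exponent formula above can be rearranged to recover $s_2(G)$ directly. A minimal sketch (the function name is illustrative; the formula is taken verbatim from the abstract):

    ```python
    def s2_from_multiplier_exponent(n: int, m: int) -> int:
        """Given |G| = p^n and |M^(2)(G)| = p^m, return s_2(G) by
        rearranging m = (1/2) n (n-1)(n-2) + 3 - s_2(G)."""
        # n(n-1)(n-2) is a product of three consecutive integers,
        # so the integer division by 2 is exact.
        return n * (n - 1) * (n - 2) // 2 + 3 - m
    ```

    For example, a group of order $p^4$ whose 2-nilpotent multiplier has order $p^{14}$ would have $s_2(G) = 12 + 3 - 14 = 1$.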

    Characterizing nilpotent Lie algebras that satisfy the converse of Schur's theorem

    Full text link
    Let $L$ be a finite dimensional nilpotent Lie algebra and $d$ the minimal number of generators of $L/Z(L)$. It is known that $\dim L/Z(L) = d \dim L^{2} - t(L)$ for an integer $t(L)\geq 0$. In this paper, we classify all finite dimensional nilpotent Lie algebras $L$ with $t(L)\in \lbrace 0, 1, 2 \rbrace$. We also give a construction showing that there exist Lie algebras of arbitrary $t(L)$.
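    The identity above determines $t(L)$ from three dimensions. A minimal sketch (hypothetical helper name; the formula is exactly the one stated in the abstract):

    ```python
    def t_invariant(dim_central_quotient: int, d: int, dim_derived: int) -> int:
        """t(L) = d * dim L^2 - dim L/Z(L), where d is the minimal
        number of generators of L/Z(L)."""
        return d * dim_derived - dim_central_quotient
    ```

    As an illustrative check, for the 3-dimensional Heisenberg algebra one has $\dim L/Z(L)=2$, $d=2$, $\dim L^2=1$, giving $t(L)=0$.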

    On the Schur multipliers of Lie superalgebras of maximal class

    Full text link
    Let $L$ be a non-abelian nilpotent Lie superalgebra of dimension $(m|n)$. Nayak showed that there is a non-negative integer $s(L)$ such that $s(L)=\frac{1}{2}(m+n-2)(m+n-1)+n+1-\dim \mathcal{M}(L)$. Here we classify all non-abelian nilpotent Lie superalgebras with $1\leq s(L)\leq 10$. Moreover, we classify the structure of all Lie superalgebras of dimension at most $5$ such that $\dim L^2=\dim \mathcal{M}(L)$.
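    Nayak's formula above is again a simple arithmetic relation between the superdimension $(m|n)$ and $\dim \mathcal{M}(L)$. A minimal sketch (illustrative name only):

    ```python
    def s_invariant(m: int, n: int, dim_multiplier: int) -> int:
        """s(L) = (1/2)(m+n-2)(m+n-1) + n + 1 - dim M(L) for a
        nilpotent Lie superalgebra of dimension (m|n)."""
        # (m+n-2)(m+n-1) is a product of consecutive integers,
        # so the integer division by 2 is exact.
        return (m + n - 2) * (m + n - 1) // 2 + n + 1 - dim_multiplier
    ```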

    The exterior degree of a pair of finite groups

    Full text link
    The exterior degree of a pair of finite groups $(G,N)$, which generalizes the exterior degree of a finite group, is the probability that $g\wedge n=1$ for a randomly chosen pair $(g,n)\in G\times N$. In the present paper, we establish relations between this concept and the relative commutativity degree, capability, and the Schur multiplier of a pair of groups. Comment: To appear in Mediterr. J. Math.

    Physics-Inspired Interpretability Of Machine Learning Models

    Full text link
    The ability to explain decisions made by machine learning models remains one of the most significant hurdles towards widespread adoption of AI in highly sensitive areas such as medicine, cybersecurity or autonomous driving. Great interest exists in understanding which features of the input data prompt model decision making. In this contribution, we propose a novel approach to identify relevant features of the input data, inspired by methods from the energy landscapes field developed in the physical sciences. By identifying conserved weights within groups of minima of the loss landscape, we can identify the drivers of model decision making. Analogues of this idea exist in the molecular sciences, where coordinate invariants or order parameters are employed to identify critical features of a molecule. However, no such approach exists for machine learning loss landscapes. We demonstrate the applicability of energy landscape methods to machine learning models and give examples, both synthetic and from the real world, of how these methods can help to make models more interpretable. Comment: 6 pages, 2 figures, ICLR 2023 Workshop on Physics for Machine Learning
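    The core idea of "conserved weights within groups of minima" can be sketched very simply: given several weight vectors, one per loss-landscape minimum, flag the coordinates that barely vary across minima. This is not the paper's implementation, just a minimal illustration under that assumption; the tolerance and function name are hypothetical:

    ```python
    import numpy as np

    def conserved_weights(minima, tol=1e-2):
        """Given a list of flattened weight vectors, one per minimum of
        the loss landscape, return the indices of weights whose spread
        across minima is below tol. These 'conserved' weights are
        candidate drivers of the model's decisions."""
        W = np.stack(minima)                 # shape: (num_minima, num_weights)
        spread = W.std(axis=0)               # per-weight variability across minima
        return np.flatnonzero(spread < tol)  # indices of conserved weights
    ```

    For example, three minima whose first and last coordinates agree exactly while the middle one varies would yield the index set `[0, 2]`.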