
    Microscopic description of Gamow-Teller transitions in middle pf--shell nuclei by a realistic shell model calculation

    GT transitions in N = 28-30 nuclei are studied in terms of a large-scale realistic shell-model calculation, using Towner's microscopic parameters. B(GT) values to low-lying final states are reproduced with reasonable accuracy. Several gross properties of the GT transitions are investigated with this set of wavefunctions and this operator. While the calculated total GT^- strengths show no apparent disagreement with the measured ones, the calculated total GT^+ strengths are somewhat larger than those obtained from charge-exchange experiments. Concerning the Ikeda sum rule, the proportionality of S_GT to (N-Z) persists to an excellent approximation, with a quenching factor of 0.68. For the relative GT^- strengths among the possible isospin components, the lowest isospin component gathers a greater fraction than expected from the squared Clebsch-Gordan coefficients of the isospin coupling. It turns out that these relative strengths are insensitive to the size of the model space. Systematics of the summed B(GT) values are discussed for each isospin component.
    Comment: IOP-LaTeX, 23 pages, to appear in J. Phys. G; 5 Postscript figures available upon request
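    For reference, the Ikeda sum rule invoked in this abstract is the standard model-independent relation between the total GT^- and GT^+ strengths and the neutron excess (a textbook result, not a claim taken from this paper):

    ```latex
    S_{\rm GT} \;\equiv\; S_{\rm GT^-} - S_{\rm GT^+} \;=\; 3\,(N - Z)
    ```

    A quenching factor of 0.68 on S_GT therefore means the calculated difference follows roughly 0.68 x 3(N-Z) rather than the full model-independent value.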

    Low-Spin Spectroscopy of 50Mn

    Data on low-spin states in the odd-odd nucleus 50Mn, investigated with the 50Cr(p,n gamma)50Mn fusion-evaporation reaction at the FN-TANDEM accelerator in Cologne, are reported. Shell-model and collective rotational-model interpretations of the data are given.
    Comment: 7 pages, 2 figures, to be published in the proceedings of the "Bologna 2000 - Structure of the Nucleus at the Dawn of the Century" Conference (Bologna, Italy, May 29 - June 3, 2000)

    Foreword ACII 2013


    Flow Factorized Representation Learning

    A prominent goal of representation learning research is to achieve representations which are factorized in a useful manner with respect to the ground truth factors of variation. The fields of disentangled and equivariant representation learning have approached this ideal from a range of complementary perspectives; however, to date, most approaches have proven either ill-specified or insufficiently flexible to effectively separate all realistic factors of interest in a learned latent space. In this work, we propose an alternative viewpoint on such structured representation learning, which we call Flow Factorized Representation Learning, and demonstrate that it learns both more efficient and more usefully structured representations than existing frameworks. Specifically, we introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations. Each latent flow is generated by the gradient field of a learned potential following dynamic optimal transport. Our novel setup brings new understanding to both disentanglement and equivariance. We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models. Furthermore, we demonstrate that the transformations learned by our model are flexibly composable and can also extrapolate to new data, implying a degree of robustness and generalizability approaching the ultimate goal of usefully factorized representation learning.
    Comment: NeurIPS2
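    The core mechanism named in the abstract, a latent flow generated by the gradient field of a potential, can be illustrated with a toy sketch. This is not the paper's implementation: the potential here is a fixed quadratic pulling the latent toward a hypothetical target, whereas in the paper the potential is learned; the Euler integrator and all parameter names are illustrative assumptions.

    ```python
    import numpy as np

    def grad_potential(z, target):
        """Analytic gradient of the toy quadratic potential u(z) = 0.5 * ||z - target||^2."""
        return z - target

    def integrate_flow(z0, target, steps=100, dt=0.05):
        """Follow the flow dz/dt = -grad u(z) from z0 with forward Euler steps.

        Each such path defines one transformation of the latent code; in the
        paper's setup, different learned potentials yield different flows.
        """
        z = np.asarray(z0, dtype=float)
        for _ in range(steps):
            z = z - dt * grad_potential(z, target)
        return z

    z_final = integrate_flow([1.0, -2.0], target=np.zeros(2))
    ```

    With a quadratic potential, the flow contracts the latent toward the target geometrically, so after many steps `z_final` sits near the potential's minimum.
    
    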