28 research outputs found

    Statistics of electromagnetic transitions as a signature of chaos in many-electron atoms

    Using a configuration-interaction approach, we study the statistics of the dipole matrix elements (E1 amplitudes) between the 14 lowest odd states with J=4 and the 21st to 100th even states with J=4 in the Ce atom (1120 lines). We show that the distribution of the matrix elements is close to Gaussian, although the width of the Gaussian distribution, i.e. the root-mean-square matrix element, changes with the excitation energy. The corresponding line strengths are distributed according to the Porter-Thomas law, which describes the statistics of transition strengths between chaotic states in compound nuclei. We also show how a statistical theory can be used to calculate mean-squared values of the matrix elements or transition amplitudes between chaotic many-body states. We draw some support for our conclusions from an analysis of the 228 experimental line strengths in Ce [J. Opt. Soc. Am. 8, 1545 (1991)], although a direct comparison with the calculations is impeded by the incompleteness of the experimental data. Nevertheless, the observed statistics provide evidence that highly excited many-electron states in atoms are indeed chaotic.
    Comment: 16 pages, REVTEX, 4 PostScript figures (submitted to Phys. Rev. A)
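
    For reference (our addition, not part of the abstract): the Porter-Thomas law is the chi-squared distribution with one degree of freedom. If a transition amplitude M is Gaussian with zero mean and variance \sigma^2 (the squared root-mean-square matrix element), then the line strength s = M^2 is distributed as

    \[
        P(s)\,\mathrm{d}s \;=\; \frac{1}{\sqrt{2\pi\sigma^{2}s}}\,
        \exp\!\Bigl(-\frac{s}{2\sigma^{2}}\Bigr)\mathrm{d}s ,
        \qquad s = M^{2},\quad M \sim \mathcal{N}(0,\sigma^{2}),
    \]

    so the abstract's two observations (Gaussian-distributed amplitudes and Porter-Thomas-distributed strengths) are two faces of the same statistical hypothesis.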

    Irreducible tensor-form of the relativistic corrections to the M1 transition operator

    The relativistic corrections to the magnetic dipole moment operator in the Pauli approximation were originally derived by Drake [Phys. Rev. A 3, 908 (1971)]. In the present paper, we derive their irreducible tensor-operator form, to be used in atomic-structure codes that adopt the Fano-Racah-Wigner algebra for calculating matrix elements.
    Comment: 26 pages
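
    For orientation (our addition, stated in standard textbook form rather than taken from the paper): the operator being corrected is, at leading nonrelativistic order, the magnetic dipole operator

    \[
        \boldsymbol{\mu} \;=\; -\mu_{B}\,\bigl(\mathbf{L} + g_{s}\mathbf{S}\bigr),
    \]

    with \mu_B the Bohr magneton and g_s \approx 2 the electron spin g-factor. The relativistic corrections add terms of relative order \alpha^2 to this form, and recasting them as irreducible tensor operators lets their matrix elements be evaluated with standard Racah-algebra machinery.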

    Guess what moves: unsupervised video and image segmentation by anticipating motion

    Motion, measured via optical flow, provides a powerful cue to discover and learn objects in images and videos. However, compared to using appearance, it has some blind spots, such as the fact that objects become invisible if they do not move. In this work, we propose an approach that combines the strengths of motion-based and appearance-based segmentation. We propose to supervise an image segmentation network with the pretext task of predicting regions that are likely to contain simple motion patterns, and therefore likely to correspond to objects. As the model only uses a single image as input, we can apply it in two settings: unsupervised video segmentation and unsupervised image segmentation. We achieve state-of-the-art results for videos and demonstrate the viability of our approach on still images containing novel objects. Additionally, we experiment with different motion models and optical-flow backbones and find the method to be robust to these changes. A minimal sketch of the pretext-task loss follows below. Project page and code available at https://www.robots.ox.ac.uk/~vgg/research/gwm
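
    To make the pretext task concrete, here is a minimal PyTorch sketch of the kind of loss such training could use. It is our own illustration, not the paper's released code: the name motion_consistency_loss is ours, and we assume the simplest possible motion model, a constant (translational) flow per mask, whereas the paper experiments with several motion models.

    import torch

    def motion_consistency_loss(masks: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        """masks: (B, K, H, W) soft masks, softmax-normalized over K.
        flow:  (B, 2, H, W) optical flow, used only as a training signal.
        Each mask is scored by how well a single constant flow vector
        (its least-squares fit) explains the flow inside that mask."""
        w = masks.unsqueeze(2)                                    # (B, K, 1, H, W)
        f = flow.unsqueeze(1)                                     # (B, 1, 2, H, W)
        area = w.sum(dim=(-2, -1), keepdim=True).clamp_min(1e-6)  # soft pixel count per mask
        mean_flow = (w * f).sum(dim=(-2, -1), keepdim=True) / area  # per-mask constant-flow fit
        residual = ((f - mean_flow) ** 2).sum(dim=2)              # (B, K, H, W) squared error
        return (masks * residual).mean()

    # Toy usage with random tensors standing in for network outputs:
    logits = torch.randn(2, 4, 64, 64, requires_grad=True)  # segmentation logits
    masks = torch.softmax(logits, dim=1)
    flow = torch.randn(2, 2, 64, 64)
    motion_consistency_loss(masks, flow).backward()

    During training, masks would come from a segmentation network applied to a single RGB frame and flow from a frozen optical-flow backbone; at test time the flow branch is dropped entirely, which is what allows the same model to segment still images.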
