2,463 research outputs found

    On Longevity of I-ball/Oscillon

    We study I-balls/oscillons, which are long-lived, quasi-periodic, and spatially localized solutions in real scalar field theories. Contrary to the case of Q-balls, there is no evident conserved charge that stabilizes the localized configuration. Nevertheless, many classical numerical simulations have shown that they are extremely long-lived. In this paper, we clarify the reason for this longevity and show how the exponential separation of time scales emerges dynamically. These solutions are time-periodic, with a typical frequency of order the mass of the scalar field. This observation implies that they can be understood within an effective theory obtained by integrating out relativistic modes. We find that the resulting effective theory has an approximate global U(1) symmetry reflecting an approximate number conservation in the non-relativistic regime. As a result, the profile of these solutions is obtained via the bounce method, just like for Q-balls, as long as the breaking of the U(1) symmetry is small enough. We then discuss the decay processes of the I-ball/oscillon induced by the breaking of the U(1) symmetry, namely the production of relativistic modes via number-violating processes. We show that the resulting imaginary part, which sets the decay rate, is exponentially suppressed, which explains the extraordinary longevity of the I-ball/oscillon. In addition, we find that there are attractor behaviors during the evolution of the I-ball/oscillon that further enhance its lifetime. The validity of our effective theory is confirmed by classical numerical simulations. Our formalism may also be useful for studying condensates of ultralight bosonic dark matter, such as fuzzy dark matter and axion stars.
    Comment: 31 pages, 8 figures; v2: typos fixed, published version; v3: typos in the figures fixed
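
    As a textbook-style sketch of where the approximate U(1) comes from (illustrative conventions, not necessarily those of the paper), one writes the real field in terms of a slowly varying complex envelope; the envelope's phase rotation is the approximate number symmetry:

        % Non-relativistic reduction of a real scalar field of mass m (illustrative sketch,
        % not the paper's exact conventions).
        \phi(t,\mathbf{x}) \simeq \frac{1}{\sqrt{2m}}
          \left[ \psi(t,\mathbf{x})\, e^{-imt} + \psi^{*}(t,\mathbf{x})\, e^{+imt} \right]
        % Integrating out the rapidly oscillating (relativistic) modes leaves an effective
        % theory for \psi that is invariant, to good approximation, under the phase rotation
        \psi \;\longrightarrow\; e^{i\alpha}\,\psi ,
        % whose Noether charge
        N = \int \mathrm{d}^{3}x \, |\psi|^{2}
        % is the approximately conserved particle number; the small U(1)-breaking terms are
        % what drive the exponentially suppressed decay described above.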

    The dynamics of digits: Calculating pi with Galperin's billiards

    In Galperin billiards, two balls colliding with a hard wall form an analog calculator for the digits of the number π. This classical, one-dimensional three-body system (counting the hard wall) calculates the digits of π in a base determined by the ratio of the masses of the two particles. This base can be any integer, but it can also be an irrational number, or even π itself. This article reviews previous results for Galperin billiards and then pushes them further. We provide a complete explicit solution for the balls' positions and velocities as a function of the collision number and time. We demonstrate that the Galperin billiard can be mapped onto a two-particle Calogero-type model. We identify a second dynamical invariant for any mass ratio that provides integrability for the system, and for a sequence of specific mass ratios we identify a third dynamical invariant that establishes superintegrability. Integrability allows us to derive some new exact results for trajectories, and we apply these solutions to analyze the systematic errors that occur in calculating the digits of π with Galperin billiards, including curious cases with irrational number bases.
    Comment: 30 pages, 13 figures
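
    The collision-counting mechanism itself is easy to simulate. The following Python sketch (a naive illustration, not the paper's explicit solution) counts perfectly elastic collisions for a heavy ball of mass 100^n approaching a light ball of mass 1 that sits in front of a hard wall; the count reproduces the first n+1 digits of π until floating-point round-off spoils it:

        def count_collisions(mass_ratio):
            """Count elastic collisions in Galperin's billiard (illustrative sketch).

            A heavy ball of mass `mass_ratio` approaches a light ball of mass 1,
            which sits between the heavy ball and a hard wall.  All collisions are
            perfectly elastic; the wall simply reverses the light ball's velocity.
            """
            M, m = float(mass_ratio), 1.0
            vM, vm = -1.0, 0.0      # heavy ball heads toward the wall, light ball at rest
            count = 0
            while True:
                # elastic ball-ball collision (1D momentum and energy conservation)
                vM, vm = (((M - m) * vM + 2 * m * vm) / (M + m),
                          ((m - M) * vm + 2 * M * vM) / (M + m))
                count += 1
                if vm < 0:          # light ball now heads for the wall: bounce it back
                    vm = -vm
                    count += 1
                if 0 <= vm <= vM:   # both recede and the light ball cannot catch up
                    return count

        # mass ratio 100**n yields the first n+1 digits of pi (3, 31, 314, ...),
        # until floating-point round-off spoils the count for large n
        for n in range(5):
            print(count_collisions(100 ** n))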

    A geometrical calibration method for the PIXSCAN micro-CT scanner

    Reconstruction in cone-beam tomography can suffer from artifacts due to geometrical misalignments of the source-detector system. These artifacts can be avoided by a complete and precise description of the system geometry. We present a high-precision method for the geometric calibration of the PIXSCAN, a small-animal X-ray CT scanner demonstrator based on hybrid pixel detectors (XPAD2). The specificities of the XPAD2 detectors (dead pixels, tilts and gaps between modules, ...) make the calibration of the PIXSCAN quite difficult. The method uses a calibration object consisting of a hollow polycarbonate cylinder on which four metallic balls are positioned. It requires 360 X-ray images (1° increments). An analytic expression of the 3 image ellipses has been derived; it is used for a least-squares regression of the 13 alignment parameters after a correction for the internal XPAD2 geometry. Our method is fast and completely automated, achieving a precision of about 30 μm.
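
    To illustrate the kind of regression involved (a generic algebraic conic fit to a ball's projected trajectory, not the paper's analytic 13-parameter model), the ellipse traced by a ball's projected center over the 360 views can be recovered by linear least squares on the general conic equation:

        import numpy as np

        def fit_ellipse(x, y):
            """Algebraic least-squares fit of a conic a x^2 + b xy + c y^2 + d x + e y + f = 0
            to projected ball-center positions (generic sketch, not the PIXSCAN-specific model)."""
            D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
            # minimize ||D p|| subject to ||p|| = 1: the solution is the right singular
            # vector of D associated with its smallest singular value
            _, _, Vt = np.linalg.svd(D)
            return Vt[-1]          # conic coefficients (a, b, c, d, e, f)

        # toy usage: noisy samples of an ellipse traced over 360 one-degree steps
        rng = np.random.default_rng(0)
        t = np.deg2rad(np.arange(360))
        x = 10.0 + 4.0 * np.cos(t) + rng.normal(scale=0.01, size=t.size)
        y = -2.0 + 1.5 * np.sin(t) + rng.normal(scale=0.01, size=t.size)
        coeffs = fit_ellipse(x, y)
        print(coeffs / np.linalg.norm(coeffs))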

    Approximating the Maximum Overlap of Polygons under Translation

    Let P and Q be two simple polygons in the plane of total complexity n, each of which can be decomposed into at most k convex parts. We present a (1-ε)-approximation algorithm for finding the translation of Q that maximizes its area of overlap with P. Our algorithm runs in O(cn) time, where c is a constant that depends only on k and ε. This suggests that for polygons that are "close" to being convex, the problem can be solved, approximately, in near-linear time.
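
    For intuition about the problem being approximated (this is a naive baseline, not the paper's algorithm), one can simply sample candidate translations on a grid and measure the overlap area; the sketch below assumes the shapely library:

        import numpy as np
        from shapely.geometry import Polygon
        from shapely.affinity import translate

        def max_overlap_bruteforce(P, Q, grid=50):
            """Approximate the max-overlap translation of Q relative to P by grid search.

            A naive O(grid^2) baseline for intuition only; the paper's algorithm
            exploits convex decompositions to reach near-linear running time.
            """
            pxmin, pymin, pxmax, pymax = P.bounds
            qxmin, qymin, qxmax, qymax = Q.bounds
            # translations for which the bounding boxes can still intersect
            xs = np.linspace(pxmin - qxmax, pxmax - qxmin, grid)
            ys = np.linspace(pymin - qymax, pymax - qymin, grid)
            best_area, best_shift = 0.0, (0.0, 0.0)
            for dx in xs:
                for dy in ys:
                    area = P.intersection(translate(Q, dx, dy)).area
                    if area > best_area:
                        best_area, best_shift = area, (dx, dy)
            return best_area, best_shift

        P = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
        Q = Polygon([(0, 0), (2, 0), (1, 2)])
        print(max_overlap_bruteforce(P, Q))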

    Learning Generative Models with Sinkhorn Divergences

    The ability to compare two degenerate probability distributions (i.e., two probability distributions supported on two distinct low-dimensional manifolds living in a much higher-dimensional space) is a crucial problem arising in the estimation of generative models for high-dimensional observations such as those arising in computer vision or natural language. It is known that optimal transport metrics can provide a cure for this problem, since they were specifically designed as an alternative to information divergences to handle such problematic scenarios. Unfortunately, training generative machines using OT raises formidable computational and statistical challenges because of (i) the computational burden of evaluating OT losses, (ii) the instability and lack of smoothness of these losses, and (iii) the difficulty of robustly estimating these losses and their gradients in high dimension. This paper presents the first tractable computational method to train large-scale generative models using an optimal transport loss, and tackles these three issues by relying on two key ideas: (a) entropic smoothing, which turns the original OT loss into one that can be computed using Sinkhorn fixed-point iterations; (b) algorithmic (automatic) differentiation of these iterations. These two approximations result in a robust and differentiable approximation of the OT loss with streamlined GPU execution. Entropic smoothing generates a family of losses interpolating between the Wasserstein (OT) distance and Maximum Mean Discrepancy (MMD), thus allowing one to find a sweet spot that leverages the geometry of OT and the favorable high-dimensional sample complexity of MMD, which comes with unbiased gradient estimates. The resulting computational architecture nicely complements standard deep network generative models with a stack of extra layers implementing the loss function.
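
    The core of the loss is the Sinkhorn fixed-point iteration. The numpy sketch below is illustrative only: the paper's method runs such iterations inside an automatic-differentiation framework, on mini-batches of real and generated samples, so the resulting loss can be backpropagated to the generator's parameters:

        import numpy as np

        def sinkhorn_loss(x, y, eps=0.1, n_iter=100):
            """Entropy-regularized OT cost between two point clouds (illustrative sketch).

            x, y: arrays of shape (n, d) and (m, d).  A production version would work
            in the log domain for numerical stability and be written in an autodiff
            framework so the loss is differentiable end to end.
            """
            C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # squared-Euclidean cost
            a = np.full(x.shape[0], 1.0 / x.shape[0])                  # uniform source weights
            b = np.full(y.shape[0], 1.0 / y.shape[0])                  # uniform target weights
            K = np.exp(-C / eps)                                       # Gibbs kernel
            u = np.ones_like(a)
            for _ in range(n_iter):                                    # Sinkhorn fixed-point iterations
                v = b / (K.T @ u)
                u = a / (K @ v)
            P = u[:, None] * K * v[None, :]                            # approximate transport plan
            return float(np.sum(P * C))

        rng = np.random.default_rng(0)
        x = rng.normal(size=(64, 2))
        y = rng.normal(size=(64, 2)) + 1.0
        print(sinkhorn_loss(x, y, eps=0.5))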