
    Towards vertex renormalization in 4d Spin Foams

    A long-standing open problem in 4-dimensional Spin Foam models of Quantum Gravity has been the behavior of the amplitudes under coarse graining. In this thesis, we study this question using the recent reformulation of Spin Foam amplitudes in terms of spinors. We define a new model by imposing the holomorphic simplicity constraints in an alternative way, which greatly simplifies the calculations. We show that this simplification does not come at the cost of the correct semi-classical limit, as the model has the same asymptotic behavior as the usual approach. Using the power of the holomorphic integration techniques, and with the introduction of two new tools, the homogeneity map and the loop identity, we give for the first time the analytic expressions for the behavior of the Spin Foam amplitudes under 4-dimensional Pachner moves. We show that the coarse-graining 5--1 move generates non-geometrical couplings, but we find a natural truncation scheme that restricts the flow to the space of 4-simplices. Under this truncation scheme, the 3--3 Pachner move is invariant only for symmetric configurations, while the 4--2 and 5--1 moves are invariant up to an overall, possibly divergent, factor depending on the boundary spins. The study of the divergences shows that there is a range of parameter space for which the 4--2 move is finite while the 5--1 move diverges, which distinguishes the model from the topological case. We then show that the amplitude after the 5--1 move cannot be written as a symmetric local product of renormalized edge propagators, but instead has to be written in terms of a vertex amplitude. The study of the additional nonlocal function of the boundary spins reveals a transition at which the spin dependence becomes very slow, suggesting the existence of an approximate notion of vertex translation symmetry. We conclude with a proposal for an amplitude in which iterated 5--1 Pachner moves only renormalize this nonlocal function at a vertex, and in which all the divergences can be absorbed into a single coupling constant.
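
    A minimal schematic of the renormalization structure described above, in illustrative notation that is not taken from the thesis (the symbols $\Lambda$, $N$, and $\mathcal{A}_v$ are assumptions), is
    \[
      \mathcal{A}_{5\text{-}1}(j_f) \;\sim\; \Lambda \, N(j_f)\, \mathcal{A}_v(j_f),
    \]
    where $\mathcal{A}_v(j_f)$ plays the role of the vertex amplitude of the remaining 4-simplex, $N(j_f)$ stands for the additional nonlocal function of the boundary spins $j_f$, and $\Lambda$ is the overall, possibly divergent, factor that the proposal absorbs into a single coupling constant.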

    Pachner moves in a 4d Riemannian holomorphic Spin Foam model

    In this work we study a Spin Foam model for 4d Riemannian gravity and propose a new way of imposing the simplicity constraints that uses the recently developed holomorphic representation. Using the power of the holomorphic integration techniques, and with the introduction of two new tools, the homogeneity map and the loop identity, we give for the first time the analytic expressions for the behaviour of the Spin Foam amplitudes under 4-dimensional Pachner moves. It turns out that this behaviour is controlled by the insertion of nonlocal mixing operators. In the case of the 5-1 move, the expression governing the change of the amplitude can be interpreted as a vertex renormalisation equation. We find a natural truncation scheme that allows us to obtain invariance up to an overall factor for the 4-2 and 5-1 moves, but not for the 3-3 move. The study of the divergences shows that there is a range of parameter space for which the 4-2 move is finite while the 5-1 move diverges. This opens up the possibility of recovering diffeomorphism invariance in the continuum limit of Spin Foam models for 4D Quantum Gravity. (48 pages, 30 figures)

    Biologically Inspired Mechanisms for Adversarial Robustness

    A convolutional neural network that is strongly robust to adversarial perturbations at reasonable computational and performance cost has not yet been demonstrated. The primate visual ventral stream appears to be robust to small perturbations in visual stimuli, but the underlying mechanisms that give rise to this robust perception are not understood. In this work, we investigate the role of two biologically plausible mechanisms in adversarial robustness. We demonstrate that the non-uniform sampling performed by the primate retina, and the presence of multiple receptive fields with a range of sizes at each eccentricity, improve the robustness of neural networks to small adversarial perturbations. We verify that these two mechanisms do not suffer from gradient obfuscation and study their contribution to adversarial robustness through ablation studies. (25 pages, 15 figures)
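
    As a rough illustration of the two mechanisms described above (a sketch under stated assumptions, not the authors' implementation), the following Python/PyTorch code applies a foveated, non-uniform resampling of the input followed by a layer with several receptive field sizes. The names `retinal_sample` and `MultiScaleBlock`, and all parameter values, are illustrative assumptions.

# Sketch of non-uniform (foveated) sampling plus multiple receptive field
# sizes per location; illustrative only, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def retinal_sample(img, out_size=64, alpha=2.0):
    """Resample an image on a grid that is dense near the center (fovea)
    and sparse in the periphery. `alpha` controls how quickly sampling
    density falls off with eccentricity."""
    lin = torch.linspace(-1.0, 1.0, out_size)
    ys, xs = torch.meshgrid(lin, lin, indexing="ij")
    r = torch.sqrt(xs**2 + ys**2).clamp(min=1e-6)
    warp = r ** (alpha - 1.0)  # radial warp: compresses the center
    grid = torch.stack((xs * warp, ys * warp), dim=-1).clamp(-1.0, 1.0)
    grid = grid.unsqueeze(0).expand(img.shape[0], -1, -1, -1)
    return F.grid_sample(img, grid, align_corners=True)


class MultiScaleBlock(nn.Module):
    """Convolutions with several kernel sizes applied at every location,
    loosely analogous to multiple receptive field sizes per eccentricity."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # Concatenate the responses of all receptive field sizes.
        return torch.cat([b(x) for b in self.branches], dim=1)


# Usage example: foveated resampling followed by a multi-scale layer.
x = torch.randn(1, 3, 224, 224)
features = MultiScaleBlock(3, 16)(retinal_sample(x))

    In this sketch the retinal resampling discards fine peripheral detail while preserving the fovea, and the multi-branch convolution pools information over several spatial scales; both choices are plausible ways to blunt small, high-frequency adversarial perturbations.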