
    Manifesting enhanced cancellations in supergravity: integrands versus integrals

    Examples of "enhanced ultraviolet cancellations" with no known standard-symmetry explanation have been found in a variety of supergravity theories. By examining one- and two-loop examples in four- and five-dimensional half-maximal supergravity, we argue that enhanced cancellations in general cannot be exhibited prior to integration. In light of this, we explore reorganizations of integrands into parts that are manifestly finite and parts that have poor power counting but integrate to zero due to integral identities. At two loops we find that in the large loop-momentum limit the required integral identities follow from Lorentz and SL(2) relabeling symmetry. We carry out a nontrivial check at four loops showing that the identities generated in this way are a complete set. We propose that at L loops the combination of Lorentz and SL(L) symmetry is sufficient for displaying enhanced cancellations when they happen, whenever the theory is known to be ultraviolet finite up to (L−1) loops.

    Comment: 28 pages, 5 figures
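    As a schematic illustration of the reorganization described in this abstract (the notation below is illustrative, not taken from the paper), the loop integrand would be split as

    \mathcal{I}(\ell_1,\ell_2) \;=\; \mathcal{I}_{\rm finite}(\ell_1,\ell_2) \;+\; \sum_i c_i\,\Delta_i(\ell_1,\ell_2),
    \qquad
    \int d^D\ell_1\, d^D\ell_2\; \Delta_i(\ell_1,\ell_2) \;=\; 0,

    where \mathcal{I}_{\rm finite} is manifestly finite by power counting, while each \Delta_i has poor power counting but integrates to zero by an identity generated from Lorentz and SL(2) relabelings of the loop momenta.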

    Quantum convolutional data-syndrome codes

    We consider the performance of a simple quantum convolutional code in a fault-tolerant regime, using several syndrome measurement/decoding strategies and three different error models, including the circuit model.

    Comment: Abstract submitted for the 20th IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2019).

    Target-Tailored Source-Transformation for Scene Graph Generation

    Scene graph generation aims to provide a semantic and structural description of an image, denoting the objects (with nodes) and their relationships (with edges). The best performing works to date are based on exploiting the context surrounding objects or relations, e.g., by passing information among objects. In these approaches, transforming the representation of source objects is a critical step in extracting information for use by target objects. In this work, we argue that a source object should give each target object what that target needs, providing different information to different targets rather than contributing the same information to all of them. To achieve this goal, we propose a Target-Tailored Source-Transformation (TTST) method to efficiently propagate information among object proposals and relations. In particular, for a source object proposal that will contribute information to other target objects, we transform the source object feature into the target object feature domain by taking both the source and the target into account simultaneously. We further explore more powerful representations by integrating language priors with the visual context in the transformation for scene graph generation. By doing so, the target object is able to extract target-specific information from the source object and source relation to refine its representation. Our framework is validated on the Visual Genome benchmark and demonstrates state-of-the-art performance for scene graph generation. The experimental results show that object detection and visual relationship detection are mutually promoted by our method.
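    A minimal sketch of the target-tailored transformation idea described in this abstract, assuming object proposals are represented by fixed-size feature vectors; the module name, layer choices, and dimensions are illustrative and not the authors' implementation:

import torch
import torch.nn as nn

class TargetTailoredTransform(nn.Module):
    """Transform a source feature conditioned on the target feature,
    so each target receives a message tailored to it (illustrative sketch)."""
    def __init__(self, dim):
        super().__init__()
        # the transformation sees both source and target, hence 2*dim input
        self.transform = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, source, target):
        # source, target: (num_pairs, dim) features of the paired proposals
        joint = torch.cat([source, target], dim=-1)
        return self.transform(joint)  # message passed from source to target

# usage: the same source produces a different message for each target
dim = 512
ttst = TargetTailoredTransform(dim)
source = torch.randn(1, dim).expand(4, dim)  # one source paired with 4 targets
targets = torch.randn(4, dim)                # four different target proposals
messages = ttst(source, targets)             # (4, 512), tailored per target

    Conditioning the transformation on the concatenated (source, target) pair is one straightforward way to realize "give what the target needs": with a source-only transformation, every target would receive an identical message.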