
    On the topology of conformally compact Einstein 4-manifolds

    In this paper we study the topology of conformally compact Einstein 4-manifolds. When the conformal infinity has positive Yamabe invariant and the renormalized volume is also positive, we show that the conformally compact Einstein 4-manifold has at most finite fundamental group. Under the further assumption that the renormalized volume is relatively large, we conclude that the conformally compact Einstein 4-manifold is diffeomorphic to B^4 and its conformal infinity is diffeomorphic to S^3.
    Comment: 16 pages
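    For context only (a standard definition, not part of the original abstract): the renormalized volume of a conformally compact Einstein 4-manifold (X^4, g_+) is defined from the volume expansion associated with a special defining function r attached to a chosen representative of the conformal infinity; because the conformal infinity is 3-dimensional (odd), the expansion contains no logarithmic term:

    \operatorname{Vol}_{g_+}\bigl(\{\, r > \epsilon \,\}\bigr)
      \;=\; c_0\,\epsilon^{-3} + c_2\,\epsilon^{-1} + V + o(1),
      \qquad \epsilon \to 0 .

    The constant term V is the renormalized volume; in this dimension it does not depend on the chosen representative of the conformal infinity.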

    Some Progress in Conformal Geometry

    This is a survey of our current research on the theory of partial differential equations in conformal geometry. Our intention is to describe some of our current work in a rather brief and expository fashion. We are not giving a comprehensive survey of the subject, and the references cited here are not intended to be complete. We introduce a bubble tree structure to study the degeneration of a class of Yamabe metrics on Bach-flat manifolds satisfying some global conformal bounds on compact manifolds of dimension 4. As applications, we establish a gap theorem, a finiteness theorem for diffeomorphism type for this class, and a diameter bound for the σ_2-metrics in a class of conformal 4-manifolds. For conformally compact Einstein metrics we introduce an eigenfunction compactification. As a consequence we obtain some topological constraints in terms of renormalized volumes.
    Comment: This is a contribution to the Proceedings of the 2007 Midwest Geometry Conference in honor of Thomas P. Branson, published in SIGMA (Symmetry, Integrability and Geometry: Methods and Applications) at http://www.emis.de/journals/SIGMA
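    For context only (standard definitions, not part of the original abstract): on a Riemannian 4-manifold (M^4, g), the σ_2-curvature behind the σ_2-metrics mentioned above is the second elementary symmetric function of the eigenvalues of the Schouten tensor,

    A_g = \tfrac{1}{2}\Bigl(\operatorname{Ric}_g - \tfrac{R_g}{6}\,g\Bigr),
    \qquad
    \sigma_2(A_g) = \tfrac{1}{2}\Bigl((\operatorname{tr}_g A_g)^2 - |A_g|_g^2\Bigr).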

    MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

    Recent works have revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks, in which input examples are intentionally perturbed to fool DNNs. In this work, we revisit adversarial training, i.e., the training process that includes adversarial examples in the training dataset so as to improve a DNN's resilience to adversarial attacks. Our experiments show that different adversarial strengths, i.e., perturbation levels of the adversarial examples, have different working zones in resisting attacks. Based on this observation, we propose a multi-strength adversarial training method (MAT) that combines adversarial training examples of different strengths to defend against adversarial attacks. Two training structures - mixed MAT and parallel MAT - are developed to facilitate the tradeoff between training time and memory occupation. Our results show that MAT can substantially reduce the accuracy degradation of deep learning systems under adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN.
    Comment: 6 pages, 4 figures, 2 tables
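    The following is a minimal sketch of the "mixed" multi-strength idea described above, written in PyTorch with FGSM as the attack; the model, optimizer, and the particular epsilon values are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        """Craft FGSM adversarial examples: one step of size eps along the sign of the loss gradient."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

    def mixed_mat_step(model, optimizer, x, y, eps_list=(0.05, 0.1, 0.2)):
        """One 'mixed' training step: clean examples plus adversarial examples
        crafted at several perturbation strengths share a single loss."""
        model.eval()                      # freeze batch-norm statistics while crafting
        xs, ys = [x], [y]
        for eps in eps_list:              # hypothetical strengths; tune per dataset
            xs.append(fgsm(model, x, y, eps))
            ys.append(y)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(torch.cat(xs)), torch.cat(ys))
        loss.backward()
        optimizer.step()
        return loss.item()

    The parallel MAT structure mentioned in the abstract trades memory occupation against training time; its exact architecture is specific to the paper and is not sketched here.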