
    Constraining the Higgs sector from False Vacua in the Next-to-Minimal Supersymmetric Standard Model

    We study the mass, the mixing, and the ZZ coupling of the lightest Higgs boson in the next-to-minimal supersymmetric standard model (NMSSM). The vacuum structure of the Higgs potential is analyzed and new false vacua are identified. A significant region of parameter space can be excluded by requiring that the realistic vacuum be deeper than the false vacua, which results in constraints on the properties of the lightest Higgs boson.

    Comment: 23 pages, 8 figures
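    The vacuum-selection criterion described in the abstract can be stated schematically as follows (the notation here is assumed for illustration, not taken from the paper):

    ```latex
    V_{\text{eff}}\bigl(v_{\text{EW}}\bigr) \;<\; V_{\text{eff}}\bigl(v_{\text{false}}\bigr)
    \qquad \text{for all false vacua } v_{\text{false}},
    ```

    where $V_{\text{eff}}$ denotes the effective Higgs potential and $v_{\text{EW}}$ the realistic electroweak vacuum; parameter points violating this inequality for any false vacuum are excluded.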

    Understanding Likelihood of Normalizing Flow and Image Complexity through the Lens of Out-of-Distribution Detection

    Out-of-distribution (OOD) detection is crucial to safety-critical machine learning applications and has been studied extensively. While recent work has focused predominantly on classifier-based methods, research on deep generative model (DGM)-based methods has lagged behind. This disparity may be attributed to a perplexing phenomenon: DGMs often assign higher likelihoods to unknown OOD inputs than to their known training data. This paper focuses on explaining the mechanism underlying this phenomenon. We hypothesize that less complex images concentrate in high-density regions of the latent space, resulting in higher likelihood assignments by the normalizing flow (NF). We experimentally validate this hypothesis for five NF architectures, concluding that their likelihoods are untrustworthy. Additionally, we show that the problem can be alleviated by treating image complexity as an independent variable. Finally, we provide evidence that our hypothesis may also apply to another DGM, PixelCNN++.

    Comment: Accepted at AAAI-2
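    A minimal sketch of the idea of treating image complexity as its own variable. The compressed byte length is a common proxy for image complexity in this literature; the `adjusted_score` combination and the stubbed `nll_bits` input (which would come from a trained NF, not implemented here) are illustrative assumptions, not the paper's exact method.

    ```python
    import zlib
    import numpy as np

    def complexity_bits(image: np.ndarray) -> int:
        """Proxy for image complexity: length in bits of the
        zlib-compressed pixel buffer (a stand-in for the PNG/JPEG
        compressors often used as complexity estimators)."""
        return 8 * len(zlib.compress(image.tobytes(), level=9))

    def adjusted_score(nll_bits: float, image: np.ndarray) -> float:
        """Complexity-adjusted OOD score: subtract the complexity
        estimate from the negative log-likelihood so that trivially
        simple images no longer look 'in-distribution'.
        `nll_bits` would come from a trained normalizing flow
        (hypothetical here); a HIGHER score suggests OOD."""
        return nll_bits - complexity_bits(image)

    rng = np.random.default_rng(0)
    flat = np.zeros((32, 32), dtype=np.uint8)                # low complexity
    noise = rng.integers(0, 256, (32, 32), dtype=np.uint8)   # high complexity

    # A flat image compresses far better than noise.
    assert complexity_bits(flat) < complexity_bits(noise)
    ```

    At equal raw likelihood, the low-complexity image receives the higher (more OOD-like) adjusted score, which is the decorrelation effect the abstract describes.
    
    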

    Out-of-Distribution Detection with Reconstruction Error and Typicality-based Penalty

    The task of out-of-distribution (OOD) detection is vital for the safe and reliable operation of real-world applications. After likelihood-based detection was shown to fail in high dimensions, approaches based on the \emph{typical set} attracted attention; however, they have not yet achieved satisfactory performance. Beginning by presenting a failure case of the typicality-based approach, we propose a new reconstruction-error-based approach that employs a normalizing flow (NF). We further introduce a typicality-based penalty and, by incorporating it into the reconstruction error of the NF, propose a new OOD detection method, the penalized reconstruction error (PRE). Because PRE detects test inputs that lie off the in-distribution manifold, it effectively detects adversarial examples as well as OOD examples. We demonstrate the effectiveness of our method through evaluations on the natural image datasets CIFAR-10, TinyImageNet, and ILSVRC2012.

    Comment: Accepted at WACV 202
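    A toy sketch of the "reconstruction error plus typicality penalty" combination. The invertible map here is a fixed random linear transform standing in for a trained deep NF, and the top-k latent projection, `lam`, and `k` are illustrative assumptions; only the overall score structure follows the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy invertible "flow" f(x) = A @ x; a real NF would be a trained
    # deep invertible network with a learned base density.
    d = 8
    A = rng.standard_normal((d, d)) + d * np.eye(d)  # well-conditioned
    A_inv = np.linalg.inv(A)

    def nll(x: np.ndarray) -> float:
        """Negative log-likelihood under the flow, assuming a standard
        normal base density (change of variables with constant log-det)."""
        z = A @ x
        _, logdet = np.linalg.slogdet(A)
        return 0.5 * z @ z + 0.5 * d * np.log(2 * np.pi) - logdet

    def reconstruction_error(x: np.ndarray, k: int = 4) -> float:
        """Keep only the k largest-magnitude latent coordinates (a crude
        stand-in for projecting onto the in-distribution manifold),
        invert, and measure the squared reconstruction error."""
        z = A @ x
        keep = np.argsort(-np.abs(z)).argsort() < k
        z_proj = np.where(keep, z, 0.0)
        return float(np.sum((x - A_inv @ z_proj) ** 2))

    # Entropy estimate H from "training" samples: under the typical-set
    # view, in-distribution NLLs concentrate around H.
    train = rng.standard_normal((1000, d)) @ A_inv.T  # x such that z ~ N(0, I)
    H = float(np.mean([nll(x) for x in train]))

    def pre_score(x: np.ndarray, lam: float = 0.1, k: int = 4) -> float:
        """Penalized reconstruction error: reconstruction error plus a
        typicality penalty |NLL(x) - H|; larger means more OOD-like."""
        return reconstruction_error(x, k) + lam * abs(nll(x) - H)
    ```

    An off-manifold input scores high through the reconstruction term even when its raw likelihood is unremarkable, while an atypically likely input is caught by the |NLL - H| penalty; this is the intuition behind PRE catching both OOD and adversarial examples.
    
    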