
    C*-extreme Entanglement Breaking Maps On Operator Systems

    Let $\mathcal{E}$ denote the set of all unital entanglement breaking (UEB) linear maps defined on an operator system $\mathcal{S} \subset M_d$ and mapping into $M_n$. The set $\mathcal{E}$ is convex not only in the classical sense but also in a quantum sense: it is $C^*$-convex. The main objective of this article is to describe the $C^*$-extreme points of $\mathcal{E}$. By observing that every EB map defined on the operator system $\mathcal{S}$ dilates to a positive map with commutative range and also extends to an EB map on $M_d$, we show that the $C^*$-extreme points of $\mathcal{E}$ are precisely the UEB maps that are maximal in the sense of Arveson (\cite{A} and \cite{A69}), and that they are exactly the linear extreme points of $\mathcal{E}$ with commutative range. We also determine their explicit structure, thereby obtaining operator system generalizations of the analogous structure theorem and the Krein-Milman type theorem given in \cite{BDMS}. As a consequence, we show that $C^*$-extreme UEB maps in $\mathcal{E}$ extend to $C^*$-extreme UEB maps on the full algebra. Finally, we obtain an improved version of the main result in \cite{BDMS}, which contains various characterizations of $C^*$-extreme UEB maps between the algebras $M_d$ and $M_n$.

    Comment: This is part of the second named author's ongoing doctoral thesis work. Comments and feedback are welcome.
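    For context, the quantum notion of convexity used here replaces scalar coefficients with matrix ones. The following is the standard definition of $C^*$-convex combinations and $C^*$-extreme points (background material, not taken from the paper itself), written for maps from $\mathcal{S}$ into $M_n$:

    ```latex
    % A C^*-convex combination of maps \Phi_1, ..., \Phi_k : S -> M_n replaces
    % scalar weights with matrix coefficients T_i in M_n:
    \[
      \Phi(x) \;=\; \sum_{i=1}^{k} T_i^{*}\, \Phi_i(x)\, T_i ,
      \qquad \sum_{i=1}^{k} T_i^{*} T_i = I_n .
    \]
    % \Phi is a C^*-extreme point of a C^*-convex set \mathcal{E} if every such
    % decomposition with invertible coefficients T_i is trivial, i.e. each \Phi_i
    % is unitarily equivalent to \Phi:
    \[
      \Phi_i(x) \;=\; U_i^{*}\, \Phi(x)\, U_i
      \quad \text{for some unitary } U_i \in M_n .
    \]
    ```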

    Towards Improved Input Masking for Convolutional Neural Networks

    The ability to remove features from the input of machine learning models is important for understanding and interpreting model predictions. However, this is non-trivial for vision models, since masking out parts of the input image typically causes large distribution shifts: the baseline color used for masking (typically grey or black) is out of distribution. Furthermore, the shape of the mask itself can contain unwanted signals which can be used by the model for its predictions. Recently, there has been some progress in mitigating this issue (called missingness bias) in image masking for vision transformers. In this work, we propose a new masking method for CNNs, which we call layer masking, that largely reduces the missingness bias caused by masking. Intuitively, layer masking applies a mask to the intermediate activation maps so that the model only processes the unmasked input. We show that our method (i) is able to eliminate or minimize the influence of the mask shape or color on the output of the model, and (ii) is much better than replacing the masked region with black or grey for input perturbation based interpretability techniques like LIME. Thus, layer masking is much less affected by missingness bias than other masking strategies. We also demonstrate how the shape of the mask may leak information about the class, thus affecting estimates of model reliance on class-relevant features derived from input masking. Furthermore, we discuss the role of data augmentation techniques in tackling this problem, and argue that they are not sufficient for preventing model reliance on mask shape. The code for this project is publicly available at https://github.com/SriramB-98/layer_masking

    Comment: 29 pages, 19 figures. Accepted at ICCV 2023.
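    To make the mechanism concrete, below is a minimal PyTorch sketch of the layer-masking idea: a binary mask is applied to the input and then re-applied (after downsampling) to every intermediate activation map, so masked pixels never feed into later layers. This is an illustrative reconstruction, not the authors' implementation (see the linked repository for that); in particular, the average-pool-and-threshold rule for downsampling the mask is an assumption.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def masked_forward(layers, x, mask):
        """Sketch of layer masking. `mask` is 1 where the input is kept and 0
        where it is masked, with shape (N, 1, H, W) matching the input x.
        The mask is re-applied after every layer so that masked-out regions
        do not influence downstream activations."""
        for layer in layers:
            x = layer(x * mask)  # zero out masked regions before the layer
            if x.shape[-2:] != mask.shape[-2:]:
                # Track spatial resolution changes (pooling, strides) by
                # downsampling the mask; pool-and-threshold is an assumption.
                mask = F.adaptive_avg_pool2d(mask, x.shape[-2:])
                mask = (mask > 0.5).float()
            x = x * mask  # mask the layer's output as well
        return x

    # Usage: a toy conv stack and a mask hiding the left half of the image.
    layers = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    )
    x = torch.randn(1, 3, 32, 32)
    mask = torch.ones(1, 1, 32, 32)
    mask[..., :, :16] = 0.0
    out = masked_forward(list(layers), x, mask)
    ```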