
    Shifting numbers in triangulated categories via bounded t-structures

    The shifting numbers measure the asymptotic amount by which an endofunctor of a triangulated category translates objects inside the category, and are analogous to the Poincaré translation numbers widely used in dynamical systems. One way to define these invariants is via the phase functions of Bridgeland stability conditions. We show in this short note that the shifting numbers can also be defined via bounded t-structures. In particular, the full package of a stability condition (a bounded t-structure together with a central charge on a charge lattice) is not necessary for computing the shifting numbers.
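    For orientation, the Poincaré translation number that the abstract invokes as an analogue is the classical asymptotic displacement of a lift $f:\mathbb{R}\to\mathbb{R}$ of a circle homeomorphism (a standard definition, quoted here for context rather than drawn from the paper):

    \tau(f) \;=\; \lim_{n\to\infty} \frac{f^{n}(x) - x}{n},

    a limit that exists and is independent of the base point $x$. The shifting numbers play the analogous role for iterates of an endofunctor, recording the asymptotic number of shifts it applies.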

    Systolic inequalities for K3 surfaces via stability conditions

    We introduce the notions of categorical systoles and categorical volumes of Bridgeland stability conditions on triangulated categories. We prove that for any projective K3 surface, there exists a constant C depending only on the rank and discriminant of its Picard group, such that $\mathrm{sys}(\sigma)^2 \leq C\cdot\mathrm{vol}(\sigma)$ holds for any stability condition on the derived category of coherent sheaves on the K3 surface. This is an algebro-geometric generalization of a classical systolic inequality on two-tori. We also discuss applications of this inequality in symplectic geometry.
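    For comparison, the classical two-torus statement referenced in the abstract is Loewner's inequality (a standard fact quoted here for context, not part of the abstract): for any Riemannian metric $g$ on the torus $T^2$,

    \mathrm{sys}(T^2, g)^2 \;\leq\; \frac{2}{\sqrt{3}}\,\mathrm{area}(T^2, g),

    with equality for the flat hexagonal torus. The categorical inequality above plays the analogous role, with $\mathrm{sys}(\sigma)$ and $\mathrm{vol}(\sigma)$ attached to a stability condition $\sigma$ in place of the metric systole and area.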

    Multilabel Consensus Classification

    In the era of big data, large amounts of noisy and incomplete data can be collected from multiple sources for prediction tasks. Combining multiple models or data sources helps counteract the effects of low data quality and the bias of any single model or data source, and can thus improve the robustness and performance of predictive models. Due to privacy, storage, and bandwidth considerations, in certain circumstances one has to combine the predictions from multiple models or data sources to obtain the final predictions without accessing the raw data. Consensus-based prediction combination algorithms are effective in such situations. However, current research on prediction combination focuses on the single-label setting, where an instance has one and only one label. Data nowadays are often multilabeled, so that more than one label has to be predicted at the same time. Directly applying existing prediction combination methods to multilabel settings can lead to degraded performance. In this paper, we address the challenges of combining predictions from multiple multilabel classifiers and propose two novel algorithms, MLCM-r (MultiLabel Consensus Maximization for ranking) and MLCM-a (MLCM for microAUC). These algorithms capture the label correlations that are common in multilabel classification and optimize the corresponding performance metrics. Experimental results on popular multilabel classification tasks verify the theoretical analysis and the effectiveness of the proposed methods.
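    As a rough illustration of the consensus idea (a minimal sketch only, not the MLCM-r or MLCM-a algorithms of the paper, which additionally model label correlations), the simplest way to combine multilabel predictions without accessing the raw data is to average the per-label confidence scores produced by each base model and threshold the result. All names below are hypothetical.

    import numpy as np

    def consensus_multilabel(score_matrices, threshold=0.5):
        # score_matrices: list of (n_samples, n_labels) arrays of per-label
        # confidence scores in [0, 1], one array per base model.
        # Returns a binary (n_samples, n_labels) prediction matrix.
        stacked = np.stack(score_matrices, axis=0)  # (n_models, n_samples, n_labels)
        consensus = stacked.mean(axis=0)            # unweighted average over models
        return (consensus >= threshold).astype(int)

    # Example: three base models scoring two samples over three labels.
    scores = [np.array([[0.9, 0.2, 0.6], [0.1, 0.8, 0.4]]),
              np.array([[0.7, 0.4, 0.5], [0.2, 0.9, 0.6]]),
              np.array([[0.8, 0.1, 0.7], [0.3, 0.7, 0.5]])]
    print(consensus_multilabel(scores))  # rows: [1 0 1] and [0 1 1]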