
    Localizing Object by Using only Image-level Labels

    Weakly Supervised Object Localization (WSOL) has attracted increasing attention in recent years; it aims to locate objects using only incomplete labels. Given the cost of annotation, in particular of ground-truth bounding-box labels, and the training speed of detection tasks, improving the performance of WSOL, which requires only image-level labels, is highly desirable. Most current methods rely on the Class Activation Map (CAM), which highlights only the most discriminative parts of an object rather than the entire target. A common way to address this limitation is to hide the most salient regions and force the model to learn the other parts of the target. The main goal of this thesis is to remove these limitations of current WSOL work and improve localization performance. In chapter 3, we design an attention-based selection strategy that dynamically hides feature maps. In chapter 4, a new hiding method is proposed to further improve localization performance. In chapter 5, we propose three methods that address these issues at the CAM level. Our methods are evaluated on the CUB-200-2011 and ILSVRC 2016 datasets. Experiments demonstrate that the proposed methods work well and significantly improve localization performance.
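
    For context, a Class Activation Map is computed by weighting the final convolutional feature maps with the classifier weights of the target class. The following is a minimal PyTorch-style sketch of this computation; the shapes and the thresholding note are illustrative assumptions, not the thesis code.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx):
    """Minimal CAM sketch (assumes a GAP + linear classifier head).

    features:  (C, H, W) feature maps from the last conv layer
    fc_weight: (num_classes, C) weights of the final linear layer
    class_idx: target class whose activation map we want
    """
    # Weight each channel by the classifier weight for the chosen class,
    # then sum over channels to get a single (H, W) map.
    weights = fc_weight[class_idx]                             # (C,)
    cam = torch.einsum("c,chw->hw", weights, features)
    cam = F.relu(cam)                                          # keep positive evidence only
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
    return cam

# A bounding box is then typically obtained by thresholding the map
# (e.g. cam > 0.2) and taking the largest connected component.
```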

    An SVP attack on Vortex

    In [BS22], the authors proposed a lattice-based hash function that is useful for building zero-knowledge proofs with superior performance. In this short note we analyze the underlying lattice problem via the classic shortest vector problem, and show that 2 out of the 15 proposed parameter sets for this hash function do not achieve the claimed security.
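
    As a reminder (our framing, not necessarily the note's own analysis), the shortest vector problem referenced above can be stated as follows, with concrete hardness commonly measured through a core-SVP sieving cost model:

```latex
% SVP: given a basis B of a lattice L(B), find a shortest nonzero
% lattice vector, whose norm is the first minimum \lambda_1(L(B)).
\mathrm{SVP}:\ \text{given } B \in \mathbb{Z}^{m \times n},\ \text{find }
v \in L(B) \setminus \{0\} \text{ such that } \|v\| = \lambda_1(L(B)).

% A widely used classical "core-SVP" estimate (our addition, not
% necessarily the model used in the note): sieving in block size \beta
% costs roughly
2^{0.292\,\beta} \text{ operations.}
```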

    LLL for ideal lattices: re-evaluation of the security of Gentry-Halevi's FHE scheme

    The LLL algorithm, named after its inventors Lenstra, Lenstra, and Lovász, is one of the most popular lattice reduction algorithms in the literature. In this paper, we propose the first variant of the LLL algorithm dedicated to ideal lattices, namely, the iLLL algorithm. Our iLLL algorithm takes advantage of the fact that within LLL procedures, previously reduced vectors can be re-used for further reductions. Using this method, we prove that iLLL is at least as fast as the LLL algorithm and outputs a basis of the same quality. We also provide a heuristic approach that accelerates the re-use method. As a result, in practice, our algorithm can be approximately eight times faster than the LLL algorithm for typical scenarios where the lattice dimension is between 100 and 150. When applying our algorithm to Gentry-Halevi's fully homomorphic encryption challenges, we are able to solve the toy challenge within 24 days using a 2.66GHz CPU, whereas the classical LLL algorithm takes 32 days. Further, assuming a 4.0GHz CPU, we predict that the basis for the small challenge can be reduced in 15.7 years, while the previous best prediction was 45 years.
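
    To make the re-use idea concrete, it helps to recall what baseline LLL does. Below is a minimal, unoptimized textbook LLL sketch in Python using exact rational arithmetic (not the paper's iLLL, and not performance-representative); it shows the size-reduction and Lovász-swap loop whose repeated Gram-Schmidt work is what vector re-use can save.

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization over the rationals (no normalization)."""
    ortho, mu = [], []
    for i, b in enumerate(basis):
        coeffs = []
        v = [Fraction(x) for x in b]
        for j in range(i):
            m = sum(a * c for a, c in zip(b, ortho[j])) / sum(c * c for c in ortho[j])
            coeffs.append(m)
            v = [x - m * c for x, c in zip(v, ortho[j])]
        ortho.append(v)
        mu.append(coeffs)
    return ortho, mu

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer basis (list of row vectors)."""
    basis = [list(map(int, b)) for b in basis]
    k = 1
    while k < len(basis):
        # Size-reduce b_k against b_{k-1}, ..., b_0 (recomputing GS each time
        # for clarity; real implementations update the mu coefficients instead).
        for j in range(k - 1, -1, -1):
            _, mu = gram_schmidt(basis)
            q = round(mu[k][j])
            if q != 0:
                basis[k] = [x - q * y for x, y in zip(basis[k], basis[j])]
        ortho, mu = gram_schmidt(basis)
        # Lovász condition: advance if it holds, otherwise swap and step back.
        lhs = sum(c * c for c in ortho[k])
        rhs = (delta - mu[k][k - 1] ** 2) * sum(c * c for c in ortho[k - 1])
        if lhs >= rhs:
            k += 1
        else:
            basis[k], basis[k - 1] = basis[k - 1], basis[k]
            k = max(k - 1, 1)
    return basis

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```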

    Benchmarking Omni-Vision Representation through the Lens of Visual Realms

    Though impressive performance has been achieved in specific visual realms (e.g. faces, dogs, and places), an omni-vision representation that generalizes to many natural visual domains is highly desirable. However, existing benchmarks are biased and inefficient for evaluating such a representation: they either include only a few specific realms, or cover most realms at the expense of subsuming numerous datasets with extensive realm overlap. In this paper, we propose the Omni-Realm Benchmark (OmniBenchmark). It includes 21 realm-wise datasets with 7,372 concepts and 1,074,346 images. Without semantic overlap, these datasets cover most visual realms both comprehensively and efficiently. In addition, we propose a new supervised contrastive learning framework, namely Relational Contrastive learning (ReCo), for a better omni-vision representation. Beyond pulling two instances of the same concept closer, as in the typical supervised contrastive learning framework, ReCo also pulls two instances from the same semantic realm closer, encoding the semantic relation between concepts and facilitating omni-vision representation learning. We benchmark ReCo and other advances in omni-vision representation that differ in architecture (from CNNs to transformers) and in learning paradigm (from supervised learning to self-supervised learning) on OmniBenchmark. We illustrate the superiority of ReCo over other supervised contrastive learning methods and reveal multiple practical observations to facilitate future research.

    Comment: In ECCV 2022; The project page at https://zhangyuanhan-ai.github.io/OmniBenchmar
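
    To sketch the idea of pulling same-realm instances closer in addition to same-concept instances, here is a hypothetical PyTorch implementation of a two-level supervised contrastive loss; the function name, the realm weighting, and all details are our assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def two_level_supcon_loss(z, concept, realm, tau=0.1, realm_weight=0.5):
    """Hypothetical sketch of a two-level (concept + realm) supervised
    contrastive loss.

    z:       (B, D) L2-normalized embeddings
    concept: (B,)   fine-grained class labels
    realm:   (B,)   coarse semantic-realm labels
    """
    sim = z @ z.t() / tau                          # pairwise similarities
    mask_self = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask_self, float("-inf"))
    log_prob = F.log_softmax(sim, dim=1)           # log p(j | i) over other samples

    def masked_mean(pos_mask):
        pos_mask = (pos_mask & ~mask_self).float()
        count = pos_mask.sum(1).clamp(min=1)
        return -(log_prob * pos_mask).sum(1) / count

    concept_pos = concept.unsqueeze(0) == concept.unsqueeze(1)
    realm_pos = realm.unsqueeze(0) == realm.unsqueeze(1)
    # Pull same-concept pairs together and, more weakly, same-realm pairs.
    loss = masked_mean(concept_pos) + realm_weight * masked_mean(realm_pos)
    return loss.mean()
```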

    Jolt-b: recursion-friendly Jolt with Basefold commitment

    The authors of Jolt [AST24] pioneered a unique method for creating zero-knowledge virtual machines, known as the lookup singularity. This technique extensively uses lookup tables to create virtual machine circuits. Although Jolt's performance is twice as efficient as the previous state of the art, there is potential for further enhancement. The initial release of Jolt uses Spartan [Set20] and Hyrax [WTs+18] as its backend, leading to two constraints. First, Hyrax employs the Pedersen commitment to build inner product arguments, which requires elliptic curve operations. Second, verifying a Hyrax commitment takes square-root time O(\sqrt{N}) relative to the circuit size N. This makes the recursive verification of a Jolt proof impractical, as the verification circuit would need to execute all the Hyrax verification logic in-circuit. A later version of Jolt includes Zeromorph [KT23] and HyperKZG as its commitment backend, making the system recursion-friendly, as the recursive verifier now only needs to perform O(\log N) operations, but at the expense of requiring a trusted setup. Our scheme, Jolt-b, addresses these issues by transitioning to the extension field of the Goldilocks field and using the Basefold commitment scheme [ZCF23], which has O(\log^2 N) verifier time. This mirrors the modifications Plonky2 made to the original Plonk [GWC19]: it transitions from elliptic-curve fields to the Goldilocks field, and it replaces the EC-based commitment scheme with an encoding-based commitment scheme. We implemented Jolt-b, along with an optimized version of the Basefold scheme. Our benchmarks show that, at the cost of a 2.47x slowdown for the prover, we achieve recursion friendliness for the original Jolt. Compared with other recursion-friendly Jolt variants, our scheme is 1.24x and 1.52x faster in prover time than the Zeromorph and HyperKZG variants of Jolt, respectively.
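
    For readers unfamiliar with it, the Goldilocks field is the prime field with p = 2^64 - 2^32 + 1, attractive because every element fits in one 64-bit word and reduction is cheap. A minimal Python sketch of its arithmetic (illustrative only, not the Jolt-b code) follows.

```python
# The Goldilocks prime: p = 2^64 - 2^32 + 1. Elements fit in a single
# 64-bit word, which is why proof systems like Plonky2 favor this field.
P = 2**64 - 2**32 + 1

def gl_add(a, b):
    return (a + b) % P

def gl_mul(a, b):
    return (a * b) % P

def gl_inv(a):
    # Fermat's little theorem: a^(p-2) = a^(-1) mod p, for a != 0.
    return pow(a, P - 2, P)

# Quick sanity check.
x = 123456789
assert gl_mul(x, gl_inv(x)) == 1
```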

    Optimizing polynomial convolution for NTRUEncrypt

    NTRUEncrypt is one of the most promising candidates for quantum-safe cryptography. In this paper, we focus on the NTRU743 parameter set. We give a report on all known attacks against this parameter set and show that it delivers 256 bits of security against classical attackers and 128 bits of security against quantum attackers. We then present a parameter-dependent optimization using a tailored hierarchy of multiplication algorithms as well as the Intel AVX2 instructions, and show that this optimization is constant-time. Our implementation is two to three times faster than the reference implementation of NTRUEncrypt.
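
    For context, NTRU multiplication is cyclic convolution in Z_q[x]/(x^N - 1). The schoolbook version below (a Python sketch with toy parameters, not the paper's AVX2 code) is the O(N^2) baseline that a tailored hierarchy of algorithms, such as Karatsuba and Toom-Cook layers, is designed to beat.

```python
def ntru_convolve(a, b, n, q):
    """Schoolbook cyclic convolution in Z_q[x]/(x^n - 1).

    a, b: coefficient lists of length n (constant term first).
    This is the O(n^2) baseline; optimized implementations layer
    Karatsuba/Toom-Cook on top and vectorize the inner loops, taking
    care to avoid data-dependent branches so the code stays constant-time.
    """
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

# Toy example (real NTRU743 uses n = 743 and a larger modulus q).
print(ntru_convolve([1, 2, 0], [3, 0, 1], n=3, q=7))  # -> [5, 6, 1]
```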

    Bandersnatch: a fast elliptic curve built over the BLS12-381 scalar field

    In this short note, we introduce Bandersnatch, a new elliptic curve built over the BLS12-381 scalar field. The curve is equipped with an efficient endomorphism, allowing a fast scalar multiplication algorithm. Our benchmark shows that scalar multiplication is 42% faster than on the Jubjub curve, which has similar properties. Nonetheless, Bandersnatch does not provide any performance improvement over Jubjub for either rank-1 constraint systems (R1CS) or multi-scalar multiplications.
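
    The speedup comes from a GLV-style trick: with an endomorphism phi acting as multiplication by lambda, a scalar k is split as k = k1 + k2*lambda with k1 and k2 about half the size, and [k]P = [k1]P + [k2]phi(P) needs roughly half the doublings. A toy Python sketch over the additive group of integers mod n (illustrative numbers, not Bandersnatch parameters):

```python
def scalar_mult(k, P, n):
    """Toy 'scalar multiplication' in the additive group Z_n (stands in for [k]P)."""
    return (k * P) % n

def glv_mult(k, P, n, lam):
    """GLV-style multiplication: split k = k1 + k2*lam with half-size k1, k2,
    then combine [k1]P and [k2]phi(P). On a real curve phi is one cheap map,
    and the two small multiplications share their doublings (Straus-Shamir)."""
    k1, k2 = k % lam, k // lam          # both roughly half the bit length of k
    phi_P = scalar_mult(lam, P, n)      # stand-in for the cheap endomorphism
    return (scalar_mult(k1, P, n) + scalar_mult(k2, phi_P, n)) % n

# Illustrative numbers only (not Bandersnatch parameters).
n, lam, P, k = 1009, 31, 5, 777
assert glv_mult(k, P, n, lam) == scalar_mult(k, P, n)
```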