
    The Weight Distributions of a Class of Cyclic Codes with Three Nonzeros over F3

    Cyclic codes have efficient encoding and decoding algorithms, and their decoding error probability and undetected error probability are usually bounded by or derived from their weight distributions. Most research on determining the weight distributions of cyclic codes with few nonzeros relies on quadratic forms and exponential sums, but is limited to low moments of the exponential sum. In this paper, we apply higher moments of the exponential sum to determine the weight distributions of a class of ternary cyclic codes with three nonzeros, combining quadratic forms with MacWilliams' identities. We also highlight the use of the computer algebra system Magma for investigating these higher moments. Finally, the result is verified on one example using Matlab.
    Comment: 10 pages, 3 tables
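    For orientation, the weight distribution referred to above is simply the list A_0, ..., A_n counting the codewords of each Hamming weight. The following is a minimal brute-force Python sketch, not the paper's exponential-sum method: it enumerates a toy ternary cyclic code from a generator polynomial and tallies its weight distribution. The length-4 code with g(x) = 1 + x^2 is a hypothetical example chosen only for illustration, not one of the codes studied in the paper.

```python
# Brute-force weight distribution of a small cyclic code over F_q (here q = 3).
from itertools import product

def poly_mul_mod(a, b, n, q=3):
    """Multiply polynomials a and b over F_q, reducing modulo x^n - 1 (cyclic wrap)."""
    c = [0] * n
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        for j, bj in enumerate(b):
            c[(i + j) % n] = (c[(i + j) % n] + ai * bj) % q
    return c

def weight_distribution(g, n, q=3):
    """Return [A_0, ..., A_n] for the cyclic code of length n generated by g over F_q.

    Assumes g(x) divides x^n - 1, so the code has dimension k = n - deg(g).
    """
    k = n - (len(g) - 1)
    dist = [0] * (n + 1)
    for msg in product(range(q), repeat=k):
        codeword = poly_mul_mod(list(msg), g, n, q)
        dist[sum(1 for c in codeword if c != 0)] += 1
    return dist

# Toy example: g(x) = 1 + x^2 divides x^4 - 1 over F_3.
print(weight_distribution([1, 0, 1], n=4))   # -> [1, 0, 4, 0, 4]
```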

    Deep Learning Face Attributes in the Wild

    Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags but pre-trained differently. LNet is pre-trained on massive general object categories for face localization, while ANet is pre-trained on massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art by a large margin, but also reveals valuable facts about learning face representations. (1) It shows how the performance of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images strongly indicate face locations. This fact enables training LNet for face localization with only image-level annotations, without the face bounding boxes or landmarks that existing attribute recognition works require. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and that such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.
    Comment: To appear in the International Conference on Computer Vision (ICCV) 2015
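    As a rough illustration of the cascade described above, the PyTorch sketch below wires a localization network to an attribute network: the first predicts a face box from the whole image, the second predicts multi-label attribute logits from the cropped region. The tiny backbones, box parameterization, crop size, and 40-attribute output are illustrative assumptions, not the paper's actual LNet/ANet design or training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallConvNet(nn.Module):
    """Tiny stand-in backbone; the paper's networks are far larger."""
    def __init__(self, out_dim):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class AttributeCascade(nn.Module):
    """Localize a face window with a first CNN, then predict attributes with a second."""
    def __init__(self, num_attributes=40):
        super().__init__()
        self.lnet = SmallConvNet(out_dim=4)               # normalized box: cx, cy, w, h
        self.anet = SmallConvNet(out_dim=num_attributes)  # multi-label attribute logits

    def forward(self, images):
        # Stage 1: predict one normalized face box per image.
        boxes = torch.sigmoid(self.lnet(images))          # (B, 4) in [0, 1]
        _, _, H, W = images.shape
        crops = []
        for img, (cx, cy, w, h) in zip(images, boxes):
            x0 = int((cx - w / 2).clamp(0, 1 - 1e-3) * W)
            y0 = int((cy - h / 2).clamp(0, 1 - 1e-3) * H)
            x1 = max(x0 + 1, int((cx + w / 2).clamp(0, 1) * W))
            y1 = max(y0 + 1, int((cy + h / 2).clamp(0, 1) * H))
            crop = img[:, y0:y1, x0:x1].unsqueeze(0)
            crops.append(F.interpolate(crop, size=(64, 64), mode="bilinear",
                                       align_corners=False))
        # Stage 2: attribute logits from the cropped face regions.
        return self.anet(torch.cat(crops, dim=0))

model = AttributeCascade()
logits = model(torch.randn(2, 3, 128, 128))   # -> (2, 40) attribute logits
```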