
    Stopping Set Distributions of Some Linear Codes

    Stopping sets and the stopping set distribution of a low-density parity-check code determine the performance of the code under iterative decoding over a binary erasure channel (BEC). Let C be a binary [n, k] linear code with parity-check matrix H, where the rows of H may be dependent. A stopping set S of C with parity-check matrix H is a subset of the column indices of H such that the restriction of H to S does not contain a row of weight one. The stopping set distribution {T_i(H)}_{i=0}^n enumerates the number of stopping sets of size i of C with parity-check matrix H. Note that stopping sets and the stopping set distribution depend on the parity-check matrix H of C. Let H^* be the parity-check matrix of C formed by all the non-zero codewords of its dual code C^⊥. A parity-check matrix H is called BEC-optimal if T_i(H) = T_i(H^*) for i = 0, 1, ..., n and H has the smallest number of rows. On the BEC, the iterative decoder of C with a BEC-optimal parity-check matrix is an optimal decoder with much lower decoding complexity than the exhaustive decoder. In this paper, we study stopping sets, stopping set distributions and BEC-optimal parity-check matrices of binary linear codes. Using finite geometry in combinatorics, we obtain BEC-optimal parity-check matrices and then determine the stopping set distributions for the Simplex codes, the Hamming codes, the first-order Reed-Muller codes and the extended Hamming codes.
    Comment: 33 pages, submitted to IEEE Trans. Inform. Theory, Feb. 201
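The stopping set definition above is easy to check by brute force on a small code. The sketch below enumerates T_i(H) for the standard 3-row parity-check matrix of the [7,4] binary Hamming code (an illustrative choice, not a matrix taken from the paper); a set of columns S is a stopping set exactly when no row of H restricted to S has weight one.

```python
from itertools import combinations

# Parity-check matrix of the [7,4] binary Hamming code: the columns are the
# seven non-zero vectors of F_2^3. Illustrative only, not the paper's H^*.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def is_stopping_set(H, S):
    """S is a stopping set iff no row of H restricted to S has weight one."""
    return all(sum(row[j] for j in S) != 1 for row in H)

def stopping_set_distribution(H):
    """Brute-force T_i(H): count the stopping sets of each size i."""
    n = len(H[0])
    T = [0] * (n + 1)
    for i in range(n + 1):
        for S in combinations(range(n), i):
            if is_stopping_set(H, S):
                T[i] += 1
    return T

T = stopping_set_distribution(H)
print(T)  # T[0] = 1 (the empty set), T[1] = T[2] = 0 since all columns are distinct and non-zero
```

Note that every codeword support is a stopping set (each row restricted to it has even weight), so the seven weight-3 codewords of this code all appear among the size-3 stopping sets; with this 3-row H there are additional size-3 stopping sets that H^* would rule out, which is exactly why T_i(H) can exceed T_i(H^*).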

    Multi-scale Deep Learning Architectures for Person Re-identification

    Person Re-identification (re-id) aims to match people across non-overlapping camera views in a public space. It is a challenging problem because many people captured in surveillance videos wear similar clothes. Consequently, the differences in their appearance are often subtle and only detectable at the right locations and scales. Existing re-id models, particularly the recently proposed deep learning based ones, match people at a single scale. In contrast, in this paper, a novel multi-scale deep learning model is proposed. Our model is able to learn deep discriminative feature representations at different scales and automatically determine the most suitable scales for matching. The importance of different spatial locations for extracting discriminative features is also learned explicitly. Experiments are carried out to demonstrate that the proposed model outperforms the state-of-the-art on a number of benchmarks.
    Comment: 9 pages, 3 figures, accepted by ICCV 201
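The core idea of the abstract, computing features at several scales and weighting the scales during matching, can be sketched without any deep learning machinery. The toy code below (a hypothetical illustration, not the paper's architecture; the block-average "features" and uniform weights stand in for learned components) builds a multi-scale descriptor from downsampled copies of an image.

```python
import numpy as np

def features_at_scale(img, factor):
    """Crude stand-in for a learned feature extractor: block-average the
    image at the given downsampling factor and flatten the result."""
    h, w = img.shape
    img = img[: h - h % factor, : w - w % factor]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3)).ravel()

def multi_scale_descriptor(img, scales=(1, 2, 4), weights=None):
    """Concatenate per-scale features, weighted per scale. A real model
    would learn the weights to emphasise the most discriminative scales."""
    feats = [features_at_scale(img, s) for s in scales]
    if weights is None:
        weights = np.ones(len(scales)) / len(scales)  # uniform placeholder
    return np.concatenate([w * f for w, f in zip(weights, feats)])

img = np.arange(64, dtype=float).reshape(8, 8)
d = multi_scale_descriptor(img)
print(d.shape)  # 64 + 16 + 4 = 84 features across the three scales
```

Matching two people would then reduce to comparing their descriptors (e.g. by Euclidean distance), with the learned scale weights deciding how much each scale contributes.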

    Tucker Bilinear Attention Network for Multi-scale Remote Sensing Object Detection

    Object detection on VHR remote sensing images plays a vital role in applications such as urban planning, land resource management, and rescue missions. The large-scale variation of remote-sensing targets is one of the main challenges in VHR remote-sensing object detection. Existing methods improve the detection accuracy of high-resolution remote sensing objects by improving the structure of feature pyramids and adopting different attention modules. However, small targets are still seriously missed due to the loss of key detail features, and there remains room for improvement in multi-scale feature fusion and balancing. To address this issue, this paper proposes two novel modules, Guided Attention and Tucker Bilinear Attention, which are applied to the early-fusion and late-fusion stages respectively. The former can effectively retain clean key detail features, and the latter can better balance features through semantic-level correlation mining. Based on these two modules, we build a new multi-scale remote sensing object detection framework. No bells and whistles. The proposed method largely improves the average precisions of small objects and achieves the highest mean average precisions compared with 9 state-of-the-art methods on DOTA, DIOR, and NWPU VHR-10. Code and models are available at https://github.com/Shinichict/GTNet.
    Comment: arXiv admin note: text overlap with arXiv:1705.06676, arXiv:2209.13351 by other author
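The general pattern behind attention-guided early fusion, using a coarse semantic map to gate fine-grained features so that detail survives the fusion, can be sketched as follows. This is a hypothetical illustration only: the paper's Guided Attention and Tucker Bilinear Attention modules are not reproduced, and the sigmoid gate and residual connection here are assumptions for the sketch.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def guided_fusion(fine, coarse):
    """Gate fine features with an attention map derived from coarse
    semantics; a residual path keeps detail where attention is low."""
    guide = upsample2x(coarse)                                   # match spatial size
    attn = 1.0 / (1.0 + np.exp(-guide.mean(axis=0, keepdims=True)))  # sigmoid gate in (0, 1)
    return fine * attn + fine

rng = np.random.default_rng(0)
fine = rng.standard_normal((8, 16, 16))    # high-resolution, detail-rich level
coarse = rng.standard_normal((8, 8, 8))    # low-resolution, semantics-rich level
fused = guided_fusion(fine, coarse)
print(fused.shape)  # (8, 16, 16): fused map keeps the fine level's resolution
```

In a real feature pyramid this gating would be applied level by level during early fusion, with the late-fusion stage then mining cross-channel correlations among the gated maps.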