Fault-propagation Pattern Based DFA on SPN Structure Block Ciphers using Bitwise Permutation, with Application to PRESENT and PRINTcipher
This paper proposes FPP-DFA, a novel fault-propagation-pattern-based differential fault analysis method, and demonstrates its feasibility on SPN-structure block ciphers that use bitwise permutation, such as PRESENT and PRINTcipher. Simulated experiments show that, under a fault model that injects a one-nibble fault into the substitution layer of round r-2, on average 8 and 16 faulty samples can reduce the master key search space of PRESENT-80/128 to and respectively, and 12 and 24 effective faulty samples can reduce the master key search space of PRINTcipher-48/96 to and respectively; under a fault model that injects a one-nibble fault into the substitution layer of round r-3, 8 samples can reduce the master key search space of PRINTcipher-96 to
SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation
We present SegNeXt, a simple convolutional network architecture for semantic
segmentation. Recent transformer-based models have dominated the field of
semantic segmentation due to the efficiency of self-attention in encoding
spatial information. In this paper, we show that convolutional attention is a
more efficient and effective way to encode contextual information than the
self-attention mechanism in transformers. By re-examining the characteristics
of successful segmentation models, we identify several key components that
drive their performance. This motivates
us to design a novel convolutional attention network that uses cheap
convolutional operations. Without bells and whistles, our SegNeXt significantly
improves the performance of previous state-of-the-art methods on popular
benchmarks, including ADE20K, Cityscapes, COCO-Stuff, Pascal VOC, Pascal
Context, and iSAID. Notably, SegNeXt outperforms EfficientNet-L2 w/ NAS-FPN and
achieves 90.6% mIoU on the Pascal VOC 2012 test leaderboard using only 1/10 of
its parameters. On average, SegNeXt achieves about a 2.0% mIoU improvement over
state-of-the-art methods on the ADE20K dataset with the same
or fewer computations. Code is available at https://github.com/uyzhang/JSeg
(Jittor) and https://github.com/Visual-Attention-Network/SegNeXt (PyTorch).
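The convolutional attention the SegNeXt abstract advocates, where attention weights come from cheap convolutions and gate the input elementwise rather than from pairwise self-attention, can be illustrated with a minimal numpy sketch. This is a generic illustration under assumed shapes and kernels, not the paper's actual multi-scale attention module:

```python
import numpy as np

def depthwise_conv2d(x, k):
    """'Same'-padded depthwise 2-D convolution: one kernel per channel.

    x: input of shape (C, H, W); k: kernels of shape (C, kh, kw).
    """
    C, H, W = x.shape
    kh, kw = k.shape[1], k.shape[2]
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((0, 0), (ph, ph), (pw, pw)))  # zero padding, 'same' output size
    out = np.zeros_like(x, dtype=float)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + kh, j:j + kw] * k[c])
    return out

def conv_attention(x, k):
    """Convolutional attention: a cheap depthwise conv produces a spatial
    attention map that gates the input elementwise (Hadamard product)."""
    return depthwise_conv2d(x, k) * x

# Usage: 2 channels, 3x3 feature map, 3x3 kernel per channel
x = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
k = np.random.default_rng(0).normal(size=(2, 3, 3))
y = conv_attention(x, k)  # same shape as x
```

The cost of this gating is linear in the number of pixels, whereas self-attention compares every spatial position with every other, which is the efficiency argument the abstract makes.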