
    ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

    Full text link
    We introduce an extremely computation-efficient CNN architecture named ShuffleNet, designed especially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g., a 7.8% (absolute) lower top-1 error than the recent MobileNet on the ImageNet classification task under a computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves a ~13x actual speedup over AlexNet while maintaining comparable accuracy.
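    To make the channel shuffle operation concrete, here is a minimal sketch in PyTorch (the framework choice is an assumption; the abstract does not prescribe one). It permutes the channel axis after a pointwise group convolution so that subsequent group convolutions can still mix information across all channel groups:

        import torch

        def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
            # x has shape (batch, channels, height, width)
            n, c, h, w = x.shape
            assert c % groups == 0, "channel count must be divisible by groups"
            # split the channel axis into (groups, channels_per_group),
            # swap those two axes, then flatten back to a single channel axis
            x = x.view(n, groups, c // groups, h, w)
            x = x.transpose(1, 2).contiguous()
            return x.view(n, c, h, w)

        # usage: 12 channels produced by a group convolution with 3 groups
        y = channel_shuffle(torch.randn(8, 12, 32, 32), groups=3)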

    Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing

    Full text link
    Deep neural networks (DNNs) have been shown to be useful in a wide range of applications. However, they are also known to be vulnerable to adversarial samples: by transforming a normal sample with carefully crafted, human-imperceptible perturbations, even highly accurate DNNs can be made to produce wrong decisions. Multiple defense mechanisms have been proposed which aim to hinder the generation of such adversarial samples. However, a recent work shows that most of them are ineffective. In this work, we propose an alternative approach to detect adversarial samples at runtime. Our main observation is that adversarial samples are much more sensitive than normal samples when we impose random mutations on the DNN. We thus first propose a measure of `sensitivity' and show empirically that normal samples and adversarial samples have distinguishable sensitivity. We then integrate statistical hypothesis testing and model mutation testing to check, at runtime, whether an input sample is likely to be normal or adversarial by measuring its sensitivity. We evaluated our approach on the MNIST and CIFAR10 datasets. The results show that our approach detects adversarial samples generated by state-of-the-art attacking methods efficiently and accurately.
    Comment: Accepted by ICSE 201
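    A minimal sketch of the detection idea, assuming a PyTorch classifier and a single input with batch dimension 1; the mutation operator (small Gaussian noise on the weights), the number of mutants, and the fixed threshold are illustrative simplifications, not the paper's exact procedure:

        import copy
        import torch

        def mutate(model: torch.nn.Module, std: float = 0.01) -> torch.nn.Module:
            # return a copy of the model with small Gaussian noise added to every weight
            m = copy.deepcopy(model)
            with torch.no_grad():
                for p in m.parameters():
                    p.add_(std * torch.randn_like(p))
            return m

        def sensitivity(model: torch.nn.Module, x: torch.Tensor, n_mutants: int = 50) -> float:
            # fraction of random mutants whose prediction differs from the original model's
            model.eval()
            with torch.no_grad():
                base = model(x).argmax(dim=1)
                flips = sum(
                    int(mutate(model)(x).argmax(dim=1).ne(base).item())
                    for _ in range(n_mutants)
                )
            return flips / n_mutants

        def looks_adversarial(model, x, threshold: float = 0.2) -> bool:
            # adversarial inputs are expected to flip labels under mutation far more often
            return sensitivity(model, x) > threshold

    The paper combines this sensitivity measure with statistical hypothesis testing rather than a fixed threshold, but the core quantity is the same: how often the predicted label changes when the model is randomly mutated.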

    Machine Learning Inspired Energy-Efficient Hybrid Precoding for MmWave Massive MIMO Systems

    Full text link
    Hybrid precoding is a promising technique for mmWave massive MIMO systems, as it can considerably reduce the number of required radio-frequency (RF) chains without obvious performance loss. However, most existing hybrid precoding schemes require a complicated phase-shifter network, which still involves high energy consumption. In this paper, we propose an energy-efficient hybrid precoding architecture in which the analog part is realized by a small number of switches and inverters instead of a large number of high-resolution phase shifters. Our analysis proves that the performance gap between the proposed hybrid precoding architecture and the traditional one is small and remains constant as the number of antennas goes to infinity. Then, inspired by cross-entropy (CE) optimization developed in machine learning, we propose an adaptive CE (ACE)-based hybrid precoding scheme for this new architecture. It aims to adaptively update the probability distributions of the elements in the hybrid precoder by minimizing the CE, which can generate a solution close to the optimal one with sufficiently high probability. Simulation results verify that our scheme can achieve near-optimal sum-rate performance and much higher energy efficiency than traditional schemes.
    Comment: This paper has been accepted by IEEE ICC 2017. The simulation codes are provided to reproduce the results in this paper at: http://oa.ee.tsinghua.edu.cn/dailinglong/publications/publications.htm
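    The sketch below shows a plain cross-entropy update over +/-1 analog precoder entries, the kind of element a switch-and-inverter network can realize. The toy objective (a single shared beam over a random narrowband channel), the problem sizes, and the smoothing factor are illustrative assumptions; the paper's adaptive CE scheme refines this basic update:

        import numpy as np

        rng = np.random.default_rng(0)
        N, K = 16, 4                                   # transmit antennas, users
        H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

        def sum_rate(f):
            # toy sum rate for one +/-1 analog beam f shared by all users
            g = H @ (f / np.sqrt(N))                   # effective channel gains
            return float(np.sum(np.log2(1.0 + np.abs(g) ** 2)))

        p = np.full(N, 0.5)                            # Pr[element = +1]
        for _ in range(50):
            cand = np.where(rng.random((200, N)) < p, 1.0, -1.0)     # sample candidates
            scores = np.array([sum_rate(c) for c in cand])
            elite = cand[np.argsort(scores)[-20:]]     # keep the best 10%
            p = 0.8 * (elite > 0).mean(axis=0) + 0.2 * p             # smoothed CE update

        best = np.where(p > 0.5, 1.0, -1.0)
        print("CE-optimized toy sum rate:", sum_rate(best))

    Minimizing the cross entropy between the sampling distribution and the elite candidates is what pushes the element-wise probabilities toward precoders with high objective value.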

    The C-polynomial of a knot

    Full text link
    In an earlier paper the first author defined a non-commutative A-polynomial for knots in 3-space, using the colored Jones function. The idea is that the colored Jones function of a knot satisfies a non-trivial linear q-difference equation. Said differently, the colored Jones function of a knot is annihilated by a non-zero ideal of the Weyl algebra which is generated (after localization) by the non-commutative A-polynomial of the knot. In that paper, it was conjectured that this polynomial (which has to do with representations of the quantum group U_q(SL_2)) specializes at q=1 to the better-known A-polynomial of the knot, which has to do with genuine SL_2(C) representations of the knot complement. Computing the non-commutative A-polynomial of a knot is a difficult task which so far has been achieved only for the two simplest knots. In the present paper, we introduce the C-polynomial of a knot, along with its non-commutative version, and give an explicit computation for all twist knots. In a forthcoming paper, we will use this information to compute the non-commutative A-polynomial of twist knots. Finally, we formulate a number of conjectures relating the A-polynomial, the C-polynomial, and the Alexander polynomial, all confirmed for the class of twist knots.
    Comment: This is the version published by Algebraic & Geometric Topology on 11 October 200
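    For readers less familiar with the q-difference setting, the following LaTeX lines spell it out in standard notation (the conventions are assumed here, not quoted from the paper): the colored Jones function J_K(n) is acted on by shift and multiplication operators E and Q, and the annihilating ideal mentioned above consists of operators P with P J_K = 0.

        % standard q-Weyl algebra conventions (assumed, not quoted from the paper)
        \[
          (E f)(n) = f(n+1), \qquad (Q f)(n) = q^{n} f(n), \qquad EQ = qQE .
        \]
        % a generator of the (localized) annihilating ideal is a single linear recurrence
        \[
          A_K(E, Q, q)\, J_K = 0, \qquad \text{i.e.}\quad \sum_{j=0}^{d} a_j(q, q^{n})\, J_K(n+j) = 0 .
        \]

    Specializing the polynomial A_K(E, Q, q) at q = 1 is the comparison with the classical A-polynomial that the conjecture in the abstract refers to.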