RT3D: Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices
Mobile devices are becoming an important carrier for deep learning tasks, as
they are being equipped with powerful, high-end mobile CPUs and GPUs. However,
executing 3D Convolutional Neural Networks (CNNs) with real-time performance
and high inference accuracy remains challenging: the more complex model
structure and higher dimensionality of 3D CNNs overwhelm the computation and
storage resources available on mobile devices. A natural remedy is to turn to
deep learning weight pruning techniques; however, directly generalizing
existing 2D CNN weight pruning methods to 3D CNNs cannot fully exploit mobile
parallelism while maintaining high inference accuracy.
This paper proposes RT3D, a model compression and mobile acceleration
framework for 3D CNNs that seamlessly integrates neural network weight pruning
and compiler code generation techniques. We propose and investigate two
mobile-acceleration-friendly structured sparsity schemes: vanilla structured
sparsity and kernel group structured (KGS) sparsity. Vanilla sparsity removes
whole kernel groups, while KGS sparsity is a finer-grained structured sparsity
scheme that enjoys higher flexibility while still exploiting full on-device
parallelism. We propose a reweighted regularization pruning algorithm
to achieve the proposed sparsity schemes. The inference-time speedup due to
sparsity approaches the pruning rate of the whole model's FLOPs (floating-point
operations). RT3D demonstrates up to 29.1x speedup in end-to-end inference time
compared with current mobile frameworks supporting 3D CNNs, with a moderate
1%-1.5% accuracy loss. The end-to-end inference time for 16 video frames can be
within 150 ms when executing representative C3D and R(2+1)D models on a
cellphone. For the first time, real-time execution of 3D CNNs is achieved on
off-the-shelf mobile devices.
Comment: To appear in Proceedings of the 35th AAAI Conference on Artificial
Intelligence (AAAI-21).
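The KGS scheme prunes weights in hardware-friendly kernel groups rather than individually, which is what lets a compiler skip whole dense blocks and push the speedup toward the FLOPs pruning rate. As a rough illustration only (plain group-wise magnitude pruning, not the paper's reweighted regularization algorithm, and with an assumed grouping of kernels across consecutive output filters), pruning a 3D-convolution weight tensor group-wise might look like:

```python
import numpy as np

def kgs_prune(weight, group_size=4, sparsity=0.5):
    """Group-wise magnitude pruning of a 3D-conv weight tensor shaped
    (out_ch, in_ch, kd, kh, kw).

    Kernels at the same input-channel position across `group_size`
    consecutive output filters form one group (an assumed grouping);
    the groups with the smallest L2 norm are zeroed until the target
    fraction of groups is removed.
    """
    out_ch, in_ch, kd, kh, kw = weight.shape
    assert out_ch % group_size == 0
    # View as (num_groups, group_size, in_ch, kd*kh*kw)
    grouped = weight.reshape(out_ch // group_size, group_size, in_ch, -1)
    # L2 norm of each kernel group -> shape (num_groups, in_ch)
    norms = np.linalg.norm(grouped, axis=(1, 3))
    k = int(norms.size * sparsity)           # number of groups to remove
    threshold = np.sort(norms, axis=None)[k]
    mask = (norms >= threshold)[:, None, :, None]  # broadcast over group
    pruned = (grouped * mask).reshape(weight.shape)
    return pruned, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 3, 3, 3, 3))   # toy 3D-conv weights
pruned, mask = kgs_prune(w, group_size=4, sparsity=0.5)
```

Because every zeroed region is a whole kernel group, the surviving weights keep a regular block structure that mobile SIMD/GPU code can exploit, unlike unstructured (element-wise) sparsity.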
Towards Real-Time Segmentation on the Edge
Research on real-time segmentation has mainly focused on desktop GPUs.
However, autonomous driving and many other applications rely on real-time segmentation on the edge, and current methods fall far short of this goal.
In addition, recent advances in vision transformers inspire us to re-design the network architecture for dense prediction tasks.
In this work, we propose to combine self-attention blocks with lightweight convolutions to form new building blocks, and employ latency constraints to search for an efficient sub-network.
We train an MLP latency model on generated architecture configurations and their latencies measured on mobile devices, so that we can predict the latency of subnets during the search phase.
To the best of our knowledge, we are the first to achieve over 74% mIoU on Cityscapes with semi-real-time inference (over 15 FPS) on the mobile GPU of an off-the-shelf phone.
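The latency model described above can be sketched as a small MLP regressor mapping an architecture-configuration vector to a predicted on-device latency. Everything below (the feature encoding, layer sizes, training setup, and the synthetic data standing in for measured latencies) is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

def train_latency_mlp(X, y, hidden=32, lr=1e-2, epochs=2000, seed=0):
    """Fit a tiny two-layer ReLU MLP to (config vector -> latency in ms)
    pairs with full-batch gradient descent on mean-squared error."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W1 = rng.standard_normal((d, hidden)) * 0.1
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.1
    b2 = np.zeros(1)
    n = len(X)
    for _ in range(epochs):
        h = np.maximum(X @ W1 + b1, 0.0)   # ReLU hidden layer
        pred = h @ W2 + b2                 # latency estimate
        err = pred - y[:, None]
        # Backpropagate the MSE gradient
        gW2 = h.T @ err / n; gb2 = err.mean(0)
        dh = (err @ W2.T) * (h > 0)
        gW1 = X.T @ dh / n;  gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: (np.maximum(Xq @ W1 + b1, 0.0) @ W2 + b2).ravel()

# Synthetic stand-in for architecture configs and measured latencies:
# 6 normalized features (e.g., per-stage depth/width/kernel choices)
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (256, 6))
y = 5.0 + X @ np.array([3.0, 1.5, 2.0, 0.5, 1.0, 2.5])  # toy latency (ms)
predict = train_latency_mlp(X, y)
```

During search, such a predictor replaces slow on-device measurement: each candidate sub-network is encoded as a feature vector and scored in microseconds, so a latency constraint can be checked for every candidate.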
Human Infection with Highly Pathogenic Avian Influenza A(H7N9) Virus, China
The recent increase in zoonotic avian influenza A(H7N9) disease in China is a cause of public health concern. Most of the A(H7N9) viruses previously reported have been of low pathogenicity. We report the fatal case of a patient in China who was infected with an A(H7N9) virus having a polybasic amino acid sequence at its hemagglutinin cleavage site (PEVPKRKRTAR/GL), a sequence suggestive of high pathogenicity in birds. Its neuraminidase also had R292K, an amino acid change known to be associated with neuraminidase inhibitor resistance. Both of these molecular features might have contributed to the patient’s adverse clinical outcome. The patient had a history of exposure to sick and dying poultry, and his close contacts had no evidence of A(H7N9) disease, suggesting that human-to-human transmission did not occur. Enhanced surveillance is needed to determine whether this highly pathogenic avian influenza A(H7N9) virus will continue to spread.