MobileNetV2: Inverted Residuals and Linear Bottlenecks
In this paper we describe a new mobile architecture, MobileNetV2, that
improves the state of the art performance of mobile models on multiple tasks
and benchmarks as well as across a spectrum of different model sizes. We also
describe efficient ways of applying these mobile models to object detection in
a novel framework we call SSDLite. Additionally, we demonstrate how to build
mobile semantic segmentation models through a reduced form of DeepLabv3 which
we call Mobile DeepLabv3.
The MobileNetV2 architecture is based on an inverted residual structure where
the input and output of the residual block are thin bottleneck layers, as
opposed to traditional residual models which use expanded representations in
the input. MobileNetV2 uses lightweight depthwise convolutions to filter
features in the intermediate expansion layer. Additionally, we find that it is important to
remove non-linearities in the narrow layers in order to maintain
representational power. We demonstrate that this improves performance and
provide an intuition that led to this design. Finally, our approach allows
decoupling of the input/output domains from the expressiveness of the
transformation, which provides a convenient framework for further analysis. We
measure our performance on ImageNet classification, COCO object detection, and
VOC image segmentation. We evaluate the trade-offs between accuracy and number
of operations measured by multiply-adds (MAdd), as well as the number of
parameters.
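As a reading aid, here is a minimal PyTorch-style sketch of an inverted residual block with a linear bottleneck, following the structure described above; the expansion factor of 6 matches the paper's default, while the remaining hyperparameters (channel counts, stride) are illustrative assumptions.

import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Sketch of an inverted residual block: thin input/output bottlenecks,
    a wide depthwise expansion in the middle, and a linear (non-activated)
    projection back down. Hyperparameters here are illustrative assumptions."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, expand: int = 6):
        super().__init__()
        hidden = in_ch * expand
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 pointwise expansion to a wider representation
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 lightweight depthwise convolution filters the expanded features
            nn.Conv2d(hidden, hidden, 3, stride, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection back to a thin bottleneck (no non-linearity,
            # to preserve representational power in the narrow layer)
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.block(x)
        return x + out if self.use_residual else out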
Antimatter interferometry for gravity measurements
We describe a light-pulse atom interferometer that is suitable for any
species of atom and even for electrons and protons as well as their
antiparticles, in particular for testing the Einstein equivalence principle
with antihydrogen. The design obviates the need for resonant lasers through
far-off resonant Bragg beam splitters and makes efficient use of scarce atoms
by magnetic confinement and atom recycling. We expect to reach an initial
accuracy of better than 1% for the acceleration of free fall of antihydrogen,
which can be improved to the part-per-million level.
Comment: 5 pages, 4 figures. Minor changes, accepted for PR
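For context on how a light-pulse interferometer of this kind extracts the free-fall acceleration, the standard Mach-Zehnder phase relation is delta_phi = k_eff * g * T^2, where k_eff is the effective wave vector of the beam splitters and T the pulse separation time. The short sketch below evaluates this relation for purely illustrative parameter values; the wavelength and timing are assumptions, not figures taken from the paper.

import math

# Standard Mach-Zehnder light-pulse interferometer phase: delta_phi = k_eff * g * T^2.
# All numerical values below are illustrative assumptions.
wavelength = 780e-9                       # assumed beam-splitter laser wavelength, m
k_eff = 2 * (2 * math.pi / wavelength)    # effective wave vector of a two-photon Bragg transition
g = 9.81                                  # local gravitational acceleration, m/s^2
T = 0.1                                   # assumed pulse separation time, s

delta_phi = k_eff * g * T ** 2
print(f"phase shift: {delta_phi:.3e} rad")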
Non-discriminative data or weak model? On the relative importance of data and model resolution
We explore the question of how the resolution of the input image ("input
resolution") affects the performance of a neural network when compared to the
resolution of the hidden layers ("internal resolution"). Adjusting these
characteristics is frequently used as a hyperparameter providing a trade-off
between model performance and accuracy. An intuitive interpretation is that the
reduced information content in the low-resolution input causes decay in the
accuracy. In this paper, we show that up to a point, the input resolution alone
plays little role in the network performance, and it is the internal resolution
that is the critical driver of model quality. We then build on these insights
to develop novel neural network architectures that we call \emph{Isometric
Neural Networks}. These models maintain a fixed internal resolution throughout
their entire depth. We demonstrate that they lead to high accuracy models with
low activation footprint and parameter count.
Comment: ICCV 2019 Workshop on Real-World Recognition from Low-Quality Images and Video
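As an illustration of the fixed-internal-resolution idea, the following is a minimal PyTorch-style sketch of an isometric-style network: a single stem maps the input to one internal resolution and every subsequent block preserves it. The resampling stem, block design, width, and depth are assumptions chosen for illustration, not the architecture from the paper.

import torch
import torch.nn as nn

class IsometricNet(nn.Module):
    """Sketch of an isometric-style network: the stem fixes the internal
    resolution, and all body blocks keep that resolution through the entire
    depth. Details are illustrative assumptions."""

    def __init__(self, num_classes: int = 1000, width: int = 256, depth: int = 8,
                 internal_res: int = 16):
        super().__init__()
        # Stem: resample the input image directly to the fixed internal resolution
        # (crude average-pool resampling here, purely for illustration).
        self.stem = nn.Sequential(
            nn.AdaptiveAvgPool2d(internal_res),
            nn.Conv2d(3, width, 3, padding=1, bias=False),
            nn.BatchNorm2d(width),
            nn.ReLU(inplace=True),
        )
        # Body: stride-1 depthwise-separable blocks, so the spatial size stays
        # at internal_res x internal_res throughout.
        self.body = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(width, width, 3, padding=1, groups=width, bias=False),  # depthwise
                nn.Conv2d(width, width, 1, bias=False),                           # pointwise
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
            )
            for _ in range(depth)
        ])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(width, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.body(self.stem(x)))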