Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization
This paper tackles the problem of large-scale image-based localization (IBL)
where the spatial location of a query image is determined by finding the most
similar reference images in a large database. To solve this problem, a
critical task is to learn a discriminative image representation that captures
information relevant to localization. We propose a novel representation
learning method with higher location-discriminating power. It makes the
following contributions: 1) we represent a place (location) as a
set of exemplar images depicting the same landmarks and aim to maximize
similarities among intra-place images while minimizing similarities among
inter-place images; 2) we model a similarity measure as a probability
distribution on L_2-metric distances between intra-place and inter-place image
representations; 3) we propose a new Stochastic Attraction and Repulsion
Embedding (SARE) loss function minimizing the KL divergence between the learned
and the actual probability distributions; 4) we give theoretical comparisons
between SARE, triplet ranking, and contrastive losses; the gradient analysis
gives insight into why SARE performs better. Our SARE loss is easy to
implement and can be plugged into any CNN. Experiments show that our proposed method improves
the localization performance on standard benchmarks by a large margin.
Demonstrating the broad applicability of our method, we obtained the third
place out of 209 teams in the 2018 Google Landmark Retrieval Challenge. Our
code and model are available at https://github.com/Liumouliu/deepIBL.
Comment: ICC
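The core idea of the SARE loss described above can be sketched in a few lines: treat exp(-d^2) as an unnormalized matching probability over the query's distances to the positive and negative, and minimize the cross-entropy against the ground-truth distribution (the query matches the positive with probability 1). This is an illustrative sketch under those assumptions, not the paper's reference implementation:

```python
import numpy as np

def sare_loss(q, p, n):
    """SARE-style loss sketch: softmax over negated squared L2 distances,
    then negative log-probability of the positive match."""
    d_pos = np.sum((q - p) ** 2)          # squared distance query -> positive
    d_neg = np.sum((q - n) ** 2)          # squared distance query -> negative
    logits = np.array([-d_pos, -d_neg])
    logits -= logits.max()                # for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])              # cross-entropy vs. ground truth [1, 0]
```

The loss is small when the negative is far from the query relative to the positive, and grows as the negative approaches, which is the attraction–repulsion behaviour the abstract describes.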
Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices
A recent trend in DNN development is to extend the reach of deep learning
applications to platforms that are more resource and energy constrained, e.g.,
mobile devices. These endeavors aim to reduce the DNN model size and improve
the hardware processing efficiency, and have resulted in DNNs that are much
more compact in their structures and/or have high data sparsity. These compact
or sparse models are different from the traditional large ones in that there is
much more variation in their layer shapes and sizes, and often require
specialized hardware to exploit sparsity for performance improvement. Thus,
many DNN accelerators designed for large DNNs do not perform well on these
models. In this work, we present Eyeriss v2, a DNN accelerator architecture
designed for running compact and sparse DNNs. To deal with the widely varying
layer shapes and sizes, it introduces a highly flexible on-chip network, called
hierarchical mesh, that can adapt to the different amounts of data reuse and
bandwidth requirements of different data types, which improves the utilization
of the computation resources. Furthermore, Eyeriss v2 can process sparse data
directly in the compressed domain for both weights and activations, and
therefore is able to improve both processing speed and energy efficiency with
sparse models. Overall, with sparse MobileNet, Eyeriss v2 in a 65nm CMOS
process achieves a throughput of 1470.6 inferences/sec and 2560.3 inferences/J
at a batch size of 1, which is 12.6x faster and 2.5x more energy efficient than
the original Eyeriss running MobileNet. We also present an analysis methodology
called Eyexam that provides a systematic way of understanding the performance
limits for DNN processors as a function of specific characteristics of the DNN
model and accelerator design; it applies these characteristics as sequential
steps to increasingly tighten the bound on the performance limits.
Comment: accepted for publication in IEEE Journal on Emerging and Selected
Topics in Circuits and Systems. This extended version on arXiv also includes
Eyexam in the appendix.
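The sequential bound-tightening idea behind Eyexam can be illustrated with a toy roofline-style calculation. This is only a sketch of the general approach, not the actual Eyexam methodology; all parameter names here are hypothetical:

```python
def throughput_bound(macs, peak_macs_per_s, bytes_moved, bw_bytes_per_s,
                     pe_utilization):
    """Toy sketch: each step adds a hardware constraint that can only
    lower the achievable inference rate (inferences/sec)."""
    # Step 1-2: peak compute bound, tightened by PE array utilization
    t_compute = macs / (peak_macs_per_s * pe_utilization)
    # Step 3: tighten with the off-chip memory bandwidth bound
    t_memory = bytes_moved / bw_bytes_per_s
    # Achievable rate is limited by the slower of the two
    return 1.0 / max(t_compute, t_memory)
```

For example, a model needing 1e9 MACs and 1e8 bytes of traffic per inference, on hardware with 1e12 MACs/s peak, 1e10 bytes/s bandwidth, and 50% utilization, is memory-bound at 100 inferences/sec.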
Joint and individual analysis of breast cancer histologic images and genomic covariates
A key challenge in modern data analysis is understanding connections between
complex and differing modalities of data. For example, two of the main
approaches to the study of breast cancer are histopathology (analyzing visual
characteristics of tumors) and genetics. While histopathology is the gold
standard for diagnostics and there have been many recent breakthroughs in
genetics, there is little overlap between these two fields. We aim to bridge
this gap by developing methods based on Angle-based Joint and Individual
Variation Explained (AJIVE) to directly explore similarities and differences
between these two modalities. Our approach exploits Convolutional Neural
Networks (CNNs) as a powerful, automatic method for image feature extraction to
address some of the challenges presented by statistical analysis of
histopathology image data. CNNs raise issues of interpretability that we
address by developing novel methods to explore visual modes of variation
captured by statistical algorithms (e.g. PCA or AJIVE) applied to CNN features.
Our results provide many interpretable connections and contrasts between
histopathology and genetics.
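The interpretation strategy described above, scoring images along a statistical mode of variation of their CNN features and inspecting the extremes, can be sketched as follows. This is a minimal PCA-based illustration, not the paper's AJIVE implementation:

```python
import numpy as np

def principal_mode_scores(features, k=0):
    """Score each image along the k-th principal mode of variation of its
    CNN feature vectors. Sorting images by these scores and viewing the
    extremes gives a visual interpretation of what the mode captures."""
    X = features - features.mean(axis=0)            # center the feature matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[k]                                # projections onto mode k
```

The same projection step applies unchanged when `Vt[k]` is replaced by a joint or individual direction produced by AJIVE rather than a PCA direction.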
MORPH: A Reference Architecture for Configuration and Behaviour Self-Adaptation
An architectural approach to self-adaptive systems involves runtime change of
system configuration (i.e., the system's components, their bindings and
operational parameters) and behaviour update (i.e., component orchestration).
Thus, dynamic reconfiguration and discrete event control theory are at the
heart of architectural adaptation. Although controlling configuration and
behaviour at runtime has been discussed and applied to architectural
adaptation, architectures for self-adaptive systems often compound these two
aspects, reducing the potential for adaptability. In this paper, we propose a
reference architecture that allows for coordinated yet transparent and
independent adaptation of system configuration and behaviour.
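The separation of concerns argued for above can be sketched as two independent adaptation managers behind one coordination point. All class and method names here are illustrative, not MORPH's actual API:

```python
class ConfigurationManager:
    """Adapts the component graph (components, bindings, parameters)
    independently of behaviour."""
    def __init__(self):
        self.bindings = {}

    def reconfigure(self, component, target):
        self.bindings[component] = target


class BehaviourManager:
    """Adapts component orchestration (the event plan) independently of
    configuration."""
    def __init__(self):
        self.plan = []

    def update_plan(self, events):
        self.plan = list(events)


class ReferenceArchitecture:
    """Coordinates the two concerns without compounding them: either
    manager can change alone, but changes flow through one point so they
    stay consistent with each other."""
    def __init__(self):
        self.config = ConfigurationManager()
        self.behaviour = BehaviourManager()

    def adapt(self, reconfig=None, new_plan=None):
        if reconfig:
            self.config.reconfigure(*reconfig)
        if new_plan:
            self.behaviour.update_plan(new_plan)
```

Keeping the two managers separate is the point: a reconfiguration (rebinding a component) never forces a behaviour update, and vice versa, which is the independence the abstract argues current architectures lose by compounding the two aspects.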