JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution
Recent years have witnessed a rapid growth of deep-network based services and
applications. A practical and critical problem has thus emerged: how to
effectively deploy the deep neural network models such that they can be
executed efficiently. Conventional cloud-based approaches usually run the deep
models in data center servers, causing large latency because a significant
amount of data has to be transferred from the edge of network to the data
center. In this paper, we propose JALAD, a joint accuracy- and latency-aware
execution framework, which decouples a deep neural network so that a part of it
will run at edge devices and the other part inside the conventional cloud,
while transferring only a minimal amount of data between them. Though the idea
seems straightforward, we face several challenges: i) how to
find the best partition of a deep structure; ii) how to deploy the component at
an edge device that only has limited computation power; and iii) how to
minimize the overall execution latency. Our answers to these questions are a
set of strategies in JALAD, including 1) A normalization based in-layer data
compression strategy by jointly considering compression rate and model
accuracy; 2) A latency-aware deep decoupling strategy to minimize the overall
execution latency; and 3) An edge-cloud structure adaptation strategy that
dynamically changes the decoupling for different network conditions.
Experiments demonstrate that our solution significantly reduces execution
latency: it speeds up overall inference while keeping the model accuracy loss
within a guaranteed bound.
Comment: conference, copyright transferred to IEE
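The latency-aware decoupling idea can be sketched as a split-point search: for each candidate partition, total the edge compute time, the transfer cost of the activation crossing the split, and the cloud compute time, then keep the minimum. This is an illustrative sketch only; the function name, the per-layer timings, and the bandwidth figure are assumptions, not measurements or code from the paper.

```python
def best_split(edge_ms, cloud_ms, out_kb, bandwidth_kb_per_ms):
    """Pick the partition of an n-layer network that minimizes latency.

    edge_ms[i] / cloud_ms[i]: run time of layer i on the edge / in the cloud.
    out_kb[i]: size of the activation after layer i (out_kb has length n+1;
               out_kb[0] is the raw input size, for the all-cloud split).
    Split k means layers 0..k-1 run on the edge, layers k..n-1 in the cloud.
    Returns (latency_ms, k) for the best split.
    """
    n = len(edge_ms)
    candidates = []
    for k in range(n + 1):
        latency = (sum(edge_ms[:k])                      # edge compute
                   + out_kb[k] / bandwidth_kb_per_ms     # transfer at the cut
                   + sum(cloud_ms[k:]))                  # cloud compute
        candidates.append((latency, k))
    return min(candidates)

# Hypothetical numbers: slow edge, fast cloud, activations that shrink
# layer by layer -- so a later split trades compute for less transfer.
print(best_split([10, 10, 10], [1, 1, 1], [100, 50, 10, 1], 1.0))
```

The adaptation strategy in the paper amounts to re-running this search whenever the measured bandwidth changes, which is cheap since the search is linear in the number of layers.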
Compressively Sensed Image Recognition
Compressive Sensing (CS) theory asserts that sparse signal reconstruction is
possible from a small number of linear measurements. Although CS enables
low-cost linear sampling, it requires non-linear and costly reconstruction.
Recent works show that compressive image classification is possible in the CS
domain without reconstructing the signal. In this work, we introduce a
DCT-based method that extracts binary discriminative features directly from CS
measurements. These CS measurements can be obtained by using (i) a random or a
pseudo-random measurement matrix, or (ii) a measurement matrix whose elements
are learned from the training data to optimize the given classification task.
We further introduce feature fusion by concatenating Bag of Words (BoW)
representation of our binary features with one of the two state-of-the-art
CNN-based feature vectors. We show that our fused feature outperforms the
state-of-the-art in both cases.
Comment: 6 pages, submitted/accepted, EUVIP 201
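A minimal sketch of the pipeline described above, under assumed details: a random Gaussian measurement matrix (option i), an orthonormal DCT-II transform of the measurement vector, and sign binarization to produce the discriminative bits. The paper's learned measurement matrices, BoW encoding, and CNN-feature fusion are omitted.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]          # frequency index
    i = np.arange(n)[None, :]          # sample index
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0] /= np.sqrt(2.0)               # DC row scaled for orthonormality
    return D

rng = np.random.default_rng(0)
n, m = 64, 16                                    # signal length, measurements
x = rng.standard_normal(n)                       # stand-in image patch
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # CS measurements
bits = (dct_matrix(m) @ y) > 0                   # binary DCT-domain features
```

The point of the sign binarization is that the DCT coefficients of the measurement vector carry enough class-discriminative structure that even their signs form a usable descriptor, which is what makes classification without reconstruction feasible.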
Inferring Sparsity: Compressed Sensing using Generalized Restricted Boltzmann Machines
In this work, we consider compressed sensing reconstruction from M
measurements of K-sparse structured signals which do not possess a writable
correlation model. Assuming that a generative statistical model, such as a
Boltzmann machine, can be trained in an unsupervised manner on example signals,
we demonstrate how this signal model can be used within a Bayesian framework of
signal reconstruction. By deriving a message-passing inference for general
distribution restricted Boltzmann machines, we are able to integrate these
inferred signal models into approximate message passing for compressed sensing
reconstruction. Finally, we show for the MNIST dataset that this approach can
be very effective, even for M < K.
Comment: IEEE Information Theory Workshop, 201
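The paper's central move is to swap the generic sparse prior inside approximate message passing (AMP) for one learned by an RBM. For reference, here is baseline AMP with a soft-threshold denoiser, i.e. the component the RBM prior would replace. The matrix scaling, threshold schedule (`alpha`), and iteration count are assumptions of this sketch, not the paper's settings.

```python
import numpy as np

def soft(u, t):
    """Soft-threshold denoiser (i.i.d. sparse prior stand-in)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, iters=50, alpha=1.4):
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                              # pseudo-data for denoiser
        x = soft(r, alpha * np.sqrt(np.mean(z**2)))  # denoise; RBM goes here
        # residual with Onsager correction; for soft thresholding the
        # average denoiser derivative is the fraction of surviving entries
        z = y - A @ x + z * (np.count_nonzero(x) / m)
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 100, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = amp(A @ x_true, A)
```

Replacing `soft` with an RBM-derived posterior-mean denoiser is what lets the reconstruction exploit learned structure beyond plain sparsity, which is how the M < K regime becomes reachable.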
Unifying and Merging Well-trained Deep Neural Networks for Inference Stage
We propose a novel method to merge convolutional neural networks for the
inference stage. Given two well-trained networks that may have different
architectures and handle different tasks, our method aligns the layers of the
original networks and merges them into a unified model by sharing the
representative codes of weights. The shared weights are further re-trained to
fine-tune the performance of the merged model. The proposed method effectively
produces a compact model that may run original tasks simultaneously on
resource-limited devices. As it preserves the general architectures and
leverages the co-used weights of well-trained networks, a substantial training
overhead can be reduced to shorten the system development time. Experimental
results demonstrate satisfactory performance and validate the effectiveness of
the method.
Comment: To appear in the 27th International Joint Conference on Artificial
Intelligence and the 23rd European Conference on Artificial Intelligence,
2018 (IJCAI-ECAI 2018).
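One way to realize "sharing the representative codes of weights" is to quantize the weights of two aligned layers against a single codebook, so both layers store only indices into shared values. This is an illustrative sketch under assumed mechanics (1-D k-means over pooled weights), not the paper's exact procedure; the fine-tuning step that follows merging is omitted.

```python
import numpy as np

def shared_codebook(w_a, w_b, n_codes=16, iters=20, seed=0):
    """1-D k-means over the pooled weights of two layers.

    Returns the shared codebook and, for each layer, an index map of the
    same shape as the layer's weights.
    """
    pooled = np.concatenate([w_a.ravel(), w_b.ravel()])
    rng = np.random.default_rng(seed)
    codes = rng.choice(pooled, n_codes, replace=False)   # init from data
    for _ in range(iters):
        assign = np.abs(pooled[:, None] - codes[None, :]).argmin(axis=1)
        for c in range(n_codes):
            members = pooled[assign == c]
            if members.size:                             # skip empty clusters
                codes[c] = members.mean()
    idx_a = np.abs(w_a.ravel()[:, None] - codes[None, :]).argmin(axis=1)
    idx_b = np.abs(w_b.ravel()[:, None] - codes[None, :]).argmin(axis=1)
    return codes, idx_a.reshape(w_a.shape), idx_b.reshape(w_b.shape)

rng = np.random.default_rng(0)
w_a = rng.standard_normal((4, 5))    # a layer from network A (toy size)
w_b = rng.standard_normal((3, 4))    # an aligned layer from network B
codes, idx_a, idx_b = shared_codebook(w_a, w_b)
recon_a = codes[idx_a]               # layer A rebuilt from the shared codes
```

Because both layers index the same `codes` array, the merged model stores one small codebook plus compact index maps instead of two full weight tensors, which is where the memory saving on resource-limited devices comes from.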