Universally-composable finite-key analysis for efficient four-intensity decoy-state quantum key distribution
We propose an efficient four-intensity decoy-state BB84 protocol and derive
concise security bounds for this protocol with the universally composable
finite-key analysis method. Compared with the efficient three-intensity
protocol, our efficient four-intensity protocol can increase the
secret key rate by at least . Moreover, this relative improvement in the
secret key rate grows as the transmission distance increases. At
large transmission distances, our efficient four-intensity protocol can
substantially improve the performance of quantum key distribution.
Comment: accepted by Eur. Phys. J.
JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution
Recent years have witnessed a rapid growth of deep-network based services and
applications. A practical and critical problem thus has emerged: how to
effectively deploy the deep neural network models such that they can be
executed efficiently. Conventional cloud-based approaches usually run the deep
models in data center servers, causing large latency because a significant
amount of data has to be transferred from the edge of network to the data
center. In this paper, we propose JALAD, a joint accuracy- and latency-aware
execution framework, which decouples a deep neural network so that a part of it
will run at edge devices and the other part inside the conventional cloud,
while only a minimal amount of data has to be transferred between them. Though
the idea seems straightforward, we face several challenges: i) how to
find the best partition of a deep structure; ii) how to deploy the edge
component on a device that has only limited computation power; and iii) how to
minimize the overall execution latency. Our answers to these questions are a
set of strategies in JALAD, including 1) A normalization based in-layer data
compression strategy by jointly considering compression rate and model
accuracy; 2) A latency-aware deep decoupling strategy to minimize the overall
execution latency; and 3) An edge-cloud structure adaptation strategy that
dynamically changes the decoupling for different network conditions.
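The latency-aware decoupling strategy can be illustrated with a toy brute-force split search: pick the layer boundary that minimizes edge compute plus transfer of the (compressed) intermediate activation plus cloud compute. This is a minimal sketch, not the paper's actual algorithm; the function name and all profiling numbers below are invented for illustration.

```python
def best_split(edge_ms, cloud_ms, out_kb, input_kb, bw_kb_per_ms):
    """Pick the split index k that minimizes end-to-end latency.

    Layers [0, k) run on the edge device and layers [k, n) run in the
    cloud; the output of layer k-1 (or the raw input when k == 0) is
    sent over the network.  All inputs are hypothetical profiles:
      edge_ms  -- per-layer latency on the edge device (ms)
      cloud_ms -- per-layer latency on the cloud server (ms)
      out_kb   -- compressed output size of each layer (KB)
      input_kb -- size of the raw model input (KB)
      bw_kb_per_ms -- network bandwidth (KB per ms)
    Returns (k, total_latency_ms).
    """
    n = len(edge_ms)
    best = None
    for k in range(n + 1):
        edge = sum(edge_ms[:k])                      # edge compute
        cloud = sum(cloud_ms[k:])                    # cloud compute
        payload = input_kb if k == 0 else out_kb[k - 1]
        transfer = payload / bw_kb_per_ms            # network transfer
        total = edge + transfer + cloud
        if best is None or total < best[1]:
            best = (k, total)
    return best

# Hypothetical 4-layer profile: the edge is slower per layer, but deeper
# layers produce much smaller activations, so a middle split wins.
print(best_split([5, 8, 12, 20], [1, 2, 3, 5],
                 [400, 120, 40, 4], 600, 10))  # → (2, 33.0)
```

In this toy profile, splitting after layer 2 beats both the fully-cloud split (dominated by uploading the raw input) and the fully-edge split (dominated by slow edge compute on the deep layers), which is the trade-off JALAD's decoupling strategy exploits.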
Experiments demonstrate that our solution can significantly reduce the
execution latency: it speeds up the overall inference execution with a
guaranteed bound on model accuracy loss.
Comment: conference, copyright transferred to IEE