A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing
Edge computing is promoted to meet the increasing performance needs of
data-driven services by using computational and storage resources close to the
end devices, at the edge of the current network. To achieve higher performance
in this new paradigm, one has to consider how to combine the efficiency of
resource usage at all three layers of the architecture: end devices, edge
devices, and the cloud. While cloud capacity is elastically extendable, end
devices and edge devices are resource-constrained to various degrees. Hence,
efficient resource management is essential to make edge computing a reality. In this
work, we first present terminology and architectures to characterize current
works within the field of edge computing. Then, we review a wide range of
recent articles and categorize relevant aspects in terms of four perspectives:
resource type, resource management objective, resource location, and resource
use. This taxonomy and the ensuing analysis are used to identify gaps in
the existing research. Among several research gaps, we found that research is
less prevalent on data, storage, and energy as resources, and less extensive
regarding the estimation, discovery, and sharing objectives. As for resource
types, the most well-studied resources are computation and communication
resources. Our analysis shows that resource management at the edge requires a
deeper understanding of how methods applied at different levels and geared
towards different resource types interact. Specifically, the impact of mobility
and of collaboration schemes that require incentives is expected to differ
between edge architectures and classic cloud solutions. Finally, we find
that fewer works are dedicated to the study of non-functional properties or to
quantifying the footprint of resource management techniques, including
edge-specific means of migrating data and services.
Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
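The survey's four classification perspectives lend themselves to a small data schema. The sketch below is illustrative only: the enum values are assumptions distilled from this abstract (the resources and objectives it names), not the paper's full taxonomy, and all identifiers are ours.

```python
from dataclasses import dataclass
from enum import Enum

class ResourceType(Enum):
    # The abstract names computation/communication as well-studied,
    # and data/storage/energy as under-represented.
    COMPUTATION = "computation"
    COMMUNICATION = "communication"
    DATA = "data"
    STORAGE = "storage"
    ENERGY = "energy"

class Objective(Enum):
    ALLOCATION = "allocation"   # assumed representative objective
    ESTIMATION = "estimation"
    DISCOVERY = "discovery"
    SHARING = "sharing"

class Location(Enum):
    END_DEVICE = "end device"
    EDGE_DEVICE = "edge device"
    CLOUD = "cloud"

@dataclass
class SurveyedWork:
    """One surveyed article, tagged along the four perspectives."""
    title: str
    resource_type: ResourceType
    objective: Objective
    location: Location
    resource_use: str  # free-text fourth perspective, e.g. "task offloading"

def count_by_resource_type(works, resource_types):
    """Tally surveyed works per resource type to surface research gaps."""
    counts = {rt: 0 for rt in resource_types}
    for w in works:
        if w.resource_type in counts:
            counts[w.resource_type] += 1
    return counts
```

Classifying each reviewed article into such a structure is what lets gap analysis reduce to simple counting, as in the under-representation of energy and storage noted above.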
The Quantum Curriculum Transformation Framework for the development of Quantum Information Science and Technology Education
The field of Quantum Information Science and Technology (QIST) is booming.
As a result, many new educational courses and programs are needed in order to
prepare a workforce for the developing industry. Owing to its specialist
nature, teaching in this field can suffer from being disconnected from
the substantial body of science education research that aims to identify the
best approaches to teaching in STEM fields. In order to connect these two
communities with a pragmatic and repeatable methodology, we have generated an
innovative approach, the Quantum Curriculum Transformation Framework (QCTF),
intended to provide a didactical perspective on the creation and transformation
of quantum technologies curricula. For this, we propose a decision tree
consisting of four steps: 1. choose a topic, 2. choose one or more targeted
skills, 3. choose a learning goal and 4. choose a teaching approach that
achieves this goal. We show how this can be done using an example curriculum
and more specifically quantum teleportation as a basic concept of quantum
communication within this curriculum. By approaching curriculum creation and
transformation in this way, educational goals and outcomes are more clearly
defined which is in the interest of the individual and the industry alike. The
framework is intended to structure the narrative of QIST teaching, and will
form a basis for further research in the didactics of QIST, as the need for
high-quality education in this field continues to grow.
Comment: 19+12 pages, 10 figures. S. Goorney and J. Bley contributed equally to this work.
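The four-step decision sequence above can be sketched as a tiny validation function. This is a hedged illustration, not the QCTF itself: the function name is ours, and the teleportation example values are assumptions based only on this abstract's mention of quantum teleportation as a curriculum topic.

```python
# The four QCTF decisions, in order, as stated in the abstract.
QCTF_STEPS = [
    "choose a topic",
    "choose one or more targeted skills",
    "choose a learning goal",
    "choose a teaching approach that achieves this goal",
]

def build_curriculum_unit(topic, skills, goal, approach):
    """Record one pass through the four QCTF decisions as a unit."""
    if not skills:
        raise ValueError("step 2 requires at least one targeted skill")
    return {"topic": topic, "skills": list(skills),
            "goal": goal, "approach": approach}

# Hypothetical unit for the abstract's quantum-teleportation example;
# the skills, goal, and approach below are illustrative assumptions.
unit = build_curriculum_unit(
    topic="quantum teleportation",
    skills=["entanglement reasoning", "protocol analysis"],
    goal="explain the teleportation protocol step by step",
    approach="interactive simulation exercise",
)
```

Making each decision an explicit, ordered field is one way the framework's claim holds up in practice: the learning goal and teaching approach cannot be left implicit once the structure demands them.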
A Light-speed Linear Program Solver for Personalized Recommendation with Diversity Constraints
We study a structured linear program (LP) that arises from the need to rank
candidates or items in personalized recommender systems. Since the candidate
set is only known in real time, the LP must also be formed and solved in
real time. Latency and user experience are major considerations, requiring the
LP to be solved within just a few milliseconds. Although typical instances of
the problem are not very large in size, this stringent time limit appears to be
beyond the capability of most existing (commercial) LP solvers, which can take
milliseconds or more to find a solution. Thus, reliable methods that
address the real-world complication of latency become necessary. In this paper,
we propose a fast specialized LP solver for a structured problem with diversity
constraints. Our method solves the dual problem, making use of the
piecewise-affine structure of the dual objective function, with an additional screening
technique that helps reduce the dimensionality of the problem as the algorithm
progresses. Experiments reveal that our method can solve the problem within
roughly 1 millisecond, yielding a 20x improvement in speed over efficient
off-the-shelf LP solvers. This speed-up can help improve the quality of
recommendations without affecting user experience, highlighting how
optimization can provide solid orthogonal value to machine-learned recommender
systems.
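The dual idea above can be shown on a drastically simplified relative of the problem. With a single cardinality ("pick at most k items") constraint standing in for diversity constraints, the Lagrangian dual is a one-dimensional convex piecewise-affine function whose minimizer sits at a breakpoint, so it is found by sorting rather than iteration. This toy is an assumption-laden stand-in for the paper's solver (the single-constraint setting and all names are ours), not its actual algorithm:

```python
def solve_topk_dual(scores, k):
    """Toy dual solver for: max c'x  s.t.  sum(x) <= k, 0 <= x_i <= 1.

    The Lagrangian dual g(lam) = k*lam + sum(max(c_i - lam, 0)) is convex
    and piecewise affine with breakpoints at the scores, so its minimizer
    is simply the k-th largest nonnegative score -- no iterative solver
    is needed. Returns the optimal multiplier and the selected indices.
    """
    breakpoints = sorted(list(scores) + [0.0], reverse=True)
    lam = max(breakpoints[k], 0.0) if k < len(breakpoints) else 0.0
    # Primal recovery: the items whose score exceeds lam are selected.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    chosen = [i for i in order[:k] if scores[i] > 0]
    return lam, chosen
```

In the full multi-constraint problem the dual is no longer one-dimensional, which is where the paper's screening technique earns its keep by pruning variables as the search narrows; in this one-constraint toy the breakpoint formula makes that pruning implicit.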
Noise-Adaptive Compiler Mappings for Noisy Intermediate-Scale Quantum Computers
A massive gap exists between current quantum computing (QC) prototypes, and
the size and scale required for many proposed QC algorithms. Current QC
implementations are prone to noise and variability, which affect their
reliability, and yet, with fewer than 80 quantum bits (qubits) in total, they
are too resource-constrained to implement error correction. The term Noisy
Intermediate-Scale Quantum (NISQ) refers to these current and near-term systems
of 1000 qubits or fewer. Given NISQ's severe resource constraints, low
reliability, and high variability in physical characteristics such as coherence
time or error rates, it is of pressing importance to map computations onto them
in ways that use resources efficiently and maximize the likelihood of
successful runs.
This paper proposes and evaluates backend compiler approaches to map and
optimize high-level QC programs to execute with high reliability on NISQ
systems with diverse hardware characteristics. Our techniques all start from an
LLVM intermediate representation of the quantum program (such as would be
generated from high-level QC languages like Scaffold) and generate QC
executables runnable on the IBM Q public QC machine. We then use this framework
to implement and evaluate several optimal and heuristic mapping methods. These
methods vary in how they account for the availability of dynamic machine
calibration data, the relative importance of various noise parameters, the
different possible routing strategies, and the relative importance of
compile-time scalability versus runtime success. Using real-system
measurements, we show that fine-grained spatial and temporal variations in
hardware parameters can be exploited to obtain an average x (and up to
x) improvement in program success rate over the industry-standard IBM
Qiskit compiler.
Comment: To appear in ASPLOS'19.