The role of asymptotic functions in network optimization and feasibility studies
Solutions to network optimization problems have greatly benefited from
developments in nonlinear analysis, and, in particular, from developments in
convex optimization. A key concept that has made convex and nonconvex analysis
an important tool in science and engineering is the notion of asymptotic
function, which is often hidden in many influential studies on nonlinear
analysis and related fields. Therefore, we can also expect that asymptotic
functions are deeply connected to many results in the wireless domain, even
though they are rarely mentioned in the wireless literature. In this study, we
show connections of this type. By doing so, we explain many properties of
centralized and distributed solutions to wireless resource allocation problems
within a unified framework, and we also generalize and unify existing
approaches to feasibility analysis of network designs. In particular, we show
necessary and sufficient conditions for mappings widely used in wireless
communication problems (more precisely, the class of standard interference
mappings) to have a fixed point. Furthermore, we derive fundamental bounds on
the utility and the energy efficiency that can be achieved by solving a large
family of max-min utility optimization problems in wireless networks.
Comment: GlobalSIP 2017 (to appear)
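The fixed-point perspective on standard interference mappings described above can be made concrete with a small sketch. The mapping below is a hypothetical instance of a standard interference mapping in Yates' sense (positive, monotone, scalable); all channel gains and SINR targets are invented for illustration:

```python
import numpy as np

# Hypothetical standard interference mapping; gains and targets are invented.
G = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])            # G[k, j]: gain from transmitter j to receiver k
sinr_target = np.array([1.0, 1.5, 0.8])
noise = 0.1

def interference_map(p):
    # I_k(p): power link k needs to meet its SINR target given the others' powers
    cross = G @ p - np.diag(G) * p         # interference from the other links
    return sinr_target * (cross + noise) / np.diag(G)

# Fixed-point iteration p <- I(p): if a fixed point exists, it is unique and
# the iteration converges to it from any nonnegative starting point.
p = np.zeros(3)
for _ in range(200):
    p = interference_map(p)
```

When no fixed point exists (infeasible SINR targets), the same iteration diverges, which is exactly the feasibility question the abstract connects to asymptotic functions.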
Noise-Adaptive Compiler Mappings for Noisy Intermediate-Scale Quantum Computers
A massive gap exists between current quantum computing (QC) prototypes and
the size and scale required for many proposed QC algorithms. Current QC
implementations are prone to noise and variability, which affect their
reliability, and yet with fewer than 80 quantum bits (qubits) in total, they are
too resource-constrained to implement error correction. The term Noisy
Intermediate-Scale Quantum (NISQ) refers to these current and near-term systems
of 1000 qubits or fewer. Given NISQ's severe resource constraints, low
reliability, and high variability in physical characteristics such as coherence
time or error rates, it is of pressing importance to map computations onto them
in ways that use resources efficiently and maximize the likelihood of
successful runs.
This paper proposes and evaluates backend compiler approaches to map and
optimize high-level QC programs to execute with high reliability on NISQ
systems with diverse hardware characteristics. Our techniques all start from an
LLVM intermediate representation of the quantum program (such as would be
generated from high-level QC languages like Scaffold) and generate QC
executables runnable on the IBM Q public QC machine. We then use this framework
to implement and evaluate several optimal and heuristic mapping methods. These
methods vary in how they account for the availability of dynamic machine
calibration data, the relative importance of various noise parameters, the
different possible routing strategies, and the relative importance of
compile-time scalability versus runtime success. Using real-system
measurements, we show that fine-grained spatial and temporal variations in
hardware parameters can be exploited to obtain a substantial average
improvement (and even larger best-case gains) in program success rate over
the industry-standard IBM Qiskit compiler.
Comment: To appear in ASPLOS'1
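As a toy illustration of the noise-adaptive mapping idea (a hypothetical stand-in, not the paper's actual optimal or heuristic mappers), one can score every logical-to-physical qubit assignment by the product of two-qubit gate success probabilities taken from calibration data and keep the best; the error rates and gate list below are invented:

```python
import itertools

# Invented calibration data: hardware edge -> two-qubit gate error rate
hw_edges = {(0, 1): 0.02, (1, 2): 0.05, (0, 2): 0.10}
program_gates = [(0, 1), (1, 2), (0, 1)]   # CNOTs on logical qubits

def success_prob(mapping):
    # mapping[logical qubit] = physical qubit
    prob = 1.0
    for a, b in program_gates:
        edge = tuple(sorted((mapping[a], mapping[b])))
        if edge not in hw_edges:           # would need SWAP routing; rule it out here
            return 0.0
        prob *= 1.0 - hw_edges[edge]
    return prob

# Exhaustive search over assignments; real mappers use solvers or heuristics
best = max(itertools.permutations(range(3)), key=success_prob)
```

The winning assignment places the frequently used logical pair on the lowest-error hardware edge, which is the essence of exploiting spatial variation in noise.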
Full-Stack, Real-System Quantum Computer Studies: Architectural Comparisons and Design Insights
In recent years, Quantum Computing (QC) has progressed to the point where
small working prototypes are available for use. Termed Noisy Intermediate-Scale
Quantum (NISQ) computers, these prototypes are too small for large benchmarks
or even for Quantum Error Correction, but they do have sufficient resources to
run small benchmarks, particularly if compiled with optimizations to make use
of scarce qubits and limited operation counts and coherence times. QC has not
yet, however, settled on a preferred device implementation
technology; indeed, different NISQ prototypes implement qubits with very
different physical approaches and therefore have widely varying device and
machine characteristics.
Our work performs a full-stack, benchmark-driven hardware-software analysis
of QC systems. We evaluate QC architectural possibilities, software-visible
gates, and software optimizations to tackle fundamental design questions about
gate set choices, communication topology, the factors affecting benchmark
performance, and compiler optimizations. In order to answer key cross-technology
and cross-platform design questions, our work has built the first top-to-bottom
toolflow to target different qubit device technologies, including
superconducting and trapped ion qubits which are the current QC front-runners.
We use our toolflow, TriQ, to conduct real-system measurements on seven
running QC prototypes from three different groups: IBM, Rigetti, and the
University of Maryland. From these real-system experiences at QC's hardware-software
interface, we make observations about native and software-visible gates for
different QC technologies, communication topologies, and the value of
noise-aware compilation even on lower-noise platforms. This is the largest
cross-platform real-system QC study performed thus far; its results have the
potential to inform both QC device and compiler design going forward.
Comment: Preprint of a publication in ISCA 201
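One reason communication topology matters across these platforms is that a two-qubit gate between physical qubits at shortest-path distance d requires d - 1 SWAPs, each adding error-prone operations. A minimal sketch with two invented five-qubit devices (a sparse chain versus an all-to-all graph, loosely mimicking superconducting versus trapped-ion connectivity):

```python
from collections import deque

# Invented topologies, not TriQ itself: count the SWAPs each one forces.

def dist(adj, s, t):
    # Breadth-first search for shortest-path length from s to t
    seen, queue = {s}, deque([(s, 0)])
    while queue:
        v, d = queue.popleft()
        if v == t:
            return d
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append((w, d + 1))
    raise ValueError("disconnected")

line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}        # sparse chain
full = {i: [j for j in range(5) if j != i] for i in range(5)}   # all-to-all

gates = [(0, 4), (1, 3), (2, 4)]           # two-qubit gates to execute

def swaps(adj):
    return sum(dist(adj, a, b) - 1 for a, b in gates)

# The denser topology needs fewer SWAPs, hence fewer noisy operations.
assert swaps(line) > swaps(full)
```

This is why an all-to-all machine can tolerate somewhat noisier gates and still win on program success rate, one of the cross-technology trade-offs the study measures.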
The Theory of Quasiconformal Mappings in Higher Dimensions, I
We present a survey of the many and various elements of the modern
higher-dimensional theory of quasiconformal mappings and their wide and varied
application. It is unified (and limited) by the theme of the author's
interests. Thus we will discuss the basic theory as it developed in the 1960s
in the early work of F.W. Gehring and Yu. G. Reshetnyak, and subsequently explore
the connections with geometric function theory, nonlinear partial differential
equations, differential and geometric topology and dynamics as they ensued over
the following decades. We give few proofs, as we try to outline the major
results of the area and current research themes. We do not strive to present
these results in maximal generality, since doing so would require considerable
technical knowledge of the reader. We have tried to give a feel for
where the area stands, what the central ideas and problems are, and where the
major current interactions with researchers in other areas lie. We have also added
a bit of history here and there. We have not been able to cover the many recent
advances generalising the theory to mappings of finite distortion and to
degenerate elliptic Beltrami systems, which connect the theory closely with the
calculus of variations and nonlinear elasticity, nonlinear Hodge theory, and
related areas, although the reader may see shadows of these aspects in parts.
Variational Approach in Wavelet Framework to Polynomial Approximations of Nonlinear Accelerator Problems
In this paper we present applications of methods from wavelet analysis to
polynomial approximations for a number of accelerator physics problems.
Following a variational approach, in the general case we obtain the solution as a
multiresolution (multiscale) expansion in a basis of compactly supported
wavelets. We extend our results to the cases of periodic
orbital particle motion and arbitrary variable coefficients. We then consider a
more flexible variational method based on a biorthogonal wavelet
approach, as well as a different variational approach applied at
each scale.
Comment: LaTeX2e, aipproc.sty, 21 Pages
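The multiresolution expansion referred to above can be sketched with the simplest compactly supported wavelet, the Haar basis (an illustrative stand-in, not the authors' construction): a signal is split into scale-by-scale averages and details and then rebuilt exactly from the coefficients.

```python
import numpy as np

# Haar multiresolution analysis: averages (approximation) and
# differences (detail) at each scale, with exact reconstruction.

def haar_decompose(x, levels):
    coeffs, a = [], x.astype(float)
    for _ in range(levels):
        det = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail at this scale
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)     # coarser approximation
        coeffs.append(det)
    coeffs.append(a)
    return coeffs[::-1]   # [coarsest approx, coarsest detail, ..., finest detail]

def haar_reconstruct(coeffs):
    a = coeffs[0]
    for det in coeffs[1:]:
        up = np.empty(2 * a.size)
        up[0::2] = (a + det) / np.sqrt(2.0)
        up[1::2] = (a - det) / np.sqrt(2.0)
        a = up
    return a

x = np.sin(2 * np.pi * np.arange(64) / 64.0)
c = haar_decompose(x, levels=3)        # multiscale expansion of x
x_rec = haar_reconstruct(c)            # perfect reconstruction
```

In the variational setting, one would solve for the coefficients of such an expansion scale by scale rather than compute them from a known signal.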
A robust machine learning method for cell-load approximation in wireless networks
We propose a learning algorithm for cell-load approximation in wireless
networks. The proposed algorithm is robust in the sense that it is designed to
cope with the uncertainty arising from a small number of training samples. This
scenario is highly relevant in wireless networks where training has to be
performed on short time scales because of a fast time-varying communication
environment. The first part of this work studies the set of feasible rates and
shows that this set is compact. We then prove that the mapping relating a
feasible rate vector to the unique fixed point of the non-linear cell-load
mapping is monotone and uniformly continuous. Utilizing these properties, we
apply an approximation framework that achieves the best worst-case performance.
Furthermore, the approximation preserves the monotonicity and continuity
properties. Simulations show that the proposed method exhibits better
robustness and accuracy for small training sets in comparison with standard
approximation techniques for multivariate data.
Comment: Shorter version accepted at ICASSP 201
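A toy instance of the nonlinear cell-load coupling studied above (all gains, bandwidths, and rate demands invented) illustrates the monotone fixed-point structure: each cell's load depends on the other cells' loads through interference, and iterating the mapping from zero converges to its fixed point.

```python
import numpy as np

# Invented two-cell load-coupling example, not the paper's model parameters.
rates = [np.array([0.5, 0.8]), np.array([0.6])]  # per-user rate demands, cells 0 and 1
B = 10.0                                          # bandwidth per cell
G = np.array([[1.0, 0.1],
              [0.2, 1.0]])                        # G[i, j]: gain from cell j at cell i's users
noise = 0.05

def load_map(rho):
    # Load = demanded resources / achievable resources; other cells' loads
    # scale the interference this cell's users see.
    new = np.zeros(2)
    for i in range(2):
        interf = sum(rho[j] * G[i, j] for j in range(2) if j != i) + noise
        se = np.log2(1.0 + G[i, i] / interf)      # spectral efficiency
        new[i] = np.sum(rates[i]) / (B * se)
    return np.minimum(new, 1.0)                   # loads cannot exceed 1

# Monotone fixed-point iteration from the zero load vector
rho = np.zeros(2)
for _ in range(100):
    rho = load_map(rho)
```

The learning task in the abstract is to approximate the map from a rate vector to this fixed point, which is why its monotonicity and uniform continuity matter.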