Stochastic Streams: Sample Complexity vs. Space Complexity
We address the trade-off between the computational resources needed to process a large data set and the number of samples available from the data set. Specifically, we consider the following abstraction: we receive a potentially infinite stream of IID samples from some unknown distribution D, and are tasked with computing some function f(D). If the stream is observed for time t, how much memory, s, is required to estimate f(D)? We refer to t as the sample complexity and s as the space complexity. The main focus of this paper is investigating the trade-offs between the space and sample complexity. We study these trade-offs for several canonical problems studied in the data stream model: estimating the collision probability, i.e., the second moment of a distribution; deciding if a graph is connected; and approximating the dimension of an unknown subspace. Our results are based on techniques for simulating different classical sampling procedures in this model, emulating random walks given a sequence of IID samples, and leveraging a connection between communication-bounded protocols and statistical query algorithms.
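As a concrete illustration of the sample-vs-space framing, the collision probability F2(D) = sum_i p_i^2 equals the probability that two independent samples from D collide, so it can be estimated from t = 2m samples with O(1) working memory by averaging over disjoint pairs. The sketch below shows that folklore pairing estimator; it is not the paper's algorithm, and all names are ours.

```python
import random

def estimate_collision_probability(sample, num_pairs):
    """Estimate F2(D) = sum_i p_i^2 by drawing disjoint pairs of IID
    samples and recording how often the two samples collide.
    Consumes 2 * num_pairs samples but only O(1) working memory."""
    collisions = 0
    for _ in range(num_pairs):
        if sample() == sample():
            collisions += 1
    return collisions / num_pairs

# Example: a coin with P(0) = 0.75, P(1) = 0.25 has
# F2 = 0.75^2 + 0.25^2 = 0.625.
random.seed(0)
est = estimate_collision_probability(lambda: random.choice([0, 0, 0, 1]),
                                     num_pairs=100_000)
```

Trading more samples (larger `num_pairs`) for the same constant space tightens the estimate, which is the simplest point on the sample/space trade-off curve the paper studies.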
Pareto Optimal Strategies for Event Triggered Estimation
Although resource-limited networked autonomous systems must be able to
efficiently and effectively accomplish tasks, better conservation of resources
often results in worse task performance. We specifically address the problem of
finding strategies for managing measurement communication costs between agents.
A well understood technique for trading off communication costs with estimation
accuracy is event triggering (ET), where measurements are only communicated
when useful, e.g., when Kalman filter innovations exceed some threshold. In the
absence of measurements, agents can use implicit information to perform almost
as well as when explicit data is always communicated. However, there are
no methods for setting this threshold with formal guarantees on task
performance. We fill this gap by developing a novel belief space discretization
technique to abstract a continuous space dynamics model for ET estimation to a
discrete Markov decision process, which scalably accommodates
threshold-sensitive ET estimator error covariances. We then apply an existing
probabilistic trade-off analysis tool to find the set of all optimal trade-offs
between resource consumption and task performance. From this set, an ET
threshold selection strategy is extracted. Simulated results show our approach
identifies non-trivial trade-offs between performance and energy savings, with
only modest computational effort.
Comment: 8 pages, accepted to IEEE Conference on Decision and Control 202
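The innovation-thresholding idea behind ET can be sketched in a few lines. This toy sender transmits a scalar measurement only when it deviates from the receiver's prediction by more than a threshold; otherwise the receiver coasts on its prediction (the implicit information). It is an illustrative simplification, with a hold-last-value predictor standing in for a full Kalman filter, not the paper's belief-space MDP method; all names are hypothetical.

```python
def event_triggered_send(measurements, threshold):
    """Transmit a measurement only when the innovation (measurement
    minus the receiver's current prediction) exceeds the threshold.
    Here the predictor simply holds the last communicated value."""
    sent = []
    estimate = 0.0                     # receiver's running prediction
    for z in measurements:
        innovation = z - estimate
        if abs(innovation) > threshold:
            sent.append(z)             # explicit communication
            estimate = z
        # else: no message sent; receiver keeps its prediction
        #       (implicit information)
    return sent, estimate

# A larger threshold saves communication at the cost of estimate accuracy.
sent, final = event_triggered_send([1.0, 1.05, 3.0, 3.02], threshold=0.5)
```

With a threshold of 0.5, only the two large jumps are communicated; the small fluctuations ride on the implicit prediction, which is exactly the communication/accuracy trade-off the threshold selection strategy navigates.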
Information trade-offs for optical quantum communication
Recent work has precisely characterized the achievable trade-offs between
three key information processing tasks---classical communication (generation or
consumption), quantum communication (generation or consumption), and shared
entanglement (distribution or consumption), measured in bits, qubits, and ebits
per channel use, respectively. Slices and corner points of this
three-dimensional region reduce to well-known protocols for quantum channels. A
trade-off coding technique can attain any point in the region and can
outperform time-sharing between the best-known protocols for accomplishing each
information processing task by itself. Previously, the known benefits of
trade-off coding were too small to be of practical value (viz., for the
dephasing and universal cloning machine channels). In this letter, we
demonstrate that the associated performance gains are in fact remarkably high
for several physically relevant bosonic channels that model free-space /
fiber-optic links, thermal-noise channels, and amplifiers. We show that
significant performance gains from trade-off coding also apply when trading
photon-number resources between transmitting public and private classical
information simultaneously over secret-key-assisted bosonic channels.
Comment: 6 pages, 2 figures, see related, longer article at arXiv:1105.011
Man-Machine Control of Space Robots under Uncertainty
The control problem for space robots is characterized by several challenges. The first is that the environment is full of uncertainties due to a lack of information. Another difficulty is task sharing between an operator and a partly autonomous robot. Moreover, there are several constraints on robot operations, including communication delay and the temperature range within which the robot can operate.
The design of the robot's navigation should be based on trade-offs between several conflicting criteria, such as maximizing the robot's safety, minimizing energy consumption, and maximizing the value of the information the robot collects during its movement.
Our research objective is to design a man-machine interactive system that addresses the navigation problem of space robots. This paper focuses on path planning for a small robot exploring a small asteroid. The problem is solved by an operator controlling the robot from Earth.
Synchronous multi-GPU training for deep learning with low-precision communications: An empirical study
Training deep learning models has received tremendous research interest recently. In particular, there has been intensive research on reducing the communication cost of training when using multiple computational devices, by reducing the precision of the underlying data representation. Naturally, such methods induce system trade-offs: lowering communication precision can decrease communication overheads and improve scalability, but it can also reduce the accuracy of training. In this paper, we study this trade-off space and ask: Can low-precision communication consistently improve the end-to-end performance of training modern neural networks, with no accuracy loss? From the performance point of view, the answer to this question may appear deceptively easy: compressing communication through low precision should help when the ratio between communication and computation is high. However, this answer is less straightforward when we try to generalize this principle across various neural network architectures (e.g., AlexNet vs. ResNet), numbers of GPUs (e.g., 2 vs. 8 GPUs), machine configurations (e.g., EC2 instances vs. NVIDIA DGX-1), communication primitives (e.g., MPI vs. NCCL), and even different GPU architectures (e.g., Kepler vs. Pascal). Currently, it is not clear how a realistic combination of all these factors maps to the speedup provided by low-precision communication. In this paper, we conduct an empirical study to answer this question and report the insights.
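The mechanism being traded off, fewer bits per communicated value in exchange for quantization noise, can be illustrated with stochastic uniform quantization, which is unbiased in expectation. This is a toy stand-in for a low-precision all-reduce, not any framework's actual API; the function names and the level count are our assumptions.

```python
import random

def quantize(grad, levels=16):
    """Stochastically round each value to one of `levels` uniform levels
    spanning [min(grad), max(grad)]; each value then needs only
    log2(levels) bits (here 4) plus the two shared endpoints."""
    lo, hi = min(grad), max(grad)
    if hi == lo:
        return list(grad)              # constant vector: nothing to quantize
    step = (hi - lo) / (levels - 1)
    out = []
    for g in grad:
        pos = (g - lo) / step
        base = int(pos)
        # round up with probability equal to the fractional part,
        # so the quantized value is unbiased in expectation
        q = base + (1 if random.random() < pos - base else 0)
        out.append(lo + min(q, levels - 1) * step)
    return out

random.seed(0)
grad = [0.013 * i - 0.1 for i in range(32)]   # stand-in gradient vector
qgrad = quantize(grad)
```

Each quantized value lies within one quantization step of the original, so the per-step error is bounded while the bits sent per value drop from 32 (or 64) to 4; whether that wins end-to-end depends on the communication/computation ratio, which is exactly the question the study examines.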
Exploring Design Space For An Integrated Intelligent System
Understanding the trade-offs available in the design space of intelligent systems is a major unaddressed element in the study of Artificial Intelligence. In this paper we approach this problem in two ways. First, we discuss the development of our integrated robotic system in terms of its trajectory through design space. Second, we demonstrate the practical implications of architectural design decisions by using this system as an experimental platform for comparing behaviourally similar yet architecturally different systems. The results show that our system occupies a "sweet spot" in design space in terms of the cost of moving information between processing components.
Trade-Offs in Distributed Interactive Proofs
The study of interactive proofs in the context of distributed network computing is a novel topic, recently introduced by Kol, Oshman, and Saxena [PODC 2018]. In the spirit of the theory of sequential interactive proofs, we study the power of distributed interactive proofs. This is achieved via a series of results establishing trade-offs between various parameters impacting the power of interactive proofs, including the number of interactions, the certificate size, the communication complexity, and the form of randomness used. Our results also connect distributed interactive proofs with the established field of distributed verification. In general, our results contribute to providing structure to the landscape of distributed interactive proofs.
Multi-Head Finite Automata: Characterizations, Concepts and Open Problems
Multi-head finite automata were introduced in (Rabin, 1964) and (Rosenberg,
1966). Since then, a vast literature on computational and descriptional
complexity issues for multi-head finite automata has developed, documenting the
importance of these devices. Although multi-head finite automata are a simple
concept, their computational behavior can already be very complex and leads to
undecidable or even non-semi-decidable problems on these devices, such as
emptiness, finiteness, universality, and equivalence. These
strong negative results trigger the study of subclasses and alternative
characterizations of multi-head finite automata for a better understanding of
the nature of non-recursive trade-offs and, thus, the borderline between
decidable and undecidable problems. In the present paper, we tour a fragment of
this literature.
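A standard way to see why even one-way multi-head automata are powerful: two heads suffice to recognize the non-regular (in fact non-context-free) language L = { w#w : w in {a,b}* }. The sketch below simulates such a two-head one-way DFA; it is a textbook illustration, not taken from the surveyed papers.

```python
def two_head_accepts(word):
    """Simulate a one-way two-head DFA for L = { w#w }: head 2 first
    scans right to the symbol after '#', then both heads advance in
    lockstep, comparing the left half against the right half."""
    h1, h2 = 0, 0
    # phase 1: head 2 scans right to find the separator '#'
    while h2 < len(word) and word[h2] != '#':
        h2 += 1
    if h2 == len(word):
        return False                   # no separator: reject
    h2 += 1                            # head 2 on first symbol of right half
    # phase 2: lockstep comparison, one symbol per step per head
    while h1 < len(word) and word[h1] != '#' and h2 < len(word):
        if word[h1] != word[h2]:
            return False               # mismatch: reject
        h1 += 1
        h2 += 1
    # accept iff head 1 stopped exactly at '#' and head 2 at the end
    return word[h1] == '#' and h2 == len(word)
```

Both heads move only rightward and the control uses constant memory, yet the pairing of heads performs a comparison no single-head finite automaton can; the surveyed trade-off results quantify how much such extra heads buy.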
- …