Cost allocation and communication
Game Theory; Cost Accounting; Business Economics
On cost-effective communication network designing
How to efficiently design a communication network is a central task in
network design and engineering. It is, however, not a single-objective
optimization process, as most previous research has assumed, i.e., maximizing
transmission capacity alone, but a multi-objective optimization process,
with lowering cost as another important objective. These two objectives
are often contradictory, in that optimizing one may deteriorate the
other. After a thorough investigation of the impact that network topology,
node capability scheme, and routing algorithm, as well as their interplay,
have on the two objectives, this letter presents a systematic approach to
achieving a cost-effective design by carefully choosing these three design
aspects. Only when the routing algorithm and node capability scheme are
carefully chosen can BA-like scale-free networks achieve a good tradeoff
between the two objectives. Random networks, on the other hand, are
inherently suited to a cost-effective design, especially when the other
aspects cannot be determined beforehand.
Comment: 6 pages, 4 figures
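The topology side of this tradeoff can be probed with a toy experiment: compare the maximum node degree (a rough proxy for load concentration, and hence for the node capability that must be provisioned) of a BA-like scale-free network against a degree-matched random network. This is a minimal sketch under assumed proxies, not the letter's actual methodology; all function names are hypothetical.

```python
import random
from collections import defaultdict

def ba_graph(n, m, seed=0):
    """Toy Barabasi-Albert preferential attachment: each new node tries to
    attach m edges to nodes chosen proportionally to degree.
    (Duplicates in the target sample may yield slightly fewer edges.)"""
    rng = random.Random(seed)
    targets = list(range(m))     # seed nodes
    repeated = []                # node ids repeated once per unit of degree
    edges = set()
    for v in range(m, n):
        for t in set(targets):
            edges.add((min(v, t), max(v, t)))
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

def er_graph(n, p, seed=0):
    """Erdos-Renyi G(n, p) random graph."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def max_degree(edges):
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return max(deg.values())

n, m = 500, 3
ba_hub = max_degree(ba_graph(n, m))
er_hub = max_degree(er_graph(n, 2 * m / (n - 1)))   # matched mean degree
# Scale-free hubs concentrate load, driving up the capability (cost)
# required at the busiest nodes relative to a random network.
```

Under this proxy, the random network's flatter degree distribution is what gives it its "built-in" suitability for cost-effective design.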
Classical communication cost of quantum steering
Quantum steering is observed when performing appropriate local measurements
on an entangled state. Here we discuss the possibility of simulating this
effect classically, using classical communication instead of entanglement.
We show that infinite communication is necessary for exactly simulating
steering for any pure entangled state, as well as for a class of mixed
entangled states. Moreover, we discuss the communication cost of steering for
general entangled states, as well as approximate simulation. Our findings
reveal striking differences between Bell nonlocality and steering, and provide
a natural way of measuring the strength of the latter.
Comment: 7 pages, 1 figure. See also arXiv:1603.xxxxx for related work by S. Nagy and T. Vértes
Quantum Channel Capacities Per Unit Cost
Communication over a noisy channel is often conducted in a setting in which
different input symbols to the channel incur a certain cost. For example, for
bosonic quantum channels, the cost associated with an input state is the number
of photons, which is proportional to the energy consumed. In such a setting, it
is often useful to know the maximum amount of information that can be reliably
transmitted per cost incurred. This is known as the capacity per unit cost. In
this paper, we generalize the capacity per unit cost to various communication
tasks involving a quantum channel such as classical communication,
entanglement-assisted classical communication, private communication, and
quantum communication. For each task, we define the corresponding capacity per
unit cost and derive a formula for it analogous to that of the usual capacity.
Furthermore, for the special and natural case in which there is a zero-cost
state, we obtain expressions in terms of an optimized relative entropy
involving the zero-cost state. For each communication task, we construct an
explicit pulse-position-modulation coding scheme that achieves the capacity per
unit cost. Finally, we compute capacities per unit cost for various bosonic
Gaussian channels and introduce the notion of a blocklength constraint as a
proposed solution to the long-standing issue of infinite capacities per unit
cost. This motivates the idea of a blocklength-cost duality, on which we
elaborate in depth.
Comment: v3: 18 pages, 2 figures
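For orientation, the classical (Shannon-theoretic) counterpart of this quantity, due to Verdú, takes the following form when a zero-cost symbol exists; the paper's quantum formulas are analogues of this with quantum relative entropies. This is a sketch of the classical case for context, not the paper's exact statement.

```latex
% Capacity per unit cost of a classical channel p(y|x) with cost b(x):
%   C_unit = sup_{beta > 0} C(beta)/beta,
% where C(beta) is the capacity under expected cost at most beta.
% When a zero-cost symbol x_0 exists (b(x_0) = 0), this simplifies to
\[
  C_{\mathrm{unit}}
    \;=\; \sup_{x :\, b(x) > 0}
      \frac{D\!\left( p(\cdot \mid x) \,\middle\|\, p(\cdot \mid x_0) \right)}{b(x)},
\]
% i.e., an optimized relative entropy (divergence) involving the zero-cost
% symbol -- mirroring the optimized relative entropy involving the
% zero-cost state in the quantum setting described above.
```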
Distributed Storage in Mobile Wireless Networks with Device-to-Device Communication
We consider the use of distributed storage (DS) to reduce the communication
cost of content delivery in wireless networks. Content is stored (cached) in a
number of mobile devices using an erasure correcting code. Users retrieve
content from other devices using device-to-device communication or from the
base station (BS), at the expense of higher communication cost. We address the
repair problem when a device storing data leaves the cell. We introduce a
repair scheduling where repair is performed periodically and derive analytical
expressions for the overall communication cost of content download and data
repair as a function of the repair interval. The derived expressions are then
used to evaluate the communication cost entailed by DS using several erasure
correcting codes. Our results show that DS can reduce the communication cost
with respect to the case where content is downloaded only from the BS, provided
that repairs are performed frequently enough. If devices storing content arrive
to the cell, the communication cost using DS is further reduced and, for large
enough arrival rate, it is always beneficial. Interestingly, we show that MDS
codes, which do not perform well for classical DS, can yield a low overall
communication cost in wireless DS.
Comment: After final editing for publication in TCO
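The periodic-repair tradeoff described above can be illustrated with a toy cost model: frequent repairs keep cheap device-to-device delivery available but pay repair traffic more often, while infrequent repairs push downloads back to the expensive base station. The functional form and all parameter names below are assumptions for illustration, not the paper's derived expressions.

```python
import math

def avg_cost(delta, mu=0.1, n=10, k=5, req_rate=2.0,
             c_d2d=1.0, c_bs=10.0, c_rep=3.0):
    """Average communication cost per unit time under periodic repair
    (toy model; form and parameters assumed, not the paper's).

    n coded fragments, any k suffice (MDS); each storing device leaves
    at rate mu, and every `delta` time units all fragments are restored.
    Requests arrive at rate req_rate and are served via device-to-device
    (cost c_d2d per fragment) while >= k fragments survive in
    expectation, otherwise from the base station (cost c_bs)."""
    t_k = math.log(n / k) / mu               # window in which d2d is feasible
    d2d_frac = min(t_k, delta) / delta       # fraction of a period using d2d
    download = req_rate * (d2d_frac * k * c_d2d + (1 - d2d_frac) * c_bs)
    repair = n * (1 - math.exp(-mu * delta)) * c_rep / delta
    return download + repair

# The cost stays low while repairs are frequent enough to keep d2d
# available, and grows once the repair interval exceeds that window.
```

Even in this crude model, a short repair interval beats a long one, matching the abstract's conclusion that DS pays off provided repairs are performed frequently enough.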
A 3D Parallel Algorithm for QR Decomposition
Interprocessor communication often dominates the runtime of large matrix
computations. We present a parallel algorithm for computing QR decompositions
whose bandwidth cost (communication volume) can be decreased at the cost of
increasing its latency cost (number of messages). By varying a parameter to
navigate the bandwidth/latency tradeoff, we can tune this algorithm for
machines with different communication costs.
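The bandwidth/latency tradeoff can be sketched with a toy cost model in the alpha-beta style used for communication-avoiding algorithms. The exponents and names below are assumptions for illustration, not the algorithm's actual cost expressions.

```python
def qr_comm_costs(c, P=1024, n=4096):
    """Illustrative tradeoff for a 3D-style parallel QR on P processors
    of an n x n matrix, steered by a replication parameter c.
    Exponents are assumed for illustration (in the spirit of
    communication-avoiding algorithms), not taken from the paper."""
    bandwidth = n * n / (c * P) ** 0.5   # words moved: shrinks as c grows
    latency = (c * P) ** 0.5             # message count: grows as c grows
    return bandwidth, latency

# Sweeping c exposes the tradeoff: more replication lowers the
# bandwidth cost at the price of a higher latency cost.
lo_bw, lo_lat = qr_comm_costs(1)
hi_bw, hi_lat = qr_comm_costs(16)
```

On a machine with a high per-message overhead one would tune c down; on a machine with scarce bandwidth, up. That is the sense in which a single parameter tunes the algorithm to different communication costs.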
The Computational Cost of Asynchronous Neural Communication
Biological neural computation is inherently asynchronous due to large variations in neuronal spike timing and transmission delays. So far, most theoretical work on neural networks assumes the synchronous setting, where neurons fire simultaneously in discrete rounds. In this work we aim at understanding the barriers of asynchronous neural computation from an algorithmic perspective. We consider an extension of the widely studied model of synchronized spiking neurons [Maass, Neural Networks 97] to the asynchronous setting by taking into account edge and node delays.
- Edge Delays: We define an asynchronous model for spiking neurons in which the latency values (i.e., transmission delays) of non-self-loop edges vary adversarially over time. This extends the recent work of [Hitron and Parter, ESA'19], in which the latency values are restricted to be fixed over time. Our first contribution is an impossibility result implying that the assumption that self-loop edges have no delays (as assumed by Hitron and Parter) is indeed necessary. Interestingly, in real biological networks self-loop edges (a.k.a. autapses) are indeed free of delays, and neuroscientists have noted that this is crucial for network synchronization.
To capture the computational challenges in this setting, we first consider the implementation of a single NOT gate. This simple function already captures the fundamental difficulties of the asynchronous setting. Our key technical results are space and time upper and lower bounds for the NOT function; our time bounds are tight. In the spirit of the distributed synchronizers [Awerbuch and Peleg, FOCS'90], and following [Hitron and Parter, ESA'19], we then provide a general synchronizer machinery. Our construction is highly modular and is based on efficient circuit implementations of threshold gates. The complexity of our scheme is measured by the overhead in the number of neurons and in the computation time, both of which are shown to be polynomial in the largest latency value and the largest incoming degree of the original network.
- Node Delays: We introduce the study of asynchronous communication due to variations in the response rates of the neurons in the network. In real brain networks, the round duration varies between different neurons. Our key result is a simulation methodology that allows one to transform the above-mentioned synchronized solution under edge delays into a synchronized solution under node delays while incurring a small overhead with respect to space and time.
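The difficulty that edge delays create can be seen already for a single NOT gate. Below is a toy simulation (hypothetical names and simplified timing, not the paper's construction) of an inhibitory threshold neuron computing NOT: with zero delays it is correct, but one adversarially delayed input spike makes it fire in a round where it should have stayed silent.

```python
def threshold_fire(weighted_inputs, threshold):
    """Deterministic threshold gate: fires iff the weighted sum of
    currently arriving spikes reaches the threshold."""
    return 1 if sum(weighted_inputs) >= threshold else 0

def simulate_not(input_spikes, delays):
    """NOT gate as an inhibitory threshold neuron: constant +1 bias,
    weight -1 from the input neuron, threshold 1 (toy model).
    delays[t] is the adversarial latency of the input edge for the
    spike emitted at round t; output at round t reads the spikes
    that arrive at round t."""
    T = len(input_spikes)
    arriving = [0] * (2 * T + 1)      # spikes arriving at the NOT neuron
    for t, s in enumerate(input_spikes):
        if s:
            arriving[t + delays[t]] = 1
    return [threshold_fire([1, -arriving[t]], 1) for t in range(T)]

# Zero delays: correct complement of the input spike train.
ok = simulate_not([1, 0, 1, 0], [0, 0, 0, 0])
# Delaying the first spike by one round: the gate wrongly fires in
# round 0, even though the input neuron fired in that round.
bad = simulate_not([1, 0, 1, 0], [1, 0, 0, 0])
```

This failure mode is exactly what makes even the NOT function nontrivial under time-varying edge latencies, and what the synchronizer machinery is built to mask.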
- …