Generalization and variations of Pellet's theorem for matrix polynomials
We derive a generalized matrix version of Pellet's theorem, itself based on a
generalized Rouch\'{e} theorem for matrix-valued functions, to generate upper,
lower, and internal bounds on the eigenvalues of matrix polynomials. Variations
of the theorem are suggested to try to overcome situations where Pellet's
theorem cannot be applied.
Comment: 20 pages
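As a scalar illustration (the paper's matrix version generalizes this), the classical Pellet theorem states: for p(z) = Σ a_j z^j with a_k ≠ 0, if |a_k| x^k = Σ_{j≠k} |a_j| x^j has two positive roots r1 < r2, then p has exactly k zeros with |z| ≤ r1 and none in the annulus r1 < |z| < r2. A minimal numerical check on a hypothetical example polynomial:

```python
import numpy as np

# Scalar Pellet illustration on a hypothetical palindromic polynomial
# p(z) = z^4 + z^3 + 100 z^2 + z + 1, whose middle coefficient dominates.
coeffs = [1, 1, 100, 1, 1]                    # numpy: highest degree first
roots = np.roots(coeffs)

# f(x) = 100 x^2 - (x^4 + x^3 + x + 1) changes sign on (0.05, 0.5) and on
# (9, 10), so it has two positive roots r1 < r2 with r1 < 0.5 < 9 < r2.
f = lambda x: 100 * x**2 - (x**4 + x**3 + x + 1)
assert f(0.05) < 0 < f(0.5) and f(9) > 0 > f(10)

# Pellet: exactly k = 2 zeros satisfy |z| <= r1, and none lie in r1 < |z| < r2.
inside = int(np.sum(np.abs(roots) < 0.5))
annulus = int(np.sum((np.abs(roots) >= 0.5) & (np.abs(roots) <= 9)))
print(f"zeros with |z| < 0.5: {inside}; zeros in the annulus: {annulus}")
```

Here the two small zeros (near the roots of 100z² + z + 1) and the two large reciprocal zeros land on opposite sides of the annulus, as the theorem predicts.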
Joint Service Placement and Request Routing in Multi-cell Mobile Edge Computing Networks
The proliferation of innovative mobile services such as augmented reality,
networked gaming, and autonomous driving has spurred a growing need for
low-latency access to computing resources that cannot be met solely by existing
centralized cloud systems. Mobile Edge Computing (MEC) is expected to be an
effective solution to meet the demand for low-latency services by enabling the
execution of computing tasks at the network-periphery, in proximity to
end-users. While a number of recent studies have addressed the problem of
determining the execution of service tasks and the routing of user requests to
corresponding edge servers, the focus has primarily been on the efficient
utilization of computing resources, neglecting the fact that non-trivial
amounts of data need to be stored to enable service execution, and that many
emerging services exhibit asymmetric bandwidth requirements. To fill this gap,
we study the joint optimization of service placement and request routing in
MEC-enabled multi-cell networks with multidimensional
(storage-computation-communication) constraints. We show that this problem
generalizes several problems in the literature and propose an algorithm that
achieves close-to-optimal performance using randomized rounding. Evaluation
results demonstrate that our approach can effectively utilize the available
resources to maximize the number of requests served by low-latency edge cloud
servers.
Comment: IEEE Infocom 201
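The abstract does not spell out the rounding step; as a generic sketch (not the authors' algorithm), randomized rounding solves an LP relaxation of the placement problem and then interprets each fractional placement variable as a probability. All variable names and numbers below are hypothetical:

```python
import numpy as np

# Generic randomized-rounding sketch (illustrative; not the paper's algorithm).
# x_frac[s, e] is a hypothetical fractional LP solution: the extent to which
# service s is placed on edge server e, subject to per-server storage budgets.
rng = np.random.default_rng(0)
x_frac = np.array([[0.8, 0.2],
                   [0.1, 0.9],
                   [0.5, 0.5]])         # 3 services, 2 edge servers
storage = np.array([2.0, 1.0, 1.5])    # storage footprint of each service
budget = np.array([3.0, 3.0])          # storage capacity of each server

# Round each variable independently with probability x_frac; in expectation
# every capacity constraint holds, and concentration bounds limit overflow.
x_int = (rng.random(x_frac.shape) < x_frac).astype(int)
used = x_int.T @ storage               # storage used per server
print(x_int)
print(used, "within budget:", bool(np.all(used <= budget)))
```

In practice such schemes repeat the rounding or repair overflowing servers; achieving a close-to-optimal guarantee under joint storage-computation-communication constraints is the technical content of the paper.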
Distributed Multi-Task Relationship Learning
Multi-task learning aims to learn multiple tasks jointly by exploiting their
relatedness to improve the generalization performance for each task.
Traditionally, to perform multi-task learning, one needs to centralize data
from all the tasks to a single machine. However, in many real-world
applications, data of different tasks may be geo-distributed over different
local machines. Due to the heavy communication cost of transmitting the data,
as well as data privacy and security concerns, it is impractical to send the
data of different tasks to a master machine for multi-task learning. Therefore,
in this paper, we propose a distributed multi-task learning framework that
alternately learns the predictive model for each task and the relationships
between tasks in the parameter server paradigm. In
our framework, we first offer a general dual form for a family of regularized
multi-task relationship learning methods. Subsequently, we propose a
communication-efficient primal-dual distributed optimization algorithm to solve
the dual problem by carefully designing local subproblems to make the dual
problem decomposable. Moreover, we provide a theoretical convergence analysis
for the proposed algorithm, which is specific for distributed multi-task
relationship learning. We conduct extensive experiments on both synthetic and
real-world datasets to evaluate our proposed framework in terms of
effectiveness and convergence.
Comment: To appear in KDD 201
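A centralized baseline makes the alternation concrete. In Zhang and Yeung-style regularized multi-task relationship learning (a classical instance of the family the paper dualizes, not the paper's distributed algorithm), one alternates a gradient step on the stacked task-weight matrix W with the closed-form relationship update Ω = (WWᵀ)^{1/2} / tr((WWᵀ)^{1/2}). A minimal sketch on synthetic data, assuming squared loss:

```python
import numpy as np

def psd_sqrt(a):
    """Square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(a)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

rng = np.random.default_rng(0)
d, n, T, lam, lr = 5, 40, 3, 0.1, 0.05       # dims, samples/task, tasks
X = rng.normal(size=(T, n, d))
w_true = rng.normal(size=(T, d))
Y = np.einsum('tnd,td->tn', X, w_true) + 0.01 * rng.normal(size=(T, n))

W = np.zeros((T, d))                         # one weight row per task
Omega = np.eye(T) / T                        # task-relationship matrix
for _ in range(200):
    # (1) gradient step on squared loss + lam * tr(W^T Omega^{-1} W)
    resid = np.einsum('tnd,td->tn', X, W) - Y
    grad = np.einsum('tnd,tn->td', X, resid) / n \
        + lam * (np.linalg.inv(Omega) @ W)
    W -= lr * grad
    # (2) closed-form relationship update: Omega = (W W^T)^{1/2} / trace
    S = psd_sqrt(W @ W.T + 1e-8 * np.eye(T))
    Omega = S / np.trace(S)
```

The paper's contribution is to carry out this alternation via a communication-efficient primal-dual scheme over a parameter server rather than on centralized data.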
B-urns
The fringe of a B-tree with parameter is considered as a particular
P\'olya urn with colors. More precisely, the asymptotic behaviour of this
fringe, when the number of stored keys tends to infinity, is studied through
the composition vector of the fringe nodes. We establish its typical behaviour
together with the fluctuations around it. The well-known phase transition in
P\'olya urns has the following effect on B-trees: for , the
fluctuations are asymptotically Gaussian, whereas for , the
composition vector is oscillating; after scaling, the fluctuations of such an
urn strongly converge to a random variable . This limit is -valued and it does not seem to follow any classical law. Several properties
of are shown: existence of exponential moments, characterization of its
distribution as the solution of a smoothing equation, existence of a density
relative to the Lebesgue measure on , support of . Moreover, a
few representations of the composition vector for various values of
illustrate the different kinds of convergence.
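Since the abstract's urn parameters were lost in extraction, here is a generic two-color Pólya-type urn simulation (a hypothetical replacement matrix, not the B-tree fringe urn) showing the composition vector settling around its equilibrium; this urn's eigenvalue ratio is 1/3 < 1/2, i.e., the Gaussian regime of the phase transition mentioned above:

```python
import random

# Generic balanced two-color urn sketch (hypothetical replacement rule,
# not the B-tree fringe urn from the abstract). When a ball of color i is
# drawn, it is returned together with the balls listed in row i of R.
R = [[2, 1],   # draw color 0: add 2 balls of color 0 and 1 of color 1
     [1, 2]]   # draw color 1: add 1 ball of color 0 and 2 of color 1
random.seed(1)
urn = [1, 1]
for _ in range(100_000):
    i = 0 if random.random() < urn[0] / (urn[0] + urn[1]) else 1
    urn[0] += R[i][0]
    urn[1] += R[i][1]
frac = urn[0] / sum(urn)
print(f"fraction of color 0 after 100000 draws: {frac:.3f}")
```

By symmetry the composition vector concentrates at (1/2, 1/2); in the oscillating regime of the phase transition the rescaled fluctuations would instead converge to the non-classical limit studied in the paper.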
On the Communication Complexity of Secure Computation
Information theoretically secure multi-party computation (MPC) is a central
primitive of modern cryptography. However, relatively little is known about the
communication complexity of this primitive.
In this work, we develop powerful information theoretic tools to prove lower
bounds on the communication complexity of MPC. We restrict ourselves to a
3-party setting in order to bring out the power of these tools without
introducing too many complications. Our techniques include the use of a data
processing inequality for residual information (i.e., the gap between mutual
information and G\'acs-K\"orner common information), a new information
inequality for 3-party protocols, and the idea of distribution switching, by
which lower bounds computed under certain worst-case scenarios can be shown to
apply for the general case.
Using these techniques we obtain tight bounds on communication complexity by
MPC protocols for various interesting functions. In particular, we show
concrete functions that have "communication-ideal" protocols, which achieve the
minimum communication simultaneously on all links in the network. Also, we
obtain the first explicit example of a function that incurs a higher
communication cost than the input length in the secure computation model of
Feige, Kilian and Naor (1994), who had shown that such functions exist. We also
show that our communication bounds imply tight lower bounds on the amount of
randomness required by MPC protocols for many interesting functions.
Comment: 37 pages
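The residual information used above can be computed directly for small joint distributions: the Gács–Körner common information of (X, Y) equals the entropy of the connected component of the support's bipartite graph that (X, Y) falls in. A small sketch on a hypothetical block-structured pmf:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical joint pmf with two "blocks": the bipartite support graph has
# two connected components, so the Gacs-Korner common part is the block index.
P = np.array([[0.20, 0.05, 0.00, 0.00],
              [0.05, 0.20, 0.00, 0.00],
              [0.00, 0.00, 0.20, 0.05],
              [0.00, 0.00, 0.05, 0.20]])

# Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y).
I_xy = entropy(P.sum(1)) + entropy(P.sum(0)) - entropy(P.ravel())

# Connected components of the support graph via a tiny union-find.
parent = list(range(8))                  # nodes: 4 x-values then 4 y-values
def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a
for x in range(4):
    for y in range(4):
        if P[x, y] > 0:
            parent[find(x)] = find(4 + y)
comp_mass = {}
for x in range(4):
    comp_mass[find(x)] = comp_mass.get(find(x), 0.0) + P[x].sum()
cgk = entropy(np.array(list(comp_mass.values())))   # H(common part)

residual = I_xy - cgk                    # residual information
print(f"I(X;Y) = {I_xy:.4f} bits, C_GK = {cgk:.4f} bits, RI = {residual:.4f}")
```

For this pmf the common part carries 1 bit, while I(X;Y) ≈ 1.278 bits, leaving a strictly positive residual; the paper's lower bounds exploit a data processing inequality for exactly this quantity.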
Tight bounds on the convergence rate of generalized ratio consensus algorithms
The problems discussed in this paper are motivated by general ratio consensus
algorithms, introduced by Kempe, Dobra, and Gehrke (2003) in a simple form as
the push-sum algorithm, later extended by B\'en\'ezit et al. (2010) under the
name weighted gossip algorithm. We consider a communication protocol described
by a strictly stationary, ergodic, sequentially primitive sequence of
non-negative matrices, applied iteratively to a pair of fixed initial vectors,
the components of which are called values and weights defined at the nodes of a
network. The subject of ratio consensus problems is to study the asymptotic
properties of ratios of values and weights at each node, expecting convergence
to the same limit for all nodes. The main results of the paper provide upper
bounds for the rate of the almost sure exponential convergence in terms of the
spectral gap associated with the given sequence of random matrices. It will be
shown that these upper bounds are sharp. Our results complement previous
results of Picci and Taylor (2013) and Iutzeler, Ciblat and Hachem (2013).
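The push-sum special case of Kempe, Dobra, and Gehrke (2003) makes the value/weight ratio concrete: each node iterates a pair (x_i, w_i) under a column-stochastic mixing matrix, and x_i / w_i converges to the network-wide average. A minimal fixed-matrix sketch (a hypothetical 3-node network; the paper studies stationary ergodic random matrix sequences):

```python
import numpy as np

# Push-sum / ratio-consensus sketch on a fixed 3-node network. A fixed
# column-stochastic A is the simplest case of the random matrix sequences
# considered in the paper.
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])          # columns sum to 1: mass is conserved
x = np.array([3.0, 8.0, 1.0])            # values
w = np.array([1.0, 1.0, 1.0])            # weights
for _ in range(60):
    x, w = A @ x, A @ w                  # every node halves and forwards
ratios = x / w
print(ratios)                            # every entry approaches mean = 4.0
```

Here the subdominant eigenvalue of A has modulus 1/2, so the ratios converge to the average 4 at an exponential rate governed by the spectral gap, which is precisely the quantity the paper's sharp bounds are expressed in.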
The Approximate Capacity of the MIMO Relay Channel
Capacity bounds are studied for the multiple-antenna complex Gaussian relay
channel with t1 transmitting antennas at the sender, r2 receiving and t2
transmitting antennas at the relay, and r3 receiving antennas at the receiver.
It is shown that the partial decode-forward coding scheme achieves within
min(t1,r2) bits from the cutset bound and at least one half of the cutset
bound, establishing a good approximate expression of the capacity. A similar
additive gap of min(t1 + t2, r3) + r2 bits is shown to be achieved by the
compress-forward coding scheme.
Comment: 8 pages, 5 figures, submitted to the IEEE Transactions on Information
Theory
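For concrete antenna counts, the two additive gaps stated in the abstract are easy to tabulate (the example configurations below are hypothetical):

```python
# Additive gaps from the abstract: partial decode-forward (PDF) is within
# min(t1, r2) bits of the cutset bound, and compress-forward (CF) within
# min(t1 + t2, r3) + r2 bits. Antenna configurations here are hypothetical.
rows = []
for t1, r2, t2, r3 in [(2, 2, 2, 2), (4, 2, 2, 4), (1, 4, 4, 1)]:
    pdf_gap = min(t1, r2)
    cf_gap = min(t1 + t2, r3) + r2
    rows.append((pdf_gap, cf_gap))
    print(f"t1={t1} r2={r2} t2={t2} r3={r3}: PDF gap {pdf_gap} bits, "
          f"CF gap {cf_gap} bits")
```

Note both gaps depend only on the antenna counts, not on the channel gains or SNR, which is what makes the resulting capacity approximation uniform.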