Complexity results for the Pilot Assignment problem in Cell-Free Massive MIMO
Wireless communication is enabling billions of people to connect to each
other and the internet, transforming every sector of the economy, and building
the foundations for powerful new technologies that hold great promise to
improve lives at an unprecedented rate and scale. The rapid increase in the
number of devices and the associated demands for higher data rates and broader
network coverage fuel the need for more robust wireless technologies. The key
technology identified to address this problem is referred to as Cell-Free
Massive MIMO (CF-mMIMO). CF-mMIMO is accompanied by many challenges, one of
which is efficiently allocating limited resources. In this paper, we focus on a
major resource allocation problem in wireless networks, namely the Pilot
Assignment problem (PA). We show that PA is strongly NP-hard and that it does
not admit a polynomial-time constant-factor approximation algorithm. Further,
we show that PA cannot be approximated in polynomial time within
(where is the number of users) when the system consists
of at least three pilots. Finally, we present an approximation lower bound of
(resp. , for ) in special cases where the
system consists of exactly two (resp. three) pilots.
Comment: 20 pages, 0 figures
Deleting Edges to Restrict the Size of an Epidemic: A New Application for Treewidth
Motivated by applications in network epidemiology, we consider the problem of determining whether it is possible to delete at most k edges from a given input graph (of small treewidth) so that the maximum component size in the resulting graph is at most h. While this problem is NP-complete in general, we provide evidence that many of the real-world networks of interest are likely to have small treewidth, and we describe an algorithm which solves the problem in time O((wh)^{2w} n) on an input graph having n vertices and whose treewidth is bounded by a fixed constant w.
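To make the decision problem concrete, here is a minimal brute-force sketch (not the paper's treewidth-based dynamic program) that tries every set of at most k edge deletions and checks the resulting maximum component size; it runs in time exponential in k, which is exactly what the O((wh)^{2w} n) algorithm avoids on bounded-treewidth inputs. Vertex labels and the example path graph are illustrative.

```python
from itertools import combinations

def component_sizes(n, edges):
    """Sizes of connected components of the graph on vertices 0..n-1 (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return list(sizes.values())

def can_restrict(n, edges, k, h):
    """Can we delete at most k edges so that every component has size <= h?"""
    for d in range(k + 1):
        for removed in combinations(range(len(edges)), d):
            dropped = set(removed)
            kept = [e for i, e in enumerate(edges) if i not in dropped]
            if max(component_sizes(n, kept)) <= h:
                return True
    return False

# A path on 5 vertices: one deletion can cap components at size 3, but not at size 2.
path = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(can_restrict(5, path, k=1, h=3))  # True: delete (1,2), leaving components of sizes 2 and 3
print(can_restrict(5, path, k=1, h=2))  # False: any single deletion leaves a component of size >= 3
print(can_restrict(5, path, k=2, h=2))  # True: delete (1,2) and (3,4)
```

The inner loop over edge subsets is what the paper's algorithm replaces with dynamic programming over a tree decomposition, tracking only component sizes within each bag.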
Harmless interpolation of noisy data in regression
A continuing mystery in understanding the empirical success of deep neural
networks has been in their ability to achieve zero training error and yet
generalize well, even when the training data is noisy and there are more
parameters than data points. We investigate this "overparametrization"
phenomenon in the classical underdetermined linear regression problem, where all
solutions that minimize training error interpolate the data, including noise.
We give a bound on how well such interpolative solutions can generalize to
fresh test data, and show that this bound generically decays to zero with the
number of extra features, thus characterizing an explicit benefit of
overparameterization. For appropriately sparse linear models, we provide a
hybrid interpolating scheme (combining classical sparse recovery schemes with
harmless noise-fitting) to achieve generalization error close to the bound on
interpolative solutions.
Comment: 17 pages, presented at ITA in San Diego in Feb 201
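The object the abstract studies can be sketched in a few lines: in the underdetermined regime (fewer samples than features), the minimum-l2-norm solution A^T (A A^T)^{-1} y interpolates the noisy training data exactly. This is a minimal numpy illustration; the dimensions, noise level, and sparse ground truth are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 200                                   # fewer samples than features: underdetermined
A = rng.standard_normal((n, d)) / np.sqrt(d)     # random design matrix
w_true = np.zeros(d)
w_true[0] = 1.0                                  # sparse ground-truth signal
y = A @ w_true + 0.1 * rng.standard_normal(n)    # noisy labels

# Minimum-l2-norm interpolator: among all w with A w = y, the one of smallest norm.
# For generic A with n < d, A A^T is invertible, so w_hat = A^T (A A^T)^{-1} y.
w_hat = A.T @ np.linalg.solve(A @ A.T, y)

train_residual = np.max(np.abs(A @ w_hat - y))
print(train_residual < 1e-8)   # True: the noisy training data is fit exactly
```

Every interpolating solution fits the noise by construction; the abstract's question is how much that noise-fitting hurts on fresh test data as the number of extra features grows.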