Weighted ℓ_1 minimization for sparse recovery with prior information
In this paper we study the compressed sensing problem of recovering a sparse signal from a system of underdetermined linear equations when we have prior information about the probability of each entry of the unknown signal being nonzero. In particular, we focus on a model where the entries of the unknown vector fall into two sets, each with a different probability of being nonzero. We propose a weighted ℓ_1 minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being nonzero) so that an i.i.d. random Gaussian measurement matrix along with weighted ℓ_1 minimization recovers almost all such sparse signals with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We also provide simulations to demonstrate the advantages of the method over conventional ℓ_1 optimization.
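Weighted ℓ_1 minimization is itself a standard linear program, so the two-set prior is easy to sketch numerically. Below is a minimal illustration using SciPy's LP solver; the weight value 3.0, the set sizes, and all variable names are illustrative choices of this sketch, not the paper's optimized weights.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    # Solve  min_x  sum_i w_i |x_i|  s.t.  A x = y
    # via the standard LP split x = u - v with u, v >= 0.
    n = A.shape[1]
    c = np.concatenate([w, w])                  # objective w^T u + w^T v
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# Toy instance of the two-set prior: the first 10 entries are "likely
# nonzero", the rest "likely zero", so the second set gets a larger weight.
rng = np.random.default_rng(0)
n, m = 40, 20
x0 = np.zeros(n)
x0[:3] = rng.standard_normal(3)                 # 3 nonzeros, all in the first set
A = rng.standard_normal((m, n))
y = A @ x0
w = np.ones(n)
w[10:] = 3.0                                    # illustrative weight; the paper derives optimal values
x_hat = weighted_l1(A, y, w)
```

With the prior pointing at the correct set, the weighted program recovers the planted signal from only m = 20 measurements.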
CrossWalk: Fairness-enhanced Node Representation Learning
The potential for machine learning systems to amplify social inequities and
unfairness is receiving increasing popular and academic attention. Much recent
work has focused on developing algorithmic tools to assess and mitigate such
unfairness. However, there is little work on enhancing fairness in graph
algorithms. Here, we develop a simple, effective and general method, CrossWalk,
that enhances fairness of various graph algorithms, including influence
maximization, link prediction and node classification, applied to node
embeddings. CrossWalk is applicable to any random walk based node
representation learning algorithm, such as DeepWalk and Node2Vec. The key idea
is to bias random walks to cross group boundaries, by upweighting edges which
(1) are closer to the groups' peripheries or (2) connect different groups in
the network. CrossWalk pulls nodes that are near groups' peripheries towards
their neighbors from other groups in the embedding space, while preserving the
necessary structural properties of the graph. Extensive experiments show the
effectiveness of our algorithm to enhance fairness in various graph algorithms,
including influence maximization, link prediction and node classification in
synthetic and real networks, with only a very small decrease in performance.
Comment: Association for the Advancement of Artificial Intelligence (AAAI) 202
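The core mechanism, biasing walks to cross group boundaries, can be sketched in a few lines. The version below upweights only the bridging edges; it is a simplified stand-in for CrossWalk's reweighting (the paper's weights also account for each node's estimated proximity to the group periphery), and the graph, groups, and `alpha` value are toy choices.

```python
import random
from collections import defaultdict

def reweight(edges, group, alpha=4.0):
    # Upweight edges whose endpoints lie in different groups. Simplified:
    # CrossWalk additionally scales edges by closeness to the boundary.
    w = {}
    for u, v in edges:
        w[(u, v)] = w[(v, u)] = alpha if group[u] != group[v] else 1.0
    return w

def biased_walk(adj, w, start, length, rng):
    # Random walk whose next step is sampled proportionally to the biased
    # edge weights, so walks cross group boundaries more often.
    walk = [start]
    for _ in range(length):
        u = walk[-1]
        nbrs = adj[u]
        walk.append(rng.choices(nbrs, weights=[w[(u, v)] for v in nbrs])[0])
    return walk

# Toy graph: two triangles ("A" and "B") joined by the single bridge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
group = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
adj = defaultdict(list)
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
weights = reweight(edges, group)          # bridge edge (2, 3) gets weight 4.0
walk = biased_walk(adj, weights, 0, 20, random.Random(0))
```

The resulting walks could then be fed to any skip-gram style embedding method (e.g. DeepWalk or Node2Vec), which is the point of CrossWalk's plug-in design.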
Divide-and-conquer: Approaching the capacity of the two-pair bidirectional Gaussian relay network
The capacity region of multi-pair bidirectional relay networks, in which a
relay node facilitates the communication between multiple pairs of users, is
studied. This problem is first examined in the context of the linear shift
deterministic channel model. The capacity region of this network when the relay
is operating at either full-duplex mode or half-duplex mode for arbitrary
number of pairs is characterized. It is shown that the cut-set upper-bound is
tight and the capacity region is achieved by a so called divide-and-conquer
relaying strategy. The insights gained from the deterministic network are then
used for the Gaussian bidirectional relay network. The strategy in the
deterministic channel translates to a specific superposition of lattice codes
and random Gaussian codes at the source nodes and successive interference
cancelation at the receiving nodes for the Gaussian network. The achievable
rate of this scheme with two pairs is analyzed and it is shown that, for all
channel gains, it achieves rates within 3 bits/sec/Hz per user of the cut-set
upper bound. Hence, the capacity region of the two-pair bidirectional Gaussian
relay network is characterized to within 3 bits/sec/Hz per user.
Comment: IEEE Trans. on Information Theory, accepted
Breaking through the Thresholds: an Analysis for Iterative Reweighted ℓ_1 Minimization via the Grassmann Angle Framework
It is now well understood that the ℓ_1 minimization algorithm is able to recover sparse signals from incomplete measurements [2], [1], [3], and sharp recoverable sparsity thresholds have also been obtained for the ℓ_1 minimization algorithm. However, even though iterative reweighted ℓ_1 minimization algorithms or related algorithms have been empirically observed to boost the recoverable sparsity thresholds for certain types of signals, no rigorous theoretical results have been established to prove this fact. In this paper, we try to provide a theoretical foundation for analyzing the iterative reweighted ℓ_1 algorithms. In particular, we show that for a nontrivial class of signals, iterative reweighted ℓ_1 minimization can indeed deliver recoverable sparsity thresholds larger than those given in [1], [3]. Our results are based on a high-dimensional geometrical analysis (Grassmann angle analysis) of the null-space characterization for ℓ_1 minimization and weighted ℓ_1 minimization algorithms.
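For concreteness, here is a minimal sketch of one common instantiation of iterative reweighting, the 1/(|x| + eps) update of Candès, Wakin, and Boyd, with each weighted subproblem solved as an LP via SciPy. The paper analyzes schemes of this flavor; `eps`, the iteration count, and the problem sizes below are illustrative choices, not values from the analysis.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    # min w^T|x| s.t. Ax = y, via the LP split x = u - v with u, v >= 0.
    n = A.shape[1]
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

def iterative_reweighted_l1(A, y, iters=4, eps=0.1):
    # Entries that look large after one pass are penalized less on the
    # next pass, progressively sharpening the sparsity-promoting objective.
    w = np.ones(A.shape[1])
    for _ in range(iters):
        x = weighted_l1(A, y, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# Toy sparse-recovery instance with i.i.d. Gaussian measurements.
rng = np.random.default_rng(2)
n, m, k = 50, 22, 4
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = A @ x0
x_hat = iterative_reweighted_l1(A, y)
```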
Analyzing Weighted ℓ_1 Minimization for Sparse Recovery With Nonuniform Sparse Models
In this paper, we introduce a nonuniform sparsity model and analyze the performance of an optimized weighted ℓ_1 minimization over that sparsity model. In particular, we focus on a model where the entries of the unknown vector fall into two sets, with entries of each set having a specific probability of being nonzero. We propose a weighted ℓ_1 minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being nonzero) so that when i.i.d. random Gaussian measurement matrices are used, the weighted ℓ_1 minimization recovers a randomly selected signal drawn from the considered sparsity model with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We demonstrate through rigorous analysis and simulations that for the case when the support of the signal can be divided into two different subclasses with unequal sparsity fractions, the weighted ℓ_1 minimization outperforms the regular ℓ_1 minimization substantially. We also generalize our results to signal vectors with an arbitrary number of sparsity subclasses.
Improved sparse recovery thresholds with two-step reweighted ℓ_1 minimization
It is well known that ℓ_1 minimization can be used to recover sufficiently sparse unknown signals from compressed linear measurements. In fact, exact thresholds on the sparsity, as a function of the ratio between the system dimensions, so that with high probability almost all sparse signals can be recovered from i.i.d. Gaussian measurements, have been computed and are referred to as "weak thresholds" [4]. In this paper, we introduce a reweighted ℓ_1 recovery algorithm composed of two steps: a standard ℓ_1 minimization step to identify a set of entries where the signal is likely to reside, and a weighted ℓ_1 minimization step where entries outside this set are penalized. For signals where the non-sparse component has i.i.d. Gaussian entries, we prove a "strict" improvement in the weak recovery threshold. Simulations suggest that the improvement can be quite impressive: over 20% in the example we consider.
Improving the Thresholds of Sparse Recovery: An Analysis of a Two-Step Reweighted Basis Pursuit Algorithm
It is well known that ℓ_1 minimization can be used to recover sufficiently sparse unknown signals from compressed linear measurements. Exact thresholds on the sparsity, as a function of the ratio between the system dimensions, so that with high probability almost all sparse signals can be recovered from independent identically distributed (i.i.d.) Gaussian measurements, have been computed and are referred to as weak thresholds. In this paper, we introduce a reweighted ℓ_1 recovery algorithm composed of two steps: 1) a standard ℓ_1 minimization step to identify a set of entries where the signal is likely to reside and 2) a weighted ℓ_1 minimization step where entries outside this set are penalized. For signals where the non-sparse component entries are independent and identically drawn from certain classes of distributions (including most well-known continuous distributions), we prove a strict improvement in the weak recovery threshold. Our analysis suggests that the level of improvement in the weak threshold depends on the behavior of the distribution at the origin. Numerical simulations verify the distribution dependence of the threshold improvement very well, and suggest that in the case of i.i.d. Gaussian nonzero entries, the improvement can be quite impressive: over 20% in the example we consider.
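The two-step pipeline described above is straightforward to sketch: run plain basis pursuit, threshold to estimate the support, then re-solve with entries off that support penalized. The penalty `omega`, the threshold `tau`, and the problem sizes below are illustrative choices of this sketch, not the values treated in the analysis.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    # min w^T|x| s.t. Ax = y, via the LP split x = u - v with u, v >= 0.
    n = A.shape[1]
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

def two_step_reweighted(A, y, omega=10.0, tau=1e-3):
    # Step 1: standard l1 minimization to guess where the signal resides.
    n = A.shape[1]
    x1 = weighted_l1(A, y, np.ones(n))
    support = np.abs(x1) > tau
    # Step 2: weighted l1 with entries outside the estimated set penalized.
    return weighted_l1(A, y, np.where(support, 1.0, omega))

# Toy sparse-recovery instance with i.i.d. Gaussian measurements.
rng = np.random.default_rng(1)
n, m, k = 60, 25, 5
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = A @ x0
x_hat = two_step_reweighted(A, y)
```

The threshold improvement the paper proves concerns harder instances than this toy one, where step 1 alone is near the edge of the weak threshold and the support estimate from step 1 tips the second solve into successful recovery.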
Breaking the ℓ_1 recovery thresholds with reweighted ℓ_1 optimization
It is now well understood that the ℓ_1 minimization algorithm is able to recover sparse signals from incomplete measurements, and sharp recoverable sparsity thresholds have also been obtained for the ℓ_1 minimization algorithm. In this paper, we investigate a new iterative reweighted ℓ_1 minimization algorithm and show that the new algorithm can increase the sparsity recovery threshold of ℓ_1 minimization when decoding signals from relevant distributions. Interestingly, we observe that the recovery threshold performance of the new algorithm depends on the behavior, more specifically the derivatives, of the signal amplitude probability distribution at the origin.