
    Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference

    We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF inference problems. The core of our method is a highly efficient bounding procedure, which combines scalable semidefinite programming (SDP) with a cutting-plane method for finding violated constraints. To further speed up the computation, we exploit several strategies, including model reduction, warm starts and removal of inactive constraints. We analyze the performance of the proposed method under different settings and demonstrate that it either outperforms or performs on par with state-of-the-art approaches; in particular, when the connectivities are dense or the relative magnitudes of the unary costs are low, we achieve the best reported results. Experiments show that the proposed algorithm achieves better approximations than state-of-the-art methods within a variety of time budgets on challenging non-submodular MAP-MRF inference problems. Comment: 21 pages
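    To make the bounding step concrete, here is a minimal sketch (an illustration under assumptions, not the authors' scalable solver) of the standard SDP relaxation of a pairwise MRF energy written as x^T W x over x in {-1, 1}^n: lift X = x x^T, keep diag(X) = 1, and drop the rank-one constraint. The coupling matrix W and the use of the generic modeling library cvxpy are assumptions on my part.

```python
import numpy as np
import cvxpy as cp

def sdp_bound(W):
    """Upper bound on max_{x in {-1,1}^n} x^T W x via the standard
    SDP relaxation: X relaxes x x^T, so diag(X) = 1 and X is PSD."""
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), [cp.diag(X) == 1])
    prob.solve()
    return prob.value  # any feasible x satisfies x^T W x <= this bound

# Toy usage with a random symmetric coupling matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
print(sdp_bound((A + A.T) / 2))
```

    Inside a branch-and-cut loop, a bound of this kind is tightened by adding violated cutting planes (e.g. triangle inequalities) as extra linear constraints on X.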

    Structured Sparsity: Discrete and Convex approaches

    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal, increasing the interpretability of the results and leading to better recovery performance. To better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures
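    To give a flavor of the convex side, the sketch below (my illustration, not code from the chapter) implements the proximal operator of the non-overlapping group-lasso norm, the basic primitive inside proximal-gradient solvers for group-sparse problems; the group partition and penalty weight are hypothetical inputs.

```python
import numpy as np

def prox_group_lasso(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2 over non-overlapping groups:
    block soft-thresholding shrinks each block toward zero and eliminates it
    wholesale when its Euclidean norm falls below lam."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:  # each g is a list of indices forming one group
        block_norm = np.linalg.norm(x[g])
        if block_norm > lam:
            out[g] = (1.0 - lam / block_norm) * x[g]
    return out

# Toy usage: the strong group survives (shrunk), the weak group is zeroed out
x = np.array([3.0, 4.0, 0.1, -0.2])
print(prox_group_lasso(x, groups=[[0, 1], [2, 3]], lam=1.0))  # [2.4, 3.2, 0., 0.]
```

    The block-wise zeroing is what encodes the discrete group structure in the convex program: entire groups enter or leave the support together.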

    Precoder Design for Physical Layer Multicasting

    This paper studies the instantaneous rate maximization and weighted sum delay minimization problems over a K-user multicast channel, where multiple antennas are available at the transmitter as well as at all receivers. Motivated by the degrees-of-freedom optimality and the simplicity offered by linear precoding schemes, we consider the design of linear precoders under the aforementioned two criteria. We first consider the scenario wherein the linear precoder can be any complex-valued matrix subject to rank and power constraints. We propose cyclic alternating ascent based precoder design algorithms and establish their convergence to respective stationary points. Simulation results reveal that our proposed algorithms considerably outperform known competing solutions. We then consider a scenario in which the linear precoder is formed by selecting and concatenating precoders from a given finite codebook of precoding matrices, subject to rank and power constraints. We show that under this scenario the instantaneous rate maximization problem is equivalent to a robust submodular maximization problem, which is strongly NP-hard. We propose a deterministic approximation algorithm and show that it yields a bicriteria approximation. For the weighted sum delay minimization problem we propose a simple deterministic greedy algorithm, which at each step approximately maximizes a submodular set function subject to multiple knapsack constraints, and establish its performance guarantee. Comment: 37 pages, 8 figures, submitted to IEEE Trans. Signal Processing
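    As a rough sketch of the kind of step the delay-minimization greedy performs, the code below implements a standard cost-benefit greedy for maximizing a monotone submodular set function under a single knapsack constraint; the oracle f, the costs and the budget are placeholder inputs, and the paper's actual algorithm handles multiple knapsacks and a robust objective.

```python
def greedy_knapsack(f, ground_set, cost, budget):
    """Cost-benefit greedy: repeatedly add the affordable element with the
    largest marginal gain per unit cost, until nothing profitable fits."""
    S, spent = set(), 0.0
    while True:
        best, best_ratio = None, 0.0
        for v in sorted(ground_set - S):
            if spent + cost[v] > budget:
                continue  # element no longer fits in the remaining budget
            ratio = (f(S | {v}) - f(S)) / cost[v]
            if ratio > best_ratio:
                best, best_ratio = v, ratio
        if best is None:
            return S
        S.add(best)
        spent += cost[best]

# Toy usage with a coverage function (monotone submodular)
items = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
f = lambda S: len(set().union(*(items[v] for v in S))) if S else 0
print(greedy_knapsack(f, set(items), cost={"a": 1.0, "b": 1.0, "c": 0.5}, budget=1.5))
```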

    Algorithms for Approximate Minimization of the Difference Between Submodular Functions, with Applications

    We extend the work of Narasimhan and Bilmes [30] on minimizing set functions representable as a difference between submodular functions. As in [30], our new algorithms are guaranteed to monotonically reduce the objective function at every step. We show, both empirically and theoretically, that the per-iteration cost of our algorithms is much lower than that of [30], and that our algorithms can be used to efficiently minimize a difference between submodular functions under various combinatorial constraints, a problem not previously addressed. We provide computational bounds and a hardness result on the multiplicative inapproximability of minimizing the difference between submodular functions. We show, however, that it is possible to give worst-case additive bounds by providing a polynomial-time computable lower bound on the minima. Finally, we show how a number of machine learning problems can be modeled as minimizing the difference between submodular functions. We experimentally validate our algorithms by testing them on the problem of feature selection with submodular cost features. Comment: 17 pages, 8 figures. A shorter version appeared in Proc. Uncertainty in Artificial Intelligence (UAI), Catalina Islands, 2012
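    A minimal sketch of the permutation-based modular bound that underlies algorithms of this family (my paraphrase under assumptions, not the paper's exact procedure): any submodular f admits a modular lower bound that is tight along a chosen chain, and the difference f - g can be attacked by repeatedly replacing one term with such a bound and minimizing the resulting surrogate.

```python
def modular_lower_bound(f, perm):
    """Chain-based modular lower bound h of a submodular f:
    h(v_j) = f({v_1..v_j}) - f({v_1..v_{j-1}}); then sum_{v in S} h(v) <= f(S)
    for every set S, with equality on every prefix of perm."""
    h, S, prev = {}, set(), f(frozenset())
    for v in perm:
        S.add(v)
        cur = f(frozenset(S))
        h[v] = cur - prev  # marginal gain of v along the chain
        prev = cur
    return h

# Toy check with a coverage function: the bound is tight on chain prefixes
cover = {"a": {1, 2}, "b": {2, 3}, "c": {3}}
f = lambda S: len(set().union(*(cover[v] for v in S))) if S else 0
h = modular_lower_bound(f, perm=["a", "b", "c"])
assert h["a"] + h["b"] == f(frozenset({"a", "b"}))  # tight on the prefix {a, b}
print(h)  # {'a': 2, 'b': 1, 'c': 0}
```

    Minimizing f minus a modular function is again a submodular minimization, which is what makes surrogates of this shape tractable at each iteration.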