
    On the Success Probability of the Box-Constrained Rounding and Babai Detectors

    In communications, one frequently needs to detect a parameter vector $\hat{\mathbf{x}}$ in a box from a linear model. The box-constrained rounding detector $\mathbf{x}^{\mathrm{BR}}$ and the Babai detector $\mathbf{x}^{\mathrm{BB}}$ are often used to detect $\hat{\mathbf{x}}$ due to their high probability of correct detection, referred to as the success probability, and their efficient implementation. It is generally believed that the success probability $P^{\mathrm{BR}}$ of $\mathbf{x}^{\mathrm{BR}}$ is not larger than the success probability $P^{\mathrm{BB}}$ of $\mathbf{x}^{\mathrm{BB}}$. In this paper, we first present formulas for $P^{\mathrm{BR}}$ and $P^{\mathrm{BB}}$ for two different situations: $\hat{\mathbf{x}}$ is deterministic, and $\hat{\mathbf{x}}$ is uniformly distributed over the constraint box. Then, we give a simple example to show that $P^{\mathrm{BR}}$ may be strictly larger than $P^{\mathrm{BB}}$ if $\hat{\mathbf{x}}$ is deterministic, while we rigorously show that $P^{\mathrm{BR}} \leq P^{\mathrm{BB}}$ always holds if $\hat{\mathbf{x}}$ is uniformly distributed over the constraint box.
    Comment: to appear in ISIT 201
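
    Both detectors are simple to state. Below is a minimal numerical sketch, assuming the standard linear model y = Ax + v with Gaussian noise and an integer box constraint; the function names and toy data are illustrative, not taken from the paper. The rounding detector rounds the real least-squares solution entrywise and clips it to the box, while the Babai detector recovers entries one at a time by back-substitution through the QR factorization of A.

        import numpy as np

        def box_rounding_detector(A, y, l, u):
            """Box-constrained rounding: round the real least-squares
            solution entrywise, then clip each entry to the box [l, u]."""
            x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
            return np.clip(np.round(x_ls), l, u).astype(int)

        def box_babai_detector(A, y, l, u):
            """Box-constrained Babai: QR-factor A, then detect entries by
            back-substitution, rounding and clipping at each step."""
            Q, R = np.linalg.qr(A)          # A = QR, R upper triangular
            y_t = Q.T @ y
            n = A.shape[1]
            x = np.zeros(n, dtype=int)
            for k in range(n - 1, -1, -1):  # last entry first
                c = (y_t[k] - R[k, k + 1:] @ x[k + 1:]) / R[k, k]
                x[k] = int(np.clip(np.round(c), l, u))
            return x

        # Toy example: true vector in the box {0,...,3}^4, light noise.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((8, 4))
        x_true = rng.integers(0, 4, size=4)
        y = A @ x_true + 0.1 * rng.standard_normal(8)
        print(box_rounding_detector(A, y, 0, 3), box_babai_detector(A, y, 0, 3), x_true)

    At low noise both detectors should recover x_true; the paper's comparison concerns how their success probabilities behave as the noise grows.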

    Least Squares Ranking on Graphs

    Given a set of alternatives to be ranked, and some pairwise comparison data, ranking is a least squares computation on a graph. The vertices are the alternatives, and the edge values comprise the comparison data. The basic idea is very simple and old: come up with values on vertices such that their differences match the given edge data. Since an exact match will usually be impossible, one settles for matching in a least squares sense. This formulation was first described by Leake in 1976 for ranking football teams and appears as an example in Professor Gilbert Strang's classic linear algebra textbook. If one is willing to look into the residual a little further, then the problem really comes alive, as shown effectively by the remarkable recent paper of Jiang et al. With or without this twist, the humble least squares problem on graphs has far-reaching connections with many current areas of research. These connections are to theoretical computer science (spectral graph theory, and multilevel methods for graph Laplacian systems); numerical analysis (algebraic multigrid, and finite element exterior calculus); other mathematics (Hodge decomposition, and random clique complexes); and applications (arbitrage, and ranking of sports teams). Not all of these connections are explored in this paper, but many are. The underlying ideas are easy to explain, requiring only the four fundamental subspaces from elementary linear algebra. One of our aims is to explain these basic ideas and connections, to get researchers in many fields interested in this topic. Another aim is to use our numerical experiments for guidance on selecting methods and exposing the need for further development.
    Comment: Added missing references, comparison of linear solvers overhauled, conclusion section added, some new figures added
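
    The basic computation is easy to write down. Here is a minimal sketch, assuming real-valued scores on vertices and one comparison value per edge; the edge list and data are made up for illustration. It builds the edge-by-vertex incidence matrix and solves the least squares problem with a minimum-norm solver, since constant vectors lie in the null space.

        import numpy as np

        # Edges (i, j) with value y_e, read as "alternative j beats i by y_e".
        edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
        y = np.array([1.0, 2.5, 1.0, 0.5])       # pairwise comparison data

        n = 4
        B = np.zeros((len(edges), n))            # edge-by-vertex incidence matrix
        for e, (i, j) in enumerate(edges):
            B[e, i], B[e, j] = -1.0, 1.0         # row e encodes x_j - x_i

        # Find vertex values whose differences best match the edge data.
        # B is rank-deficient (constants are in its null space), so lstsq
        # returns the minimum-norm solution; shift so the lowest score is 0.
        x, *_ = np.linalg.lstsq(B, y, rcond=None)
        x -= x.min()
        print(np.argsort(-x))                    # ranking, best first
        print(y - B @ x)                         # residual: the part of the
                                                 # data no ranking can explain

    The residual is where the Hodge-theoretic story mentioned above begins: it is the part of the comparison data that no consistent ranking can explain.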

    Forward-backward truncated Newton methods for convex composite optimization

    This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line-search strategy, whereas the second one retains the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates. Furthermore, they are computationally attractive, since each Newton iteration requires the approximate solution of a linear system of usually small dimension.
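
    As a concrete illustration of the FBE, here is a minimal sketch for the lasso-type composite problem f(x) + g(x) with f(x) = (1/2)||Ax - b||^2 and g(x) = lam * ||x||_1, assuming the standard definition phi_gamma(x) = f(x) - (gamma/2)||grad f(x)||^2 + g_gamma(x - gamma * grad f(x)), where g_gamma is the Moreau envelope of g. The function names are illustrative, and this only evaluates the envelope; it is not the paper's Newton-CG method.

        import numpy as np

        def soft_threshold(z, t):
            """Proximal operator of t * ||.||_1 (soft thresholding)."""
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def fbe(x, A, b, lam, gamma):
            """Forward-backward envelope of 0.5*||Ax-b||^2 + lam*||x||_1."""
            r = A @ x - b
            f = 0.5 * r @ r
            grad = A.T @ r
            z = x - gamma * grad                 # forward (gradient) step
            p = soft_threshold(z, gamma * lam)   # backward (proximal) step
            g_moreau = lam * np.abs(p).sum() + (p - z) @ (p - z) / (2 * gamma)
            return f - 0.5 * gamma * (grad @ grad) + g_moreau

        A = np.array([[1.0, 2.0], [3.0, 4.0]])
        b = np.array([1.0, 0.0])
        gamma = 0.9 / np.linalg.norm(A.T @ A, 2) # step below 1/L, L = ||A^T A||
        print(fbe(np.zeros(2), A, b, lam=0.5, gamma=gamma))

    For gamma below the reciprocal Lipschitz constant of grad f, the envelope is continuously differentiable (here f is quadratic) and shares its minimizers with the original nonsmooth problem, which is what lets Newton-type machinery apply.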

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. The recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems.
    Comment: 18 pages
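
    To make the relaxation idea concrete, here is a minimal sketch of the first level of such a hierarchy: Shor's semidefinite relaxation of a nonconvex quadratic problem, written with the cvxpy modeling package (a tooling choice not made in the paper); the data are random and illustrative.

        import cvxpy as cp
        import numpy as np

        # Nonconvex QCQP: minimize x^T Q x subject to x_i^2 = 1 (a MAXCUT-type
        # problem). Shor's relaxation lifts x x^T to a PSD matrix X and drops
        # the rank-one constraint, giving a convex SDP lower bound.
        n = 5
        rng = np.random.default_rng(1)
        Q = rng.standard_normal((n, n))
        Q = (Q + Q.T) / 2                        # symmetrize the cost

        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0, cp.diag(X) == 1]  # X PSD, X_ii = x_i^2 = 1
        prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)), constraints)
        prob.solve()
        print("SDP lower bound:", prob.value)

    Higher levels of a hierarchy add further valid constraints on X, tightening the bound toward the nonconvex optimum at growing computational cost.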