44 research outputs found

    Integrality, complexity and colourings in polyhedral combinatorics

    Relaxations of the Maximum Flow Minimum Cut Property for Ideal Clutters

    Given a family of sets, a covering problem consists of finding a minimum-cost collection of elements that hits every set. This objective is always bounded below by the maximum number of disjoint sets in the family; we refer to this bound as the covering dual because, when covers are allowed to be fractional and the notion of disjoint sets is relaxed accordingly, the natural Linear Programming (LP) formulations become duals and the optimal objective values of the two LPs match. A consequence of the Edmonds-Giles theorem on Totally Dual Integral systems is that if the covering dual has an optimal integer solution for every cost function, then an optimal integer cover exists for every cost function as well. The converse does not hold in general, but a long-standing conjecture from the mid-1970s states that the existence of an optimal integer cover for every cost function implies the existence of a 1/4-integer optimal solution to the dual for every cost function. In this thesis we discuss weaker versions of the conjecture and build tools that allow us to approach them.
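
    For concreteness, the primal-dual pair alluded to above can be written as follows (a minimal sketch with notation chosen here for illustration: $M$ is the 0/1 set-element incidence matrix of the family and $c \ge 0$ is the cost vector):

        \min \{\, c^\top x : Mx \ge \mathbf{1},\ x \ge 0 \,\} \quad \text{(fractional covering)}
        \max \{\, \mathbf{1}^\top y : M^\top y \le c,\ y \ge 0 \,\} \quad \text{(fractional packing)}

    By LP duality the two optimal values coincide; the conjecture concerns when they are attained by integer (respectively, 1/4-integer) solutions for every cost function $c$.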

    ATS-4 study program, volume 3 Final report

    Parabolic reflector design and fabrication, and thermal and structural dynamic analyses for the Applications Technology Satellite-4 (ATS-4).

    Testing additive integrality gaps

    We consider the problem of testing whether the maximum additive integrality gap of a family of integer programs in standard form is bounded by a given constant. This can be viewed as a generalization of the integer rounding property, which can be tested in polynomial time if the number of constraints is fixed. It turns out that this generalization is NP-hard even if the number of constraints is fixed. However, if, in addition, the objective is the all-one vector, then one can test in polynomial time whether the additive gap is bounded by a constant.
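
    As a sketch of the quantity being tested (notation ours, not taken from the paper): for a fixed $A \in \mathbb{Z}^{m \times n}$ and $c \in \mathbb{Z}^{n}$, write

        \mathrm{LP}(b) = \min \{\, c^\top x : Ax = b,\ x \ge 0 \,\}, \qquad \mathrm{IP}(b) = \min \{\, c^\top x : Ax = b,\ x \in \mathbb{Z}^{n}_{\ge 0} \,\},

    and let $g(A, c) = \sup_b \bigl( \mathrm{IP}(b) - \mathrm{LP}(b) \bigr)$, the supremum taken over all right-hand sides $b$ for which both values are finite. The testing problem asks, for a given constant $\gamma$, whether $g(A, c) \le \gamma$.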

    Space programs summary no. 37-27, volume IV for the period April 1, 1964 to May 31, 1964. Supporting research and advanced development

    Space exploration programs: systems analysis, spacecraft power and guidance systems, propellant engineering, and communications systems.

    The Role of Riemannian Manifolds in Computer Vision: From Coding to Deep Metric Learning

    A diverse range of tasks in computer vision and machine learning benefit from representations of data that are compact yet discriminative, informative, and robust to critical measurements. Two notable representations are offered by Region Covariance Descriptors (RCovDs) and linear subspaces, which are naturally analyzed through the manifold of Symmetric Positive Definite (SPD) matrices and the Grassmann manifold, respectively: two widely used types of Riemannian manifolds in computer vision.

    As our first objective, we examine image- and video-based recognition applications where the local descriptors have one of the aforementioned Riemannian structures, namely the SPD or linear-subspace structure. Initially, we provide a solution for computing a Riemannian version of the conventional Vector of Locally Aggregated Descriptors (VLAD), using the geodesic distance of the underlying manifold as the nearness measure. Next, by taking a closer look at the resulting codes, we formulate a new concept which we name Local Difference Vectors (LDVs). LDVs enable us to elegantly extend our Riemannian coding techniques to any arbitrary metric, and to provide intrinsic solutions to Riemannian sparse coding and its variants when local structured descriptors are considered.

    We then turn our attention to two special types of covariance descriptors, namely infinite-dimensional RCovDs and rank-deficient covariance matrices, for which the underlying Riemannian structure, i.e. the manifold of SPD matrices, is to a great extent out of reach. To overcome this difficulty, we propose to approximate infinite-dimensional RCovDs by making use of two feature mappings, namely random Fourier features and the Nyström method. As for rank-deficient covariance matrices, unlike most existing approaches that employ inference tools with predefined regularizers, we derive positive definite kernels that can be decomposed into kernels on the cone of SPD matrices and kernels on Grassmann manifolds, and we show their effectiveness for the image set classification task.

    Furthermore, inspired by the attractive properties of Riemannian optimization techniques, we extend the recently introduced Keep It Simple and Straightforward MEtric learning (KISSME) method to scenarios where the input data are non-linearly distributed. To this end, we make use of infinite-dimensional covariance matrices and propose techniques for projecting onto the positive cone in a Reproducing Kernel Hilbert Space (RKHS). We also address the sensitivity of KISSME to the input dimensionality: the algorithm depends heavily on Principal Component Analysis (PCA) as a preprocessing step, which can lead to difficulties, especially when the dimensionality is not meticulously set. To address this issue, building on the KISSME algorithm, we develop a Riemannian framework that jointly learns a dimensionality-reducing mapping and a metric in the induced space. Lastly, in line with the recent trend in metric learning, we devise end-to-end learning of a generic deep network for metric learning using our derivations.
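
    To make the first step concrete, the following is a minimal Python sketch (ours, not the thesis code; the function name is hypothetical) of the geodesic distance that serves as the nearness measure on the SPD manifold, assuming the affine-invariant metric:

        import numpy as np
        from scipy.linalg import eigh

        def spd_geodesic_distance(X, Y):
            # Hypothetical helper, assuming the affine-invariant metric:
            # d(X, Y) = || log(X^{-1/2} Y X^{-1/2}) ||_F
            #         = sqrt(sum_i log(lambda_i)^2),
            # where lambda_i are the generalized eigenvalues of (Y, X),
            # i.e. the eigenvalues of X^{-1} Y (all positive for SPD inputs).
            w = eigh(Y, X, eigvals_only=True)
            return np.sqrt(np.sum(np.log(w) ** 2))

        # Usage: distance between two randomly generated SPD matrices.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((5, 5))
        B = rng.standard_normal((5, 5))
        X = A @ A.T + 5 * np.eye(5)
        Y = B @ B.T + 5 * np.eye(5)
        print(spd_geodesic_distance(X, Y))

    The generalized-eigenvalue form avoids explicit matrix square roots and logarithms, which is how this metric is commonly evaluated in practice.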