
    Transversal numbers over subsets of linear spaces

    Let $M$ be a subset of $\mathbb{R}^k$. It is an important question in the theory of linear inequalities to estimate the minimal number $h = h(M)$ such that every system of linear inequalities which is infeasible over $M$ has a subsystem of at most $h$ inequalities which is already infeasible over $M$. This number $h(M)$ is said to be the Helly number of $M$. In view of Helly's theorem, $h(\mathbb{R}^n) = n+1$ and, by the theorem due to Doignon, Bell and Scarf, $h(\mathbb{Z}^d) = 2^d$. We give a common extension of these equalities, showing that $h(\mathbb{R}^n \times \mathbb{Z}^d) = (n+1)\,2^d$. We show that the fractional Helly number of the space $M \subseteq \mathbb{R}^d$ (with the convexity structure induced by $\mathbb{R}^d$) is at most $d+1$ as long as $h(M)$ is finite. Finally, we give estimates for the Radon number of mixed integer spaces.
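    The Doignon–Bell–Scarf bound in the abstract can be illustrated for $d = 1$, where $h(\mathbb{Z}^1) = 2^1 = 2$. The following is a brute-force sketch (an illustration of the statement, not any algorithm from the paper): a three-inequality system that is infeasible over the integers, together with a search for its smallest infeasible subsystem, which the theorem guarantees has size at most 2. The specific inequalities and the bounded search window are arbitrary choices for the example.

```python
from itertools import combinations

# Each inequality a*x <= b is a pair (a, b); feasibility over Z is
# checked on a bounded integer window, which suffices for this toy system.
ineqs = [(-1, -1),  # -x <= -1, i.e. x >= 1
         (1, 3),    #  x <= 3
         (-1, -5)]  # -x <= -5, i.e. x >= 5

def feasible(system, lo=-100, hi=100):
    return any(all(a * x <= b for a, b in system) for x in range(lo, hi + 1))

assert not feasible(ineqs)  # the full system has no integer solution

# Find a smallest infeasible subsystem; Doignon-Bell-Scarf says size <= 2.
witness = min((s for r in range(1, len(ineqs) + 1)
               for s in combinations(ineqs, r)
               if not feasible(s)),
              key=len)
print(len(witness))  # -> 2, since {x <= 3, x >= 5} is already infeasible
```

Each single inequality is feasible on its own, so the Helly number 2 is attained exactly here.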

    Setting Parameters by Example

    We introduce a class of "inverse parametric optimization" problems, in which one is given both a parametric optimization problem and a desired optimal solution; the task is to determine parameter values that lead to the given solution. We describe algorithms for solving such problems for minimum spanning trees, shortest paths, and other "optimal subgraph" problems, and discuss applications in multicast routing, vehicle path planning, resource allocation, and board game programming.

    Comment: 13 pages, 3 figures. To be presented at 40th IEEE Symp. Foundations of Computer Science (FOCS '99).
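    The problem setting can be made concrete with a toy instance (a brute-force sketch of the setting only, not the paper's algorithms): edge weights depend on a parameter $t$ as $a_e + t\,b_e$, and we search a grid of parameter values for one under which a desired spanning tree becomes the minimum spanning tree. The graph, coefficients, and grid are invented for illustration.

```python
edges = {('u', 'v'): (1.0, 2.0),   # weight 1 + 2t
         ('v', 'w'): (2.0, -1.0),  # weight 2 - t
         ('u', 'w'): (2.0, 0.0)}   # weight 2
target = {('u', 'v'), ('v', 'w')}  # the tree we want to be optimal

def mst(weights):
    # On a 3-node triangle the MST drops a heaviest edge
    # (ties broken by dictionary order).
    heaviest = max(weights, key=weights.get)
    return {e for e in weights if e != heaviest}

# Grid search over the parameter: the inverse problem asks for such a t.
found_t = next(t for t in (i / 100 for i in range(501))
               if mst({e: a + t * b for e, (a, b) in edges.items()}) == target)
print(f"t = {found_t} makes the target tree optimal")
```

The paper's point is that for minimum spanning trees and shortest paths this kind of search can be done far more cleverly than by grid enumeration; the sketch only shows what a valid answer looks like.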

    An optimal randomized algorithm for d-variate zonoid depth

    A randomized linear expected-time algorithm for computing the zonoid depth [R. Dyckerhoff, G. Koshevoy, K. Mosler, Zonoid data depth: Theory and computation, in: A. Prat (Ed.), COMPSTAT 1996—Proceedings in Computational Statistics, Physica-Verlag, Heidelberg, 1996, pp. 235–240; K. Mosler, Multivariate Dispersion, Central Regions and Depth. The Lift Zonoid Approach, Lecture Notes in Statistics, vol. 165, Springer-Verlag, New York, 2002] of a point with respect to a fixed dimensional point set is presented.
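    To make the quantity being computed concrete, here is a definition-based sketch of zonoid depth in one dimension (not the randomized $d$-variate algorithm of the paper). Following the lift-zonoid definition cited above, the trimmed region $D_\alpha$ consists of convex combinations $\sum \lambda_i x_i$ with $\sum \lambda_i = 1$ and $0 \le \lambda_i \le 1/(n\alpha)$; in 1-D this is an interval whose endpoints are reached by greedily loading the maximal weight onto extreme points, and the depth of $p$ is the largest $\alpha$ with $p \in D_\alpha$, here found by a grid scan.

```python
def zonoid_interval(points, alpha):
    # In 1-D, D_alpha is an interval; each endpoint is attained by
    # greedily assigning the per-point weight cap 1/(n*alpha) to the
    # most extreme points until the weights sum to 1.
    n = len(points)
    cap = 1.0 / (n * alpha)
    def extreme(ordered):
        total, val = 0.0, 0.0
        for x in ordered:
            w = min(cap, 1.0 - total)
            val += w * x
            total += w
            if total >= 1.0:
                break
        return val
    return extreme(sorted(points)), extreme(sorted(points, reverse=True))

def zonoid_depth(points, p, grid=1000):
    # Depth = sup of alpha with p in D_alpha, found by brute-force scan.
    best = 0.0
    for i in range(1, grid + 1):
        a = i / grid
        lo, hi = zonoid_interval(points, a)
        if lo <= p <= hi:
            best = max(best, a)
    return best
```

For the points [0, 1, 2, 3], the mean 1.5 has depth 1 (at $\alpha = 1$ all weights are forced to $1/n$), while the extreme point 3 has depth $1/4$. The paper's contribution is replacing this kind of naive evaluation with a randomized linear expected-time algorithm in fixed dimension.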

    A randomized algorithm for large scale support vector learning

    We propose a randomized algorithm for large scale SVM learning which solves the problem by iterating over random subsets of the data. Crucial to the algorithm's scalability is the size of the subsets chosen. In the context of text classification we show that, by using ideas from random projections, a sample size of O(log n) can be used to obtain a solution which is close to the optimal one with high probability. Experiments done on synthetic and real life data sets demonstrate that the algorithm scales up SVM learners without loss in accuracy.
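    The flavor of the sample-size claim can be shown on a toy problem (a minimal sketch, not the paper's projection-based SVM algorithm): a 1-D linearly separable data set of n points is classified by a threshold fitted on a random sample of only O(log n) points, and the resulting classifier is evaluated on all n points. The data, sample-size constant, and threshold rule are invented for illustration.

```python
import math
import random

random.seed(0)
n = 10_000
# Separable 1-D toy data: label +1 iff x > 0.5.
data = [(x, 1 if x > 0.5 else -1) for x in (random.random() for _ in range(n))]

# Train on a random sample of size O(log n) instead of all n points.
sample = random.sample(data, k=8 * int(math.log(n)))  # 72 points here
pos = min(x for x, y in sample if y == 1)   # leftmost sampled positive
neg = max(x for x, y in sample if y == -1)  # rightmost sampled negative
threshold = (pos + neg) / 2

accuracy = sum((1 if x > threshold else -1) == y for x, y in data) / n
print(f"accuracy on all {n} points: {accuracy:.3f}")
```

Even though the sample ignores over 99% of the data, the learned threshold lands close to the true boundary, which is the qualitative behavior the abstract claims for the SVM setting.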