Transversal numbers over subsets of linear spaces
Let $M$ be a subset of $\mathbb{R}^k$. It is an important question in the
theory of linear inequalities to estimate the minimal number $h = h(M)$ such that
every system of linear inequalities which is infeasible over $M$ has a
subsystem of at most $h$ inequalities which is already infeasible over $M$.
This number $h(M)$ is said to be the Helly number of $M$. In view of Helly's
theorem, $h(\mathbb{R}^n) = n + 1$, and, by the theorem due to Doignon, Bell and
Scarf, $h(\mathbb{Z}^d) = 2^d$. We give a common extension of these equalities,
showing that $h(\mathbb{R}^n \times \mathbb{Z}^d) = (n + 1)\,2^d$. We show that
the fractional Helly number of the space $M \subseteq \mathbb{R}^d$ (with the
convexity structure induced by $\mathbb{R}^d$) is at most $d + 1$ as long as
$h(M)$ is finite. Finally, we give estimates for the Radon number of mixed
integer spaces.
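An illustrative specialization of the combined bound (a worked instance of our own, not taken from the abstract):

```latex
% Worked instance of h(R^n x Z^d) = (n+1) 2^d for n = d = 1, i.e. one real and
% one integer variable (the mixed-integer plane R x Z):
\[
  h(\mathbb{R} \times \mathbb{Z}) \;=\; (1 + 1)\cdot 2^{1} \;=\; 4,
\]
% so every system of linear inequalities that is infeasible over R x Z already
% contains an infeasible subsystem of at most four inequalities.
```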
Setting Parameters by Example
We introduce a class of "inverse parametric optimization" problems, in which
one is given both a parametric optimization problem and a desired optimal
solution; the task is to determine parameter values that lead to the given
solution. We describe algorithms for solving such problems for minimum spanning
trees, shortest paths, and other "optimal subgraph" problems, and discuss
applications in multicast routing, vehicle path planning, resource allocation,
and board game programming.
Comment: 13 pages, 3 figures. To be presented at 40th IEEE Symp. Foundations of Computer Science (FOCS '99).
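To make the flavor of an inverse parametric optimization problem concrete, here is a rough sketch of our own (not one of the paper's algorithms): edge weights depend linearly on a single parameter, and a brute-force scan finds parameter values at which a given spanning tree becomes minimum. The graph, coefficients, and target tree below are invented for illustration.

```python
# Minimal sketch of "setting parameters by example" for a parametric MST.
# Each edge weight is w(e) = a_e + lam * b_e; we scan lam over a grid and
# report values for which the desired tree is a minimum spanning tree.
import itertools

edges = {                      # edge -> (a, b), weight a + lam * b
    ("u", "v"): (1.0, 2.0),
    ("v", "w"): (3.0, -1.0),
    ("u", "w"): (2.5, 0.0),
}
nodes = {"u", "v", "w"}
desired_tree = {("u", "v"), ("v", "w")}   # the example solution to induce

def weight(tree, lam):
    return sum(a + lam * b for (a, b) in (edges[e] for e in tree))

def spanning_trees():
    # Enumerate all subsets of |V|-1 edges that connect every node.
    for subset in itertools.combinations(edges, len(nodes) - 1):
        reach = {next(iter(nodes))}
        changed = True
        while changed:
            changed = False
            for (x, y) in subset:
                if (x in reach) != (y in reach):
                    reach |= {x, y}
                    changed = True
        if reach == nodes:
            yield set(subset)

def is_mst(tree, lam):
    w = weight(tree, lam)
    return all(weight(t, lam) >= w - 1e-9 for t in spanning_trees())

feasible = [k / 10 for k in range(-50, 51) if is_mst(desired_tree, k / 10)]
print("sampled parameter values where the desired tree is an MST:", feasible)
```

For this toy graph the scan reports the interval of parameter values (roughly 0.5 to 0.75) over which the target tree is optimal; the paper's algorithms solve such problems exactly and for richer parameterizations.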
An optimal randomized algorithm for d-variate zonoid depth
A randomized linear expected-time algorithm for computing the zonoid depth [R. Dyckerhoff, G. Koshevoy, K. Mosler, Zonoid data depth: Theory and computation, in: A. Prat (Ed.), COMPSTAT 1996—Proceedings in Computational Statistics, Physica-Verlag, Heidelberg, 1996, pp. 235–240; K. Mosler, Multivariate Dispersion, Central Regions and Depth. The Lift Zonoid Approach, Lecture Notes in Statistics, vol. 165, Springer-Verlag, New York, 2002] of a point with respect to a fixed d-dimensional point set is presented.
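As background on what is being computed, here is a small sketch of our own (not the paper's randomized linear expected-time algorithm): zonoid depth has a linear-programming characterization, depth(p) = 1/(n t*), where t* is the smallest possible value of max_i lambda_i over convex combinations sum_i lambda_i x_i = p. The example data and the use of scipy.optimize.linprog are illustrative assumptions.

```python
# Compute zonoid depth of a point via a small LP (reference sketch, not the
# optimal randomized algorithm from the paper).
import numpy as np
from scipy.optimize import linprog

def zonoid_depth(p, X):
    """Zonoid depth of point p (shape (d,)) w.r.t. the rows of X (shape (n, d))."""
    n, d = X.shape
    # Variables: lambda_1..lambda_n, t.  Objective: minimize t.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    # lambda_i - t <= 0 for every i.
    A_ub = np.hstack([np.eye(n), -np.ones((n, 1))])
    b_ub = np.zeros(n)
    # sum_i lambda_i = 1  and  sum_i lambda_i * x_i = p.
    A_eq = np.vstack([np.hstack([np.ones((1, n)), np.zeros((1, 1))]),
                      np.hstack([X.T, np.zeros((d, 1))])])
    b_eq = np.concatenate([[1.0], p])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    if not res.success:          # p lies outside the convex hull of X
        return 0.0
    return 1.0 / (n * res.x[-1])

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(zonoid_depth(np.array([0.5, 0.5]), X))   # mean of the data -> depth 1.0
```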
A randomized algorithm for large scale support vector learning
We propose a randomized algorithm for large-scale SVM learning which solves the problem by iterating over random subsets of the data. The size of the chosen subsets is crucial to the algorithm's scalability. In the context of text classification we show that, using ideas from random projections, a sample size of O(log n) suffices to obtain a solution that is close to optimal with high probability. Experiments on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss in accuracy.
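A loose illustration of the subset idea (our own sketch, not the authors' algorithm): train a standard linear SVM on a randomly drawn subset whose size grows like log n and check held-out accuracy. The synthetic data, the constant factor, and scikit-learn's LinearSVC are all assumptions made for the demo.

```python
# Train an SVM on an O(log n)-sized random subset and evaluate it (illustrative only).
import math
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=20000, n_features=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n = len(X_train)
subset_size = int(200 * math.log(n))            # O(log n); the constant is arbitrary
idx = rng.choice(n, size=min(subset_size, n), replace=False)

clf = LinearSVC(dual=True, max_iter=5000).fit(X_train[idx], y_train[idx])
print("held-out accuracy from a log-size subset:", clf.score(X_test, y_test))
```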