Approximation algorithms for facility location and other supply chain problems (Algoritmos de aproximação para problemas de alocação de instalações e outros problemas de cadeia de fornecimento)
Advisors: Flávio Keidi Miyazawa, Maxim Sviridenko. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: the abstract is available with the full electronic document. PhD, Computer Science (Doutor em Ciência da Computação).
Greedy vector quantization
We investigate the greedy version of the $L^p$-optimal vector quantization problem for an $\mathbb{R}^d$-valued random vector $X \in L^p$. We show the existence of a sequence $(a_N)_{N\ge 1}$ such that $a_N$ minimizes $a \mapsto \big\|\min_{1\le i\le N-1}|X-a_i| \wedge |X-a|\big\|_{L^p}$ (the $L^p$-mean quantization error at level $N$ induced by $(a_1,\ldots,a_{N-1},a)$). We show that this sequence produces $L^p$-rate optimal $N$-tuples $(a_1,\ldots,a_N)$ (i.e. the $L^p$-mean quantization error at level $N$ induced by $(a_1,\ldots,a_N)$ goes to $0$ at rate $N^{-\frac{1}{d}}$). Greedy optimal sequences also satisfy, under natural additional assumptions, the distortion mismatch property: the $N$-tuples remain rate optimal with respect to the $L^q$-norms, $p \le q < p+d$. Finally, we propose optimization methods to compute greedy sequences, adapted from the usual Lloyd's I and Competitive Learning Vector Quantization procedures, in either their deterministic (implementable when $d = 1$) or stochastic versions.
Comment: 31 pages, 4 figures, a few typos corrected (now an extended version of an eponymous paper to appear in Journal of Approximation
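The greedy construction translates directly into an empirical procedure: fix a finite sample standing in for $X$, and at each level add the single point that most reduces the mean $p$-th-power distance to the current codebook. Below is a minimal NumPy sketch; the function name, the candidate set (the sample points themselves), and the default $p = 2$ are illustrative choices, not the paper's Lloyd's I / CLVQ procedures.

```python
import numpy as np

def greedy_quantizer(samples, N, p=2, candidates=None):
    """Greedily build an N-point quantizer for the empirical
    distribution of `samples` (shape (n, d)).

    At each level, the new point a_N minimizes the empirical L^p mean
    quantization error induced by (a_1, ..., a_{N-1}, a), with `a`
    ranging over a candidate set (here: the sample points themselves).
    """
    samples = np.asarray(samples, dtype=float)
    if candidates is None:
        candidates = samples
    # distance from each sample to its nearest codepoint so far
    best = np.full(len(samples), np.inf)
    codebook = []
    for _ in range(N):
        # error if candidate c were added: min(best, |x - c|) per sample
        errs = []
        for c in candidates:
            d = np.linalg.norm(samples - c, axis=1)
            errs.append(np.mean(np.minimum(best, d) ** p))
        a = candidates[int(np.argmin(errs))]
        codebook.append(a)
        best = np.minimum(best, np.linalg.norm(samples - a, axis=1))
    return np.array(codebook)
```

Because the sequence is nested, the level-$N$ codebook extends the level-$(N-1)$ one, so refining the quantizer never requires recomputing earlier points; this is the practical appeal of the greedy version.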
Approximation with Random Bases: Pro et Contra
In this work we discuss the problem of selecting suitable approximators from
families of parameterized elementary functions that are known to be dense in a
Hilbert space of functions. We consider and analyze published procedures, both
randomized and deterministic, for selecting elements from these families that
have been shown to ensure the rate of convergence in $L_2$ norm of order
$O(1/N)$, where $N$ is the number of elements. We show that both randomized and
deterministic procedures are successful if additional information about the
families of functions to be approximated is provided. In the absence of such
additional information one may observe exponential growth of the number of
terms needed to approximate the function and/or extreme sensitivity of the
outcome of the approximation to parameters. Implications of our analysis for
applications of neural networks in modeling and control are illustrated with
examples.
Comment: arXiv admin note: text overlap with arXiv:0905.067
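As a toy illustration of the setup (not the paper's analysis), one can fit a target function by least squares over randomly parameterized ridge functions and inspect the residual. The basis family, parameter ranges, and seed below are arbitrary choices — which is precisely the kind of sensitivity to parameters the abstract warns about.

```python
import numpy as np

def random_basis_fit(f, N, n_grid=200, seed=0):
    """Fit f on [0, 1] with N randomly parameterized ridge functions
    phi_k(x) = tanh(w_k * x + b_k), coefficients chosen by least
    squares. Illustrative only: the family and the ranges of w_k, b_k
    are arbitrary modeling choices."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_grid)
    w = rng.uniform(-10, 10, N)
    b = rng.uniform(-10, 10, N)
    Phi = np.tanh(np.outer(x, w) + b)          # (n_grid, N) design matrix
    coef, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
    resid = f(x) - Phi @ coef
    return coef, np.sqrt(np.mean(resid ** 2))  # discrete L2 (RMS) error
```

Rerunning with a different seed or different parameter ranges can change the residual substantially, mirroring the paper's point that, without additional information about the target, random selection of basis elements gives no guaranteed rate.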
Constant Approximation for $k$-Median and $k$-Means with Outliers via Iterative Rounding
In this paper, we present a new iterative rounding framework for many
clustering problems. Using this, we obtain a constant-factor approximation
algorithm for $k$-median with outliers, greatly improving upon the large
implicit constant approximation ratio of Chen [Chen, SODA 2018]. For $k$-means
with outliers, we give a constant-factor approximation, which is the first
$O(1)$-approximation for this problem. The iterative algorithm framework is
very versatile; we show how it can be used to give improved constant-factor
approximation algorithms for the matroid and knapsack median problems,
improving upon the previous best approximation ratios due to Swamy [ACM Trans.
Algorithms] and Byrka et al. [ESA 2015].
The natural LP relaxation for the $k$-median/$k$-means with outliers problem
has an unbounded integrality gap. In spite of this negative result, our
iterative rounding framework shows that we can round an LP solution to an
almost-integral solution of small cost, in which we have at most two
fractionally open facilities. Thus, the LP integrality gap arises due to the
gap between almost-integral and fully-integral solutions. Then, using a
pre-processing procedure, we show how to convert an almost-integral solution to
a fully-integral solution losing only a constant-factor in the approximation
ratio. By further using a sparsification technique, the additive factor loss
incurred by the conversion can be reduced to any $\epsilon > 0$. …
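For readers unfamiliar with the objective, here is a toy brute-force evaluation of the $k$-median-with-outliers cost (serve all but $m$ points from $k$ centers, discarding the $m$ farthest points). It is useful only for checking small instances and is emphatically not the paper's iterative-rounding algorithm; the function name and the restriction of centers to input points are illustrative assumptions.

```python
import itertools
import numpy as np

def kmedian_outliers_bruteforce(points, k, m):
    """Exhaustively choose k centers (among the points) minimizing the
    k-median cost after discarding the m farthest points as outliers.
    Exponential in k; a toy reference, not the paper's algorithm."""
    pts = np.asarray(points, dtype=float)
    best_cost, best_centers = np.inf, None
    for centers in itertools.combinations(range(len(pts)), k):
        # distance of every point to its nearest chosen center
        d = np.min(
            np.linalg.norm(pts[:, None] - pts[list(centers)][None], axis=2),
            axis=1,
        )
        # drop the m largest distances: those points become outliers
        cost = np.sum(np.sort(d)[: len(pts) - m])
        if cost < best_cost:
            best_cost, best_centers = cost, centers
    return best_cost, best_centers
```

On an instance with a single far-away point, setting $m = 1$ lets the optimum ignore that point entirely, which is exactly why the natural LP is so hard to round: the solver must decide which points to discard and which facilities to open simultaneously.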