
    The random link approximation for the Euclidean traveling salesman problem

    The traveling salesman problem (TSP) consists of finding the length of the shortest closed tour visiting N ``cities''. We consider the Euclidean TSP where the cities are distributed randomly and independently in a d-dimensional unit hypercube. Working with periodic boundary conditions and inspired by a remarkable universality in the kth nearest neighbor distribution, we find for the average optimum tour length <L_E> = beta_E(d) N^{1-1/d} [1+O(1/N)] with beta_E(2) = 0.7120 +- 0.0002 and beta_E(3) = 0.6979 +- 0.0002. We then derive analytical predictions for these quantities using the random link approximation, where the lengths between cities are taken as independent random variables. From the ``cavity'' equations developed by Krauth, Mezard and Parisi, we calculate the associated random link values beta_RL(d). For d=1,2,3, numerical results show that the random link approximation is a good one, with a discrepancy of less than 2.1% between beta_E(d) and beta_RL(d). For large d, we argue that the approximation is exact up to O(1/d^2) and give a conjecture for beta_E(d), in terms of a power series in 1/d, specifying both leading and subleading coefficients. Comment: 29 pages, 6 figures; formatting and typos corrected
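
    As a toy illustration of the scaling law above (not code from the paper), the Python sketch below drops N random cities on the unit torus and measures the length of a closed tour built by a simple nearest-neighbor heuristic. Since the heuristic tour is longer than the optimum, the ratio L/sqrt(N) settles above beta_E(2) = 0.7120, but the N^{1/2} growth for d=2 is already visible.

    import numpy as np

    def periodic_dist(a, b):
        """Distance on the unit torus (periodic boundary conditions)."""
        delta = np.abs(a - b)
        delta = np.minimum(delta, 1.0 - delta)
        return np.sqrt((delta ** 2).sum(axis=-1))

    def greedy_tour_length(points):
        """Closed-tour length from repeatedly visiting the nearest unvisited city."""
        n = len(points)
        visited = np.zeros(n, dtype=bool)
        visited[0] = True
        cur, total = 0, 0.0
        for _ in range(n - 1):
            d = periodic_dist(points[cur], points)  # distances to every city
            d[visited] = np.inf                     # mask cities already used
            cur = int(np.argmin(d))
            total += d[cur]
            visited[cur] = True
        return total + periodic_dist(points[cur], points[0])  # close the tour

    rng = np.random.default_rng(0)
    for n in (100, 400, 1600):
        lengths = [greedy_tour_length(rng.random((n, 2))) for _ in range(3)]
        print(n, np.mean(lengths) / np.sqrt(n))  # roughly constant in n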

    Preconditioning of Improved and ``Perfect'' Fermion Actions

    We construct a locally-lexicographic SSOR preconditioner to accelerate the parallel iterative solution of linear systems of equations for two improved discretizations of lattice fermions: (i) the Sheikholeslami-Wohlert scheme, where a non-constant block-diagonal term is added to the Wilson fermion matrix, and (ii) renormalization group improved actions which incorporate couplings beyond nearest neighbors of the lattice fermion fields. In case (i) we find the block ll-SSOR scheme to be more effective, by a factor of about 2, than odd-even preconditioned solvers in terms of convergence rates, at beta=6.0. For type (ii) actions, we show that our preconditioner accelerates the iterative solution of a linear system of hypercube fermions by a factor of 3 to 4. Comment: 27 pages, Latex, 17 Figures included
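
    The abstract's ll-SSOR scheme is specific to lattice fermion matrices, but the general SSOR preconditioning idea is easy to sketch. The toy below (our own illustration; a 1-D Laplacian stands in for the fermion matrix, and omega = 1.2 is chosen arbitrarily) wraps an SSOR preconditioner as a SciPy LinearOperator and counts conjugate-gradient iterations with and without it.

    import numpy as np
    from scipy.linalg import solve_triangular
    from scipy.sparse.linalg import LinearOperator, cg

    n, omega = 200, 1.2
    # Model problem: 1-D discrete Laplacian (symmetric positive definite).
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)

    # SSOR: M = (2-omega)^{-1} (D/omega + L) (D/omega)^{-1} (D/omega + U);
    # applying M^{-1} costs one forward and one backward triangular solve.
    lower = D / omega + L
    upper = D / omega + U

    def apply_Minv(r):
        y = solve_triangular(lower, r, lower=True)
        y = ((2.0 - omega) / omega) * np.diag(A) * y  # middle factor (2-omega) D/omega
        return solve_triangular(upper, y, lower=False)

    M = LinearOperator((n, n), matvec=apply_Minv)
    b = np.ones(n)

    def cg_iterations(M=None):
        count = 0
        def cb(xk):
            nonlocal count
            count += 1
        cg(A, b, M=M, callback=cb)
        return count

    print("CG iterations:", cg_iterations(), "plain vs", cg_iterations(M=M), "with SSOR")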

    From Proximity to Utility: A Voronoi Partition of Pareto Optima

    We present an extension of Voronoi diagrams in which, when considering which site a client is going to use, other site attributes (for example, prices or weights) are taken into account in addition to the site distances. A cell in this diagram is then the locus of all clients that consider the same set of sites to be relevant. In particular, the precise site a client might use from this candidate set depends on parameters that might change between usages, and the candidate set lists all of the relevant sites. The resulting diagram is significantly more expressive than Voronoi diagrams, but naturally has the drawback that its complexity, even in the plane, might be quite high. Nevertheless, we show that if the attributes of the sites are drawn from the same distribution (note that the locations are fixed), then the expected complexity of the candidate diagram is near linear. To this end, we derive several new technical results, which are of independent interest. In particular, we provide a high-probability, asymptotically optimal bound on the number of Pareto optima in a point set uniformly sampled from the d-dimensional hypercube. To do so we revisit the classical backward analysis technique, both simplifying and improving relevant results in order to achieve the high-probability bounds.
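
    The Pareto-optima bound is easy to probe experimentally. A minimal sketch (ours, not the paper's): sample n points uniformly from the d-dimensional hypercube and count those not dominated coordinate-wise by any other point; classically the expected count grows only polylogarithmically in n, like ln^{d-1}(n)/(d-1)!.

    import numpy as np

    def count_pareto_optima(points):
        """Count points p such that no other point exceeds p in every coordinate."""
        count = 0
        for i in range(len(points)):
            dominators = np.all(points > points[i], axis=1)  # strict domination
            if not dominators.any():
                count += 1
        return count

    rng = np.random.default_rng(1)
    for n in (100, 1000, 5000):
        pts = rng.random((n, 3))  # d = 3
        print(n, count_pareto_optima(pts))  # grows ~ log^2(n)/2, not ~ n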

    A directed isoperimetric inequality with application to Bregman near neighbor lower bounds

    Bregman divergences D_\phi are a class of divergences parametrized by a convex function \phi and include well-known distance functions like \ell_2^2 and the Kullback-Leibler divergence. There has been extensive research on algorithms for problems like clustering and near neighbor search with respect to Bregman divergences. In all cases, the algorithms depend not just on the data size n and dimensionality d, but also on a structure constant \mu \ge 1 that depends solely on \phi and can grow without bound independently of n and d. In this paper, we provide the first evidence that this dependence on \mu might be intrinsic. We focus on the problem of approximate near neighbor search for Bregman divergences. We show that under the cell probe model, any non-adaptive data structure (like locality-sensitive hashing) for c-approximate near-neighbor search that admits r probes must use space \Omega(n^{1 + \mu/(cr)}). In contrast, for LSH under \ell_1 the best bound is \Omega(n^{1 + 1/(cr)}). Our new tool is a directed variant of the standard boolean noise operator. We show that a generalization of the Bonami-Beckner hypercontractivity inequality holds "in expectation" or upon restriction to certain subsets of the Hamming cube, and that this is sufficient to prove the desired isoperimetric inequality that we use in our data structure lower bound. We also present a structural result reducing the Hamming cube to a Bregman cube. This structure allows us to obtain lower bounds for problems under Bregman divergences from their \ell_1 analogs. In particular, we get a (weaker) lower bound for approximate near neighbor search of the form \Omega(n^{1 + 1/(cr)}) for an r-query non-adaptive data structure, and new cell probe lower bounds for a number of other near neighbor questions in Bregman space. Comment: 27 pages
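
    For concreteness, a Bregman divergence is D_\phi(x, y) = \phi(x) - \phi(y) - <grad \phi(y), x - y>. The snippet below (a generic illustration, not the paper's construction) checks that \phi(x) = ||x||^2 recovers squared Euclidean distance and \phi(x) = sum_i x_i log x_i recovers the Kullback-Leibler divergence on probability vectors.

    import numpy as np

    def bregman(phi, grad_phi, x, y):
        """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
        return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

    x = np.array([0.2, 0.3, 0.5])  # probability vectors (each sums to 1)
    y = np.array([0.3, 0.3, 0.4])

    # phi(x) = ||x||^2  ->  squared Euclidean distance
    print(bregman(lambda v: np.dot(v, v), lambda v: 2.0 * v, x, y),
          np.sum((x - y) ** 2))

    # phi(x) = sum x log x  ->  KL divergence (for vectors summing to 1)
    print(bregman(lambda v: np.sum(v * np.log(v)), lambda v: np.log(v) + 1.0, x, y),
          np.sum(x * np.log(x / y)))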

    Privacy sets for constrained space-filling

    The paper provides a typology of space-filling methods, dividing them into what we call "soft" and "hard" methods, and introduces the central notion of privacy sets for dealing with the latter. A heuristic algorithm based on this notion is presented, and we compare its performance on some well-known examples.

    An Emulator for the Lyman-alpha Forest

    We present methods for interpolating between the 1-D flux power spectrum of the Lyman-α forest, as output by cosmological hydrodynamic simulations. Interpolation is necessary for cosmological parameter estimation due to the limited number of simulations possible. We construct an emulator for the Lyman-α forest flux power spectrum from 21 small simulations using Latin hypercube sampling and Gaussian process interpolation. We show that this emulator has a typical accuracy of 1.5% and a worst-case accuracy of 4%, which compares well to the current statistical error of 3 - 5% at z < 3 from BOSS DR9. We compare to the previous state of the art, quadratic polynomial interpolation. The Latin hypercube samples the entire volume of parameter space, while quadratic polynomial emulation samples only lower-dimensional subspaces. The Gaussian process provides an estimate of the emulation error, and we show using test simulations that this estimate is reasonable. We construct a likelihood function and use it to show that the posterior constraints generated using the emulator are unbiased. We show that our Gaussian process emulator has lower emulation error than quadratic polynomial interpolation and thus produces tighter posterior confidence intervals, which will be essential for future Lyman-α surveys such as DESI. Comment: 28 pages, 10 figures, accepted to JCAP with minor changes
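
    The emulation recipe itself is compact. A toy version (ours; a synthetic function stands in for the simulated flux power spectrum, and 21 matches the paper's training-set size): draw parameter samples from a Latin hypercube, fit a Gaussian process, and read off both a prediction and the GP's own error estimate.

    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expensive_simulation(theta):
        """Toy stand-in for running a simulation at parameters theta."""
        return np.sin(3.0 * theta[:, 0]) * np.exp(-theta[:, 1])

    # 21 training points from a Latin hypercube over a 2-D parameter space.
    theta_train = qmc.LatinHypercube(d=2, seed=0).random(n=21)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
    gp.fit(theta_train, expensive_simulation(theta_train))

    # Predict at held-out points; sigma is the GP's internal error estimate.
    theta_test = qmc.LatinHypercube(d=2, seed=1).random(n=5)
    mean, sigma = gp.predict(theta_test, return_std=True)
    for m, s, truth in zip(mean, sigma, expensive_simulation(theta_test)):
        print(f"predicted {m:+.3f} +- {s:.3f}   truth {truth:+.3f}")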

    A partitioning strategy for nonuniform problems on multiprocessors

    The partitioning of a problem on a domain with unequal work estimates in different subdomains is considered in a way that balances the work load across multiple processors. Such a problem arises, for example, in solving partial differential equations using an adaptive method that places extra grid points in certain subregions of the domain. A binary decomposition of the domain is used to partition it into rectangles requiring equal computational effort. The communication cost of mapping this partitioning onto three different multiprocessors (a mesh-connected array, a tree machine, and a hypercube) is then studied. The communication cost expressions can be used to determine the optimal depth of the above partitioning.
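
    A minimal sketch of the binary decomposition (our reconstruction of the general idea, not the paper's code): recursively cut a 2-D grid of per-cell work estimates where the running total reaches half, alternating the cut direction, so depth k yields 2^k rectangles of roughly equal work.

    import numpy as np

    def bisect(work, depth, axis=0):
        """Split `work` into 2**depth rectangles (row_slice, col_slice) of ~equal work."""
        if depth == 0:
            return [(slice(0, work.shape[0]), slice(0, work.shape[1]))]
        line = work.sum(axis=1 - axis)  # work per row (axis=0) or column (axis=1)
        cut = int(np.searchsorted(np.cumsum(line), line.sum() / 2.0)) + 1
        cut = min(max(cut, 1), work.shape[axis] - 1)  # keep both halves nonempty
        if axis == 0:
            halves = [(work[:cut, :], (0, 0)), (work[cut:, :], (cut, 0))]
        else:
            halves = [(work[:, :cut], (0, 0)), (work[:, cut:], (0, cut))]
        rects = []
        for part, (r0, c0) in halves:
            for rs, cs in bisect(part, depth - 1, 1 - axis):  # alternate direction
                rects.append((slice(rs.start + r0, rs.stop + r0),
                              slice(cs.start + c0, cs.stop + c0)))
        return rects

    # Nonuniform work: an adaptively refined hot spot in one corner.
    work = np.ones((64, 64))
    work[:16, :16] = 8.0
    for rect in bisect(work, depth=3):  # 8 rectangles, e.g. one per processor
        print(rect, work[rect].sum())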

    Geometric Measurement of Topological Susceptibility on Large Lattices

    The topological susceptibility of the quenched QCD vacuum is measured on large lattices for three β values from 6.0 to 6.4. Charges possibly induced by O(a) dislocations are identified and shown to have little effect on the measured susceptibility. As β increases, fewer such questionable charges are found. Scaling is checked by examining the ratios of the susceptibility to previously existing values of the rho mass, string tension, F-pi, and lambda-lattice. Comment: LaTeX article, 3 pages, uuencoded compressed tar file, 2 figures included as tex files using axis macros, DVIPS driver required to show figures. Talk presented by Jeffrey Grandy at Lattice 93, Dallas, Texas. Los Alamos preprint number pending
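
    For reference, the quantity being measured has the standard definition below (our statement of the textbook convention, not a formula quoted from the talk): the susceptibility is the variance of the integer-valued topological charge per unit lattice 4-volume.

    % chi_t from the topological charge Q of each configuration (<Q> = 0
    % in the quenched ensemble), with V the lattice 4-volume:
    \chi_t = \frac{\langle Q^2 \rangle}{V},
    \qquad
    Q = \frac{1}{32\pi^2} \int d^4x \,
        \epsilon_{\mu\nu\rho\sigma} \,\mathrm{tr}\, F_{\mu\nu}(x) F_{\rho\sigma}(x)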