3,222 research outputs found

    A method for dense packing discovery

    The problem of packing a system of particles as densely as possible is foundational in the field of discrete geometry and is a powerful model in the material and biological sciences. As packing problems retreat from the reach of solution by analytic constructions, the importance of an efficient numerical method for conducting \textit{de novo} (from-scratch) searches for dense packings becomes crucial. In this paper, we use the \textit{divide and concur} framework to develop a general search method for the solution of periodic constraint problems, and we apply it to the discovery of dense periodic packings. An important feature of the method is the integration of the unit cell parameters with the other packing variables in the definition of the configuration space. The method we present led to improvements in the densest-known tetrahedron packing which are reported in [arXiv:0910.5226]. Here, we use the method to reproduce the densest known lattice sphere packings and the best known lattice kissing arrangements in up to 14 and 11 dimensions respectively (the first such numerical evidence for their optimality in some of these dimensions). For non-spherical particles, we report a new dense packing of regular four-dimensional simplices with density $\phi = 128/219 \approx 0.5845$ and with a similar structure to the densest known tetrahedron packing. Comment: 15 pages, 5 figures
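
    The sketch below illustrates the divide-and-concur idea on a toy feasibility problem (a point constrained to lie on a circle and on a line); it is not the paper's packing code, and the constraints, the value of beta, and the iteration count are illustrative choices. Each constraint acts on its own replica of the variables ("divide"), the replicas are averaged back together ("concur"), and a difference-map iteration combines the two projections.

```python
# Minimal divide-and-concur sketch (illustrative toy, not the paper's code).
import numpy as np

def project_circle(p, r=2.0):
    # nearest point on the circle of radius r centered at the origin
    n = np.linalg.norm(p)
    return p * (r / n) if n > 0 else np.array([r, 0.0])

def project_line(p, y=1.0):
    # nearest point on the horizontal line at height y
    return np.array([p[0], y])

def divide(replicas):
    # "divide": satisfy each constraint independently on its own replica
    return np.stack([project_circle(replicas[0]), project_line(replicas[1])])

def concur(replicas):
    # "concur": force the replicas to agree by averaging them
    mean = replicas.mean(axis=0)
    return np.stack([mean, mean])

def difference_map(replicas, beta=1.0, iters=300):
    # difference-map style iteration combining the two projections (beta = 1)
    for _ in range(iters):
        pd = divide(replicas)
        pc = concur(2.0 * pd - replicas)
        replicas = replicas + beta * (pc - pd)
    return divide(replicas).mean(axis=0)

rng = np.random.default_rng(0)
print(difference_map(rng.normal(size=(2, 2))))  # a point near a circle/line intersection
```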

    Parametric Alignment of Drosophila Genomes

    The classic algorithms of Needleman--Wunsch and Smith--Waterman find a maximum a posteriori probability alignment for a pair hidden Markov model (PHMM). In order to process large genomes that have undergone complex genome rearrangements, almost all existing whole genome alignment methods apply fast heuristics to divide genomes into small pieces which are suitable for Needleman--Wunsch alignment. In these alignment methods, it is standard practice to fix the parameters and to produce a single alignment for subsequent analysis by biologists. Our main result is the construction of a whole genome parametric alignment of Drosophila melanogaster and Drosophila pseudoobscura. Parametric alignment resolves the issue of robustness to changes in parameters by finding all optimal alignments for all possible parameters in a PHMM. Our alignment draws on existing heuristics for dividing whole genomes into small pieces for alignment, and it relies on advances we have made in computing convex polytopes that allow us to parametrically align non-coding regions using biologically realistic models. We demonstrate the utility of our parametric alignment for biological inference by showing that cis-regulatory elements are more conserved between Drosophila melanogaster and Drosophila pseudoobscura than previously thought. We also show how whole genome parametric alignment can be used to quantitatively assess the dependence of branch length estimates on alignment parameters. The alignment polytopes, software, and supplementary material can be downloaded at http://bio.math.berkeley.edu/parametric/. Comment: 19 pages, 3 figures
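
    As a reminder of the dynamic program that these heuristic whole-genome aligners apply to the small pieces, here is a minimal Needleman--Wunsch scoring sketch; the match, mismatch, and gap values are illustrative, and it is exactly this kind of parameter choice that parametric alignment varies over.

```python
# Minimal Needleman--Wunsch global alignment score (illustrative parameters).
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # substitution
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```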

    Assessing the robustness of parsimonious predictions for gene neighborhoods from reconciled phylogenies

    The availability of a large number of assembled genomes opens the way to study the evolution of syntenic characters within a phylogenetic context. The DeCo algorithm, recently introduced by BĆ©rard et al., allows the computation of parsimonious evolutionary scenarios for gene adjacencies from pairs of reconciled gene trees. Following the approach pioneered by Sturmfels and Pachter, we describe how to modify the DeCo dynamic programming algorithm to identify classes of cost schemes that generate similar parsimonious evolutionary scenarios for gene adjacencies, as well as to assess how robust the presence or absence of specific ancestral gene adjacencies is to changes in the cost scheme of evolutionary events. We apply our method to six thousand mammalian gene families, and show that computing the robustness to changes in cost schemes provides new and interesting insights into the evolution of gene adjacencies and the DeCo model. Comment: Accepted, to appear in ISBRA - 11th International Symposium on Bioinformatics Research and Applications - 2015, Jun 2015, Norfolk, Virginia, United States
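
    The sketch below is not DeCo itself (which works on pairs of reconciled gene trees); it is a small Sankoff-style dynamic program on a toy tree that scores presence or absence of an adjacency at ancestral nodes under a chosen cost scheme, which makes it easy to see how the parsimonious scenario can flip when the gain and loss costs change. The tree, leaf states, and costs are illustrative assumptions.

```python
# Sankoff-style small-parsimony sketch for presence (1) / absence (0) of an
# adjacency, under an asymmetric gain/loss cost scheme (not the DeCo model).
def sankoff(tree, leaf_state, gain_cost, loss_cost):
    """tree: dict mapping internal node -> (left, right); leaves are absent."""
    def cost(node):
        if node not in tree:                      # leaf: state is observed
            s = leaf_state[node]
            return {s: 0.0, 1 - s: float("inf")}
        left, right = (cost(c) for c in tree[node])
        out = {}
        for s in (0, 1):
            total = 0.0
            for child in (left, right):
                # a 0->1 change costs gain_cost, a 1->0 change costs loss_cost
                total += min(child[t] + (0 if t == s else
                                         gain_cost if t == 1 else loss_cost)
                             for t in (0, 1))
            out[s] = total
        return out
    root_cost = cost("root")
    return min(root_cost, key=root_cost.get), root_cost

tree = {"root": ("A", "X"), "X": ("B", "C")}
leaves = {"A": 1, "B": 1, "C": 0}
print(sankoff(tree, leaves, gain_cost=3.0, loss_cost=1.0))  # root prefers presence
print(sankoff(tree, leaves, gain_cost=1.0, loss_cost=3.0))  # root prefers absence
```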

    Secret Sharing Schemes with a large number of players from Toric Varieties

    A general theory for constructing linear secret sharing schemes over a finite field $\mathbb{F}_q$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions for multiplication on the associated secret sharing schemes. In particular, we apply the method to certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds and conditions for strong multiplication using the cohomology and the intersection theory on toric surfaces. Comment: 15 pages, 4 figures. arXiv admin note: text overlap with arXiv:1203.454
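
    A minimal sketch of the evaluation-code template behind such schemes (not the paper's toric-surface construction or parameters): a random polynomial is drawn from a fixed set of monomials, the secret is hidden as its value at (1, 1), and each of the $(q-1)^2-1$ remaining points of $(\mathbb{F}_q^*)^2$ receives one share. The field size and monomial set below are toy assumptions, and reconstruction from sufficiently many shares (linear algebra over $\mathbb{F}_q$) is omitted.

```python
# Toy evaluation-based secret sharing over a small prime field (illustrative).
import random

q = 7                                          # toy prime field F_7 (assumption)
exponents = [(0, 0), (1, 0), (0, 1), (1, 1)]   # monomials x^a y^b, a toy choice

def random_poly(secret):
    # choose coefficients so that f(1, 1) = sum of coefficients = secret (mod q)
    coeffs = [random.randrange(q) for _ in exponents[:-1]]
    coeffs.append((secret - sum(coeffs)) % q)
    return coeffs

def evaluate(coeffs, x, y):
    return sum(c * pow(x, a, q) * pow(y, b, q)
               for c, (a, b) in zip(coeffs, exponents)) % q

secret = 5
f = random_poly(secret)
players = [(x, y) for x in range(1, q) for y in range(1, q) if (x, y) != (1, 1)]
shares = {P: evaluate(f, *P) for P in players}     # one share per player
assert evaluate(f, 1, 1) == secret                 # dealer's sanity check
print(len(players), "players; sample shares:", sorted(shares.items())[:3])
```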

    Polynomial-Sized Topological Approximations Using The Permutahedron

    Classical methods to model topological properties of point clouds, such as the Vietoris-Rips complex, suffer from the combinatorial explosion of complex sizes. We propose a novel technique to approximate a multi-scale filtration of the Rips complex with improved bounds for size: precisely, for $n$ points in $\mathbb{R}^d$, we obtain an $O(d)$-approximation with at most $n2^{O(d \log k)}$ simplices of dimension $k$ or lower. In conjunction with dimension reduction techniques, our approach yields an $O(\mathrm{polylog}(n))$-approximation of size $n^{O(1)}$ for Rips filtrations on arbitrary metric spaces. This result stems from high-dimensional lattice geometry and exploits properties of the permutahedral lattice, a well-studied structure in discrete geometry. Building on the same geometric concept, we also present a lower bound result on the size of an approximate filtration: we construct a point set for which every $(1+\epsilon)$-approximation of the Čech filtration has to contain $n^{\Omega(\log\log n)}$ features, provided that $\epsilon < \frac{1}{\log^{1+c} n}$ for $c\in(0,1)$. Comment: 24 pages, 1 figure
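
    To make the size blow-up concrete, here is a naive Vietoris-Rips construction (illustrative only, not the permutahedral approximation scheme): it enumerates every edge and triangle within a scale r, which is precisely the combinatorial growth the approximation is designed to avoid.

```python
# Naive Vietoris-Rips complex up to dimension 2 (illustrative only).
from itertools import combinations
import math, random

def rips_complex(points, r, max_dim=2):
    # all subsets of size <= max_dim + 1 whose pairwise distances are <= r
    simplices = [[(i,) for i in range(len(points))]]          # vertices
    for k in range(1, max_dim + 1):
        level = [s for s in combinations(range(len(points)), k + 1)
                 if all(math.dist(points[i], points[j]) <= r
                        for i, j in combinations(s, 2))]
        simplices.append(level)
    return simplices

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
counts = [len(level) for level in rips_complex(pts, r=0.3)]
print("vertices, edges, triangles:", counts)
```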

    Experimental study of energy-minimizing point configurations on spheres

    In this paper we report on massive computer experiments aimed at finding spherical point configurations that minimize potential energy. We present experimental evidence for two new universal optima (consisting of 40 points in 10 dimensions and 64 points in 14 dimensions), as well as evidence that there are no others with at most 64 points. We also describe several other new polytopes, and we present new geometrical descriptions of some of the known universal optima. Comment: 41 pages, 12 figures, to appear in Experimental Mathematics
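
    A minimal version of this kind of experiment (illustrative, not the authors' code): projected gradient descent on the Riesz s-energy of N points on the unit sphere, with the step size, iteration count, and seed chosen arbitrarily.

```python
# Projected gradient descent on the Riesz s-energy of points on a sphere.
import numpy as np

def riesz_energy(X, s=1.0):
    # sum over pairs of 1 / |x_i - x_j|^s
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return np.sum(1.0 / D[iu] ** s)

def minimize_on_sphere(N=12, d=3, s=1.0, steps=5000, lr=5e-3, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(N, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    for _ in range(steps):
        diff = X[:, None, :] - X[None, :, :]           # pairwise differences x_i - x_j
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                 # ignore self-interaction
        grad = -s * np.sum(diff / dist[..., None] ** (s + 2), axis=1)
        X -= lr * grad                                 # gradient step ...
        X /= np.linalg.norm(X, axis=1, keepdims=True)  # ... then back onto the sphere
    return X, riesz_energy(X, s)

X, E = minimize_on_sphere()
# for 12 points and s = 1 the global optimum is attained by the icosahedron
print(f"s=1 energy of 12 points on S^2 after descent: {E:.4f}")
```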

    Smoothing Method for Approximate Extensive-Form Perfect Equilibrium

    Nash equilibrium is a popular solution concept for solving imperfect-information games in practice. However, it has a major drawback: it does not preclude suboptimal play in branches of the game tree that are not reached in equilibrium. Equilibrium refinements can mend this issue, but have experienced little practical adoption. This is largely due to a lack of scalable algorithms. Sparse iterative methods, in particular first-order methods, are known to be among the most effective algorithms for computing Nash equilibria in large-scale two-player zero-sum extensive-form games. In this paper, we provide, to our knowledge, the first extension of these methods to equilibrium refinements. We develop a smoothing approach for behavioral perturbations of the convex polytope that encompasses the strategy spaces of players in an extensive-form game. This enables one to compute an approximate variant of extensive-form perfect equilibria. Experiments show that our smoothing approach leads to solutions with dramatically stronger strategies at information sets that are reached with low probability in approximate Nash equilibria, while retaining the overall convergence rate associated with fast algorithms for Nash equilibrium. This has benefits both in approximate equilibrium finding (such approximation is necessary in practice in large games), where some probabilities are low while possibly heading toward zero in the limit, and in exact equilibrium computation, where the low probabilities are actually zero. Comment: Published at IJCAI 1
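
    The sketch below shows the flavor of iterative equilibrium finding on the simplest possible case: a zero-sum matrix game solved by multiplicative-weights self-play, whose averaged strategies approach a Nash equilibrium. The paper's contribution is the analogous smoothing machinery for the perturbed strategy polytopes of extensive-form games, which this toy does not implement.

```python
# Multiplicative-weights self-play on a zero-sum matrix game (toy example).
import numpy as np

A = np.array([[ 0.0, -1.0,  1.0],   # rock-paper-scissors payoffs to player 1
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def mwu_selfplay(A, iters=5000, eta=0.1):
    m, n = A.shape
    x, y = np.ones(m) / m, np.ones(n) / n
    avg_x, avg_y = np.zeros(m), np.zeros(n)
    for _ in range(iters):
        x = x * np.exp(eta * (A @ y))        # player 1 maximizes x^T A y
        x /= x.sum()
        y = y * np.exp(-eta * (A.T @ x))     # player 2 minimizes it
        y /= y.sum()
        avg_x += x
        avg_y += y
    return avg_x / iters, avg_y / iters      # averaged strategies approximate a NE

x, y = mwu_selfplay(A)
print(np.round(x, 3), np.round(y, 3))        # both averages approach uniform play
```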