110 research outputs found

    Polyhedral computational geometry for averaging metric phylogenetic trees

    This paper investigates the computational geometry relevant to calculations of the Fréchet mean and variance for probability distributions on the phylogenetic tree space of Billera, Holmes and Vogtmann, using the theory of probability measures on spaces of nonpositive curvature developed by Sturm. We show that the combinatorics of geodesics with a specified fixed endpoint in tree space are determined by the location of the varying endpoint in a certain polyhedral subdivision of tree space. The variance function associated to a finite subset of tree space has a fixed $C^\infty$ algebraic formula within each cell of the corresponding subdivision, and is continuously differentiable in the interior of each orthant of tree space. We use this subdivision to establish two iterative methods for producing sequences that converge to the Fréchet mean: one based on Sturm's Law of Large Numbers, and another based on descent algorithms for finding optima of smooth functions on convex polyhedra. We present properties and biological applications of Fréchet means and extend our main results to more general globally nonpositively curved spaces composed of Euclidean orthants.
    Comment: 43 pages, 6 figures; v2: fixed typos, shortened Sections 1 and 5, added a counterexample for polyhedrality of the vistal subdivision in general CAT(0) cubical complexes; v1: 43 pages, 5 figures
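    A minimal sketch of the first of those two iterations, Sturm's Law-of-Large-Numbers (inductive mean) scheme, is given below. It is not the paper's tree-space algorithm: geodesic steps in BHV tree space require a CAT(0) geodesic routine, so the sketch assumes all points lie in a single Euclidean orthant, where geodesics are straight lines, purely to show the shape of the iteration.

    # Hedged sketch of Sturm's inductive-mean iteration for the Frechet mean.
    # Assumes Euclidean geodesics (one orthant); tree space proper would need
    # a CAT(0) geodesic algorithm in place of the straight-line step below.
    import numpy as np

    def sturm_inductive_mean(samples, n_iter=10_000, seed=0):
        """Approximate the Frechet mean of `samples` (rows) by Sturm's iteration."""
        rng = np.random.default_rng(seed)
        samples = np.asarray(samples, dtype=float)
        x = samples[rng.integers(len(samples))].copy()   # x_1: a random sample
        for k in range(1, n_iter):
            y = samples[rng.integers(len(samples))]      # draw y_k from the empirical measure
            # Move from x_k toward y_k along the geodesic, a fraction 1/(k+1) of the way.
            x = x + (y - x) / (k + 1)
        return x

    if __name__ == "__main__":
        pts = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 0.0]])
        print(sturm_inductive_mean(pts))   # converges toward the coordinate-wise mean here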

    Faster Algorithms for Largest Empty Rectangles and Boxes

    We revisit a classical problem in computational geometry: finding the largest-volume axis-aligned empty box (inside a given bounding box) amidst $n$ given points in $d$ dimensions. Previously, the best algorithms known have running time $O(n\log^2 n)$ for $d=2$ (by Aggarwal and Suri [SoCG'87]) and near $n^d$ for $d\ge 3$. We describe faster algorithms with running time (i) $O(n\,2^{O(\log^* n)}\log n)$ for $d=2$, (ii) $O(n^{2.5+o(1)})$ for $d=3$, and (iii) $\widetilde{O}(n^{(5d+2)/6})$ for any constant $d\ge 4$. To obtain the higher-dimensional result, we adapt and extend previous techniques for Klee's measure problem to optimize certain objective functions over the complement of a union of orthants.
    Comment: full version of a SoCG 2021 paper
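    For orientation only, here is a brute-force baseline for the planar case, nowhere near the paper's near-linear-time algorithm. It relies on the standard observation that a largest empty axis-aligned rectangle can be grown until each side touches an input point or the bounding box, so the optimum's side coordinates come from the point coordinates and the box itself.

    # Naive reference solution for d = 2 (exhaustive over candidate sides).
    from itertools import combinations

    def largest_empty_rectangle(points, xmin, ymin, xmax, ymax):
        """Return (area, (left, bottom, right, top)) of a largest empty axis-aligned rectangle."""
        xs = sorted({xmin, xmax, *(x for x, _ in points)})
        ys = sorted({ymin, ymax, *(y for _, y in points)})
        best = (0.0, None)
        for left, right in combinations(xs, 2):        # left < right since xs is sorted
            for bottom, top in combinations(ys, 2):
                # The rectangle is empty if no point lies strictly inside it.
                empty = all(not (left < x < right and bottom < y < top)
                            for x, y in points)
                if empty:
                    area = (right - left) * (top - bottom)
                    if area > best[0]:
                        best = (area, (left, bottom, right, top))
        return best

    if __name__ == "__main__":
        pts = [(1.0, 1.0), (2.0, 3.0), (3.5, 0.5)]
        print(largest_empty_rectangle(pts, 0.0, 0.0, 4.0, 4.0))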

    4D Dual-Tree Complex Wavelets for Time-Dependent Data

    The dual-tree complex wavelet transform (DT-ℂWT) is extended to the 4D setting. Key properties of the 4D DT-ℂWT, such as directional sensitivity and shift-invariance, are discussed and illustrated in a tomographic application. The inverse problem of reconstructing a dynamic three-dimensional target from X-ray projection measurements can be formulated as 4D space-time tomography. The results suggest that the 4D DT-ℂWT offers simple implementations combined with useful theoretical properties for tomographic reconstruction.
    Peer reviewed
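    The sketch below illustrates only the separable structure such a 4D transform can exploit: a pair of real filter banks ("tree a" and "tree b") run along each of the four axes, with the two outputs paired as real and imaginary parts. The filters are toy placeholders; a genuine DT-ℂWT uses carefully designed near-Hilbert-pair (e.g. Q-shift) filters, which is where its shift-invariance and directional sensitivity actually come from.

    # Toy, separable "two-tree" lowpass pyramid for a 4D array (illustration only).
    import numpy as np

    LO_A = np.array([1.0, 1.0]) / np.sqrt(2.0)            # tree-a lowpass (toy Haar)
    LO_B = np.array([0.25, 0.5, 0.25]) * np.sqrt(2.0)     # tree-b lowpass (toy stand-in for a shifted filter)

    def lowpass_decimate(x, h, axis):
        """Filter x with h along `axis` (periodic boundary) and downsample by 2."""
        x = np.moveaxis(x, axis, -1)
        pad = np.concatenate([x, x[..., : len(h) - 1]], axis=-1)        # periodic padding
        y = np.apply_along_axis(np.convolve, -1, pad, h, mode="valid")[..., ::2]
        return np.moveaxis(y, -1, axis)

    def dtcwt4d_lowpass_level(volume4d):
        """One scale of the (toy) dual-tree lowpass pyramid for a 4D array."""
        a, b = volume4d.astype(float), volume4d.astype(float)
        for axis in range(4):                      # x, y, z, t
            a = lowpass_decimate(a, LO_A, axis)    # tree a
            b = lowpass_decimate(b, LO_B, axis)    # tree b
        return a + 1j * b                          # crude complex-valued coarse band

    if __name__ == "__main__":
        frames = np.random.rand(16, 16, 16, 8)     # dynamic 3D target over 8 time steps
        coarse = dtcwt4d_lowpass_level(frames)
        print(coarse.shape)                        # -> (8, 8, 8, 4)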

    What Can Transformers Learn In-Context? A Case Study of Simple Function Classes

    In-context learning refers to the ability of a model to condition on a prompt sequence consisting of in-context examples (input-output pairs corresponding to some task) along with a new query input, and generate the corresponding output. Crucially, in-context learning happens only at inference time without any parameter updates to the model. While large language models such as GPT-3 exhibit some ability to perform in-context learning, it is unclear what the relationship is between tasks on which this succeeds and what is present in the training data. To make progress towards understanding in-context learning, we consider the well-defined problem of training a model to in-context learn a function class (e.g., linear functions): that is, given data derived from some functions in the class, can we train a model to in-context learn "most" functions from this class? We show empirically that standard Transformers can be trained from scratch to perform in-context learning of linear functions -- that is, the trained model is able to learn unseen linear functions from in-context examples with performance comparable to the optimal least squares estimator. In fact, in-context learning is possible even under two forms of distribution shift: (i) between the training data of the model and inference-time prompts, and (ii) between the in-context examples and the query input during inference. We also show that we can train Transformers to in-context learn more complex function classes -- namely sparse linear functions, two-layer neural networks, and decision trees -- with performance that matches or exceeds task-specific learning algorithms. Our code and models are available at https://github.com/dtsip/in-context-learning
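    As a rough illustration of the evaluation set-up described above (a sketch under stated assumptions, not taken from the linked codebase), the code below builds one in-context prompt for a random linear function and computes the least-squares baseline against which a trained Transformer's query prediction is compared.

    # Hedged sketch: one in-context linear-regression task plus the least-squares baseline.
    import numpy as np

    def make_linear_prompt(d=20, n_examples=40, rng=None):
        """One in-context task: examples (X, y) from a random linear function, plus a query."""
        if rng is None:
            rng = np.random.default_rng(0)
        w = rng.standard_normal(d)                 # the hidden task: f(x) = w . x
        X = rng.standard_normal((n_examples, d))   # in-context example inputs
        y = X @ w                                  # in-context example outputs
        x_query = rng.standard_normal(d)           # the query input
        return X, y, x_query, w @ x_query          # prompt + ground-truth query output

    def least_squares_prediction(X, y, x_query):
        """The optimal baseline: fit w by least squares on the prompt, then predict the query."""
        w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        return x_query @ w_hat

    if __name__ == "__main__":
        X, y, x_q, target = make_linear_prompt()
        pred = least_squares_prediction(X, y, x_q)
        print(abs(pred - target))   # ~0 once n_examples >= d in this noise-free setting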