
    Elliptical slice sampling

    Many probabilistic models introduce strong dependencies between variables using a latent multivariate Gaussian distribution or a Gaussian process. We present a new Markov chain Monte Carlo algorithm for performing inference in models with multivariate Gaussian priors. Its key properties are: 1) it has simple, generic code applicable to many models, 2) it has no free parameters, 3) it works well for a variety of Gaussian process based models. These properties make our method ideal for use while model building, removing the need to spend time deriving and tuning updates for more complex algorithms.
    Comment: 8 pages, 6 figures, appearing in AISTATS 2010 (JMLR: W&CP volume 6). Differences from first submission: some minor edits in response to feedback
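    For concreteness, here is a minimal NumPy sketch of a single elliptical slice sampling update as it is usually presented, assuming a zero-mean Gaussian prior with covariance Sigma and a user-supplied log-likelihood; the names `elliptical_slice_step` and `log_lik` are illustrative, not taken from the paper's code:

```python
import numpy as np

def elliptical_slice_step(f, Sigma, log_lik, rng=None):
    """One elliptical slice sampling update for a state f (1-D array)
    with prior N(0, Sigma) and log-likelihood function log_lik."""
    rng = np.random.default_rng() if rng is None else rng
    d = f.shape[0]
    nu = rng.multivariate_normal(np.zeros(d), Sigma)   # auxiliary draw from the prior
    log_y = log_lik(f) + np.log(rng.uniform())         # log of the slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)              # initial proposal angle
    theta_min, theta_max = theta - 2.0 * np.pi, theta  # shrinking angle bracket

    while True:
        f_prop = f * np.cos(theta) + nu * np.sin(theta)  # point on the ellipse through f and nu
        if log_lik(f_prop) > log_y:
            return f_prop                                # accept
        # Shrink the bracket towards theta = 0 (the current state) and retry.
        if theta < 0.0:
            theta_min = theta
        else:
            theta_max = theta
        theta = rng.uniform(theta_min, theta_max)
```

    Repeated application of this update yields the Markov chain; every iteration returns a new accepted state, and only the number of likelihood evaluations varies, which is why the method has no free parameters to tune.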

    Detecting Weakly Simple Polygons

    A closed curve in the plane is weakly simple if it is the limit (in the Fréchet metric) of a sequence of simple closed curves. We describe an algorithm to determine whether a closed walk of length n in a simple plane graph is weakly simple in O(n log n) time, improving an earlier O(n^3)-time algorithm of Cortese et al. [Discrete Math. 2009]. As an immediate corollary, we obtain the first efficient algorithm to determine whether an arbitrary n-vertex polygon is weakly simple; our algorithm runs in O(n^2 log n) time. We also describe algorithms that detect weak simplicity in O(n log n) time for two interesting classes of polygons. Finally, we discuss subtle errors in several previously published definitions of weak simplicity.
    Comment: 25 pages and 13 figures, submitted to SODA 201
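    The paper's O(n log n) test is too involved to reproduce here; as a point of reference, the sketch below implements the classical quadratic-time check for strict simplicity (no two non-adjacent edges of the polygon intersect), the stronger notion that weak simplicity relaxes. This is a baseline illustration only, not the authors' algorithm:

```python
def segments_intersect(p, q, r, s):
    """True if the closed segments pq and rs intersect (including touching)."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    def on_segment(a, b, c):
        return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))
    o1, o2 = orient(p, q, r), orient(p, q, s)
    o3, o4 = orient(r, s, p), orient(r, s, q)
    # Proper crossing: the endpoints of each segment lie strictly on opposite sides.
    if o1 and o2 and o3 and o4 and (o1 > 0) != (o2 > 0) and (o3 > 0) != (o4 > 0):
        return True
    # Degenerate cases: an endpoint lies on the other segment.
    return ((o1 == 0 and on_segment(p, q, r)) or (o2 == 0 and on_segment(p, q, s)) or
            (o3 == 0 and on_segment(r, s, p)) or (o4 == 0 and on_segment(r, s, q)))

def is_strictly_simple(poly):
    """O(n^2) check that no two non-adjacent edges of the polygon intersect."""
    n = len(poly)
    edges = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1 or (i == 0 and j == n - 1):
                continue  # adjacent edges share a vertex by construction
            if segments_intersect(*edges[i], *edges[j]):
                return False
    return True
```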

    A simple construction method for sequentially tidying up 2D online freehand sketches

    This paper presents a novel constructive approach to sequentially tidying up 2D online freehand sketches for further 3D interpretation in a conceptual design system. Upon receiving a sketch stroke, the system first identifies it as a 2D primitive and then automatically infers its 2D geometric constraints relative to previously drawn 2D geometry (if any). Based on the recognized 2D constraints, the identified geometry is modified accordingly to satisfy them. The modification is realized in one or two sequential geometric constructions, consistent with the geometry's degrees of freedom. This method can produce 2D configurations without iterative procedures to solve constraint equations. It is simple and easy to use in a real-time application. Several examples are tested and discussed.
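    As a toy illustration of constraint-driven tidying (not the paper's method), the sketch below snaps a freshly drawn segment to the nearest axis direction and merges its endpoints with nearby existing points; the function name and tolerances are invented for this example:

```python
import math

def tidy_segment(p0, p1, existing_points, angle_tol_deg=5.0, snap_tol=8.0):
    """Toy beautification of a freshly sketched segment p0 -> p1.

    Constraints are applied in sequence (axis alignment, then endpoint
    coincidence), echoing the idea of resolving each stroke with one or two
    successive constructions rather than solving a global constraint system.
    Later constraints take precedence over earlier ones.
    """
    (x0, y0), (x1, y1) = p0, p1
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180.0

    # Constraint 1: snap to horizontal or vertical if nearly axis-aligned.
    if min(angle, 180.0 - angle) < angle_tol_deg:   # nearly horizontal
        y1 = y0
    elif abs(angle - 90.0) < angle_tol_deg:         # nearly vertical
        x1 = x0

    # Constraint 2: coincidence with existing geometry (snap to a nearby point).
    def snap(pt):
        for q in existing_points:
            if math.hypot(pt[0] - q[0], pt[1] - q[1]) < snap_tol:
                return q
        return pt

    return snap((x0, y0)), snap((x1, y1))
```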

    Fully coherent follow-up of continuous gravitational-wave candidates

    The search for continuous gravitational waves from unknown isolated sources is computationally limited due to the enormous parameter space that needs to be covered and the weakness of the expected signals. Therefore, semi-coherent search strategies have been developed and applied in distributed computing environments such as Einstein@Home, in order to narrow down the parameter space and identify interesting candidates. However, in order to optimally confirm or dismiss a candidate as a possible gravitational-wave signal, a fully-coherent follow-up using all the available data is required. We present a general method and implementation of a direct (2-stage) transition to a fully-coherent follow-up on semi-coherent candidates. This method is based on a grid-less Mesh Adaptive Direct Search (MADS) algorithm using the F-statistic. We demonstrate the detection power and computing cost of this follow-up procedure using extensive Monte-Carlo simulations on (simulated) semi-coherent candidates from both a directed and an all-sky search setup.
    Comment: 12 pages, 5 figures
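    The paper's follow-up relies on a grid-less MADS optimizer applied to the F-statistic; the sketch below conveys the general shape of such a refinement step with a much simpler coordinate pattern search, assuming a callable `stat` that evaluates a coherent detection statistic at a point in the candidate's parameter space. All names are placeholders, and this is not the MADS algorithm itself:

```python
import numpy as np

def pattern_search(stat, x0, step, shrink=0.5, tol=1e-6, max_iter=200):
    """Refine a semi-coherent candidate x0 by maximizing stat(x) without a grid.

    stat : callable returning the detection statistic at a parameter vector
    x0   : starting point (e.g. frequency, spindown, sky position)
    step : initial poll step per coordinate
    """
    x = np.asarray(x0, dtype=float)
    step = np.asarray(step, dtype=float)
    best = stat(x)
    for _ in range(max_iter):
        improved = False
        # Poll the 2*d axis directions around the incumbent point.
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step[i]
                val = stat(trial)
                if val > best:
                    x, best, improved = trial, val, True
        if not improved:
            step *= shrink                 # refine the mesh around the incumbent
            if np.all(step < tol):
                break
    return x, best
```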

    Compressive Matched-Field Processing

    Source localization by matched-field processing (MFP) generally involves solving a number of computationally intensive partial differential equations. This paper introduces a technique that mitigates this computational workload by "compressing" these computations. Drawing on key concepts from the recently developed field of compressed sensing, it shows how a low-dimensional proxy for the Green's function can be constructed by backpropagating a small set of random receiver vectors. Then, the source can be located by performing a number of "short" correlations between this proxy and the projection of the recorded acoustic data into the compressed space. Numerical experiments in a Pekeris ocean waveguide demonstrate that this compressed version of MFP is as effective as traditional MFP even when the compression is significant. The results are particularly promising in the broadband regime, where using as few as two random backpropagations per frequency performs almost as well as traditional broadband MFP, but with the added benefit of generic applicability. That is, the computationally intensive backpropagations may be computed offline, independently of the received signals, and reused to locate any source within the search grid area.
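    A toy NumPy sketch of the compressed correlation idea described above, assuming a precomputed replica matrix G (receivers x grid points) and a single data snapshot d; in the paper the compressed replicas would instead come from backpropagating the random receiver vectors, so the full G never needs to be formed. Names and normalizations here are illustrative:

```python
import numpy as np

def compressed_bartlett(G, d, m, rng=None):
    """Toy compressed matched-field correlation.

    G : (n_receivers, n_grid) matrix of replica (Green's function) vectors
    d : (n_receivers,) recorded data snapshot
    m : number of random projections, m << n_receivers
    Returns the index of the best-matching grid point and the ambiguity surface.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_rec = G.shape[0]
    Phi = rng.standard_normal((m, n_rec)) / np.sqrt(m)  # random compression matrix
    G_c = Phi @ G                                       # compressed replicas
    d_c = Phi @ d                                       # compressed data
    # "Short" normalized correlations in the compressed space.
    num = np.abs(G_c.conj().T @ d_c)
    den = np.linalg.norm(G_c, axis=0) * np.linalg.norm(d_c)
    ambiguity = num / den
    return int(np.argmax(ambiguity)), ambiguity
```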

    QPTAS and Subexponential Algorithm for Maximum Clique on Disk Graphs

    A (unit) disk graph is the intersection graph of closed (unit) disks in the plane. Almost three decades ago, an elegant polynomial-time algorithm was found for Maximum Clique on unit disk graphs [Clark, Colbourn, Johnson; Discrete Mathematics '90]. Since then, it has been an intriguing open question whether or not tractability can be extended to general disk graphs. We show the rather surprising structural result that a disjoint union of cycles is the complement of a disk graph if and only if at most one of those cycles is of odd length. From that, we derive the first QPTAS and a subexponential algorithm running in time 2^{Õ(n^{2/3})} for Maximum Clique on disk graphs. In stark contrast, Maximum Clique on intersection graphs of filled ellipses or filled triangles is unlikely to have such algorithms, even when the ellipses are close to unit disks. Indeed, we show that there is a constant approximation ratio that cannot be attained even in time 2^{n^{1-epsilon}}, unless the Exponential Time Hypothesis fails.
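    The QPTAS and the 2^{Õ(n^{2/3})}-time algorithm are beyond a short snippet; as a baseline illustration of the model only (not the paper's algorithm), the sketch below builds the intersection graph of a set of closed disks and finds a maximum clique by brute force, which is practical only for tiny instances:

```python
from itertools import combinations
from math import hypot

def disk_graph(disks):
    """Adjacency lists of the intersection graph of closed disks.

    disks: list of (x, y, r); two disks are adjacent iff they intersect,
    i.e. the distance between their centers is at most the sum of the radii.
    """
    n = len(disks)
    adj = [set() for _ in range(n)]
    for i, j in combinations(range(n), 2):
        (xi, yi, ri), (xj, yj, rj) = disks[i], disks[j]
        if hypot(xi - xj, yi - yj) <= ri + rj:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def max_clique_bruteforce(adj):
    """Exponential-time baseline: test vertex subsets from largest to smallest."""
    n = len(adj)
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(v in adj[u] for u, v in combinations(subset, 2)):
                return set(subset)
    return set()
```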