
    A stochastic template placement algorithm for gravitational wave data analysis

    This paper presents an algorithm for constructing matched-filter template banks in an arbitrary parameter space. The method places templates at random, then removes those which are "too close" together. The properties and optimality of stochastic template banks generated in this manner are investigated for some simple models. The effectiveness of these template banks in gravitational wave searches for binary inspiral waveforms is also examined. The properties of a stochastic template bank are then compared to those of the deterministically placed template banks currently used in gravitational wave data analysis. Comment: 14 pages, 11 figures
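    The random-then-prune construction described above can be illustrated with a short sketch. This is not the paper's implementation: it uses a Euclidean distance threshold in a unit hypercube as a stand-in for the waveform-match criterion, and accepts proposals sequentially rather than pruning a finished random bank.

```python
import numpy as np

def stochastic_bank(n_proposals, dim, min_dist, rng=None):
    """Random-then-prune template bank in the unit hypercube (toy sketch).

    Proposes points uniformly at random and keeps a proposal only if it
    lies at least `min_dist` from every template already accepted; the
    Euclidean threshold stands in for the "too close" match criterion.
    """
    rng = rng or np.random.default_rng(0)
    bank = []
    for _ in range(n_proposals):
        candidate = rng.random(dim)
        if all(np.linalg.norm(candidate - kept) >= min_dist for kept in bank):
            bank.append(candidate)
    return np.array(bank)

# A 2-D bank: dense enough to cover the square, with no close pairs.
bank = stochastic_bank(n_proposals=2000, dim=2, min_dist=0.1)
```

In a real bank the "too close" test would use the match (overlap) between template waveforms rather than a fixed Euclidean radius, which is what makes the method metric-agnostic in parameter space.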

    Practical Methods for Continuous Gravitational Wave Detection using Pulsar Timing Data

    Gravitational Waves (GWs) are tiny ripples in the fabric of space-time predicted by Einstein's General Relativity. Pulsar timing arrays (PTAs) are well poised to detect low-frequency ($10^{-9}$--$10^{-7}$ Hz) GWs in the near future. There has been a significant amount of research into the detection of a stochastic background of GWs from supermassive black hole binaries (SMBHBs). Recent work has shown that single continuous sources standing out above the background may be detectable by PTAs operating at a sensitivity sufficient to detect the stochastic background. The most likely sources of continuous GWs in the pulsar timing frequency band are extremely massive and/or nearby SMBHBs. In this paper we present detection strategies including various forms of matched filtering and power spectral summing, and determine the efficacy and computational cost of each. We show that it is computationally infeasible to use an optimal matched filter including the poorly constrained pulsar distances with a grid-based method. An Earth-term matched filter, constructed using only the correlated signal terms, is both computationally viable and highly sensitive to GW signals: it is only a factor of two less sensitive than the computationally unrealizable optimal matched filter, and a factor of two more sensitive than a power spectral summing technique. We further show that a pairwise matched filter, which takes the pulsar distances into account, is comparable to the optimal matched filter for the single-template case and comparable to the Earth-term matched filter for many search templates. Finally, using simulated data of optimal quality, we place a theoretical minimum detectable strain amplitude of $h > 2\times 10^{-15}$ from continuous GWs at frequencies of order $\sim 1/T_{\rm obs}$. Comment: submitted to Ap
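    As a toy illustration of the matched-filtering idea (not the paper's Earth-term statistic, which combines correlated signal terms across an array of pulsars), the sketch below scans a grid of sinusoidal templates against simulated timing residuals from a single pulsar. The amplitude, noise level and cadence are invented for the demo, and the template phase is assumed known, whereas a real search would maximize over it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated 10-year campaign with weekly observations: white timing noise
# plus a strong continuous sinusoid standing in for the GW-induced signal.
t = np.arange(0.0, 10 * 365.25 * 86400, 7 * 86400.0)  # sample times [s]
f_true = 5e-8                                          # GW frequency [Hz]
sigma = 1e-8                                           # noise rms [s]
residuals = 1e-7 * np.sin(2 * np.pi * f_true * t) \
    + rng.normal(0.0, sigma, t.size)

def snr(freq):
    """Matched-filter statistic for a sinusoidal template at `freq`.

    The template phase is assumed known here; a real search maximizes
    over phase and also handles the uncorrelated pulsar terms.
    """
    template = np.sin(2 * np.pi * freq * t)
    return np.dot(residuals, template) / (np.linalg.norm(template) * sigma)

# Scan a uniform frequency grid across the PTA band and pick the peak.
freqs = np.linspace(1e-9, 1e-7, 100)
best = freqs[np.argmax([snr(f) for f in freqs])]
```

The cost of such a search grows with the number of grid points per parameter, which is why including a poorly constrained distance for every pulsar in the array quickly becomes computationally infeasible.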

    Reducing the number of templates for aligned-spin compact binary coalescence gravitational wave searches using metric-agnostic template nudging

    Efficient multi-dimensional template placement is crucial in computationally intensive matched-filtering searches for Gravitational Waves (GWs). Here, we implement the Neighboring Cell Algorithm (NCA) to improve the detection volume of an existing Compact Binary Coalescence (CBC) template bank. This algorithm has already been successfully applied to a binary millisecond pulsar search in data from the Fermi satellite. It repositions templates from over-dense regions to under-dense regions, reducing the number of templates that a stochastic method would have required to achieve the same detection volume. Our method is readily generalizable to other CBC parameter spaces. Here we apply it to the aligned single-spin neutron-star--black-hole binary coalescence inspiral-merger-ringdown gravitational wave parameter space. We show that the template nudging algorithm can attain the equivalent effectualness of the stochastic method with 12% fewer templates.
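    The core idea of the nudging step, moving templates out of over-dense cells and into under-dense ones, can be caricatured in one dimension. The sketch below bins template positions into equal cells and relocates templates until no cell exceeds a target count; the cell geometry, the target count, and the "move to the cell centre" rule are illustrative assumptions, whereas the actual NCA works with the metric structure of the CBC parameter space.

```python
import numpy as np

def nudge_templates(templates, n_cells, target):
    """Move templates from over-dense to under-dense cells (1-D cartoon).

    Bins template positions on [0, 1) into `n_cells` equal cells and,
    while one cell exceeds `target` and another falls short, relocates a
    template from the fullest cell to the centre of the emptiest one.
    """
    templates = list(templates)
    while True:
        counts = np.histogram(templates, bins=n_cells, range=(0, 1))[0]
        src, dst = int(np.argmax(counts)), int(np.argmin(counts))
        if counts[src] <= target or counts[dst] >= target:
            return np.array(templates)
        lo, hi = src / n_cells, (src + 1) / n_cells
        idx = next(i for i, x in enumerate(templates) if lo <= x < hi)
        templates[idx] = (dst + 0.5) / n_cells  # drop it at the cell centre

# Eight templates piled into the first cell get spread over three cells.
nudged = nudge_templates([0.05] * 8 + [0.55], n_cells=3, target=3)
```

Because the total number of templates is conserved while coverage evens out, the same detection volume can be reached with fewer templates than a method that only ever adds new ones.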

    Expanded Parts Model for Semantic Description of Humans in Still Images

    We introduce an Expanded Parts Model (EPM) for recognizing human attributes (e.g. young, short hair, wearing suit) and actions (e.g. running, jumping) in still images. An EPM is a collection of part templates, learnt discriminatively to explain specific scale-space regions in the images (in human-centric coordinates). This is in contrast to current models, which consist of relatively few (i.e. a mixture of) 'average' templates. EPM uses only a subset of the parts to score an image, and scores the image sparsely in space, i.e. it ignores redundant and random background in an image. To learn our model, we propose an algorithm which automatically mines parts and learns the corresponding discriminative templates, together with their respective locations, from a large number of candidate parts. We validate our method on three recent challenging datasets of human attributes and actions, obtaining convincing qualitative and state-of-the-art quantitative results. Comment: Accepted for publication in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
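    The sparse scoring idea, using only a subset of parts and letting background contribute nothing, can be sketched as a greedy top-k selection over non-overlapping part responses. The box format and the greedy rule here are assumptions for illustration, not the EPM training or inference procedure.

```python
import numpy as np

def epm_score(part_scores, part_boxes, k):
    """Sparse image score from the k best non-overlapping parts (sketch).

    Greedily walks part responses in decreasing order, skipping any part
    whose box overlaps an already-selected one, so most of the image
    (background) contributes nothing to the final score.
    """
    def overlaps(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

    order = np.argsort(part_scores)[::-1]
    chosen, total = [], 0.0
    for i in order:
        if len(chosen) == k:
            break
        if all(not overlaps(part_boxes[i], part_boxes[j]) for j in chosen):
            chosen.append(int(i))
            total += float(part_scores[i])
    return total, chosen

# Part 1 overlaps part 0 and is skipped; parts 0 and 2 are selected.
scores = np.array([0.9, 0.8, 0.3, 0.1])
boxes = [(0, 0, 2, 2), (1, 1, 3, 3), (4, 4, 6, 6), (7, 7, 8, 8)]
total, chosen = epm_score(scores, boxes, k=2)
```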

    LOMo: Latent Ordinal Model for Facial Analysis in Videos

    We study the problem of facial analysis in videos. We propose a novel weakly supervised learning method that models a video event (expression, pain, etc.) as a sequence of automatically mined, discriminative sub-events (e.g. onset and offset phases for a smile, brow lower and cheek raise for pain). The proposed model is inspired by recent works on Multiple Instance Learning and latent SVM/HCRF: it extends such frameworks to approximately model the ordinal or temporal aspect of the videos. We obtain consistent improvements over relevant competitive baselines on four challenging, publicly available video-based facial analysis datasets for prediction of expression, clinical pain and intent in dyadic conversations. In combination with complementary features, we report state-of-the-art results on these datasets. Comment: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    Learning Dynamic Feature Selection for Fast Sequential Prediction

    We present paired learning and inference algorithms for significantly reducing the computation and increasing the speed of the vector dot products in the classifiers at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates, ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequence. Our approach is simpler and better suited to NLP than other related cascade methods. We present experiments in left-to-right part-of-speech tagging, named entity recognition, and transition-based dependency parsing. On typical benchmarking datasets we can preserve POS tagging accuracy above 97% and parsing LAS above 88.5%, both with over a five-fold reduction in run-time, and NER F1 above 88 with more than a 2x increase in speed. Comment: Appears in The 53rd Annual Meeting of the Association for Computational Linguistics, Beijing, China, July 201
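    The early-exit inference described above can be sketched as follows: per-template score contributions are accumulated in their learned order, and prediction stops as soon as the margin between the top two classes clears a confidence threshold. The margin rule and the toy scores are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def staged_predict(stage_scores, margin_threshold):
    """Early-exit prediction over an ordered sequence of feature templates.

    `stage_scores` holds one per-class score contribution per template,
    cheapest/most-confident templates first. Scores are accumulated in
    order and prediction stops as soon as the margin between the top two
    classes clears `margin_threshold`, skipping the remaining templates.
    """
    total = np.zeros_like(stage_scores[0], dtype=float)
    used = 0
    for stage in stage_scores:
        total += stage
        used += 1
        top_two = np.sort(total)[-2:]
        if top_two[1] - top_two[0] >= margin_threshold:
            break
    return int(np.argmax(total)), used

# The first template already separates class 0 by a margin of 3.0, so
# the remaining two templates are never evaluated.
stages = [np.array([4.0, 1.0, 0.5]),
          np.array([3.0, 0.5, 0.2]),
          np.array([0.1, 0.1, 0.1])]
label, n_used = staged_predict(stages, margin_threshold=2.0)
```

The speed-up comes from skipping the dot products for the later, more expensive templates whenever the early ones already decide the label.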

    Selfishness versus functional cooperation in a stochastic protocell model

    How do we design an "evolvable" artificial system capable of increasing in complexity? Although Darwin's theory of evolution by natural selection obviously offers a firm foundation, little predictive help can be expected from modern evolutionary theory, which does a good job of explaining what has already happened but remains practically helpless at predicting what will occur. However, the study of the major transitions in evolution clearly suggests that increases in complexity have occurred on those occasions when the conflicting interests of competing individuals were partly subjugated. This immediately raises the issue of "levels of selection" in evolutionary biology, and the idea that multi-level selection scenarios are required for complexity to emerge. After analyzing the dynamical behaviour of competing replicators within compartments, we show here that a proliferation of differentiated catalysts and/or improvement of the catalytic efficiency of ribozymes can potentially evolve in properly designed artificial cells. Experimental evolution in these systems will likely stand as a beautiful example of artificial adaptive systems, and will provide new insights into possible evolutionary paths toward metabolic complexity.