
    Bayesian Inference Analysis of Unmodelled Gravitational-Wave Transients

    We report the results of an in-depth analysis of the parameter estimation capabilities of BayesWave, an algorithm for the reconstruction of gravitational-wave signals without reference to a specific signal model. Using binary black hole signals, we compare BayesWave's performance to the theoretical best achievable performance in three key areas: sky localisation accuracy, signal/noise discrimination, and waveform reconstruction accuracy. BayesWave is most effective for signals that have very compact time-frequency representations. For binaries, where the signal time-frequency volume decreases with mass, we find that BayesWave's performance reaches or approaches theoretical optimal limits for system masses above approximately 50 M_sun. For such systems BayesWave localises the source on the sky as well as templated Bayesian analyses that rely on a precise signal model, and it outperforms timing-only triangulation in all cases. We also show that the discrimination of signals against glitches and noise closely follows analytical predictions, and that only a small fraction of signals are discarded as glitches at a false alarm rate of 1/100 y. Finally, the match between BayesWave-reconstructed signals and injected signals is broadly consistent with first-principles estimates of the maximum possible accuracy, peaking at about 0.95 for high-mass systems and decreasing for lower-mass systems. These results demonstrate the potential of unmodelled signal reconstruction techniques for gravitational-wave astronomy. Comment: 10 pages, 7 figures
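    The reconstruction accuracy quoted above is a waveform match (a normalised noise-weighted overlap) between the reconstructed and injected signals. A minimal sketch of that figure of merit, assuming white noise (flat PSD) and time-aligned waveforms rather than the detector PSDs and time/phase maximisations a real analysis would use:

```python
# Minimal sketch of a waveform "match", assuming a flat PSD and aligned signals.
import numpy as np

def match(h_rec, h_inj):
    """Normalised overlap between reconstructed and injected waveforms."""
    inner = lambda a, b: np.sum(a * b)  # flat-PSD inner product
    return inner(h_rec, h_inj) / np.sqrt(inner(h_rec, h_rec) * inner(h_inj, h_inj))

# Toy usage: a noise-perturbed sine-Gaussian scored against the original.
t = np.linspace(-0.1, 0.1, 4096)
h_inj = np.exp(-(t / 0.02) ** 2) * np.sin(2 * np.pi * 200 * t)
h_rec = h_inj + 0.05 * np.random.randn(t.size)
print(f"match = {match(h_rec, h_inj):.3f}")  # close to 1 for a good reconstruction
```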

    Detecting Visual Relationships with Deep Relational Networks

    Relationships among objects play a crucial role in image understanding. Despite the great success of deep learning techniques in recognizing individual objects, reasoning about the relationships among objects remains a challenging task. Previous methods often treat this as a classification problem, considering each type of relationship (e.g. "ride") or each distinct visual phrase (e.g. "person-ride-horse") as a category. Such approaches face significant difficulties caused by the high diversity of visual appearance within each kind of relationship or the large number of distinct visual phrases. We propose an integrated framework to tackle this problem. At the heart of this framework is the Deep Relational Network, a novel formulation designed specifically for exploiting the statistical dependencies between objects and their relationships. On two large datasets, the proposed method achieves substantial improvement over the state of the art. Comment: To appear in CVPR 2017 as an oral paper
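    A minimal sketch (not the authors' DR-Net) of the basic setup such a framework refines: scoring predicate classes from subject, object, and union-region appearance features. The feature dimension, predicate count, and MLP head are illustrative assumptions, and DR-Net's iterative refinement coupling object and relationship predictions is omitted:

```python
# Hypothetical triplet scorer, assuming precomputed 512-d region features
# and 70 predicate classes; DR-Net's statistical-dependency module is omitted.
import torch
import torch.nn as nn

class RelationshipScorer(nn.Module):
    def __init__(self, feat_dim=512, n_predicates=70):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, n_predicates),
        )

    def forward(self, f_subj, f_obj, f_union):
        # Concatenate the three appearance cues and score each predicate class.
        return self.mlp(torch.cat([f_subj, f_obj, f_union], dim=-1))

scorer = RelationshipScorer()
logits = scorer(torch.randn(1, 512), torch.randn(1, 512), torch.randn(1, 512))
print(logits.shape)  # torch.Size([1, 70])
```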

    Layered Interpretation of Street View Images

    We propose a layered street view model to encode both depth and semantic information on street view images for autonomous driving. Recently, stixels, stix-mantics, and tiered scene labeling methods have been proposed to model street view images. We propose a 4-layer street view model, a compact representation over the recently proposed stix-mantics model. Our layers encode semantic classes like ground, pedestrians, vehicles, buildings, and sky in addition to the depths. The only input to our algorithm is a pair of stereo images. We use a deep neural network to extract appearance features for the semantic classes, and a simple, efficient inference algorithm to jointly estimate both the semantic classes and the layered depth values. Our method outperforms competing approaches on the Daimler urban scene segmentation dataset. Our algorithm is massively parallelizable, allowing a GPU implementation with a processing speed of about 9 fps. Comment: The paper will be presented at the 2015 Robotics: Science and Systems Conference (RSS).
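    A minimal sketch of the per-column layered labelling idea behind such models: each image column is split, top to bottom, into sky, building, object, and ground segments by choosing three boundary rows that minimise a total cost. The per-pixel class costs, column ordering, and exhaustive search are illustrative assumptions; the paper's joint semantic/depth inference and GPU parallelisation are not shown:

```python
# Toy per-column layered labelling, assuming given per-pixel class costs with
# columns ordered [ground, object, building, sky] and rows from top to bottom.
import numpy as np
from itertools import combinations_with_replacement

def label_column(costs):
    """costs: (H, 4) array. Choose boundaries r1 <= r2 <= r3 so that rows
    [0, r1) are sky, [r1, r2) building, [r2, r3) object, [r3, H) ground."""
    H = costs.shape[0]
    # Prefix sums let any segment's cost be read off in O(1).
    prefix = np.vstack([np.zeros(4), np.cumsum(costs, axis=0)])
    best, best_b = np.inf, None
    for r1, r2, r3 in combinations_with_replacement(range(H + 1), 3):
        c = (prefix[r1, 3]                        # sky on top
             + prefix[r2, 2] - prefix[r1, 2]      # building
             + prefix[r3, 1] - prefix[r2, 1]      # object (pedestrian/vehicle)
             + prefix[H, 0] - prefix[r3, 0])      # ground at the bottom
        if c < best:
            best, best_b = c, (r1, r2, r3)
    return best_b, best

# Toy usage on random costs for a 12-pixel-high column.
np.random.seed(0)
print(label_column(np.random.rand(12, 4)))
```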

    Use of the MultiNest algorithm for gravitational wave data analysis

    We describe an application of the MultiNest algorithm to gravitational wave data analysis. MultiNest is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of live points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of live points climbs up the likelihood surface through nested iso-likelihood contours, and the evidence and posterior distributions can be recovered from the point set evolution. The algorithm is model-independent in the sense that the specific problem being tackled enters only through the likelihood computation, and does not change how the live point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision. Comment: 18 pages, 4 figures, submitted to Class. Quantum Grav; v2 includes various changes in light of referee's comments
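    A minimal sketch of the core nested-sampling loop that MultiNest accelerates, assuming a unit-cube prior, a 2-D Gaussian toy likelihood, and naive rejection sampling in place of MultiNest's ellipsoidal decomposition:

```python
# Toy nested sampling: iteratively replace the worst live point with a prior
# draw at higher likelihood, accumulating the evidence Z = \int L dX.
import numpy as np

def log_like(theta):
    # 2-D Gaussian toy likelihood centred in the unit cube, sigma = 0.05.
    return -0.5 * np.sum(((theta - 0.5) / 0.05) ** 2)

rng = np.random.default_rng(1)
n_live, n_iter = 200, 1500
live = rng.random((n_live, 2))                    # live points drawn from the prior
live_logL = np.array([log_like(p) for p in live])

log_Z, log_X = -np.inf, 0.0                       # running evidence, prior volume
for i in range(n_iter):
    worst = int(np.argmin(live_logL))
    log_X_new = -(i + 1) / n_live                 # expected shrinkage per iteration
    log_w = live_logL[worst] + np.log(np.exp(log_X) - np.exp(log_X_new))
    log_Z = np.logaddexp(log_Z, log_w)            # accumulate the evidence integral
    log_X = log_X_new
    # Replace the worst point with a prior draw above its likelihood; MultiNest
    # does this step by sampling uniformly inside overlapping bounding ellipsoids.
    while True:
        cand = rng.random(2)
        if log_like(cand) > live_logL[worst]:
            live[worst], live_logL[worst] = cand, log_like(cand)
            break

# Add the contribution of the remaining live points, then report log-evidence;
# the analytic answer here is roughly log(2*pi*0.05**2) ~ -4.15.
log_Z = np.logaddexp(log_Z, np.logaddexp.reduce(live_logL) - np.log(n_live) + log_X)
print(f"log-evidence ~ {log_Z:.2f}")
```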

    Convolutional Feature Masking for Joint Object and Stuff Segmentation

    The topic of semantic segmentation has witnessed considerable progress due to the powerful features learned by convolutional neural networks (CNNs). The current leading approaches for semantic segmentation exploit shape information by extracting CNN features from masked image regions. This strategy introduces artificial boundaries on the images and may impact the quality of the extracted features. Besides, the operations on the raw image domain require running the network thousands of times on a single image, which is time-consuming. In this paper, we propose to exploit shape information via masking convolutional features. The proposal segments (e.g., super-pixels) are treated as masks on the convolutional feature maps. The CNN features of segments are directly masked out from these maps and used to train classifiers for recognition. We further propose a joint method to handle objects and "stuff" (e.g., grass, sky, water) in the same framework. State-of-the-art results are demonstrated on the PASCAL VOC and the new PASCAL-CONTEXT benchmarks, with a compelling computational speed. Comment: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015
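    A minimal sketch of the feature-masking idea: a segment mask is projected onto the coarser feature-map grid and applied there, so the convolutional features are computed once per image rather than once per masked region. The stride, shapes, thresholding, and average pooling below are illustrative assumptions, not the paper's exact design:

```python
# Hypothetical convolutional feature masking: downsample the segment mask to
# the feature grid, zero out features outside it, and pool one descriptor.
import numpy as np

def masked_segment_feature(feat_map, mask, stride=16):
    """feat_map: (C, Hf, Wf) conv features; mask: (H, W) binary segment mask."""
    C, Hf, Wf = feat_map.shape
    # Project the image-resolution mask onto the feature grid by block-averaging
    # over each stride x stride cell, then thresholding at half coverage.
    small = (mask[:Hf * stride, :Wf * stride]
             .reshape(Hf, stride, Wf, stride).mean(axis=(1, 3)) > 0.5)
    masked = feat_map * small                     # zero features outside the segment
    denom = max(small.sum(), 1)
    return masked.sum(axis=(1, 2)) / denom        # average-pool to a C-dim vector

# Toy usage: one segment on a 224x224 image with a 512x14x14 feature map.
feat = np.random.rand(512, 14, 14)
mask = np.zeros((224, 224)); mask[60:160, 40:120] = 1
print(masked_segment_feature(feat, mask).shape)   # (512,)
```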