
    Modeling association between DNA copy number and gene expression with constrained piecewise linear regression splines

    DNA copy number and mRNA expression are widely used data types in cancer studies, which, combined, provide more insight than either alone. Whereas in the existing literature the form of the relationship between these two types of markers is fixed a priori, in this paper we model their association. We employ piecewise linear regression splines (PLRS), which combine good interpretability with sufficient flexibility to identify any plausible type of relationship. The specification of the model leads to estimation and model selection in a constrained, nonstandard setting. We provide methodology for testing the effect of DNA on mRNA and choosing the appropriate model. Furthermore, we present a novel approach to obtain reliable confidence bands for constrained PLRS, which incorporates model uncertainty. The procedures are applied to colorectal and breast cancer data. Common assumptions are found to be potentially misleading for biologically relevant genes. More flexible models may bring more insight into the interaction between the two markers.
    Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) at http://dx.doi.org/10.1214/12-AOAS605 by the Institute of Mathematical Statistics (http://www.imstat.org)
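    The core idea of a piecewise linear regression spline can be illustrated with a minimal sketch. This is not the authors' constrained estimator: it fits a single-knot PLRS of expression on copy number by ordinary least squares, with the knot location `k` and all variable names being illustrative assumptions.

```python
import numpy as np

def plrs_fit(x, y, k):
    """Fit y ~ b0 + b1*x + b2*(x - k)_+ and return the coefficients."""
    hinge = np.maximum(x - k, 0.0)                 # (x - k)_+ basis function
    X = np.column_stack([np.ones_like(x), x, hinge])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic data with a slope change at copy number 2 (illustrative)
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 4.0, 200)
y = 1.0 + 0.5 * x + 1.5 * np.maximum(x - 2.0, 0.0) + rng.normal(0.0, 0.1, 200)
b0, b1, b2 = plrs_fit(x, y, k=2.0)
```

    In the full method, the knot and the sign/shape constraints on the coefficients would themselves be part of the constrained model-selection problem.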

    The noise properties of 42 millisecond pulsars from the European Pulsar Timing Array and their impact on gravitational wave searches

    The sensitivity of Pulsar Timing Arrays to gravitational waves depends on the noise present in the individual pulsar timing data. Noise may be either intrinsic or extrinsic to the pulsar. Intrinsic sources of noise include, for example, rotational instabilities. Extrinsic sources of noise include contributions from physical processes which are not sufficiently well modelled, for example, dispersion and scattering effects, analysis errors and instrumental instabilities. We present the results from a noise analysis for 42 millisecond pulsars (MSPs) observed with the European Pulsar Timing Array. For characterising the low-frequency, stochastic and achromatic noise component, or "timing noise", we employ two methods, based on Bayesian and frequentist statistics. For 25 MSPs, we achieve statistically significant measurements of their timing noise parameters and find that the two methods give consistent results. For the remaining 17 MSPs, we place upper limits on the timing noise amplitude at the 95% confidence level. We additionally place an upper limit on the contribution to the pulsar noise budget from errors in the reference terrestrial time standards (below 1%), and we find evidence for a noise component which is present only in the data of one of the four telescopes used. Finally, we estimate that the timing noise of individual pulsars reduces the sensitivity of this data set to an isotropic, stochastic GW background by a factor of >9.1 and by a factor of >2.3 for continuous GWs from resolvable, inspiralling supermassive black-hole binaries with circular orbits.
    Comment: Accepted for publication by the Monthly Notices of the Royal Astronomical Society
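    Timing noise is conventionally characterised by a power-law power spectral density with an amplitude and a spectral index, the two parameters that such analyses measure or upper-limit. The sketch below uses a common pulsar-timing convention; it is an illustration, not necessarily the exact parameterisation of this paper.

```python
import numpy as np

F_YR = 1.0 / (365.25 * 86400.0)       # reference frequency: 1/yr in Hz

def red_noise_psd(f, A, gamma):
    """One-sided PSD (s^3) of a power-law red-noise process,
    P(f) = A^2 / (12 pi^2) * f_yr^-3 * (f / f_yr)^-gamma."""
    return (A**2 / (12.0 * np.pi**2)) * F_YR**-3 * (f / F_YR)**(-gamma)

# Nanohertz band probed by pulsar timing arrays (illustrative values)
f = np.logspace(-9, -7, 50)
psd = red_noise_psd(f, A=1e-14, gamma=13.0 / 3.0)
```

    A steep spectrum (large gamma) concentrates the noise power at the lowest frequencies, which is exactly where a stochastic GW background would appear, hence the sensitivity loss quoted in the abstract.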

    Simulations of partially coherent focal plane imaging arrays: Fisher matrix approach to performance evaluation

    Focal plane arrays of bolometers are increasingly employed in astronomy at far-infrared to millimetre wavelengths. The focal plane fields and the detectors are both partially coherent in these systems, but no account has previously been taken of the effect of partial coherence on array performance. In this paper, we use our recently developed coupled-mode theory of detection together with Fisher information matrix techniques from signal processing to characterize the behaviour of partially coherent imaging arrays. We investigate the effects of the size and coherence length of both the source and the detectors, and the packing density of the array, on the amount of information that can be extracted from observations with such arrays.
    Comment: 14 pages, 7 figures, submitted to MNRAS 7th March 200
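    For Gaussian noise of variance sigma^2, the Fisher matrix is F_ij = sum_k (dmu_k/dtheta_i)(dmu_k/dtheta_j) / sigma^2, and its inverse gives the Cramer-Rao lower bound on the parameter covariances. The sketch below evaluates this by finite differences for a toy array of Gaussian-beam detectors; the toy model and all numbers are assumptions, not the paper's coupled-mode formalism.

```python
import numpy as np

def fisher_matrix(model, theta, sigma, eps=1e-6):
    """Fisher matrix for a deterministic model with iid Gaussian noise,
    using central finite differences for the parameter gradients."""
    theta = np.asarray(theta, float)
    grads = []
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        grads.append((model(tp) - model(tm)) / (2.0 * eps))
    G = np.array(grads)                  # shape (n_params, n_detectors)
    return G @ G.T / sigma**2

# Toy model: 16 detectors see a Gaussian beam with position and amplitude
det = np.linspace(-1.0, 1.0, 16)         # detector positions (arbitrary units)
def model(theta):
    x0, amp = theta
    return amp * np.exp(-(det - x0)**2 / (2 * 0.3**2))

F = fisher_matrix(model, [0.1, 1.0], sigma=0.01)
crlb = np.linalg.inv(F)                  # Cramer-Rao bound on (x0, amp)
```

    In the paper's setting, partial coherence enters through correlations between detector outputs, which would replace the diagonal noise assumption above with a full covariance matrix.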

    Diffusive Nested Sampling

    We introduce a general Monte Carlo method based on Nested Sampling (NS), for sampling complex probability distributions and estimating the normalising constant. The method uses one or more particles, which explore a mixture of nested probability distributions, each successive distribution occupying ~e^-1 times the enclosed prior mass of the previous distribution. While NS technically requires independent generation of particles, Markov Chain Monte Carlo (MCMC) exploration fits naturally into this technique. We illustrate the new method on a test problem and find that it can achieve four times the accuracy of classic MCMC-based Nested Sampling, for the same computational effort; equivalent to a factor of 16 speedup. An additional benefit is that more samples and a more accurate evidence value can be obtained simply by continuing the run for longer, as in standard MCMC.
    Comment: Accepted for publication in Statistics and Computing. C++ code available at http://lindor.physics.ucsb.edu/DNes
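    The level-building step described above can be sketched in a few lines: each new level's likelihood threshold is set so that it encloses ~e^-1 of the prior mass of the previous level. The toy below uses a 1-D Gaussian likelihood, a uniform prior, and brute-force rejection sampling in place of the MCMC exploration the method actually uses; all tuning numbers are illustrative.

```python
import math
import random

random.seed(1)
loglike = lambda x: -0.5 * (x / 0.1)**2      # unnormalised log-likelihood

levels = [-float("inf")]                     # level 0: the whole prior
for _ in range(5):
    # Sample the prior restricted to the current level (toy rejection step;
    # the real method uses MCMC over a mixture of all levels)
    samples = []
    while len(samples) < 1000:
        x = random.uniform(-1.0, 1.0)
        if loglike(x) > levels[-1]:
            samples.append(loglike(x))
    samples.sort()
    # New threshold keeps the top e^-1 fraction of the enclosed mass
    cut = int(len(samples) * (1.0 - math.exp(-1.0)))
    levels.append(samples[cut])
```

    The resulting thresholds increase monotonically, so particles diffusing up and down the level mixture can reach high-likelihood regions while still visiting the prior, which is what makes the evidence estimate refinable by simply running longer.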

    The NANOGrav 11 yr Data Set: Limits on Gravitational Wave Memory

    The mergers of supermassive black hole binaries (SMBHBs) promise to be incredible sources of gravitational waves (GWs). While the oscillatory part of the merger gravitational waveform will be outside the frequency sensitivity range of pulsar timing arrays, the nonoscillatory GW memory effect is detectable. Further, any burst of GWs will produce GW memory, making memory a useful probe of unmodeled exotic sources and new physics. We searched the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) 11 yr data set for GW memory. This data set is sensitive to very low-frequency GWs of ~3 to 400 nHz (periods of ~11 yr–1 month). Finding no evidence for GWs, we placed limits on the strain amplitude of GW memory events during the observation period. We then used the strain upper limits to place limits on the rate of GW-memory-causing events. At a strain of 2.5 × 10⁻¹⁴, corresponding to the median upper limit as a function of source sky position, we set a limit on the rate of GW memory events at <0.4 yr⁻¹. That strain corresponds to an SMBHB merger with reduced mass of ηM ~ 2 × 10¹⁰ M_⊙ and inclination of ι = π/3 at a distance of 1 Gpc. As a test of our analysis, we analyzed the NANOGrav 9 yr data set as well. This analysis found an anomalous signal, which does not appear in the 11 yr data set. This signal is not a GW, and its origin remains unknown.
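    What makes memory detectable by a timing array is its nonoscillatory shape: a permanent strain offset h0 that accumulates in the timing residuals as a linear ramp after the burst epoch. The sketch below shows that pre-fit ramp signature only; it is an illustration, not NANOGrav's search pipeline, and the epoch and strain values are assumptions.

```python
import numpy as np

def memory_residuals(t, t0, h0):
    """Pre-fit timing residuals (s) induced by a GW memory burst:
    zero before the burst epoch t0, then growing linearly as h0*(t - t0)."""
    return h0 * np.clip(t - t0, 0.0, None)

# ~11 yr of observations (in seconds), burst halfway through (illustrative)
t = np.linspace(0.0, 11 * 365.25 * 86400.0, 500)
r = memory_residuals(t, t0=t[250], h0=2.5e-14)
```

    In a real search much of this ramp is absorbed by the fit for pulsar spin and spin-down, so the detectable signature is the ramp minus its best-fit quadratic, which is what limits the achievable strain sensitivity.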

    Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications, where perturbations appear both in the data vector as well as in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data, but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least-absolute shrinkage and selection operator (Lasso), and endow TLS approaches with the ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing, and robust direction-of-arrival estimation using antenna arrays.
    Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing

    Tiling strategies for optical follow-up of gravitational wave triggers by wide field of view telescopes

    Binary neutron stars are among the most promising candidates for joint gravitational-wave and electromagnetic astronomy. The goal of this work is to investigate the strategy of using gravitational-wave sky-localizations for binary neutron star systems to search for electromagnetic counterparts using wide field of view optical telescopes. We examine various strategies of scanning the gravitational-wave sky-localizations on the mock 2015-16 gravitational-wave events. We propose an optimal tiling strategy that ensures the most economical coverage of the gravitational-wave sky-localization, while keeping in mind the realistic constraints of transient optical astronomy. Our analysis reveals that the proposed tiling strategy improves the sky-localization coverage over the naive contour-covering method. The improvement is more significant for observations conducted using larger field of view telescopes, or for observations conducted over smaller confidence intervals of the gravitational-wave sky-localization probability distribution. Next, we investigate the performance of the tiling strategy for telescope arrays and compare their performance against monolithic giant field of view telescopes. We observed that distributing the field of view into arrays of multiple telescopes significantly improves the coverage efficiency, by as much as 50% over a single large-FOV telescope in 2016 localizations while scanning around 100 sq. degrees. Finally, we studied the ability of various types of telescopes to detect the optical counterpart. In our analysis of a range of wide field-of-view telescopes, we found that detection improves upon sacrificing coverage of the localization in order to achieve greater observation depth for very large field-of-view, small-aperture telescopes, especially if the intrinsic brightness of the optical counterparts is weak.
    Comment: Accepted for publication in A&A. 10 pages, 10 figures
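    The basic tiling idea can be illustrated with a greedy sketch: repeatedly place a fixed field-of-view tile on the block of grid cells that captures the most remaining localization probability. This is a simple illustration of tile placement on a probability skymap, not the paper's optimal strategy, and the grid, tile size, and toy skymap are all assumptions.

```python
import numpy as np

def greedy_tiles(prob, tile=4, n_tiles=5):
    """Greedily place n_tiles square tiles (side `tile` cells) on a 2-D
    probability map; returns tile corners and total probability covered."""
    p = prob.copy()
    picks, covered = [], 0.0
    for _ in range(n_tiles):
        best, best_ij = -1.0, None
        for i in range(p.shape[0] - tile + 1):
            for j in range(p.shape[1] - tile + 1):
                s = p[i:i + tile, j:j + tile].sum()
                if s > best:
                    best, best_ij = s, (i, j)
        i, j = best_ij
        covered += best
        p[i:i + tile, j:j + tile] = 0.0      # don't re-count covered area
        picks.append(best_ij)
    return picks, covered

# Toy elongated (banana-like) localization on a 32x32 grid
y, x = np.mgrid[0:32, 0:32]
prob = np.exp(-((x - 16)**2 / 50.0 + (y - x / 2.0)**2 / 8.0))
prob /= prob.sum()
tiles, frac = greedy_tiles(prob, tile=4, n_tiles=8)
```

    Because the tiles snap to a fixed grid rather than hugging the probability contour, this is also a compact way to see why tiling beats naive contour-covering when the field of view is large relative to the localization region.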