999 research outputs found

    Enhancing the significance of gravitational wave bursts through signal classification

    The quest to observe gravitational waves challenges our ability to discriminate signals from detector noise. This issue is especially relevant for transient gravitational-wave searches with a robust, eyes-wide-open approach, the so-called all-sky burst searches. Here we show how signal classification methods inspired by broad astrophysical characteristics can be implemented in all-sky burst searches while preserving their generality. In our case study, we apply a multivariate analysis based on artificial neural networks to classify waves emitted in compact binary coalescences. We enhance by orders of magnitude the significance of signals belonging to this broad astrophysical class against the noise background. Alternatively, at a given level of mis-classification of noise events, we can detect about 1/4 more of the total signal population. We also show that a more general strategy of signal classification can be performed, by testing the ability of artificial neural networks to discriminate between different signal classes. The possible impact on future observations by the LIGO-Virgo network of detectors is discussed by analysing recoloured noise from previous LIGO-Virgo data with coherent WaveBurst, one of the flagship pipelines dedicated to all-sky searches for transient gravitational waves.
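
    As a rough illustration of the multivariate, neural-network-based classification described above, the sketch below trains a small feed-forward network to separate two classes of synthetic "triggers". It is a minimal sketch only: the features, their distributions, and the network size are assumptions made here for illustration, not the inputs or configuration used with coherent WaveBurst in the paper.

```python
# Toy signal/noise classification with a small neural network (illustrative only;
# the features and network layout are assumptions, not the paper's setup).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic triggers: three summary statistics per event, drawn from different
# distributions for the "noise" and "signal" classes.
n = 2000
noise = rng.normal(loc=0.0, scale=1.0, size=(n, 3))
signal = rng.normal(loc=1.5, scale=1.0, size=(n, 3))
X = np.vstack([noise, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = noise, 1 = signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Small feed-forward network acting as the multivariate classifier.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

# Thresholding the network output trades detection efficiency against the
# fraction of noise events that are mis-classified as signals.
scores = clf.predict_proba(X_te)[:, 1]
print("mean score on signal events:", scores[y_te == 1].mean())
print("mean score on noise events :", scores[y_te == 0].mean())
```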

    Prospects for intermediate mass black hole binary searches with advanced gravitational-wave detectors

    We estimated the sensitivity of the upcoming advanced, ground-based gravitational-wave observatories (the upgraded LIGO and Virgo and the KAGRA interferometers) to coalescing intermediate mass black hole binaries (IMBHB). We added waveforms modeling the gravitational radiation emitted by IMBHBs to the detectors' simulated data and searched for the injected signals with the coherent WaveBurst algorithm. The tested binary parameter space covers non-spinning IMBHBs with source-frame total masses between 50 and 1050 $\mathrm{M}_{\odot}$ and mass ratios between 1/6 and 1. We found that advanced detectors could be sensitive to these systems up to a range of a few Gpc. A theoretical model was adopted to estimate the expected observation rates, yielding up to a few tens of events per year. Thus, our results indicate that advanced detectors will have a reasonable chance to collect the first direct evidence for intermediate mass black holes and open a new, intriguing channel for probing the Universe over cosmological scales. Comment: 9 pages, 4 figures, corrected the name of one author (previously misspelled)
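
    The back-of-the-envelope arithmetic behind such rate estimates is simply an assumed merger-rate density multiplied by the surveyed volume. The sketch below uses placeholder numbers chosen only to illustrate the order of magnitude; they are not the values or the astrophysical model adopted in the paper.

```python
# Order-of-magnitude event-rate estimate (placeholder numbers, illustrative only).
import math

detection_range_gpc = 2.0   # assumed range in Gpc ("a few Gpc" in the abstract)
rate_density = 1.0          # assumed merger-rate density in events / Gpc^3 / yr

surveyed_volume = (4.0 / 3.0) * math.pi * detection_range_gpc ** 3  # Gpc^3
events_per_year = rate_density * surveyed_volume

print(f"~{events_per_year:.0f} events per year under these assumptions")
```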

    Generating Bijections between HOAS and the Natural Numbers

    A provably correct bijection between higher-order abstract syntax (HOAS) and the natural numbers enables one to define a "not equals" relation between terms, and also to obtain adequate encodings of sets of terms and of maps from one term family to another. Sets and maps are useful in many situations and are best provided by a library. I have released a map and set library for use with Twelf which can be used with any type for which a bijection to the natural numbers exists. Since creating such bijections is tedious and error-prone, I have created a "bijection generator" that generates such bijections automatically, together with proofs of correctness, all in the context of Twelf. Comment: In Proceedings LFMTP 2010, arXiv:1009.218
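
    To convey the underlying idea, the sketch below encodes a small first-order term language into the natural numbers with a Cantor pairing function. It is a toy stand-in only: it is written in Python rather than Twelf, uses de Bruijn terms rather than HOAS, carries no correctness proofs, and is merely injective rather than a true bijection onto all of the naturals.

```python
# Toy encoding of a small term language into the natural numbers
# (illustrative; the Twelf generator works on HOAS and emits correctness proofs).
import math
from dataclasses import dataclass

def pair(x: int, y: int) -> int:
    """Cantor pairing function, a bijection from N x N to N."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    """Inverse of the Cantor pairing function."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

# Untyped lambda terms with de Bruijn indices (a first-order stand-in for HOAS).
@dataclass(frozen=True)
class Var:
    index: int

@dataclass(frozen=True)
class App:
    fn: "Term"
    arg: "Term"

@dataclass(frozen=True)
class Lam:
    body: "Term"

Term = Var | App | Lam

def encode(t: Term) -> int:
    if isinstance(t, Var):
        return pair(0, t.index)
    if isinstance(t, App):
        return pair(1, pair(encode(t.fn), encode(t.arg)))
    return pair(2, encode(t.body))

def decode(n: int) -> Term:
    tag, payload = unpair(n)
    if tag == 0:
        return Var(payload)
    if tag == 1:
        left, right = unpair(payload)
        return App(decode(left), decode(right))
    if tag == 2:
        return Lam(decode(payload))
    raise ValueError("not the code of a term")

t = Lam(App(Var(0), Var(0)))      # \x. x x
assert decode(encode(t)) == t     # decoding inverts encoding on term codes
print(encode(t))
```

    An injective encoding like this already suffices to decide inequality of terms and to key sets and maps by terms, which is the use case the library addresses.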

    New results on rewrite-based satisfiability procedures

    Program analysis and verification require decision procedures to reason about theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each. The above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple, but also has important practical implications. Comment: To appear in the ACM Transactions on Computational Logic, 49 pages
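
    To give a flavor of rewriting over ground literals, the sketch below orients the usual list axioms as rewrite rules and normalizes both sides of each disequation. It is a deliberately simplified toy, not the superposition-based procedure run in E: a complete T-satisfiability check would also need congruence closure over the remaining equations.

```python
# Toy rewrite-based reasoning over ground list terms (not the E prover, and not a
# complete satisfiability procedure; see the caveat in the text above).

# Ground terms are nested tuples such as ("car", ("cons", "a", "b")) or atoms ("a").
def normalize(t):
    """Normalize with car(cons(x, y)) -> x and cdr(cons(x, y)) -> y, innermost first."""
    if isinstance(t, str):
        return t
    head, *args = t
    args = [normalize(a) for a in args]
    if head == "car" and isinstance(args[0], tuple) and args[0][0] == "cons":
        return args[0][1]
    if head == "cdr" and isinstance(args[0], tuple) and args[0][0] == "cons":
        return args[0][2]
    return (head, *args)

def obviously_unsatisfiable(disequations):
    """A literal s != t whose sides share a normal form makes the set unsatisfiable."""
    return any(normalize(s) == normalize(t) for s, t in disequations)

# Example: the single literal car(cons(a, b)) != a is unsatisfiable in the list theory.
print(obviously_unsatisfiable([(("car", ("cons", "a", "b")), "a")]))  # True
```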

    Semantic Web Tools and Decision-Making

    Semantic Web technologies are intertwined with decision-making processes. In this paper, the general objectives of Semantic Web tools are reviewed and characterized, as well as the categories of decision-support tools, in order to establish an intersection of utility and use. We also elaborate on current and foreseen possibilities for deeper integration, considering the actual implementation, opportunities and constraints in the decision-making context.

    A formally verified compiler back-end

    This article describes the development and formal verification (proof of semantic preservation) of a compiler back-end from Cminor (a simple imperative intermediate language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a verified compiler is useful in the context of formal methods applied to the certification of critical software: the verification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.
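
    One simplified way to read the semantic-preservation guarantee mentioned above is as a behavior-preservation statement: if compilation of a source program S succeeds and produces code C, then every observable behavior of S is also a behavior of C. The formula below is an informal paraphrase using ad hoc notation ($\Downarrow$ for "has observable behavior", $\mathsf{OK}$ for successful compilation); the actual theorem is mechanized in Coq and stated in a more refined form.

```latex
% Informal, simplified behavior-preservation reading of compiler correctness.
\forall S\, C\, B,\quad
  \mathit{compile}(S) = \mathsf{OK}(C) \;\wedge\; S \Downarrow B
  \;\Longrightarrow\; C \Downarrow B
```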

    Continuation-Passing C: compiling threads to events through continuations

    In this paper, we introduce Continuation Passing C (CPC), a programming language for concurrent systems in which native and cooperative threads are unified and presented to the programmer as a single abstraction. The CPC compiler uses a compilation technique, based on the CPS transform, that yields efficient code and an extremely lightweight representation for contexts. We provide a proof of the correctness of our compilation scheme. We show in particular that lambda-lifting, a common compilation technique for functional languages, is also correct in an imperative language like C, under some conditions enforced by the CPC compiler. The current CPC compiler is mature enough to write substantial programs such as Hekate, a highly concurrent BitTorrent seeder. Our benchmark results show that CPC is as efficient as the most efficient thread libraries available, while using significantly less space. Comment: Higher-Order and Symbolic Computation (2012). arXiv admin note: substantial text overlap with arXiv:1202.324
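
    The sketch below illustrates the continuation-passing-style idea on which the compilation scheme rests: instead of returning, each step hands its result to an explicit continuation, which a cooperative scheduler can store to suspend and later resume a thread. It is written in Python rather than C, the function names are made up, and it is not output of the CPC compiler.

```python
# Direct style: an ordinary function that returns its result.
def add_then_double(x, y):
    s = x + y
    return 2 * s

# Continuation-passing style: each step passes its result to a continuation `k`
# instead of returning. Storing `k` is all that is needed to suspend the computation.
def add_cps(x, y, k):
    k(x + y)

def add_then_double_cps(x, y, k):
    # `after_add` closes over `k`; lambda-lifting would hoist it to the top level
    # with `k` as an extra parameter, which is the step the paper proves correct
    # for C under the conditions the CPC compiler enforces.
    def after_add(s):
        k(2 * s)
    add_cps(x, y, after_add)

add_then_double_cps(3, 4, print)   # prints 14, like print(add_then_double(3, 4))
```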

    Phenothiazine-mediated rescue of cognition in tau transgenic mice requires neuroprotection and reduced soluble tau burden

    Background: It has traditionally been thought that the pathological accumulation of tau in Alzheimer's disease and other tauopathies facilitates neurodegeneration, which in turn leads to cognitive impairment. However, recent evidence suggests that tau tangles are not the entity responsible for memory loss; rather, it is an intermediate tau species that disrupts neuronal function. Thus, efforts to discover therapeutics for tauopathies emphasize soluble tau reduction as well as neuroprotection. Results: Here, we found that neuroprotection alone, caused by methylene blue (MB), the parent compound of the anti-tau phenothiazine drug Rember™, was insufficient to rescue cognition in a mouse model of the human tauopathy progressive supranuclear palsy (PSP) and fronto-temporal dementia with parkinsonism linked to chromosome 17 (FTDP-17). Only when levels of soluble tau protein were concomitantly reduced by a very high concentration of MB was cognitive improvement observed. Thus, neurodegeneration can be decoupled from tau accumulation, but phenotypic improvement is only possible when soluble tau levels are also reduced. Conclusions: Neuroprotection alone is not sufficient to rescue tau-induced memory loss in a transgenic mouse model. Development of neuroprotective agents is an area of intense investigation in the tauopathy drug discovery field. This may ultimately be an unsuccessful approach if soluble toxic tau intermediates are not also reduced. Thus, MB and related compounds, despite their pleiotropic nature, may be the proverbial "magic bullet" because they not only are neuroprotective, but are also able to facilitate soluble tau clearance. Moreover, this shows that neuroprotection is possible without reducing tau levels. This indicates that there is a definitive molecular link between tau and cell death cascades that can be disrupted.

    Search for the Higgs boson in events with missing transverse energy and b quark jets produced in proton-antiproton collisions at s**(1/2)=1.96 TeV

    We search for the standard model Higgs boson produced in association with an electroweak vector boson in events with no identified charged leptons, a large imbalance in transverse momentum, and two jets where at least one contains a secondary vertex consistent with the decay of b hadrons. We use ~1 fb^-1 of integrated luminosity of proton-antiproton collisions at s**(1/2)=1.96 TeV recorded by the CDF II experiment at the Tevatron. We find 268 (16) single (double) b-tagged candidate events, where 248 +/- 43 (14.4 +/- 2.7) are expected from standard model background processes. We place 95% confidence level upper limits on the Higgs boson production cross section for several Higgs boson masses ranging from 110 GeV/c^2 to 140 GeV/c^2. For a mass of 115 GeV/c^2 the observed (expected) limit is 20.4 (14.2) times the standard model prediction. Comment: 8 pages, 2 figures, submitted to Phys. Rev. Lett.

    Search for New Particles Leading to Z+jets Final States in $p\bar{p}$ Collisions at $\sqrt{s}=1.96$ TeV

    We present the results of a search for new particles that lead to a $Z$ boson plus jets in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV using the Collider Detector at Fermilab (CDF II). A data sample with a luminosity of 1.06 fb$^{-1}$, collected using $Z$ boson decays to $ee$ and $\mu\mu$, is used. We describe a completely data-based method to predict the dominant background from standard-model $Z$+jet events. This method can be similarly applied to other analyses requiring background predictions in multi-jet environments, as shown when validating the method by predicting the background from $W$+jets in $t\bar{t}$ production. No significant excess above the background prediction is observed, and a limit is set using a fourth generation quark model to quantify the acceptance. Assuming $BR(b' \to bZ) = 100\%$ and using a leading-order calculation of the $b'$ cross section, $b'$ quark masses below 268 GeV/$c^2$ are excluded at 95% confidence level. Comment: To be submitted to PR