8 research outputs found

    CMBPol Mission Concept Study: Prospects for polarized foreground removal

    In this report we discuss the impact of polarized foregrounds on a future CMBPol satellite mission. We review our current knowledge of Galactic polarized emission at microwave frequencies, including synchrotron and thermal dust emission. We use existing data and our understanding of the physical behavior of the sources of foreground emission to generate sky templates, and start to assess how well primordial gravitational wave signals can be separated from foreground contaminants for a CMBPol mission. At the estimated foreground minimum of ~100 GHz, the polarized foregrounds are expected to be lower than a primordial polarization signal with tensor-to-scalar ratio r=0.01 in a small patch (~1%) of the sky known to have low Galactic emission. Over 75% of the sky, we expect the foreground amplitude to exceed the primordial signal by about a factor of eight at the foreground minimum and on scales of two degrees. Only on the largest scales does the polarized foreground amplitude exceed the primordial signal by a larger factor of about 20. The prospects for detecting an r=0.01 signal including degree-scale measurements appear promising, with 5 sigma_r ~ 0.003 forecast from multiple methods. A mission that observes a range of scales offers better prospects from the foregrounds perspective than one targeting only the lowest few multipoles. We begin to explore how optimizing the composition of frequency channels in the focal plane can maximize our ability to perform component separation, with a range of typically 40 < nu < 300 GHz preferred for ten channels. Foreground cleaning methods are already in place to tackle a CMBPol mission data set, and further investigation of the optimization and detectability of the primordial signal will be useful for mission design. Comment: 42 pages, 14 figures, Foreground Removal Working Group contribution to the CMBPol Mission Concept Study, v2, matches AIP version.
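
    Read naively (an illustration only; the only number taken from the report is the quoted forecast), 5 sigma_r ~ 0.003 corresponds to

        sigma_r ~ 0.003 / 5 = 6e-4,   so   r / sigma_r ~ 0.01 / 6e-4 ~ 17,

    i.e. under that reading an r = 0.01 primordial signal would be measured well beyond the 5 sigma detection threshold.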

    AI is a viable alternative to high throughput screening: a 318-target study

    High throughput screening (HTS) is routinely used to identify bioactive small molecules. This requires physical compounds, which limits coverage of accessible chemical space. Computational approaches combined with vast on-demand chemical libraries can access far greater chemical space, provided that the predictive accuracy is sufficient to identify useful molecules. Through the largest and most diverse virtual HTS campaign reported to date, comprising 318 individual projects, we demonstrate that our AtomNet® convolutional neural network successfully finds novel hits across every major therapeutic area and protein class. We address historical limitations of computational screening by demonstrating success for target proteins without known binders, without high-quality X-ray crystal structures, and without manual cherry-picking of compounds. We show that the molecules selected by the AtomNet® model are novel drug-like scaffolds rather than minor modifications to known bioactive compounds. Our empirical results suggest that computational methods can substantially replace HTS as the first step of small-molecule drug discovery.
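
    To make the screening setup concrete, the sketch below illustrates the generic pattern the abstract describes: a 3D convolutional network scores voxelized protein-ligand poses so that a large on-demand library can be ranked and only the top-scoring compounds purchased and tested. This is a toy stand-in, not the AtomNet® architecture or training pipeline; the grid size, channel count, and layer widths are placeholder assumptions.

        # Toy 3D-CNN scorer for structure-based virtual screening (illustrative only;
        # not the AtomNet(R) model). Input: voxelized protein-ligand complexes.
        import torch
        import torch.nn as nn

        class VoxelScreeningCNN(nn.Module):
            def __init__(self, in_channels=8):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),                     # 24^3 -> 12^3
                    nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),                     # 12^3 -> 6^3
                )
                self.head = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 6 * 6 * 6, 128), nn.ReLU(),
                    nn.Linear(128, 1),                   # predicted binding score
                )

            def forward(self, grid):                     # grid: (batch, channels, 24, 24, 24)
                return self.head(self.features(grid))

        # Rank a (placeholder) library and keep the highest-scoring compounds.
        model = VoxelScreeningCNN()
        library = torch.randn(4, 8, 24, 24, 24)          # stand-in voxel grids
        scores = model(library).squeeze(-1)
        print(scores.argsort(descending=True))           # compound indices, best first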

    Feeding the Second Screen: Semantic Linking based on Subtitles

    Television is changing. Increasingly, broadcasts are consumed interactively, which allows broadcasters to provide consumers with additional background information that they may bookmark for later consumption. To support this type of functionality, we consider the task of linking a textual stream derived from live broadcasts to Wikipedia. While link generation has received considerable attention in recent years, our task has unique demands that require an approach which (i) is high-precision oriented, (ii) performs in real time, (iii) works in a streaming setting, and (iv) typically operates with very limited context. We propose a learning-to-rerank approach that significantly improves over a strong baseline in terms of effectiveness and whose processing time is very short. We extend this approach by leveraging the streaming nature of the textual sources that we link, modeling context as a graph. We show how our graph-based context model further improves effectiveness. For evaluation purposes we create a dataset of segments of television subtitles that we make available to the research community.
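
    As a concrete illustration of the setting, the sketch below mocks up precision-oriented linking of a subtitle stream: candidate Wikipedia titles are looked up for each surface form, then reranked by a score combining a commonness prior with overlap against a sliding window of recent subtitle text. The window is a crude stand-in for the paper's graph-based context model, and the hand-set weights stand in for the learned reranker; the lexicon, names, and thresholds are assumptions for the example.

        # Toy precision-first subtitle-to-Wikipedia linker (illustrative only).
        from collections import deque

        # Hypothetical lexicon: surface form -> [(wikipedia_title, commonness prior), ...]
        LEXICON = {
            "amsterdam": [("Amsterdam", 0.9), ("Amsterdam_(band)", 0.1)],
            "canal": [("Canal", 0.7), ("Canal+", 0.3)],
        }

        def rerank(surface, context_words, threshold=0.5):
            """Return the best Wikipedia title for `surface`, or None (precision first)."""
            best_title, best_score = None, 0.0
            for title, prior in LEXICON.get(surface.lower(), []):
                # Context feature: fraction of title words already seen in the stream.
                title_words = set(title.lower().replace("_", " ").split())
                overlap = len(title_words & context_words) / max(len(title_words), 1)
                score = 0.7 * prior + 0.3 * overlap      # hand-set weights, not learned
                if score > best_score:
                    best_title, best_score = title, score
            return best_title if best_score >= threshold else None

        # Streaming usage: a short window of recent words serves as context.
        window = deque(maxlen=50)
        for line in ["the canal tour starts in amsterdam", "amsterdam lies on the amstel"]:
            words = line.lower().split()
            for w in words:
                link = rerank(w, set(window))
                if link:
                    print(w, "->", link)
            window.extend(words)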