
    Weak lensing minima and peaks: Cosmological constraints and the impact of baryons

    We present a novel statistic to extract cosmological information from weak lensing data: the lensing minima. We also investigate the effect of baryons on the cosmological constraints from peak and minimum counts. Using the \texttt{MassiveNuS} simulations, we find that lensing minima are sensitive to non-Gaussian cosmological information and are complementary to the lensing power spectrum and peak counts. For an LSST-like survey, we obtain 95\% credible intervals from a combination of lensing minima and peaks that are significantly stronger than those from the power spectrum alone, by 44\%, 11\%, and 63\% for the neutrino mass sum $\sum m_\nu$, the matter density $\Omega_m$, and the amplitude of fluctuations $A_s$, respectively. We explore the effect of baryonic processes on lensing minima and peaks using the hydrodynamical simulations \texttt{BAHAMAS} and \texttt{Osato15}. We find that ignoring baryonic effects would lead to strong ($\approx 4\sigma$) biases in inferences from peak counts, but negligible ($\approx 0.5\sigma$) biases for minimum counts, suggesting that lensing minima are a potentially more robust tool against baryonic effects. Finally, we demonstrate that these biases can in principle be mitigated, without significantly degrading cosmological constraints, by modelling and marginalizing over the baryonic effects.

    Funding: UK Science and Technology Facilities Council (grant number ST/N000927/1).
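
    The abstract does not spell out how peak and minimum counts are extracted from a convergence map, so the following is only an illustrative Python sketch of the usual recipe: smooth the map, flag pixels that are local extrema of their 3x3 neighbourhood, and histogram their signal-to-noise values. The function name, smoothing scale, and binning are placeholder choices, not the paper's analysis settings.

```python
import numpy as np
from scipy import ndimage

def peak_and_minimum_counts(kappa, nu_bins, smoothing_pix=2.0):
    """Histogram lensing peaks and minima of a 2D convergence map `kappa`.

    A peak (minimum) is a pixel equal to the maximum (minimum) of its 3x3
    neighbourhood after Gaussian smoothing; heights are expressed as
    signal-to-noise nu = kappa_s / std(kappa_s).
    """
    kappa_s = ndimage.gaussian_filter(kappa, smoothing_pix)
    nu = kappa_s / kappa_s.std()
    is_peak = kappa_s == ndimage.maximum_filter(kappa_s, size=3)
    is_min = kappa_s == ndimage.minimum_filter(kappa_s, size=3)
    peak_counts, _ = np.histogram(nu[is_peak], bins=nu_bins)
    min_counts, _ = np.histogram(nu[is_min], bins=nu_bins)
    return peak_counts, min_counts

# toy usage on a Gaussian random map (a real analysis would use simulated
# or observed convergence maps and noise-matched smoothing)
rng = np.random.default_rng(0)
kappa = rng.normal(size=(512, 512))
nu_bins = np.linspace(-4.0, 4.0, 17)
peaks, minima = peak_and_minimum_counts(kappa, nu_bins)
```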

    Flux of Atmospheric Neutrinos

    Atmospheric neutrinos produced by cosmic-ray interactions in the atmosphere are of interest for several reasons. As a beam for studies of neutrino oscillations, they cover a range of parameter space hitherto unexplored by accelerator neutrino beams. Atmospheric neutrinos also constitute an important background and calibration beam for neutrino astronomy and for searches for proton decay and other rare processes. Here we review the literature on calculations of atmospheric neutrinos over the full range of energy, with particular attention to the aspects important for neutrino oscillations. Our goal is to assess how well the properties of atmospheric neutrinos are known at present.

    Comment: 68 pages, 26 figures. With permission from the Annual Review of Nuclear & Particle Science. The final version of this material is scheduled to appear in the Annual Review of Nuclear & Particle Science Vol. 52, to be published in December 2002 by Annual Reviews (http://annualreviews.org).

    Three applications of path integrals: equilibrium and kinetic isotope effects, and the temperature dependence of the rate constant of the [1,5] sigmatropic hydrogen shift in (Z)-1,3-pentadiene

    Recent experiments have confirmed the importance of nuclear quantum effects even in large biomolecules at physiological temperature. Here we describe how the path integral formalism can be used to treat rigorously the nuclear quantum effects on equilibrium and kinetic properties of molecules. Specifically, we explain how path integrals can be employed to evaluate the equilibrium (EIE) and kinetic (KIE) isotope effects, and the temperature dependence of the rate constant. The methodology is applied to the [1,5] sigmatropic hydrogen shift in pentadiene. Both the KIE and the temperature dependence of the rate constant confirm the importance of tunneling and other nuclear quantum effects, as well as of the anharmonicity of the potential energy surface. Moreover, previous results on the KIE were improved by combining a high-level electronic structure calculation within the harmonic approximation with a path integral anharmonicity correction computed with a lower-level method.

    Comment: 9 pages, 4 figures.
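
    For readers unfamiliar with the formalism, the discretized (primitive) path-integral form of the quantum partition function that underlies such EIE/KIE estimates can be written as below; this is the standard textbook expression for a single particle with P imaginary-time slices, quoted here for orientation rather than taken from the paper, and the isotope effects then follow as ratios of equilibrium or rate constants for the light and heavy isotopologues.

```latex
% Primitive path-integral discretization of the partition function for one
% particle of mass m at inverse temperature beta = 1/(k_B T), with P slices:
Q_P = \left(\frac{mP}{2\pi\beta\hbar^{2}}\right)^{3P/2}
      \int \mathrm{d}\mathbf{r}^{(1)} \cdots \mathrm{d}\mathbf{r}^{(P)}\,
      \exp\!\left[-\beta \sum_{s=1}^{P}\left(
        \frac{mP}{2\beta^{2}\hbar^{2}}
        \bigl\lvert \mathbf{r}^{(s)} - \mathbf{r}^{(s+1)} \bigr\rvert^{2}
        + \frac{V\!\bigl(\mathbf{r}^{(s)}\bigr)}{P}\right)\right],
\qquad \mathbf{r}^{(P+1)} \equiv \mathbf{r}^{(1)},
% and the isotope effects are ratios for light (l) and heavy (h) isotopologues:
\mathrm{EIE} = \frac{K_{\mathrm{eq}}^{\,l}}{K_{\mathrm{eq}}^{\,h}},
\qquad
\mathrm{KIE} = \frac{k^{\,l}}{k^{\,h}}.
```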

    Frontiers and Opportunities: Highlights of the 2nd Annual Conference of the Chinese Antibody Society

    The Chinese Antibody Society (CAS) convened its second annual conference in Cambridge, MA, USA on 29 April 2018. More than 600 members from around the world attended the meeting. Invited speakers discussed the latest advancements in therapeutic antibodies, with an emphasis on the progress made in China. The meeting covered a wide variety of topics, including the current status of therapeutic antibodies, the progress of immuno-oncology, and biosimilars in China. The conference presentations also included the development of several novel antibodies, such as antibodies related to weight loss, T-cell receptor-mimicking antibodies that target intracellular antigens, and tumor-targeting antibodies that utilize both innate and adaptive immune pathways. At the meeting, the CAS announced the launch of its official journal, Antibody Therapeutics, in collaboration with Oxford University Press. The conference concluded with a panel discussion on how to bring a therapeutic drug developed in China to the USA for clinical trials.

    DecGPU: distributed error correction on massively parallel graphics processing units using CUDA and MPI

    Background: Next-generation sequencing technologies have led to the high-throughput production of sequence data (reads) at low cost. However, these reads are significantly shorter and more error-prone than conventional Sanger shotgun reads. This poses a challenge for de novo assembly in terms of assembly quality and scalability for large-scale short-read datasets.

    Results: We present DecGPU, the first parallel and distributed error correction algorithm for high-throughput short reads (HTSRs) using a hybrid combination of CUDA and MPI parallel programming models. DecGPU provides CPU-based and GPU-based versions: the CPU-based version employs coarse-grained and fine-grained parallelism using the MPI and OpenMP parallel programming models, while the GPU-based version takes advantage of the CUDA and MPI parallel programming models and employs a hybrid CPU+GPU computing model to maximize performance by overlapping CPU and GPU computation. The distributed nature of our algorithm makes it feasible and flexible for the error correction of large-scale HTSR datasets. Using simulated and real datasets, our algorithm demonstrates superior performance, in terms of error correction quality and execution speed, to existing error correction algorithms. Furthermore, when combined with Velvet and ABySS, the resulting DecGPU-Velvet and DecGPU-ABySS assemblers demonstrate the potential of our algorithm to improve de novo assembly quality for de-Bruijn-graph-based assemblers.

    Conclusions: DecGPU is publicly available open-source software, written in CUDA C++ and MPI. The experimental results suggest that DecGPU is an effective and feasible error correction algorithm for tackling the flood of short reads produced by next-generation sequencing technologies.
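
    The abstract does not describe the correction algorithm itself, so the following is only a simplified, single-node Python sketch of the generic k-mer-spectrum ("solid k-mer") idea that distributed correctors of this kind typically parallelize across reads: count k-mers, call those above a coverage threshold solid, and try single-base substitutions that make all k-mers covering a suspicious position solid again. The function names, alphabet handling, and `min_count` threshold are illustrative choices, not DecGPU's actual CUDA C++/MPI implementation.

```python
from collections import Counter

def kmer_spectrum(reads, k):
    # count every k-mer across all reads
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, min_count=2):
    # greedily fix bases whose covering k-mers are weak (< min_count) by
    # trying the substitution that makes all covering k-mers solid again
    read = list(read)
    for i in range(len(read)):
        window = range(max(0, i - k + 1), min(i, len(read) - k) + 1)
        if all(counts[''.join(read[j:j + k])] >= min_count for j in window):
            continue
        for b in "ACGT":
            if b == read[i]:
                continue
            trial = read[:i] + [b] + read[i + 1:]
            if all(counts[''.join(trial[j:j + k])] >= min_count for j in window):
                read[i] = b
                break
    return ''.join(read)

# toy usage: two clean copies outvote one read with an error at position 4
reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTTCGTAC"]
counts = kmer_spectrum(reads, k=5)
print(correct_read("ACGTTCGTAC", counts, k=5))  # -> ACGTACGTAC
```

    In a distributed setting one would additionally partition the reads across MPI processes and share or replicate the k-mer spectrum; that orchestration, plus GPU acceleration of the inner loops, is what a tool like DecGPU provides.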

    How do the grain size characteristics of a tephra deposit change over time?

    Volcanologists frequently use grain size distributions (GSDs) in tephra layers to infer eruption parameters. However, for long-past eruptions, the accuracy of the reconstruction depends upon the correspondence between the initial tephra deposit and the preserved tephra layer on which inferences are based. We ask: how closely does the GSD of a decades-old tephra layer resemble the deposit from which it originated? We addressed this question with a study of the tephra layer produced by the eruption of Mount St Helens, USA, in May 1980. We compared grain size distributions from the fresh, undisturbed tephra with grain size measurements from the surviving tephra layer. We found that the overall grain size characteristics of the tephra layer were similar to those of the original deposit, and that distinctive features identified by earlier authors had been preserved. However, detailed analysis of our samples showed qualitative differences, specifically a loss of fine material (which we attributed to ‘winnowing’). Understanding how tephra deposits are transformed over time is critical to efforts to reconstruct past eruptions, but inherently difficult to study. We propose long-term tephra application experiments as a potential way forward.

    Funding: Financial support was provided by the National Science Foundation of America through grant 1202692 ‘Comparative Island Ecodynamics in the North Atlantic’ and grant 1249313 ‘Tephra layers and early warning signals for critical transitions’ (both to AJD).

    No imminent quantum supremacy by boson sampling

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of photons in linear optics, which has sparked interest as a rapid way to demonstrate this quantum supremacy. Photon statistics are governed by intractable matrix functions known as permanents, which suggests that sampling from the distribution obtained by injecting photons into a linear-optical network could be solved more quickly by a photonic experiment than by a classical computer. The contrast between the apparently awesome challenge faced by any classical sampling algorithm and the apparently near-term experimental resources required for a large boson sampling experiment has raised expectations that quantum supremacy by boson sampling is on the horizon. Here we present classical boson sampling algorithms and theoretical analyses of the prospects for scaling up boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. While the largest boson sampling experiments reported so far involve 5 photons, our classical algorithm, based on Metropolised independence sampling (MIS), allowed the boson sampling problem to be solved for 30 photons on standard computing hardware. We argue that the impact of experimental photon losses means that demonstrating quantum supremacy by boson sampling would require a step change in technology.

    Comment: 25 pages, 9 figures. Comments welcome.
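
    The abstract names the key ingredients (matrix permanents and Metropolised independence sampling) without detail, so below is a small illustrative Python sketch, assuming the standard formulation with a Haar-random interferometer: the permanent is evaluated with Ryser's formula, and the independence proposal is the easy-to-sample distinguishable-photon distribution, whose normalization cancels in the acceptance ratio. The function names and parameters are placeholders, and this toy version only scales to a handful of photons, unlike the optimized implementation reported in the paper.

```python
import numpy as np
from scipy.stats import unitary_group

def permanent(M):
    """Permanent of a small square matrix via Ryser's formula."""
    n = M.shape[0]
    total = 0j
    for subset in range(1, 1 << n):
        cols = [j for j in range(n) if subset >> j & 1]
        total += (-1) ** len(cols) * np.prod(M[:, cols].sum(axis=1))
    return (-1) ** n * total

def propose(U, inputs, rng):
    """Output pattern for *distinguishable* photons (the independence proposal)."""
    probs = np.abs(U) ** 2  # probs[i, j] = P(input mode j -> output mode i)
    return tuple(sorted(int(rng.choice(U.shape[0], p=probs[:, j])) for j in inputs))

def weight(U, outputs, inputs):
    """Target/proposal probability ratio; the 1/prod(s_i!) factors cancel."""
    A = U[np.ix_(list(outputs), list(inputs))]  # rows repeat for bunched outputs
    return abs(permanent(A)) ** 2 / permanent(np.abs(A) ** 2).real

def mis_boson_sampler(U, inputs, n_samples, burn_in=100, seed=1):
    """Metropolised independence sampling from the boson-sampling distribution."""
    rng = np.random.default_rng(seed)
    state = propose(U, inputs, rng)
    w = weight(U, state, inputs)
    samples = []
    for step in range(burn_in + n_samples):
        cand = propose(U, inputs, rng)
        w_cand = weight(U, cand, inputs)
        if rng.random() < min(1.0, w_cand / w):  # Metropolis-Hastings acceptance
            state, w = cand, w_cand
        if step >= burn_in:
            samples.append(state)
    return samples

# toy example: 4 photons in the first 4 of 16 modes of a Haar-random interferometer
m, n = 16, 4
U = unitary_group.rvs(m, random_state=0)
samples = mis_boson_sampler(U, inputs=list(range(n)), n_samples=200)
```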

    Topoisomer Differentiation of Molecular Knots by FTICR MS: Lessons from Class II Lasso Peptides

    Lasso peptides constitute a class of bioactive peptides sharing a knotted structure in which the C-terminal tail of the peptide is threaded through and trapped within an N-terminal macrolactam ring. The structural characterization of lasso structures and their differentiation from unthreaded topoisomers is not trivial and generally requires the use of complementary biochemical and spectroscopic methods. Here we investigated two antimicrobial peptides belonging to the class II lasso peptide family and their corresponding unthreaded topoisomers: microcin J25 (MccJ25), which is known to yield two-peptide product ions specific to the lasso structure under collision-induced dissociation (CID), and capistruin, for which CID does not permit unambiguous assignment of the lasso structure. The two pairs of topoisomers were analyzed by electrospray ionization Fourier transform ion cyclotron resonance mass spectrometry (ESI-FTICR MS) upon CID, infrared multiple photon dissociation (IRMPD), and electron capture dissociation (ECD). CID and ECD spectra clearly differentiated MccJ25 from its non-lasso topoisomer MccJ25-Icm, while for capistruin only ECD was informative, showing different extents of hydrogen migration (formation of c•/z from c/z•) for the threaded and unthreaded topoisomers. The ECD spectra of the triply charged MccJ25 and MccJ25-Icm showed a series of radical b-type product ions. We proposed that these ions are specific to cyclic-branched peptides and result from a dual c/z• and y/b dissociation, in the ring and in the tail, respectively. This work shows the potential of ECD for the structural characterization of peptide topoisomers, as well as the effect of conformation on hydrogen migration subsequent to electron capture.

    Measuring Global Credibility with Application to Local Sequence Alignment

    Computational biology is replete with high-dimensional (high-D) discrete prediction and inference problems, including sequence alignment, RNA structure prediction, phylogenetic inference, motif finding, prediction of pathways, and model selection problems in statistical genetics. Even though prediction and inference in these settings are uncertain, little attention has been focused on the development of global measures of uncertainty. Regardless of the procedure employed to produce a prediction, when a procedure delivers a single answer, that answer is a point estimate selected from the solution ensemble, the set of all possible solutions. For high-D discrete spaces, these ensembles are immense, and thus there is considerable uncertainty. We recommend the use of Bayesian credibility limits to describe this uncertainty, where a (1−α)%, 0 ≤ α ≤ 1, credibility limit is the minimum Hamming distance radius of a hyper-sphere containing (1−α)% of the posterior distribution. Because sequence alignment is arguably the most extensively used procedure in computational biology, we employ it here to make these general concepts more concrete. The maximum similarity estimator (i.e., the alignment that maximizes the likelihood) and the centroid estimator (i.e., the alignment that minimizes the mean Hamming distance from the posterior-weighted ensemble of alignments) are used to demonstrate the application of Bayesian credibility limits to alignment estimators. Application of Bayesian credibility limits to the alignment of 20 human/rodent orthologous sequence pairs and 125 orthologous sequence pairs from six Shewanella species shows that the credibility limits of the alignments of promoter sequences of these species vary widely, and that centroid alignments dependably have tighter credibility limits than traditional maximum similarity alignments.
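
    To make the definitions concrete, here is a small Python sketch under the assumption that each posterior alignment draw has been encoded as a fixed-length binary indicator vector, so that Hamming distance is simply the number of disagreeing entries: the componentwise majority vote plays the role of the centroid estimator, and the credibility limit is the smallest Hamming radius around a chosen estimate that captures (1−α) of the draws. The variable names and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def centroid_estimate(draws):
    """Componentwise majority vote over posterior draws (an N x L binary matrix);
    this minimizes the expected Hamming distance to the posterior ensemble."""
    return (draws.mean(axis=0) > 0.5).astype(int)

def credibility_limit(draws, estimate, alpha=0.05):
    """Smallest Hamming radius r such that at least a (1 - alpha) fraction of
    the posterior draws lies within distance r of `estimate`."""
    d = np.sort((draws != estimate).sum(axis=1))
    k = int(np.ceil((1 - alpha) * len(d))) - 1
    return int(d[k])

# toy posterior ensemble: 1000 draws of a length-50 indicator vector whose
# entries agree with a hidden "true" alignment 90% of the time
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=50)
draws = (rng.random((1000, 50)) < np.where(truth, 0.9, 0.1)).astype(int)

centroid = centroid_estimate(draws)
print(credibility_limit(draws, centroid, alpha=0.05))  # 95% credibility radius
```

    The same `credibility_limit` call can be made with any other point estimate encoded the same way (e.g., a maximum-similarity alignment), which is how the credibility limits of the two estimators can be compared.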
