79 research outputs found

    Darboux Transformations for a Lax Integrable System in 2n-Dimensions

    A 2n-dimensional Lax integrable system is proposed by a set of specific spectral problems. It contains the Takasaki equations, the self-dual Yang-Mills equations and their integrable hierarchy as examples. An explicit formulation of Darboux transformations is established for this Lax integrable system. The Vandermonde and generalized Cauchy determinant formulas lead to a description for deriving explicit solutions, and thus some rational and analytic solutions are obtained. Comment: LaTeX, 14 pages, to be published in Lett. Math. Phys.
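    The construction can be sketched generically. The following is a minimal one-fold Darboux transformation for a matrix spectral problem, stated only as illustrative background; it is not the paper's 2n-dimensional formulation, nor its Vandermonde and generalized Cauchy determinant representation of solutions. For a spectral problem \phi_x = U(\lambda)\,\phi, a one-fold transformation takes the form

        \tilde{\phi} = T(\lambda)\,\phi, \qquad T(\lambda) = \lambda I - S, \qquad S = H \Lambda H^{-1},

    where the columns of H are eigenfunctions evaluated at the spectral values \Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_N), and requiring the transformed eigenfunction to satisfy a spectral problem of the same form fixes the new potential:

        \tilde{U}(\lambda) = \bigl( T_x(\lambda) + T(\lambda)\, U(\lambda) \bigr)\, T(\lambda)^{-1}.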

    Multi-parameter optimization: Development of a morpholin-3-one derivative with an improved kinetic profile for imaging monoacylglycerol lipase in the brain

    Monoacylglycerol lipase (MAGL) is a gatekeeper in regulating endocannabinoid signaling and has gained substantial attention as a therapeutic target for neurological disorders. We recently discovered a morpholin-3-one derivative as a novel scaffold for imaging MAGL via positron emission tomography (PET). However, its slow kinetics in vivo hampered its application. In this study, structural optimization was conducted and eleven novel MAGL inhibitors were designed and synthesized. Based on the results from MAGL inhibitory potency, in vitro metabolic stability and surface plasmon resonance assays, we identified compound 7 as a potential MAGL PET tracer candidate. [11C]7 was synthesized via a direct 11CO2 fixation method and successfully mapped MAGL distribution patterns in rodent brain by in vitro autoradiography. PET studies in mice using [11C]7 demonstrated its improved kinetic profile compared to the lead structure. Its high in vivo specificity was demonstrated using MAGL KO mice. Although further studies confirmed that [11C]7 is a P-glycoprotein (P-gp) substrate in mice, its low P-gp efflux ratio on cells transfected with the human protein suggests that this should not be an issue for the clinical translation of [11C]7 as a novel reversible MAGL PET tracer in human subjects. Overall, [11C]7 ([11C]RO7284390) showed promising results warranting further clinical evaluation.
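    For context on the efflux-ratio argument, the efflux ratio in a bidirectional transwell permeability assay is conventionally defined from the apparent permeabilities measured in the two directions; the expression below is general background and not a restatement of the specific assay conditions used in the study:

        \mathrm{ER} = \frac{P_{\mathrm{app}}(\mathrm{B \to A})}{P_{\mathrm{app}}(\mathrm{A \to B})}

    Values near 1 are consistent with passive permeation, whereas values well above unity indicate active efflux by transporters such as P-gp.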

    Integrating sequence and array data to create an improved 1000 Genomes Project haplotype reference panel

    A major use of the 1000 Genomes Project (1000GP) data is genotype imputation in genome-wide association studies (GWAS). Here we develop a method to estimate haplotypes from low-coverage sequencing data that can take advantage of single-nucleotide polymorphism (SNP) microarray genotypes on the same samples. First the SNP array data are phased to build a backbone (or 'scaffold') of haplotypes across each chromosome. We then phase the sequence data 'onto' this haplotype scaffold. This approach can take advantage of relatedness between sequenced and non-sequenced samples to improve accuracy. We use this method to create a new 1000GP haplotype reference set for use by the human genetic community. Using a set of validation genotypes at SNPs and bi-allelic indels, we show that these haplotypes have lower genotype discordance and improved imputation performance into downstream GWAS samples, especially at low-frequency variants. © 2014 Macmillan Publishers Limited. All rights reserved
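    To make the scaffold idea concrete, the toy function below assigns a heterozygous sequence-only variant to one of a sample's two scaffold haplotypes by a simple read-based majority vote. This is a deliberately simplified sketch with made-up names; the actual panel was built with statistical phasing models rather than direct read linkage.

# Toy illustration of phasing a new variant onto a fixed haplotype scaffold.
# All names and the majority-vote rule are hypothetical simplifications, not
# the statistical machinery used to build the 1000GP reference panel.

def assign_to_scaffold(scaffold_alleles, linking_reads):
    """Pick the scaffold haplotype (0 or 1) that carries the alternate allele.

    scaffold_alleles: (allele_on_hap0, allele_on_hap1) at a nearby array SNP
                      that is heterozygous in this sample.
    linking_reads:    (array_snp_allele, variant_allele) pairs, one per
                      sequencing read covering both sites; variant_allele is
                      1 for the alternate allele, 0 for the reference allele.
    """
    votes = [0, 0]
    for array_allele, variant_allele in linking_reads:
        if variant_allele != 1:          # only alternate-allele reads vote
            continue
        for hap in (0, 1):
            if scaffold_alleles[hap] == array_allele:
                votes[hap] += 1
    return 0 if votes[0] >= votes[1] else 1

# Scaffold haplotype 0 carries 'A' and haplotype 1 carries 'G' at the array SNP;
# three of four reads link the alternate allele to 'G', so haplotype 1 is chosen.
print(assign_to_scaffold(('A', 'G'), [('G', 1), ('G', 1), ('A', 0), ('G', 1)]))  # -> 1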

    The performance of the jet trigger for the ATLAS detector during 2011 data taking

    The performance of the jet trigger for the ATLAS detector at the LHC during the 2011 data-taking period is described. During 2011 the LHC provided proton–proton collisions with a centre-of-mass energy of 7 TeV and heavy-ion collisions with a 2.76 TeV per nucleon–nucleon collision energy. The ATLAS trigger is a three-level system designed to reduce the rate of events from the 40 MHz nominal maximum bunch-crossing rate to the approximately 400 Hz that can be written to offline storage. The ATLAS jet trigger is the primary means for the online selection of events containing jets. Events are accepted by the trigger if they contain one or more jets above some transverse energy threshold. During 2011 data taking the jet trigger was fully efficient for jets with transverse energy above 25 GeV for triggers seeded randomly at Level 1. For triggers which require a jet to be identified at each of the three trigger levels, full efficiency is reached for offline jets with transverse energy above 60 GeV. Jets reconstructed in the final trigger level, and corresponding to offline jets with transverse energy greater than 60 GeV, are reconstructed with a transverse-energy resolution, with respect to offline jets, of better than 4% in the central region and better than 2.5% in the forward direction.
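    To put the quoted rates in perspective, the required overall rejection factor is just the ratio of the input to the output rate; the short sketch below computes it and adds a commonly used error-function parametrization of a trigger turn-on curve. The parametrization and its numbers are illustrative assumptions, not the efficiency measurement reported in the paper.

import math

# Overall rejection factor implied by the quoted rates (40 MHz in, ~400 Hz out).
input_rate_hz = 40e6
output_rate_hz = 400.0
print(f"required overall rejection: {input_rate_hz / output_rate_hz:.0f}x")  # 100000x

# A commonly used parametrization of a trigger efficiency turn-on curve as a
# function of offline jet transverse energy. The midpoint and width below are
# illustrative values chosen so the curve is essentially fully efficient above
# 60 GeV; they are not fitted values from the paper.
def turn_on(et_offline_gev, midpoint_gev=45.0, width_gev=5.0):
    return 0.5 * (1.0 + math.erf((et_offline_gev - midpoint_gev)
                                 / (math.sqrt(2.0) * width_gev)))

for et in (30, 45, 60, 80):
    print(f"offline ET = {et:3d} GeV -> efficiency {turn_on(et):.3f}")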

    Search for the Standard Model Higgs boson decaying into bb̄ produced in association with top quarks decaying hadronically in pp collisions at √s = 8 TeV with the ATLAS detector

    A search for Higgs boson production in association with a pair of top quarks (tt̄H) is performed, where the Higgs boson decays to bb̄, and both top quarks decay hadronically. The data used correspond to an integrated luminosity of 20.3 fb−1 of pp collisions at √s = 8 TeV collected with the ATLAS detector at the Large Hadron Collider. The search selects events with at least six energetic jets and uses a boosted decision tree algorithm to discriminate between signal and Standard Model background. The dominant multijet background is estimated using a dedicated data-driven technique. For a Higgs boson mass of 125 GeV, an upper limit of 6.4 (5.4) times the Standard Model cross section is observed (expected) at 95% confidence level. The best-fit value for the signal strength is μ = 1.6 ± 2.6 times the Standard Model expectation for m_H = 125 GeV. Combining all tt̄H searches carried out by ATLAS at √s = 8 and 7 TeV, an observed (expected) upper limit of 3.1 (1.4) times the Standard Model expectation is obtained at 95% confidence level, with a signal strength μ = 1.7 ± 0.8.
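    For reference, the signal strength quoted above is the conventional ratio of the measured production cross section times branching ratio to its Standard Model prediction; this is standard usage rather than anything specific to this analysis:

        \mu = \frac{\sigma(pp \to t\bar{t}H)\,\mathrm{BR}(H \to b\bar{b})}{\bigl[\sigma(pp \to t\bar{t}H)\,\mathrm{BR}(H \to b\bar{b})\bigr]_{\mathrm{SM}}}

    so \mu = 1 corresponds exactly to the Standard Model expectation and \mu = 0 to no signal.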

    Computational pan-genomics: Status, promises and challenges

    Many disciplines, from human genetics and oncology to plant breeding, microbiology and virology, commonly face the challenge of analyzing rapidly increasing numbers of genomes. In the case of Homo sapiens, the number of sequenced genomes will approach hundreds of thousands in the next few years. Simply scaling up established bioinformatics pipelines will not be sufficient for leveraging the full potential of such rich genomic data sets. Instead, novel, qualitatively different computational methods and paradigms are needed. We will witness the rapid extension of computational pan-genomics, a new sub-area of research in computational biology. In this article, we generalize existing definitions and understand a pan-genome as any collection of genomic sequences to be analyzed jointly or to be used as a reference. We examine already available approaches to construct and use pan-genomes, discuss the potential benefits of future technologies and methodologies and review open challenges from the vantage point of the above-mentioned biological disciplines. As a prominent example of a computational paradigm shift, we particularly highlight the transition from the representation of reference genomes as strings to representations as graphs.
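    As a concrete illustration of the string-to-graph transition, the toy sketch below encodes a reference sequence with a single alternate allele as a small sequence graph; the data structure is a deliberately minimal assumption and does not correspond to any particular pan-genome tool or file format.

# Toy sequence graph: the reference is split at one variant site, and the
# reference and alternate alleles become parallel nodes. Minimal illustration
# only; real pan-genome graphs also track coordinates, paths, samples, etc.

nodes = {
    1: "ACGT",   # reference segment before the variant
    2: "G",      # reference allele at the variant site
    3: "T",      # alternate allele observed in another genome
    4: "TTCA",   # reference segment after the variant
}
edges = {(1, 2), (1, 3), (2, 4), (3, 4)}

def spell(path):
    """Concatenate node sequences along a path, checking every step is an edge."""
    assert all((a, b) in edges for a, b in zip(path, path[1:]))
    return "".join(nodes[n] for n in path)

print(spell([1, 2, 4]))  # reference haplotype:  ACGTGTTCA
print(spell([1, 3, 4]))  # alternate haplotype:  ACGTTTTCA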

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
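    To give a flavour of the Numba approach described above, the sketch below defines a minimal CUDA kernel with Numba's @cuda.jit decorator that applies a simple exponential attenuation to an array of charges element-wise. It is purely illustrative, not code from the DUNE simulator, and it needs a CUDA-capable GPU to run.

import math
import numpy as np
from numba import cuda

# Minimal Numba CUDA kernel (illustrative only): attenuate each drifting charge
# by exp(-t_drift / tau), a stand-in for the kind of per-element physics step
# that parallelizes well across GPU threads.
@cuda.jit
def attenuate(charges, drift_times, tau, out):
    i = cuda.grid(1)                     # global thread index
    if i < charges.size:
        out[i] = charges[i] * math.exp(-drift_times[i] / tau)

n = 1_000_000
charges = np.random.rand(n).astype(np.float32)
drift_times = np.random.rand(n).astype(np.float32)
out = np.zeros_like(charges)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
# NumPy arrays passed here are transferred to the device automatically; in a
# real simulation chain explicit cuda.to_device() calls avoid repeated copies.
attenuate[blocks, threads_per_block](charges, drift_times, np.float32(0.1), out)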

    Search for the Higgs boson produced in association with a W boson and decaying to four b-quarks via two spin-zero particles in pp collisions at 13 TeV with the ATLAS detector

    This paper presents a dedicated search for exotic decays of the Higgs boson to a pair of new spin-zero particles, H → aa, where the particle a decays to b-quarks and has a mass in the range of 20–60 GeV. The search is performed in events where the Higgs boson is produced in association with a W boson, giving rise to a signature of a lepton (electron or muon), missing transverse momentum, and multiple jets from b-quark decays. The analysis is based on the full dataset of pp collisions at √s = 13 TeV recorded in 2015 by the ATLAS detector at the CERN Large Hadron Collider, corresponding to an integrated luminosity of 3.2 fb−1. No significant excess of events above the Standard Model prediction is observed, and a 95% confidence-level upper limit is derived for the product of the production cross section for pp → WH times the branching ratio for the decay H → aa → 4b. The upper limit ranges from 6.2 pb for an a-boson mass m_a = 20 GeV to 1.5 pb for m_a = 60 GeV.