    The ALICE EMCal L1 trigger first year of operation experience

    The ALICE experiment at the LHC is equipped with an electromagnetic calorimeter (EMCal) designed to enhance its capabilities for jet, photon and electron measurement. In addition, the EMCal enables triggering on jets and photons with a centrality-dependent energy threshold. After its commissioning in 2010, the EMCal Level 1 (L1) trigger was officially approved for physics data taking in 2011. After describing the L1 hardware and trigger algorithms, the commissioning and the first year of running experience, in both proton and heavy-ion beams, are reviewed. Additionally, the upgrades to the original L1 trigger design are detailed. (Comment: Proceedings of TWEPP-12, Oxford. 10 pages, 9 figures.)
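
    The abstract does not spell out the trigger algorithm, but the kind of decision it describes can be sketched in a few lines: sum tower energies over sliding patches and compare the maximum against a threshold that rises with centrality. Everything below (patch size, the linear threshold model, all numbers) is an illustrative assumption, not the EMCal firmware logic:

        import numpy as np

        def patch_energies(towers, patch=4):
            """Tower-energy sums over all patch x patch sliding windows."""
            # A 2D cumulative sum makes every window sum an O(1) lookup.
            c = np.cumsum(np.cumsum(towers, axis=0), axis=1)
            c = np.pad(c, ((1, 0), (1, 0)))
            return (c[patch:, patch:] - c[:-patch, patch:]
                    - c[patch:, :-patch] + c[:-patch, :-patch])

        def l1_fires(towers, centrality, a=6.0, b=-5.0):
            """Fire if any patch exceeds a centrality-dependent threshold (GeV).

            centrality is 0 for the most central events; the linear model
            a + b * centrality is a placeholder, not the published one.
            """
            threshold = a + b * centrality
            return bool((patch_energies(towers) > threshold).any())

        # Toy event: quiet towers plus one moderately energetic cluster.
        rng = np.random.default_rng(0)
        towers = rng.exponential(0.02, size=(48, 64))
        towers[20:22, 30:32] += 1.0
        print(l1_fires(towers, centrality=0.1))  # central: high threshold
        print(l1_fires(towers, centrality=0.9))  # peripheral: low threshold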

    Level-1 jet trigger hardware for the ALICE electromagnetic calorimeter at LHC

    The ALICE experiment at the LHC is equipped with an electromagnetic calorimeter (EMCal) designed to enhance its capabilities for jet measurement. In addition, the EMCal enables triggering on high-energy jets. Building on the earlier development for the Photon Spectrometer (PHOS) level-0 trigger, a dedicated electronics upgrade was designed to allow fast triggering on high-energy jets (level-1). This development was made possible by the latest generation of FPGAs, which can handle the instantaneous incoming data rate of 26 Gbit/s and process it in less than 4 μs. (Comment: Proceedings of TWEPP-10, Aachen. 6 pages, 4 figures.)
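
    To put the quoted figures in perspective, a back-of-the-envelope calculation shows how much data is in flight during one decision and the per-cycle throughput the logic must sustain. Only the 26 Gbit/s rate and the 4 μs budget come from the abstract; the 200 MHz FPGA clock is an assumption for illustration:

        # Rough numbers for the L1 jet trigger data path.
        rate_bps = 26e9    # instantaneous input rate (from the abstract)
        latency_s = 4e-6   # level-1 decision budget (from the abstract)
        clock_hz = 200e6   # assumed FPGA clock, illustration only

        bits_in_flight = rate_bps * latency_s   # buffered per decision window
        cycles = latency_s * clock_hz           # clock cycles available
        bits_per_cycle = rate_bps / clock_hz    # required per-cycle throughput

        print(f"{bits_in_flight / 8 / 1024:.1f} KiB buffered per decision window")
        print(f"{cycles:.0f} clock cycles to decide")
        print(f"{bits_per_cycle:.0f} bits consumed per cycle")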

    Prediction of gene–phenotype associations in humans, mice, and plants using phenologs

    Background: Phenotypes and diseases may be related to seemingly dissimilar phenotypes in other species through the orthology of the underlying genes. Such “orthologous phenotypes,” or “phenologs,” are examples of deep homology and may be used to predict additional candidate disease genes.
    Results: In this work, we develop an unsupervised algorithm for ranking phenolog-based candidate disease genes through the integration of predictions from the k nearest-neighbor phenologs, comparing classifiers and weighting functions by cross-validation. We also improve upon the original method by extending the theory to paralogous phenotypes. Our algorithm makes use of additional phenotype data from chicken, zebrafish, and E. coli, as well as new datasets for C. elegans, establishing that several types of annotations may be treated as phenotypes. We demonstrate the use of our algorithm to predict novel candidate genes for human atrial fibrillation (such as HRH2, ATP4A, ATP4B, and HOPX) and epilepsy (e.g., PAX6 and NKX2-1). We suggest gene candidates for pharmacologically induced seizures in mouse based solely on orthologous phenotypes from E. coli. We also explore the prediction of plant gene–phenotype associations, as for the Arabidopsis response-to-vernalization phenotype.
    Conclusions: We are able to rank gene predictions for a significant portion of the diseases in the Online Mendelian Inheritance in Man database. Additionally, our method suggests candidate genes for mammalian seizures based only on bacterial phenotypes and gene orthology. We demonstrate that phenotype information may come from diverse sources, including drug sensitivities, gene ontology biological processes, and in situ hybridization annotations. Finally, we offer testable candidates for a variety of human diseases, plant traits, and other classes of phenotypes across a wide array of species.
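
    The core phenolog computation is compact enough to sketch: score each model-organism phenotype against a human phenotype by the hypergeometric significance of the overlap of their ortholog-mapped gene sets, then pool evidence for candidate genes from the k nearest phenologs. The toy data below is hypothetical, and the paper's cross-validated classifiers and weighting functions are not reproduced:

        from collections import defaultdict
        from scipy.stats import hypergeom

        # Toy data: genes already mapped to shared ortholog identifiers.
        human_pheno = {"g1", "g2", "g3"}          # genes of a human disease
        model_phenos = {                          # model-organism phenotypes
            "pheno_A": {"g2", "g3", "g4", "g5"},
            "pheno_B": {"g1", "g6"},
            "pheno_C": {"g7", "g8"},
        }
        N_ORTHOLOGS = 20  # total orthologous genes shared by the two species

        def phenolog_pvalue(set_a, set_b, n_total):
            """P(overlap >= observed) under the hypergeometric null."""
            k = len(set_a & set_b)
            return hypergeom.sf(k - 1, n_total, len(set_a), len(set_b))

        # Keep the k phenotypes with the most significant overlap.
        k = 2
        nearest = sorted(model_phenos.items(),
                         key=lambda kv: phenolog_pvalue(human_pheno, kv[1],
                                                        N_ORTHOLOGS))[:k]

        # A candidate's score sums (1 - p) over nearest phenologs containing it.
        scores = defaultdict(float)
        for name, genes in nearest:
            weight = 1.0 - phenolog_pvalue(human_pheno, genes, N_ORTHOLOGS)
            for g in genes - human_pheno:  # only novel candidates
                scores[g] += weight

        print(sorted(scores.items(), key=lambda kv: -kv[1]))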

    Mining Patents with Large Language Models Demonstrates Congruence of Functional Labels and Chemical Structures

    Predicting chemical function from structure is a major goal of the chemical sciences, from the discovery and repurposing of novel drugs to the creation of new materials. Recently, new machine learning algorithms have opened up the possibility of general predictive models spanning many different chemical functions. Here, we consider the challenge of applying large language models to chemical patents in order to consolidate and leverage the information about chemical functionality captured by these resources. Chemical patents contain vast knowledge on chemical function, but their usefulness as a dataset has historically been neglected due to the impracticality of extracting high-quality functional labels. Using a scalable ChatGPT-assisted patent summarization and word-embedding label cleaning pipeline, we derive a Chemical Function (CheF) dataset containing 100K molecules and their patent-derived functional labels. The functional labels were validated to be of high quality, allowing us to detect a strong relationship between functional label and chemical structural spaces. Further, we find that the co-occurrence graph of the functional labels contains a robust semantic structure, which in turn allowed us to examine functional relatedness among the compounds. We then trained a model on the CheF dataset, allowing us to assign new functional labels to compounds. Using this model, we were able to retrodict approved Hepatitis C antivirals, uncover an antiviral mechanism undisclosed in the patent, and identify plausible serotonin-related drugs. The CheF dataset and associated model offer a promising new approach to predicting chemical functionality. (Comment: Under review.)
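
    The last step, assigning functional labels to new compounds from structure alone, can be sketched as an ordinary multi-label model over molecular fingerprints. This is a generic stand-in, not the paper's actual architecture; the SMILES strings and labels are toy examples, and RDKit and scikit-learn are assumed to be installed:

        import numpy as np
        from rdkit import Chem, DataStructs
        from rdkit.Chem import AllChem
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.preprocessing import MultiLabelBinarizer

        def fingerprint(smiles, n_bits=1024):
            """Morgan (ECFP-like) bit fingerprint as a NumPy array."""
            fp = AllChem.GetMorganFingerprintAsBitVect(
                Chem.MolFromSmiles(smiles), 2, nBits=n_bits)
            arr = np.zeros((n_bits,), dtype=np.int8)
            DataStructs.ConvertToNumpyArray(fp, arr)
            return arr

        # Toy stand-ins for patent-derived (molecule, labels) pairs.
        data = [
            ("CC(=O)Oc1ccccc1C(=O)O", ["analgesic", "anti-inflammatory"]),
            ("CC(C)Cc1ccc(cc1)C(C)C(=O)O", ["anti-inflammatory"]),
            ("CN1CCC[C@H]1c1cccnc1", ["stimulant"]),
        ]
        X = np.stack([fingerprint(s) for s, _ in data])
        mlb = MultiLabelBinarizer()
        Y = mlb.fit_transform([labels for _, labels in data])

        clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
        query = fingerprint("CC(=O)Nc1ccc(O)cc1")[None, :]  # paracetamol
        print(dict(zip(mlb.classes_, clf.predict_proba(query)[0].round(2))))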

    Dynamical mechanism of atrial fibrillation: a topological approach

    While spiral wave breakup has been implicated in the emergence of atrial fibrillation, its role in maintaining this complex type of cardiac arrhythmia is less clear. We used the Karma model of cardiac excitation to investigate the dynamical mechanisms that sustain atrial fibrillation once it has been established. The results of our numerical study show that spatiotemporally chaotic dynamics in this regime can be described as a dynamical equilibrium between topologically distinct types of transitions that increase or decrease the number of wavelets, in general agreement with the multiple-wavelets hypothesis. Surprisingly, we found that the process of continuous excitation waves breaking up into discontinuous pieces plays no role whatsoever in maintaining spatiotemporal complexity. Instead, this complexity is maintained as a dynamical balance between wave coalescence, a previously unidentified topological process that increases the number of wavelets, and wave collapse, a different topological process that decreases their number. (Comment: 15 pages, 14 figures.)
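
    The topological bookkeeping behind this balance is easy to make concrete: spiral-wave tips are phase singularities, so counting wavelets amounts to computing the winding number of a phase field around each lattice plaquette. The sketch below runs on a synthetic phase field and is a generic illustration, not the Karma-model analysis itself:

        import numpy as np

        def wrap(d):
            """Wrap phase differences into (-pi, pi]."""
            return (d + np.pi) % (2 * np.pi) - np.pi

        def winding_numbers(theta):
            """Winding number of the phase around every 2x2 plaquette.

            A value of +1 or -1 marks a phase singularity, i.e. a wave tip;
            the number of nonzero plaquettes counts the wavelets.
            """
            d1 = wrap(theta[1:, :-1] - theta[:-1, :-1])  # up the left edge
            d2 = wrap(theta[1:, 1:] - theta[1:, :-1])    # across the top
            d3 = wrap(theta[:-1, 1:] - theta[1:, 1:])    # down the right edge
            d4 = wrap(theta[:-1, :-1] - theta[:-1, 1:])  # back along the bottom
            return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

        # Synthetic field with one +1 and one -1 singularity.
        y, x = np.mgrid[0:64, 0:64]
        theta = (np.arctan2(y - 20.5, x - 20.5)
                 - np.arctan2(y - 43.5, x - 43.5))
        w = winding_numbers(theta)
        print("wave tips:", np.count_nonzero(w), "net charge:", w.sum())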

    NcPred for accurate nuclear protein prediction using n-mer statistics with various classification algorithms

    Prediction of nuclear proteins is one of the major challenges in genome annotation. A method, NcPred, is described for predicting nuclear proteins with higher accuracy by exploiting n-mer statistics with different classification algorithms, namely Alternating Decision (AD) Tree, Best First (BF) Tree, Random Tree, and Adaptive Boosting (AdaBoost). On the BaCello dataset [1], NcPred improves accuracy by about 20% with Random Tree and sensitivity by about 10% with AdaBoost for animal proteins compared to existing techniques. It also increases the accuracy of fungal protein prediction by 20% and recall by 4% with AD Tree. For human proteins, accuracy is improved by about 25% and sensitivity by about 10% with BF Tree. Performance analysis clearly demonstrates NcPred's suitability over contemporary in-silico nuclear protein classification approaches.
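
    The n-mer feature construction is simple to sketch with scikit-learn, treating each protein sequence as a character string and counting all short substrings. The sequences and labels below are toy data, and of the four classifiers only boosting has a direct scikit-learn counterpart (AdaBoostClassifier); the tree variants come from other toolkits in the original work:

        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.feature_extraction.text import CountVectorizer

        # Toy protein sequences, labelled nuclear (1) or non-nuclear (0).
        seqs = ["MKRKLEEDE", "MSTNPKPQR", "MKKRRSPSP",
                "MAVLGLLFC", "MPRRKAEED", "MALWMRLLP"]
        labels = [1, 0, 1, 0, 1, 0]

        # n-mer statistics: counts of all 1-, 2- and 3-mers of amino acids.
        vec = CountVectorizer(analyzer="char", ngram_range=(1, 3))
        X = vec.fit_transform(seqs)

        clf = AdaBoostClassifier(n_estimators=50, random_state=0)
        clf.fit(X, labels)
        print(clf.predict(vec.transform(["MGRKRKWSQ"])))  # new sequence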

    Second best toll and capacity optimisation in networks: solution algorithm and policy implications

    This paper examines the first- and second-best jointly optimal toll and road capacity investment problems from both policy and technical perspectives. On the technical side, the paper investigates the applicability of the constraint-cutting algorithm for solving the second-best problem under elastic demand, which is formulated as a bilevel programming problem. The approach is shown to perform well despite several problems encountered in our previous work in Shepherd and Sumalee (2004). The paper then applies the algorithm to a small network to investigate the policy implications of the first- and second-best cases. This policy analysis demonstrates that the jointly optimal first-best strategy is to invest in the most direct routes while reducing capacities elsewhere. Whilst unrealistic, this acts as a useful benchmark. The results also show that certain second-best policies can achieve a high proportion of the first-best benefits while, in general, generating a revenue surplus. We also show that unless capacity costs are known to be low, second-best tolls will be affected and should therefore be analysed in conjunction with investments in the network.
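
    The flavour of the second-best problem is easy to reproduce on a toy network: two parallel links, only one of which can be tolled, with linear link cost and demand functions. The upper level searches over the toll while the lower level returns the user-equilibrium flows. All numbers are made up, capacity investment is omitted, and the paper's constraint-cutting algorithm is not reproduced:

        import numpy as np

        # Two parallel links; only link 1 is tollable (second-best setting).
        a = np.array([1.0, 2.0])    # free-flow travel times
        b = np.array([0.02, 0.01])  # congestion slopes: t_i = a_i + b_i * f_i
        p0, m = 10.0, 0.05          # linear inverse demand P(q) = p0 - m * q

        def user_equilibrium(tau):
            """Wardrop flows with toll tau on link 1 (both links used)."""
            # Equal generalized costs on both links, both equal to P(f1 + f2).
            A = np.array([[b[0] + m, m], [m, b[1] + m]])
            rhs = np.array([p0 - a[0] - tau, p0 - a[1]])
            return np.linalg.solve(A, rhs)

        def welfare(tau):
            """Consumer benefit minus travel cost; the toll is a transfer."""
            f = user_equilibrium(tau)
            q = f.sum()
            benefit = p0 * q - 0.5 * m * q**2
            cost = np.sum(f * (a + b * f))
            return benefit - cost

        taus = np.linspace(0.0, 2.0, 201)  # flows stay positive on this range
        best = taus[np.argmax([welfare(t) for t in taus])]
        print(f"second-best toll on link 1: {best:.2f}")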