
    FastEpistasis: a high performance computing solution for quantitative trait epistasis

    Motivation: Genome-wide association studies have become widely used tools to study the effects of genetic variants on complex diseases. While it is of great interest to extend existing analysis methods by considering interaction effects between pairs of loci, the large number of possible tests presents a significant computational challenge. The number of computations is further multiplied in the study of gene expression quantitative trait mapping, in which tests are performed for thousands of gene phenotypes simultaneously. Results: We present FastEpistasis, an efficient parallel solution extending the PLINK epistasis module, designed to test for epistasis effects when analyzing continuous phenotypes. Our results show that the algorithm scales with the number of processors and offers a reduction in computation time when several phenotypes are analyzed simultaneously. FastEpistasis is capable of testing the association of a continuous trait with all single nucleotide polymorphism (SNP) pairs from 500 000 SNPs, totaling 125 billion tests, in a population of 5000 individuals in 29, 4 or 0.5 days using 8, 64 or 512 processors. Availability: FastEpistasis is open source and available free of charge to non-commercial users from http://www.vital-it.ch/software/FastEpistasis Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
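
For orientation, a minimal sketch of the kind of per-pair test being parallelized: a linear model with an interaction term between two SNPs and a continuous phenotype. The data are simulated and the code is illustrative only; it is not the FastEpistasis implementation, which uses its own optimized computation.

```python
# Minimal sketch (illustrative only): one SNP-pair epistasis test for a
# continuous phenotype, fitting y ~ g1 + g2 + g1*g2 and inspecting the
# interaction coefficient. Genotypes and phenotype are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000                                       # individuals
g1 = rng.integers(0, 3, size=n)                # SNP 1, additive coding 0/1/2
g2 = rng.integers(0, 3, size=n)                # SNP 2
y = 0.1 * g1 + 0.05 * g1 * g2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([g1, g2, g1 * g2]).astype(float))
fit = sm.OLS(y, X).fit()
print(f"interaction beta = {fit.params[3]:.3f}, p = {fit.pvalues[3]:.2e}")
```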

    Comparison of Strategies to Detect Epistasis from eQTL Data

    Genome-wide association studies have been instrumental in identifying genetic variants associated with complex traits such as human disease or gene expression phenotypes. It has been proposed that extending existing analysis methods by considering interactions between pairs of loci may uncover additional genetic effects. However, the large number of possible two-marker tests presents significant computational and statistical challenges. Although several strategies to detect epistasis effects have been proposed and tested for specific phenotypes, so far there has been no systematic attempt to compare their performance using real data. We made use of thousands of gene expression traits from linkage and eQTL studies to compare the performance of different strategies. We found that using information from marginal associations between markers and phenotypes to detect epistatic effects yielded a lower false discovery rate (FDR) than a strategy solely using biological annotation in yeast, whereas results from human data were inconclusive. For future studies whose aim is to discover epistatic effects, we recommend incorporating information about marginal associations between SNPs and phenotypes instead of relying solely on biological annotation. Improved methods to discover epistatic effects will result in a more complete understanding of complex genetic effects.
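
A toy sketch of the recommended strategy, assuming simulated genotypes and a simple correlation-based screen: rank SNPs by marginal association with the phenotype and test interactions only among the top-ranked markers. Thresholds, data and the interaction screen are illustrative, not those used in the study.

```python
# Toy sketch of a marginal-association filtering strategy: rank SNPs by their
# single-marker association with the phenotype, then screen interactions only
# among the top-ranked markers. Data, cut-offs and the crude product-term
# screen are illustrative; a full analysis would fit a proper interaction model.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_ind, n_snp = 500, 200
geno = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)
pheno = 0.5 * geno[:, 0] + rng.normal(size=n_ind)

# Step 1: marginal association p-value for every SNP.
marginal_p = np.array([stats.pearsonr(geno[:, j], pheno)[1] for j in range(n_snp)])
candidates = np.argsort(marginal_p)[:20]       # keep the 20 strongest markers

# Step 2: pairwise screen restricted to the candidate set.
for i, j in itertools.combinations(sorted(candidates), 2):
    _, p = stats.pearsonr(geno[:, i] * geno[:, j], pheno)
    if p < 1e-4:
        print(f"candidate epistatic pair ({i}, {j}): p = {p:.2e}")
```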

    Numerical study of models of quantum chaos

    This work is dedicated to the study of models of quantum chaos. It is clear that one cannot define chaos in quantum mechanics the same way it is defined in classical dynamics. The classical definition of chaos relies upon the ideas of sensitive dependence on initial conditions and exponentially diverging trajectories. This type of definition is simply not possible in quantum mechanics. One approach is to ignore the question of defining chaos and to concentrate on identifying features of quantum systems that correspond to chaos in classical systems. A great deal of progress was made along these lines in the second half of the last century, and Random Matrix Theory (RMT), created by Wigner, was found to statistically describe the spectra of quantum systems whose classical counterparts are chaotic. In contrast, regular systems obey a Poissonian nearest-neighbor level distribution. But most studies have focused on simple systems in the semi-classical limit ℏ → 0. No random matrix theory model is presently known to reproduce the measured statistics of mixed systems (with phase space containing both regular and chaotic components). These mixed systems are not the exception but rather correspond to the usual situation in physical examples. One of the motivations of this work is to test the validity of a class of RMT models, the Porter-Rosenzweig (PR) model, in describing the statistics of mixed systems. To this end we look at two cases: quantum graphs, which are useful because of their simplicity, and the hydrogen atom in a uniform magnetic field, because of its obvious interest. We observe a crossover from Poisson statistics to Wigner statistics in the chaotic region. Close to the ionization threshold, a crystalline structure of energy levels is revealed. We think that these crossover effects are important in more general situations and are, in a sense, characteristic of mixed systems.

    The first part of this dissertation is concerned with the physics of the problems. We compute the statistical behavior of energy levels at short and long range and compare it with the PR model. Very few results exist on the statistical functions of the PR model in the case of interest. Its underlying complexity makes it very difficult to achieve a simple analysis without approximations. Although some results exist for Hamiltonians that break time-reversal symmetry, the time-reversal-invariant case, which is encountered more often, is more challenging. It is formally solved, but in its present form the solution is not readily exploitable. For the hydrogen atom in a uniform magnetic field, we develop a new technique to compute the integrated density of states up to third order. To the best of our knowledge, this has never been done before. We further demonstrate that the first-order term, namely the Thomas-Fermi approximation, is not sufficient to faithfully describe the numerical results. Finally, the question of wave-packet dynamics is addressed by looking at the survival probability of a given initial state. Predictions of RMT are compared with the numerical results in the chaotic region. In agreement with RMT, it is shown that the histogram of the survival probability at fixed time is independent of the initial state and is exponential. This is done fully for quantum graphs, whereas for the hydrogen atom in a uniform magnetic field only limited results have been obtained.

    The second part is dedicated to the numerical aspects and theories linked to the solution of the hydrogen atom in a uniform magnetic field using an engineering method, namely the Finite Element Method (FEM). Emphasis is given to the unusual requirement of high precision within a dense part of the spectrum. It is indeed quite unusual to apply FEM in the field of quantum physics, where spectral decomposition techniques dominate. Although computing the highly excited energies of the hydrogen atom in a uniform magnetic field is especially difficult, we demonstrate that one can achieve a high degree of accuracy. Furthermore, the technique is not restricted to extremely high or low magnetic field strengths but, contrary to spectral decomposition techniques, can be applied over the entire range.
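
As a point of reference for the level statistics discussed above, the following sketch contrasts the two limiting nearest-neighbor spacing laws the thesis interpolates between: the Poisson law of regular spectra and the Wigner surmise of chaotic (GOE) spectra, using a simulated GOE matrix. It is not the Porter-Rosenzweig or hydrogen-atom computation itself.

```python
# Reference illustration: nearest-neighbor level-spacing statistics of a
# simulated GOE matrix compared with the Wigner surmise (chaotic limit) and
# the Poisson law (regular limit). Crude unfolding by the mean spacing only.
import numpy as np

rng = np.random.default_rng(2)
N = 1000
A = rng.normal(size=(N, N))
H = (A + A.T) / np.sqrt(2)                 # GOE random matrix
ev = np.linalg.eigvalsh(H)
bulk = ev[N // 4: 3 * N // 4]              # stay in the bulk of the spectrum
s = np.diff(bulk)
s /= s.mean()                              # normalize to unit mean spacing

hist, edges = np.histogram(s, bins=np.linspace(0, 3, 13), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("  s   empirical  Wigner  Poisson")
for c, h in zip(centers, hist):
    wigner = (np.pi / 2) * c * np.exp(-np.pi * c ** 2 / 4)   # Wigner surmise
    print(f"{c:5.2f}  {h:9.3f}  {wigner:6.3f}  {np.exp(-c):7.3f}")
```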

    Do I need a robot or a nonrobot automated system?

    In manufacturing automation, the current focus is on integration at the enterprise level. This, however, has not reduced the need for automation on the shop floor; on the contrary, it has become more pressing. On the shop floor, major manufacturing costs are related to handling and assembly operations. Difficulties in automating those operations are twofold. First, there is usually a line beyond which manual operation remains the most economical solution today. Second, for each application, the proper level of sophistication in terms of automation technology should be adopted. The paper concentrates on the second difficulty. Most often, products and processes can be designed so as to require fairly simple automated systems for their manufacturing (on/off devices, programmable logic controllers, independent actuators, etc.). But when the application features significant position and/or orientation uncertainties (a mathematical space of dimension 3 or more), some kind of perception is required to cope with them, and this usually results in adaptive workpiece or tool trajectories. Robotics provides unmatched solutions for such multi-dimensional, coordinated motions. General guidelines are introduced in order to select the appropriate type of component for automation. Two case studies follow. In both cases, the application is complex in terms of parameter variability. Many examples are given of the correspondence between the guidelines and concrete, low-level details. In particular, while in the first case automated solutions can be devised with traditional, multiple independent servoed motions and general-purpose computers, in the second case industrial robotics with dedicated controllers provides the right answer.
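
A toy encoding of the selection rule suggested by the abstract, assuming a simplified reading of the guidelines: coordinated multi-axis robotics when position/orientation uncertainty spans three or more dimensions and perception is needed, simpler fixed automation otherwise. The function and thresholds are hypothetical, not the paper's full guidelines.

```python
# Hypothetical encoding of the selection rule sketched in the abstract:
# significant position/orientation uncertainty in three or more dimensions
# calls for perception and coordinated multi-axis motion, i.e. a robot;
# otherwise simpler fixed automation may suffice. Not the paper's guidelines.
def suggested_automation(uncertain_dimensions: int, needs_perception: bool) -> str:
    """Return a coarse recommendation for the type of automation component."""
    if uncertain_dimensions >= 3 or needs_perception:
        return "industrial robot with dedicated controller"
    return "fixed automation (on/off devices, PLCs, independent actuators)"

print(suggested_automation(uncertain_dimensions=1, needs_perception=False))
print(suggested_automation(uncertain_dimensions=4, needs_perception=True))
```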

    Machine learning using the extreme gradient boosting (XGBoost) algorithm predicts 5-day delta of SOFA score at ICU admission in COVID-19 patients

    Background: Accurate risk stratification of critically ill patients with coronavirus disease 2019 (COVID-19) is essential for optimizing resource allocation, delivering targeted interventions, and maximizing patient survival probability. Machine learning (ML) techniques are attracting increased interest for the development of prediction models as they excel in the analysis of complex signals in data-rich environments such as critical care. Methods: We retrieved data on patients with COVID-19 admitted to an intensive care unit (ICU) between March and October 2020 from the RIsk Stratification in COVID-19 patients in the Intensive Care Unit (RISC-19-ICU) registry. We applied the Extreme Gradient Boosting (XGBoost) algorithm to the data to predict, as a binary outcome, the increase or decrease in patients’ Sequential Organ Failure Assessment (SOFA) score on day 5 after ICU admission. The model was iteratively cross-validated in different subsets of the study cohort. Results: The final study population consisted of 675 patients. The XGBoost model correctly predicted a decrease in SOFA score in 320/385 (83%) critically ill COVID-19 patients, and an increase in the score in 210/290 (72%) patients. The area under the mean receiver operating characteristic curve for XGBoost was significantly higher than that for the logistic regression model (0.86 vs. 0.69, P < 0.01 [paired t-test with 95% confidence interval]). Conclusions: The XGBoost model predicted the change in SOFA score in critically ill COVID-19 patients admitted to the ICU and can guide clinical decision support systems (CDSSs) aimed at optimizing available resources.
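
A sketch of the evaluation workflow described above, assuming a synthetic feature matrix in place of the (non-public) RISC-19-ICU registry variables: an XGBoost binary classifier for the day-5 SOFA change, compared with a logistic-regression baseline via cross-validated ROC AUC.

```python
# Sketch of the modelling workflow: XGBoost binary classifier vs. logistic
# regression, scored with cross-validated ROC AUC. The feature matrix is
# synthetic; the RISC-19-ICU registry variables are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for admission-day clinical features of 675 patients.
X, y = make_classification(n_samples=675, n_features=30, n_informative=10,
                           random_state=0)

xgb = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                    eval_metric="logloss")
logreg = LogisticRegression(max_iter=1000)

auc_xgb = cross_val_score(xgb, X, y, cv=5, scoring="roc_auc")
auc_lr = cross_val_score(logreg, X, y, cv=5, scoring="roc_auc")
print(f"XGBoost mean AUC: {auc_xgb.mean():.2f}")
print(f"LogReg mean AUC:  {auc_lr.mean():.2f}")
```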

    AntibioticScout.ch: a decision-support tool for the prudent use of antimicrobial agents: application in small animal medicine

    INTRODUCTION: Bacterial resistance to antimicrobial drugs poses serious public health challenges. The observed increase in resistance is attributed to the uncontrolled, massive and often unnecessary administration of antibiotics in both human and veterinary medicine. To support the responsible use of antimicrobials in animals and to help veterinarians select the most suitable antimicrobial drugs, we developed AntibioticScout.ch, a comprehensive decision-support tool providing online access to current knowledge of rational antibiotic prescription practices. User-friendly search functions allow for the fast and efficient retrieval of information, which is structured in the database by animal species, organ system and therapeutic indication. In addition, an online form allows users to report treatment failures in order to identify problematic cases as well as ensuing risks and to take appropriate mitigation measures. The present report describes the workflow of this decision support system applied to the prudent use of antimicrobials in companion animal medicine.
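
A hypothetical sketch of how a lookup keyed by animal species, organ system and therapeutic indication might be organized; the keys and recommendation text below are invented for illustration and are not taken from the AntibioticScout.ch database.

```python
# Hypothetical sketch only: a recommendation lookup keyed by species, organ
# system and indication. Keys and text are invented, not taken from
# AntibioticScout.ch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    first_line: str
    note: str

guide = {
    ("dog", "urinary tract", "uncomplicated cystitis"): Recommendation(
        first_line="example drug A",
        note="illustrative entry only; consult the actual guideline",
    ),
}

def lookup(species: str, organ_system: str, indication: str) -> Optional[Recommendation]:
    """Return the stored recommendation for the given keys, if any."""
    return guide.get((species.lower(), organ_system.lower(), indication.lower()))

print(lookup("Dog", "Urinary tract", "Uncomplicated cystitis"))
```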