
    Entropy of leukemia on multidimensional morphological and molecular landscapes

    Leukemia epitomizes the class of highly complex diseases that new technologies aim to tackle by using large sets of single-cell level information. Achieving this goal depends critically not only on experimental techniques but also on approaches to interpret the data. A most pressing issue is to identify the salient quantitative features of the disease from the resulting massive amounts of information. Here, I show that the entropies of cell-population distributions on specific multidimensional molecular and morphological landscapes provide a set of measures for the precise characterization of normal and pathological states, such as those corresponding to healthy individuals and acute myeloid leukemia (AML) patients. I provide a systematic procedure to identify the specific landscapes and illustrate how, applied to cell samples from peripheral blood and bone marrow aspirates, this characterization accurately diagnoses AML from flow cytometry data alone. The methodology can generally be applied to other types of cell populations and establishes a straightforward link between traditional statistical-thermodynamics methodology and biomedical applications.
    Comment: 15 pages, 4 figures, and supplementary information
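    The entropy measure at the heart of this abstract can be sketched as a Shannon entropy over the occupied bins of a landscape histogram. This is a minimal illustration only; the function name, the choice of base-2 logarithm, and the binning are mine, not the paper's.

    ```python
    import numpy as np

    def landscape_entropy(counts):
        """Shannon entropy (in bits) of a cell-population distribution.

        `counts` holds cell counts per bin of a (possibly multidimensional)
        morphological/molecular landscape; empty bins contribute nothing.
        """
        counts = np.asarray(counts, dtype=float).ravel()
        p = counts / counts.sum()   # normalize to a probability distribution
        p = p[p > 0]                # treat 0 * log(0) as 0
        return float(-(p * np.log2(p)).sum())

    # A population spread uniformly over 4 bins gives the maximal entropy
    # log2(4) = 2 bits; a population concentrated in one bin gives 0 bits.
    ```

    Comparing such entropies between samples is one plausible way to turn the paper's "characterization of normal and pathological states" into a single number per landscape.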

    RACS: Rapid Analysis of ChIP-Seq data for contig based genomes

    Background: Chromatin immunoprecipitation coupled to next-generation sequencing (ChIP-Seq) is a widely used technique to investigate the function of chromatin-related proteins in a genome-wide manner. ChIP-Seq generates large quantities of data which can be difficult to process and analyse, particularly for organisms with contig-based genomes. Contig-based genomes often have poor annotations for cis-elements, for example enhancers, that are important for gene expression. Poorly annotated genomes make a comprehensive analysis of ChIP-Seq data difficult, and as such standardized analysis pipelines are lacking. Methods: We report a computational pipeline that utilizes traditional High-Performance Computing techniques and open source tools for processing and analysing data obtained from ChIP-Seq. We applied our computational pipeline "Rapid Analysis of ChIP-Seq data" (RACS) to ChIP-Seq data that was generated in the model organism Tetrahymena thermophila, an example of an organism with a genome that is available in contigs. Results: To test the performance and efficiency of RACS, we performed control ChIP-Seq experiments allowing us to rapidly eliminate false positives when analysing our previously published data set. Our pipeline segregates the identified read accumulations between genic and intergenic regions and is highly efficient for rapid downstream analyses. Conclusions: Altogether, the computational pipeline presented in this report is an efficient and highly reliable tool to analyse genome-wide ChIP-Seq data generated in model organisms with contig-based genomes. RACS is an open source computational pipeline available to download from https://bitbucket.org/mjponce/racs or https://gitrepos.scinet.utoronto.ca/public/?a=summary&p=RACS
    Comment: Submitted to BMC Bioinformatics. Computational pipeline available at https://bitbucket.org/mjponce/racs
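    The genic/intergenic segregation step the abstract describes can be illustrated with a toy interval check. This is a hypothetical Python sketch, not RACS's actual implementation (which is an HPC pipeline built from open source tools); the data layout and function name are assumptions.

    ```python
    from collections import defaultdict

    def segregate_reads(genes, reads):
        """Split mapped read positions into genic and intergenic counts
        per contig.

        `genes` maps contig -> list of (start, end) gene intervals
        (inclusive); `reads` is an iterable of (contig, position) pairs.
        A linear scan per read is enough for a sketch; real pipelines
        use indexed interval lookups.
        """
        counts = defaultdict(lambda: {"genic": 0, "intergenic": 0})
        for contig, pos in reads:
            in_gene = any(start <= pos <= end
                          for start, end in genes.get(contig, []))
            counts[contig]["genic" if in_gene else "intergenic"] += 1
        return dict(counts)
    ```

    On contig-based assemblies the useful property of this split is that read accumulations outside annotated genes are surfaced rather than silently discarded, which is what makes poorly annotated cis-elements visible at all.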

    PoliSave: Efficient Power Management of Campus PCs

    In this paper we study the power consumption of networked devices in a large Campus network, focusing mainly on PC usage. We first define a methodology to monitor host power state, which we then apply to our Campus network. Results show that people typically refrain from turning off their PCs during non-working hours, so that more than 1500 PCs are always powered on, causing a large energy waste. We then design PoliSave, a simple web-based architecture which allows users to schedule the power state of their PCs, avoiding the frustration of the long power-down and bootstrap times of today's PCs. By exploiting already available technologies like Wake-on-LAN, hibernation and Web services, PoliSave reduces the average PC uptime from 15.9h to 9.7h during working days, generating an energy saving of 0.6 kWh per PC per day, or a saving of more than 250,000 Euros per year considering our Campus University.
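    The Wake-on-LAN mechanism PoliSave relies on is simple enough to sketch: a "magic packet" is six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast. The packet format is the standard Wake-on-LAN one; the function names, port choice, and the Python framing are mine, not PoliSave's.

    ```python
    import socket

    def magic_packet(mac):
        """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC
        address repeated 16 times (102 bytes total)."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("MAC address must have 6 octets")
        return b"\xff" * 6 + mac_bytes * 16

    def send_wol(mac, broadcast="255.255.255.255", port=9):
        """Broadcast the magic packet over UDP (port 9 is the customary
        'discard' port used for WoL)."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(magic_packet(mac), (broadcast, port))

    # send_wol("00:11:22:33:44:55")  # would wake the matching NIC, if enabled
    ```

    A scheduler like PoliSave only needs to persist each user's MAC address and wake times; the NIC does the rest while the PC is powered down.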

    FIESTA 2: parallelizeable multiloop numerical calculations

    The program FIESTA has been completely rewritten. Now it can be used not only as a tool to evaluate Feynman integrals numerically, but also to expand Feynman integrals automatically in limits of momenta and masses with the use of sector decompositions and Mellin-Barnes representations. Other important improvements to the code are complete parallelization (even across multiple computers), high-precision arithmetic (making it possible to calculate integrals that were previously intractable), new integrators, Speer sectors as a decomposition strategy, and the possibility to evaluate more general parametric integrals.
    Comment: 31 pages, 5 figures
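    The idea behind sector decomposition, mentioned above, can be shown on a textbook toy integral (this is a standard illustration, not one of FIESTA's internals). Splitting the unit square into the sectors $x\ge y$ and $y\ge x$ and rescaling factors the $1/\epsilon$ pole out explicitly:

    \[
    I(\epsilon)=\int_0^1\!dx\int_0^1\!dy\,(x+y)^{-2+\epsilon}
    =2\int_0^1\!dx\int_0^x\!dy\,(x+y)^{-2+\epsilon}
    \;\xrightarrow{\,y=xt\,}\;
    2\int_0^1\!dx\,x^{-1+\epsilon}\int_0^1\!dt\,(1+t)^{-2+\epsilon}
    =\frac{2}{\epsilon}\int_0^1\!dt\,(1+t)^{-2+\epsilon},
    \]

    where the remaining $t$-integral is finite at $\epsilon=0$ and can be handed to a numerical integrator, which is the regime FIESTA's parallelization and high-precision arithmetic target.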

    The Globe Infrastructure Directory Service

    To implement adaptive replication strategies for Web documents, we have developed a wide-area resource management system. This system allows servers to be managed on a local and global level. On a local level the system manages information about the resources and services provided by the servers, while on a global level the system allows servers to be searched for, added to, and removed from the system. As part of the system, and also in order to implement adaptive replication strategies, we introduce a hierarchical location representation for network elements such as servers, objects, and clients. This location representation allows us to easily and efficiently find and group network elements based on their location in a worldwide network. Our resource management system can be implemented using standard Internet technologies and has a broader range of applications besides making adaptive replication strategies possible for Web documents.
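    A hierarchical location representation of the kind described can be modelled as a path of nested regions, with grouping reduced to prefix matching. This is a hypothetical sketch; the region names, tuple encoding, and function names are mine, not the Globe system's.

    ```python
    def common_region(loc_a, loc_b):
        """Longest shared prefix of two hierarchical locations, i.e. the
        smallest region containing both network elements."""
        prefix = []
        for a, b in zip(loc_a, loc_b):
            if a != b:
                break
            prefix.append(a)
        return tuple(prefix)

    def group_by_region(elements, depth):
        """Group network elements (name -> location path) by the region
        obtained by truncating each path at `depth`."""
        groups = {}
        for name, loc in elements.items():
            groups.setdefault(loc[:depth], []).append(name)
        return groups
    ```

    With paths like `("world", "eu", "nl", "vu")`, a replication strategy can cheaply ask "which servers share a region with this client?" by comparing prefixes, without any geometric distance computation.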

    An Eprints Apache Log Filter for Non-Redundant Document Downloads by Browser Agents

    Web log files record a vast amount of information, and much of it just gets in the way of meaningful observational studies on usage. It is therefore necessary to filter out the junk in a deliberate way before making statements on how the web is being used. This report describes the methods and scripts used to accomplish Apache web log filtering and report generation. It is open to scrutiny and freely available for others to use.
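    A filter of the kind the report describes can be sketched in a few lines: parse the standard Apache combined log format, drop robot agents and failed requests, and suppress repeat downloads of the same document by the same host. This is a minimal illustration, not the Eprints scripts themselves; the bot pattern and the "drop all repeats" rule are simplifying assumptions.

    ```python
    import re

    # Standard Apache combined-log-format line.
    LOG_RE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
        r'"[^"]*" "(?P<agent>[^"]*)"'
    )
    BOT_AGENTS = re.compile(r"bot|crawler|spider|slurp", re.IGNORECASE)

    def filter_downloads(lines):
        """Yield (ip, path) for successful, non-robot GET requests,
        counting each document at most once per host (a crude stand-in
        for time-windowed de-duplication)."""
        seen = set()
        for line in lines:
            m = LOG_RE.match(line)
            if not m or m["method"] != "GET" or m["status"] != "200":
                continue
            if BOT_AGENTS.search(m["agent"]):
                continue
            key = (m["ip"], m["path"])
            if key in seen:   # redundant download by the same host
                continue
            seen.add(key)
            yield key
    ```

    Real deployments would also honour robots.txt-declared crawlers, strip internal monitoring hits, and de-duplicate within a time window rather than globally.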

    Important Lessons Derived from X.500 Case Studies

    X.500 is a new and complex electronic directory technology, whose basic specification was first published as an international standard in 1988, with an enhanced revision in 1993. The technology is still unproven in many organisations. This paper presents case studies of 15 pioneering pilot and operational X.500-based directory services. The paper provides valuable insights into how organisations are coming to understand this new technology, are using X.500 for both traditional and novel directory-based services, and consequently are deriving benefits from it. Important lessons that have been learnt by these X.500 pioneers are presented here, so that future organisations can benefit from their experiences. Factors critical to the success of implementing X.500 in an organisation are derived from the studies.

    Yambo: an \textit{ab initio} tool for excited state calculations

    {\tt yambo} is an {\it ab initio} code for calculating quasiparticle energies and optical properties of electronic systems within the framework of many-body perturbation theory and time-dependent density functional theory. Quasiparticle energies are calculated within the $GW$ approximation for the self-energy. Optical properties are evaluated either by solving the Bethe--Salpeter equation or by using the adiabatic local density approximation. {\tt yambo} is a plane-wave code that, although particularly suited for calculations of periodic bulk systems, has been applied to a large variety of physical systems. {\tt yambo} relies on efficient numerical techniques devised to treat systems with reduced dimensionality, or with a large number of degrees of freedom. The code has a user-friendly command-line based interface, flexible I/O procedures and is interfaced to several publicly available density functional ground-state codes.
    Comment: This paper describes the features of the Yambo code, whose source is available under the GPL license at www.yambo-code.org
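    For orientation, the $GW$ approximation named in the abstract takes the self-energy as the product of the one-particle Green's function $G$ and the screened Coulomb interaction $W$, with quasiparticle energies obtained as first-order corrections to the Kohn--Sham eigenvalues (a standard textbook summary, not a description of {\tt yambo}'s internals):

    \[
    \Sigma(1,2) = i\,G(1,2)\,W(1^{+},2),
    \qquad
    E_n^{\mathrm{QP}} \simeq \epsilon_n^{\mathrm{KS}}
    + \langle \phi_n |\, \Sigma(E_n^{\mathrm{QP}}) - v_{xc} \,| \phi_n \rangle ,
    \]

    where $v_{xc}$ is the DFT exchange-correlation potential being replaced by the many-body self-energy.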