
    Predictors of Successful Decannulation Using a Tracheostomy Retainer in Patients with Prolonged Weaning and Persisting Respiratory Failure

    Background: For percutaneously tracheostomized patients with prolonged weaning and persisting respiratory failure, choosing the appropriate time point for safe decannulation and the switch to noninvasive ventilation is an important clinical issue. Objectives: We aimed to evaluate the usefulness of a tracheostomy retainer (TR) and the predictors of successful decannulation. Methods: We studied 166 of 384 patients with prolonged weaning in whom a TR was inserted into the tracheostoma. Patients were analyzed with regard to successful decannulation and characterized by blood gas values, the duration of previous spontaneous breathing, the Simplified Acute Physiology Score (SAPS) and laboratory parameters. Results: In 47 patients (28.3%), recannulation was necessary, mostly due to respiratory decompensation and aspiration. Overall, 80.6% of the patients could be liberated from the tracheostomy with the help of a TR. The need for recannulation was associated with a shorter duration of spontaneous breathing within the last 24/48 h (p < 0.01 each), lower arterial oxygen tension (p = 0.025), greater age (p = 0.025), and a higher creatinine level (p = 0.003) and SAPS (p < 0.001). The risk of recannulation was 9.5% when patients breathed spontaneously for 19-24 h within the 24 h prior to decannulation, but 75.0% when patients breathed for only 0-6 h without ventilatory support (p < 0.001). According to ROC analysis, the SAPS best predicted successful decannulation [AUC 0.725 (95% CI: 0.634-0.815), p < 0.001]. Recannulated patients had longer durations of intubation (p = 0.046), tracheostomy (p = 0.003) and hospital stay (p < 0.001). Conclusion: In percutaneously tracheostomized patients with prolonged weaning, the use of a TR seems to facilitate and considerably improve the weaning process. The duration of spontaneous breathing prior to decannulation, age and oxygenation describe the risk of recannulation in these patients. Copyright (c) 2012 S. Karger AG, Basel
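    The reported discriminative ability of the SAPS can be illustrated with the Mann-Whitney interpretation of the ROC AUC: the probability that a randomly chosen positive case outscores a randomly chosen negative one. The score values below are synthetic stand-ins, not data from the study.

```python
# Minimal ROC/AUC sketch: how well a score (e.g. SAPS) separates
# failed from successful decannulation. Values below are synthetic
# illustrations, NOT data from the study.

def auc(scores_pos, scores_neg):
    """Probability that a random positive outscores a random negative
    (the Mann-Whitney interpretation of ROC AUC); ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical SAPS values: higher score -> higher recannulation risk.
recannulated = [45, 52, 38, 60, 49]
decannulated_ok = [30, 35, 41, 28, 33]
print(auc(recannulated, decannulated_ok))
```

    An AUC of 1.0 would mean the score separates the two outcomes perfectly; 0.5 is chance level.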

    MEIS2 Is an Adrenergic Core Regulatory Transcription Factor Involved in Early Initiation of TH-MYCN-Driven Neuroblastoma Formation.

    Roughly half of all high-risk neuroblastoma patients present with MYCN amplification. The molecular consequences of MYCN overexpression in this aggressive pediatric tumor have been studied for decades, but the early initiating steps of MYCN-driven tumor formation remain enigmatic. We performed detailed transcriptome landscaping during murine TH-MYCN-driven neuroblastoma tumor formation at different time points. The neuroblastoma dependency factor MEIS2, together with ASCL1, was identified as a candidate tumor-initiating factor and shown to be a novel core regulatory circuit member in adrenergic neuroblastomas. Of further interest, we found a KEOPS complex member (gm6890), implicated in homologous double-strand break repair and telomere maintenance, to be strongly upregulated during tumor formation, as well as the checkpoint adaptor Claspin (CLSPN) and three chromosome 17q loci: CBX2, GJC1 and LIMD2. Finally, cross-species master regulator analysis identified FOXM1, together with additional hubs controlling transcriptome profiles of MYCN-driven neuroblastoma. In conclusion, time-resolved transcriptome analysis of early hyperplastic lesions and full-blown MYCN-driven neuroblastomas yielded novel components implicated in both tumor initiation and maintenance, providing putative novel drug targets for MYCN-driven neuroblastoma.

    Coverage, Continuity and Visual Cortical Architecture

    The primary visual cortex of many mammals contains a continuous representation of visual space, with a roughly repetitive aperiodic map of orientation preferences superimposed. It was recently found that orientation preference maps (OPMs) obey statistical laws which are apparently invariant among species widely separated in eutherian evolution. Here, we examine whether one of the most prominent models for the optimization of cortical maps, the elastic net (EN) model, can reproduce this common design. The EN model generates representations which optimally trade off stimulus space coverage and map continuity. While this model has been used in numerous studies, no analytical results about the precise layout of the predicted OPMs have been obtained so far. We present a mathematical approach to analytically calculate the cortical representations predicted by the EN model for the joint mapping of stimulus position and orientation. We find that in all previously studied regimes, predicted OPM layouts are perfectly periodic. An unbiased search through the EN parameter space identifies a novel regime of aperiodic OPMs with pinwheel densities lower than found in experiments. In an extreme limit, aperiodic OPMs quantitatively resembling experimental observations emerge. Stabilization of these layouts results from strong nonlocal interactions rather than from a coverage-continuity compromise. Our results demonstrate that optimization models for stimulus representations dominated by nonlocal suppressive interactions are in principle capable of correctly predicting the common OPM design. They call into question whether visual cortical feature representations can be explained by a coverage-continuity compromise.
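    As a rough illustration of the coverage-continuity trade-off the EN model optimizes, the following sketches a single elastic-net update for a toy one-dimensional chain of cortical units in a two-dimensional feature space. The parameter names (sigma for the coverage kernel, beta for the continuity weight) follow the usual EN formulation, but all values and the tiny setup are illustrative assumptions, not the paper's parameterization.

```python
import math

# One elastic-net update step: each stimulus pulls its best-matching
# cortical units (coverage term), while chain neighbours pull each unit
# toward themselves (continuity term). Toy sketch, not the paper's model.

def en_step(units, stimuli, sigma=0.2, beta=0.5, eta=0.05):
    new = []
    for j, u in enumerate(units):
        # Coverage term: soft-assignment weight of each stimulus to unit j.
        pull = [0.0, 0.0]
        for s in stimuli:
            w = [math.exp(-dist2(s, v) / (2 * sigma ** 2)) for v in units]
            g = w[j] / sum(w)
            pull[0] += g * (s[0] - u[0])
            pull[1] += g * (s[1] - u[1])
        # Continuity term: pull toward the chain neighbours.
        cont = [0.0, 0.0]
        for k in (j - 1, j + 1):
            if 0 <= k < len(units):
                cont[0] += units[k][0] - u[0]
                cont[1] += units[k][1] - u[1]
        new.append((u[0] + eta * (pull[0] + beta * cont[0]),
                    u[1] + eta * (pull[1] + beta * cont[1])))
    return new

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# A single unit moves a small step toward a single stimulus.
print(en_step([(0.0, 0.0)], [(1.0, 0.0)]))
```

    Iterating such updates to a fixed point yields the optimized map whose periodicity properties the paper analyzes.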

    A Hidden Markov Model for Copy Number Variant prediction from whole genome resequencing data

    Motivation: Copy Number Variants (CNVs) are important genetic factors for studying human diseases. While high-throughput whole genome re-sequencing provides multiple lines of evidence for detecting CNVs, computational algorithms need to be tailored for different types and sizes of CNVs under different experimental designs. Results: To achieve optimal power and resolution for detecting CNVs at low depth of coverage, we implemented a Hidden Markov Model that integrates both depth of coverage and mate-pair relationships. The novelty of our algorithm is that we infer the likelihood of carrying a deletion jointly from multiple mate pairs in a region, without requiring any single mate pair to be an obvious outlier. By integrating all useful information in a comprehensive model, our method is able to detect medium-size deletions (200-2000 bp) at low depth (<10× per sample). We applied the method to simulated data and demonstrate that the power to detect medium-size deletions is close to theoretical values. Availability: A program implemented in Java, Zinfandel, is available at http://www.cs.columbia.edu/~itsik/zinfandel
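    The depth-of-coverage half of such a model can be sketched as a two-state HMM decoded with the Viterbi algorithm. The window counts, Poisson emission means and switching probability below are invented for illustration, and the mate-pair component the paper integrates is omitted.

```python
import math

# Two-state HMM over read-depth windows: "normal" copy number emits
# Poisson(lam) read counts, a heterozygous "deletion" emits Poisson(lam/2).
# Sketch of the depth-of-coverage signal only; parameters are illustrative.

def viterbi_deletions(depths, lam=10.0, p_switch=0.05):
    states = {"normal": lam, "deletion": lam / 2.0}
    def logpois(k, mu):
        return k * math.log(mu) - mu - math.lgamma(k + 1)
    # Initialise with uniform state priors.
    v = {s: math.log(0.5) + logpois(depths[0], mu)
         for s, mu in states.items()}
    back = []
    for d in depths[1:]:
        nv, ptr = {}, {}
        for s, mu in states.items():
            best = max(states, key=lambda r: v[r] +
                       math.log(p_switch if r != s else 1 - p_switch))
            ptr[s] = best
            nv[s] = (v[best] +
                     math.log(p_switch if best != s else 1 - p_switch) +
                     logpois(d, mu))
        v, back = nv, back + [ptr]
    # Trace back the most likely state path.
    path = [max(v, key=v.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

depths = [9, 8, 10, 3, 2, 3, 9, 10]  # depth drop marks a candidate deletion
print(viterbi_deletions(depths))
```

    The transition penalty is what keeps a single noisy window from being called a deletion: a run of low-depth windows must jointly outweigh two state switches.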

    Lower bounds on multiple sequence alignment using exact 3-way alignment

    Background: Multiple sequence alignment is fundamental. Exponential growth in computation time appears to be inevitable when an optimal alignment is required for many sequences, so exact costs of optimum alignments are rarely computed. Consequently, much effort has been invested in alignment algorithms that are heuristic or explore a restricted class of solutions. These give an upper bound on the alignment cost, but it is equally important to determine the quality of the solution obtained. In the absence of an optimal alignment with which to compare, lower bounds may be calculated to assess the quality of the alignment. As more effort is invested in improving upper bounds (alignment algorithms), it is therefore important to improve lower bounds as well. Although numerous cost metrics can be used to determine the quality of an alignment, many are based on sum-of-pairs (SP) measures and their generalizations. Results: Two standard and two new methods are considered for using exact 2-way and 3-way alignments to compute lower bounds on total SP alignment cost; one new method fares well with respect to accuracy, while the other reduces the computation time. The first employs exhaustive computation of exact 3-way alignments, while the second employs an efficient heuristic to compute a much smaller number of exact 3-way alignments. Calculating all 3-way alignments exactly and computing their average improves lower bounds on SP cost in v-way alignments. However, judicious selection of a subset of all 3-way alignments can yield a further improvement with minimal additional effort. On the other hand, a simple heuristic that selects a random subset of 3-way alignments (a random packing) yields accuracy comparable to averaging all 3-way alignments with substantially less computational effort. Conclusion: Calculation of lower bounds on SP cost (and thus the quality of an alignment) can be improved by employing a mixture of 3-way and 2-way alignments.
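    The baseline 2-way bound that these methods improve upon can be sketched directly: projecting any multiple alignment onto a pair of sequences costs at least the optimal pairwise alignment cost, so summing optimal pairwise costs over all pairs bounds the SP cost from below. The unit mismatch/indel costs below are an assumption for illustration; the paper's cost metrics may differ.

```python
from itertools import combinations

# 2-way lower bound on sum-of-pairs (SP) cost: each pair of rows in a
# multiple alignment, with shared gap columns removed, is itself a
# pairwise alignment, so it costs at least the optimal pairwise cost.

def pairwise_cost(a, b):
    """Optimal alignment cost (unit-cost edit distance) by dynamic
    programming, using a rolling row to save memory."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # gap in b
                           cur[j - 1] + 1,               # gap in a
                           prev[j - 1] + (ca != cb)))    # (mis)match
        prev = cur
    return prev[-1]

def sp_lower_bound(seqs):
    return sum(pairwise_cost(a, b) for a, b in combinations(seqs, 2))

print(sp_lower_bound(["GATTACA", "GACTATA", "GATTATA"]))
```

    Replacing some pairwise terms with exact 3-way alignment costs, as the paper does, tightens this bound at extra computational expense.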

    Addressing challenges in the production and analysis of illumina sequencing data

    Advances in DNA sequencing technologies have made it possible to generate large amounts of sequence data very rapidly and at substantially lower cost than capillary sequencing. These new technologies have specific characteristics and limitations that require either consideration during project design, or which must be addressed during data analysis. Specialist skills, both at the laboratory and the computational stages of project design and analysis, are crucial to the generation of high quality data from these new platforms. The Illumina sequencers (including the Genome Analyzers I/II/IIe/IIx and the new HiScan and HiSeq) represent a widely used platform providing parallel readout of several hundred million immobilized sequences using fluorescent-dye reversible-terminator chemistry. Sequencing library quality, sample handling, instrument settings and sequencing chemistry have a strong impact on sequencing run quality. The presence of adapter chimeras and adapter sequences at the end of short-insert molecules, as well as increased error rates and short read lengths, complicate many computational analyses. We discuss here some of the factors that influence the frequency and severity of these problems and provide solutions for circumventing them. Further, we present a set of general principles of good analysis practice that enable problems with sequencing runs to be identified and dealt with.
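    The adapter read-through problem mentioned above arises when the insert is shorter than the read, so the 3' end of the read continues into the adapter. The simplest remedy is to trim at the earliest position where a read suffix matches a prefix of the known adapter. Exact matching is used here for brevity (production trimmers tolerate mismatches), and the adapter string is an illustrative assumption.

```python
# Trim 3' adapter read-through: cut the read at the earliest position
# where its suffix matches a prefix of the adapter. Exact-match sketch;
# real trimmers such as cutadapt allow mismatches and quality weighting.

def trim_adapter(read, adapter, min_overlap=3):
    for i in range(len(read) - min_overlap + 1):
        n = min(len(read) - i, len(adapter))
        if read[i:i + n] == adapter[:n]:
            return read[:i]
    return read

ADAPTER = "AGATCGGAAGAGC"  # illustrative Illumina-style adapter start
print(trim_adapter("ACGTTGCAAGATCGGAAGAGC", ADAPTER))  # whole adapter present
print(trim_adapter("ACGTTGCAAGATC", ADAPTER))          # partial read-through
```

    The `min_overlap` threshold guards against trimming on spurious short matches at the cost of missing very short read-through events.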

    WebCARMA: a web application for the functional and taxonomic classification of unassembled metagenomic reads

    Gerlach W, Jünemann S, Tille F, Goesmann A, Stoye J. WebCARMA: a web application for the functional and taxonomic classification of unassembled metagenomic reads. BMC Bioinformatics. 2009;10(1):430. Background: Metagenomics is a new field of research on natural microbial communities. High-throughput sequencing techniques like 454 or Solexa-Illumina promise new possibilities, as they are able to produce huge amounts of data in much shorter time and with less effort and cost than the traditional Sanger technique. However, the data produced come in even shorter reads (35-100 basepairs with Illumina, 100-500 basepairs with 454 sequencing). CARMA is a new software pipeline for the characterisation of species composition and the genetic potential of microbial samples using short, unassembled reads. Results: In this paper, we introduce WebCARMA, a refined version of CARMA available as a web application for the taxonomic and functional classification of unassembled (ultra-)short reads from metagenomic communities. In addition, we have analysed the applicability of ultra-short reads in metagenomics. Conclusions: We show that unassembled reads as short as 35 bp can be used for the taxonomic classification of a metagenome. The web application is freely available at http://webcarma.cebitec.uni-bielefeld.d

    LOCAS – A Low Coverage Assembly Tool for Resequencing Projects

    Motivation: Next Generation Sequencing (NGS) is a frequently applied approach to detect sequence variations between highly related genomes. Recent large-scale re-sequencing studies such as the Human 1000 Genomes Project utilize NGS data of low coverage to afford sequencing of hundreds of individuals. Here, SNPs and micro-indels can be detected by applying an alignment-consensus approach. However, computational methods capable of discovering other variations, such as novel insertions or highly diverged sequence, from low coverage NGS data are still lacking. Results: We present LOCAS, a new NGS assembler particularly designed for low coverage assembly of eukaryotic genomes using a mismatch-sensitive overlap-layout-consensus approach. LOCAS assembles homologous regions in a homology-guided manner, while it performs de novo assemblies of insertions and highly polymorphic target regions subsequent to an alignment-consensus approach. LOCAS has been evaluated in homology-guided assembly scenarios with low sequence coverage of Arabidopsis thaliana strains sequenced as part of the Arabidopsis 1001 Genomes Project. While assembling the same amount of long insertions as state-of-the-art NGS assemblers, LOCAS showed the best results regarding contig size, error rate and runtime. Conclusion: LOCAS produces excellent results for homology-guided assembly of eukaryotic genomes with short reads and low sequencing depth, and therefore appears to be the assembly tool of choice for the detection of novel sequence.
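    The overlap step at the heart of an overlap-layout-consensus assembler can be sketched with exact suffix-prefix overlaps and greedy merging. Real assemblers such as LOCAS additionally handle mismatches, repeats and low coverage, so this is only a toy illustration of the idea.

```python
# Toy overlap-layout-consensus flavour: repeatedly find the pair of
# reads with the longest exact suffix-prefix overlap and merge them.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a equal to a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        best = max(((overlap(a, b), a, b)
                    for a in reads for b in reads if a is not b),
                   key=lambda t: t[0])
        n, a, b = best
        if n == 0:
            break  # no usable overlap left
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[n:])  # merge along the overlap
    return reads

print(greedy_assemble(["TTGCAAG", "ACGTTGC", "CAAGATC"]))
```

    Greedy merging is known to be suboptimal in the presence of repeats, which is one reason practical assemblers build an explicit overlap graph instead.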

    A New Calibrated Bayesian Internal Goodness-of-Fit Method: Sampled Posterior p-Values as Simple and General p-Values That Allow Double Use of the Data

    Background: Recent approaches mixing frequentist principles with Bayesian inference propose internal goodness-of-fit (GOF) p-values that might be valuable for critical analysis of Bayesian statistical models. However, the GOF p-values developed to date have known probability distributions only under restrictive conditions; as a result, no existing GOF p-value has a known probability distribution for an arbitrary discrepancy function. Methodology/Principal Findings: We show mathematically that a new GOF p-value, called the sampled posterior p-value (SPP), asymptotically has a uniform probability distribution whatever the discrepancy function. In a moderate finite-sample context, simulations also showed that the SPP appears stable to relatively uninformative misspecifications of the prior distribution. Conclusions/Significance: These reasons, together with its numerical simplicity, make the SPP a better canonical GOF p-value than existing ones.
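    The SPP recipe can be sketched for a toy conjugate model: draw a single parameter value from the posterior, simulate a replicate data set at that value, and compare a discrepancy between replicate and observed data. The normal model with known unit variance, the N(0, 10^2) prior and the sample-mean discrepancy below are all illustrative choices, not prescriptions from the paper.

```python
import random
import statistics

# Sampled posterior p-value (SPP) sketch for a normal model with known
# variance 1 and a N(0, 10^2) prior on the mean. A single posterior draw
# of theta is used (the defining feature of the SPP), then the p-value is
# estimated by Monte Carlo over replicate data sets.

def sampled_posterior_pvalue(y, prior_mean=0.0, prior_sd=10.0, seed=1):
    rng = random.Random(seed)
    n = len(y)
    # Conjugate posterior for the mean under known unit variance.
    post_var = 1.0 / (1.0 / prior_sd ** 2 + n)
    post_mean = post_var * (prior_mean / prior_sd ** 2 + sum(y))
    theta = rng.gauss(post_mean, post_var ** 0.5)  # ONE posterior draw
    # p-value: chance a replicate's discrepancy exceeds the observed one.
    obs = statistics.mean(y)
    reps = [statistics.mean([rng.gauss(theta, 1.0) for _ in range(n)])
            for _ in range(2000)]
    return sum(r >= obs for r in reps) / len(reps)

data = [0.3, -0.1, 0.8, 0.2, 0.5]
print(sampled_posterior_pvalue(data))
```

    Unlike the posterior predictive p-value, which averages over the full posterior and reuses the data twice, the single draw is what gives the SPP its asymptotically uniform distribution.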