926 research outputs found
PRS23 High Cost Cystic Fibrosis Patients as Identified in a US Claims Database: A Closer Look at the Tail
PRS8 Predicted Survival for North American Patients with Cystic Fibrosis Adjusted for Cohort Specific Covariates
Genotype, disease severity, and healthcare resource use by patients with CF in the UK National Health Service
Randomized benchmarking of single and multi-qubit control in liquid-state NMR quantum information processing
Being able to quantify the level of coherent control in a proposed device implementing a quantum information processor (QIP) is an important task, both for comparing different devices and for assessing a device's prospects with regard to achieving fault-tolerant quantum control. We implement in a liquid-state nuclear magnetic resonance QIP the randomized benchmarking protocol presented by Knill et al. (PRA 77, 012307 (2008)). We report the error per randomized pulse for a single-qubit QIP and show an experimentally relevant error model in which randomized benchmarking gives a signature fidelity decay that cannot be interpreted as a single error per gate. We explore and experimentally investigate multi-qubit extensions of this protocol and report the average error rate for one- and two-qubit gates on a three-qubit QIP. We estimate that these error rates are not yet decoherence limited and can therefore be improved with modifications to the control hardware and software.
Comment: 10 pages, 6 figures, submitted version
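The signature fidelity decay the abstract refers to is usually modeled, in standard randomized benchmarking, as F(n) = A·p^n + B, where n is the sequence length and the error per gate follows from the decay parameter p. The sketch below is illustrative only (the model parameters, function names, and data are assumptions, not values from the paper): it generates noiseless decay data and recovers p by a log-linear fit.

```python
import math

def rb_fidelity(n, p, A=0.5, B=0.5):
    """Zeroth-order randomized benchmarking decay model: F(n) = A * p**n + B."""
    return A * p**n + B

def fit_decay(lengths, fidelities, A=0.5, B=0.5):
    """Recover p from a least-squares line fit of log((F - B)/A) against n,
    assuming A and B are known (a simplifying assumption for this sketch)."""
    ys = [math.log((F - B) / A) for F in fidelities]
    n = len(lengths)
    mx = sum(lengths) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lengths, ys))
             / sum((x - mx) ** 2 for x in lengths))
    return math.exp(slope)  # slope of log F(n) - offset is log(p)

lengths = [2, 4, 8, 16, 32, 64, 128]
true_p = 0.99                       # hypothetical decay parameter
data = [rb_fidelity(n, true_p) for n in lengths]
p_est = fit_decay(lengths, data)
# Average error per gate for a single qubit: r = (d-1)/d * (1-p) with d = 2.
error_per_gate = (1 - p_est) / 2
```

The point of the protocol is that p is insensitive to state-preparation and measurement errors, which are absorbed into A and B; the abstract's caveat is that some error models produce a decay that does not fit this single-exponential form.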
Systematic evaluation of spliced alignment programs for RNA-seq data
High-throughput RNA sequencing is an increasingly accessible method for studying gene structure and activity on a genome-wide scale. A critical step in RNA-seq data analysis is the alignment of partial transcript reads to a reference genome sequence. To assess the performance of current mapping software, we invited developers of RNA-seq aligners to process four large human and mouse RNA-seq data sets. In total, we compared 26 mapping protocols based on 11 programs and pipelines and found major performance differences between methods on numerous benchmarks, including alignment yield, basewise accuracy, mismatch and gap placement, exon junction discovery and suitability of alignments for transcript reconstruction. We observed concordant results on real and simulated RNA-seq data, confirming the relevance of the metrics employed. Future developments in RNA-seq alignment methods would benefit from improved placement of multimapped reads, balanced utilization of existing gene annotation and a reduced false discovery rate for splice junctions.
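Basewise accuracy, one of the benchmarks listed above, is simply the fraction of read bases an aligner places at the correct reference coordinate. A minimal sketch (the data layout and function name are illustrative assumptions, not from the study):

```python
def basewise_accuracy(true_pos, pred_pos):
    """Fraction of read bases mapped to their true reference coordinate.

    Both arguments map (read_id, base_index) -> reference position;
    a base missing from pred_pos counts as unaligned (and thus wrong here).
    """
    correct = sum(1 for key, ref in true_pos.items() if pred_pos.get(key) == ref)
    return correct / len(true_pos)

# Toy example: a 10-base read whose last base is misplaced,
# e.g. put on the wrong side of a splice junction.
truth = {("r1", i): 100 + i for i in range(10)}
pred = dict(truth)
pred[("r1", 9)] = 500
acc = basewise_accuracy(truth, pred)  # 9 of 10 bases correct -> 0.9
```

Real evaluations refine this with per-base alignment extraction from BAM records and separate tallies for mismatched, clipped, and multimapped bases, but the core metric reduces to this ratio.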
Performance assessment of promoter predictions on ENCODE regions in the EGASP experiment
BACKGROUND: This study analyzes the predictions of a number of promoter predictors on the ENCODE regions of the human genome as part of the ENCODE Genome Annotation Assessment Project (EGASP). The systems analyzed operate on various principles and we assessed the effectiveness of different conceptual strategies used to correlate produced promoter predictions with the manually annotated 5' gene ends. RESULTS: The predictions were assessed relative to the manual HAVANA annotation of the 5' gene ends. These 5' gene ends were used as the estimated reference transcription start sites. With the maximum allowed distance for predictions of 1,000 nucleotides from the reference transcription start sites, the sensitivity of predictors was in the range 32% to 56%, while the positive predictive value was in the range 79% to 93%. The average distance mismatch of predictions from the reference transcription start sites was in the range 259 to 305 nucleotides. At the same time, using transcription start site estimates from DBTSS and H-Invitational databases as promoter predictions, we obtained a sensitivity of 58%, a positive predictive value of 92%, and an average distance from the annotated transcription start sites of 117 nucleotides. In this experiment, the best-performing promoter predictors were those that combined promoter prediction with gene prediction. The main reason for this is the reduced promoter search space that resulted in smaller numbers of false positive predictions. CONCLUSION: The main finding, now supported by comprehensive data, is that the accuracy of human promoter predictors for high-throughput annotation purposes can be significantly improved if promoter prediction is combined with gene prediction. Based on the lessons learned in this experiment, we propose a framework for the preparation of the next similar promoter prediction assessment.
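The sensitivity and positive predictive value reported above follow from matching predicted transcription start sites to annotated ones within the 1,000-nucleotide window. A minimal sketch of that matching (positions and the function name are illustrative assumptions, not EGASP data):

```python
def assess_tss_predictions(reference, predictions, max_dist=1000):
    """Sensitivity and PPV for TSS predictions matched within max_dist nt.

    Sensitivity: fraction of reference TSSs with at least one prediction nearby.
    PPV: fraction of predictions landing near at least one reference TSS.
    """
    matched_refs = {r for r in reference
                    if any(abs(r - p) <= max_dist for p in predictions)}
    true_pos_preds = [p for p in predictions
                      if any(abs(r - p) <= max_dist for r in reference)]
    sensitivity = len(matched_refs) / len(reference)
    ppv = len(true_pos_preds) / len(predictions)
    return sensitivity, ppv

reference = [1000, 50000, 120000, 300000]   # hypothetical annotated 5' ends
predictions = [1200, 49100, 260000]          # hypothetical predictor output
sens, ppv = assess_tss_predictions(reference, predictions)
# 2 of 4 references matched (sens = 0.5); 2 of 3 predictions near a TSS (ppv = 2/3)
```

This also shows why combining promoter prediction with gene prediction helps: shrinking the search space removes candidate positions far from any reference TSS, which raises PPV without necessarily costing sensitivity.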
Quantum state merging and negative information
We consider a quantum state shared between many distant locations, and define a quantum information processing primitive, state merging, that optimally merges the state into one location. As announced in [Horodecki, Oppenheim, Winter, Nature 436, 673 (2005)], the optimal entanglement cost of this task is the conditional entropy if classical communication is free. Since this quantity can be negative, and the state merging rate measures partial quantum information, we find that quantum information can be negative. The classical communication cost also has a minimum rate, given by a certain quantum mutual information. State merging has enabled the solution of a number of open problems: distributed quantum data compression, quantum coding with side information at the decoder and sender, multi-party entanglement of assistance, and the capacity of the quantum multiple access channel. It also provides an operational proof of strong subadditivity. Here, we give precise definitions and prove these results rigorously.
Comment: 23 pages, 3 figures
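The negativity of the conditional entropy can be checked on a concrete state: for a Bell pair the global state is pure, so S(AB) = 0, while the reduced state of B is maximally mixed, so S(B) = 1 bit, giving S(A|B) = S(AB) − S(B) = −1. A self-contained numerical sketch in pure Python (helper names are illustrative):

```python
import math

def entropy_bits(eigenvalues):
    """von Neumann entropy in bits from a density matrix's eigenvalues."""
    return -sum(x * math.log2(x) for x in eigenvalues if x > 1e-12)

def partial_trace_A(rho):
    """Trace out qubit A of a two-qubit density matrix (4x4 nested lists),
    using the basis ordering |ab> -> index 2*a + b."""
    return [[sum(rho[2 * a + j][2 * a + l] for a in range(2)) for l in range(2)]
            for j in range(2)]

def eigs_2x2(m):
    """Closed-form eigenvalues of a real symmetric 2x2 matrix."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    s = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return [(tr + s) / 2, (tr - s) / 2]

# Bell state |Phi+> = (|00> + |11>) / sqrt(2); rho_AB = |Phi+><Phi+|.
v = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]
rho_AB = [[a * b for b in v] for a in v]

S_AB = 0.0                                         # pure state: zero entropy
S_B = entropy_bits(eigs_2x2(partial_trace_A(rho_AB)))  # maximally mixed: 1 bit
cond_ent = S_AB - S_B                              # S(A|B) = -1
```

Operationally, the negative value means the parties end up with entanglement left over after merging, rather than consuming it, which is the sense in which partial quantum information can be negative.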
Tema Con Variazioni: Quantum Channel Capacity
Channel capacity describes the size of the nearly ideal channels that can be obtained from many uses of a given channel, using an optimal error-correcting code. In this paper we collect and compare minor and major variations in the mathematically precise statements of this idea which have been put forward in the literature. We show that all the variations considered lead to equivalent capacity definitions. In particular, it makes no difference whether one requires mean or maximal errors to go to zero, and it makes no difference whether errors are required to vanish for every sequence of block sizes compatible with the rate, or only for one infinite sequence.
Comment: 32 pages, uses iopart.cls
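One direction of the mean-versus-maximal equivalence is the standard expurgation argument: if a code's mean error is ε, Markov's inequality guarantees that at most half the codewords have error above 2ε, so discarding the bad half (an asymptotically vanishing rate loss of one bit per block) yields a code whose maximal error is at most 2ε. A toy numeric illustration of that argument (the error values are made up, not taken from the paper):

```python
def expurgate(errors):
    """Keep the codewords whose error is at most twice the mean error.

    By Markov's inequality at least half of them survive, and the surviving
    code's maximal error is bounded by 2 * mean error.
    """
    mean_err = sum(errors) / len(errors)
    kept = [e for e in errors if e <= 2 * mean_err]
    return kept, mean_err

# Hypothetical per-codeword decoding error probabilities.
errors = [0.001, 0.002, 0.004, 0.05, 0.003, 0.02, 0.001, 0.08]
kept, mean_err = expurgate(errors)
# At least half the codewords survive, and max(kept) <= 2 * mean_err.
```

Thus a sequence of codes with vanishing mean error yields one with vanishing maximal error at the same rate, which is why the two error criteria define the same capacity.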