
    An update on atrial fibrillation in 2014: From pathophysiology to treatment

    Atrial fibrillation (AF) is the most frequently encountered cardiac arrhythmia. The trigger for initiation of AF is generally an enhanced vulnerability of pulmonary vein cardiomyocyte sleeves to either focal or re-entrant activity. Maintenance of AF relies on a "driver" mechanism operating in a vulnerable substrate. Cardiac mapping technology is providing further insight into these highly dynamic processes. AF can lead to electrophysiological and structural remodelling, thereby perpetuating the condition. Management includes prevention of stroke by oral anticoagulation or left atrial appendage (LAA) occlusion, upstream therapy of concomitant conditions, and symptomatic improvement using rate control and/or rhythm control. Nonpharmacological strategies include electrical cardioversion and catheter ablation. There are substantial geographical variations in the management of AF, though European data indicate that 80% of patients receive adequate anticoagulation and 79% adequate rate control. High rates of morbidity and mortality weigh against the perceived difficulties of management. Clinical research and growing experience are helping to refine clinical indications and improve technical approaches. Active research in cardiac electrophysiology is producing new antiarrhythmic agents that inhibit novel ion channels and are now reaching the experimental clinical arena. Future research should improve understanding of the underlying aetiology of AF and identify drug targets, supporting the move toward patient-specific therapy.

    Critical Assessment of Metagenome Interpretation - the second round of challenges

    Evaluating metagenomic software is key to optimizing metagenome interpretation and is the focus of the community-driven initiative for the Critical Assessment of Metagenome Interpretation (CAMI). In its second challenge, CAMI engaged the community to assess their methods on realistic and complex metagenomic datasets with long and short reads, created from ∼1,700 novel and known microbial genomes, as well as ∼600 novel plasmids and viruses. Altogether, 5,002 results from 76 program versions were analyzed, representing a 22x increase in results. Substantial improvements were seen in metagenome assembly, some due to the use of long-read data. The presence of related strains was still challenging for assembly and genome binning, as was assembly quality for the latter. Taxon profilers demonstrated a marked maturation, with profilers and binners excelling at higher bacterial taxonomic ranks but underperforming for viruses and archaea. Assessment of clinical pathogen detection techniques revealed a need to improve reproducibility. Analysis of program runtimes and memory usage identified highly efficient programs, including some that were top performers on other metrics. The CAMI II results identify current challenges, but also guide researchers in selecting methods for specific analyses. Competing Interest Statement: A.E.D. co-founded Longas Technologies Pty Ltd, a company aimed at the development of synthetic long-read sequencing technologies.
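    The abstract above describes benchmarking taxon profilers against gold-standard community compositions. As an illustrative sketch only (not CAMI's actual evaluation tooling, and with hypothetical taxa and abundances), one common family of profiling metrics compares a predicted relative-abundance profile to the ground truth at a fixed taxonomic rank, for example via L1 distance:

    ```python
    # Illustrative sketch: scoring a taxonomic profiler's output at one rank
    # by L1 distance to a gold-standard abundance profile. The taxa and
    # numbers below are hypothetical, not from the CAMI II challenge data.

    def l1_distance(truth: dict[str, float], predicted: dict[str, float]) -> float:
        """Sum of absolute abundance differences over the union of taxa.

        Ranges from 0.0 (perfect agreement) to 2.0 (disjoint profiles),
        assuming each profile's abundances sum to 1.0.
        """
        taxa = set(truth) | set(predicted)
        return sum(abs(truth.get(t, 0.0) - predicted.get(t, 0.0)) for t in taxa)

    # Hypothetical genus-level profiles (relative abundances summing to 1.0).
    gold = {"Escherichia": 0.5, "Bacteroides": 0.3, "Staphylococcus": 0.2}
    pred = {"Escherichia": 0.4, "Bacteroides": 0.35, "Listeria": 0.25}

    print(round(l1_distance(gold, pred), 2))  # prints 0.6
    ```

    Taking the union of taxa matters: it penalizes both missed taxa (false negatives, here Staphylococcus) and spurious predictions (false positives, here Listeria).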

    Assemblathon 2: evaluating de novo methods of genome assembly in three vertebrate species

    BACKGROUND: The process of generating raw genome sequence data continues to become cheaper, faster, and more accurate. However, assembly of such data into high-quality, finished genome sequences remains challenging. Many genome assembly tools are available, but they differ greatly in terms of their performance (speed, scalability, hardware requirements, acceptance of newer read technologies) and in their final output (composition of assembled sequence). More importantly, it remains largely unclear how best to assess the quality of assembled genome sequences. The Assemblathon competitions are intended to assess current state-of-the-art methods in genome assembly. RESULTS: In Assemblathon 2, we provided a variety of sequence data to be assembled for three vertebrate species (a bird, a fish, and a snake). This resulted in a total of 43 submitted assemblies from 21 participating teams. We evaluated these assemblies using a combination of optical map data, Fosmid sequences, and several statistical methods. From over 100 different metrics, we chose ten key measures by which to assess the overall quality of the assemblies. CONCLUSIONS: Many current genome assemblers produced useful assemblies, containing a significant representation of their genes and overall genome structure. However, the high degree of variability between the entries suggests that there is still much room for improvement in the field of genome assembly, and that approaches that work well in assembling the genome of one species may not necessarily work well for another.
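    The abstract above mentions choosing ten key measures from over 100 assembly metrics, without naming them. As a hedged illustration only (the specific metrics chosen by Assemblathon 2 are not listed here), one of the most widely reported contiguity statistics for assemblies is N50, which can be computed as follows:

    ```python
    # Illustrative sketch: N50, a standard contiguity metric for genome
    # assemblies. N50 is the length L such that contigs of length >= L
    # together cover at least half of the total assembly length.
    # The contig lengths below are hypothetical.

    def n50(contig_lengths: list[int]) -> int:
        """Return the N50 of an assembly given its contig lengths."""
        lengths = sorted(contig_lengths, reverse=True)
        half_total = sum(lengths) / 2
        running = 0
        for length in lengths:
            running += length
            if running >= half_total:
                return length
        return 0  # empty input

    # Hypothetical assembly of six contigs (total 300 kb; half = 150 kb).
    print(n50([100_000, 80_000, 50_000, 40_000, 20_000, 10_000]))  # prints 80000
    ```

    Note that N50 rewards contiguity but says nothing about correctness, which is one reason evaluations like Assemblathon 2 combine it with independent evidence such as optical maps and Fosmid sequences.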
