
    Outcomes in Trials for Management of Caries Lesions (OuTMaC): protocol

    Background Clinical trials on caries lesion management use an abundance of outcomes, hampering comparison or combination of different study results and their efficient translation into clinical practice. Core outcome sets are an agreed, standardized collection of outcomes which should be measured and reported in all trials for a specific clinical area. We aim to develop a core outcome set for trials investigating the management of caries lesions in primary or permanent teeth, conducted in primary or secondary care, encompassing all stages of disease. Methods To identify existing outcomes, trials on prevention and trials on management of caries lesions will be screened systematically in four databases. Screening, extraction and deduplication will be performed by two researchers until consensus is reached. The definition of the core outcome set will be based on an e-Delphi consensus process involving key stakeholders, namely patients, dentists, clinical researchers, health economists, statisticians, policy-makers and industry representatives. For the first stage of the Delphi process, a patient panel and a separate panel consisting of researchers, clinicians, teachers, industry-affiliated researchers, policy-makers and other interested parties will be held. An inclusive approach will be taken to involve panelists from a wide variety of socio-economic and geographic backgrounds. Results from the first round will be summarized and fed back to individuals for the second round, where the panels will be combined and allowed to modify their scoring in light of the full panel's opinion. The necessity for a third round will depend on the outcome of the first two. Agreement will be measured via defined consensus rules, with up to a maximum of seven outcomes retained. If resources allow, we will investigate features that influence decision making for different groups.
    Discussion By using an explicit, transparent and inclusive multi-step consensus process, the planned core outcome set should be justifiable, relevant and comprehensive. The dissemination and application of this core outcome set should improve clinical trials on managing caries lesions and allow comparison, synthesis and implementation of scientific data. Trial registration Registered 12 April 2015 at COMET (http://www.comet-initiative.org).
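The kind of Delphi consensus rule the protocol mentions can be sketched as follows. The 1-9 scoring scale, the 70%/15% thresholds, and the ranking-by-mean tiebreak are illustrative assumptions commonly seen in core outcome set work, not the rules actually defined by the OuTMaC protocol.

```python
# Illustrative Delphi consensus rule (assumed thresholds, not OuTMaC's own):
# an outcome reaches "consensus in" if >= 70% of panelists rate it 7-9
# and < 15% rate it 1-3, on a 1-9 importance scale.

def consensus_in(scores, pct_high=0.70, pct_low=0.15):
    n = len(scores)
    frac_high = sum(s >= 7 for s in scores) / n
    frac_low = sum(s <= 3 for s in scores) / n
    return frac_high >= pct_high and frac_low < pct_low

def select_core_set(ratings, max_outcomes=7):
    """Keep outcomes meeting the consensus rule, ranked by mean score,
    capped at the protocol's maximum of seven outcomes."""
    kept = {o: s for o, s in ratings.items() if consensus_in(s)}
    ranked = sorted(kept, key=lambda o: -sum(kept[o]) / len(kept[o]))
    return ranked[:max_outcomes]

# Hypothetical panel ratings for three candidate outcomes
ratings = {
    "lesion progression": [8, 9, 7, 8, 9, 7, 8, 9, 6, 8],
    "pain":               [9, 8, 8, 7, 9, 8, 7, 8, 8, 9],
    "cost":               [5, 4, 6, 3, 7, 5, 4, 6, 5, 4],
}
print(select_core_set(ratings))  # -> ['pain', 'lesion progression']
```

In a real e-Delphi, the per-round feedback step would recompute these summaries between rounds; the sketch only shows the final scoring rule.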

    From cheek swabs to consensus sequences: an A to Z protocol for high-throughput DNA sequencing of complete human mitochondrial genomes

    Background: Next-generation DNA sequencing (NGS) technologies have made huge impacts in many fields of biological research, but especially in evolutionary biology. One area where NGS has shown potential is high-throughput sequencing of complete mtDNA genomes (of humans and other animals). Despite the increasing use of NGS technologies and a better appreciation of their importance in answering biological questions, there remain significant obstacles to the successful implementation of NGS-based projects, especially for new users. Results: Here we present an ‘A to Z’ protocol for obtaining complete human mitochondrial (mtDNA) genomes, from DNA extraction to consensus sequence. Although designed for use on humans, this protocol could also be used to sequence small, organellar genomes from other species, as well as nuclear loci. This protocol includes DNA extraction, PCR amplification, fragmentation of PCR products, barcoding of fragments, sequencing using the 454 GS FLX platform, and a complete bioinformatics pipeline (primer removal, reference-based mapping, output of coverage plots and SNP calling). Conclusions: All steps in this protocol are designed to be straightforward to implement, especially for researchers who are undertaking next-generation sequencing for the first time. The molecular steps are scalable to large numbers (hundreds) of individuals, and all steps post-DNA extraction can be carried out in 96-well plate format. Also, the protocol has been assembled so that individual ‘modules’ can be swapped out to suit available resources.
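The final step of such a pipeline, calling a consensus sequence from mapped reads, can be sketched in a few lines. The majority-vote rule and the minimum-coverage threshold below are simplifying assumptions for illustration; they are not the actual commands or parameters of the 454 GS FLX pipeline described in the protocol.

```python
# Minimal sketch of consensus calling from a per-position read pileup:
# take the majority base at each reference position, emitting 'N' where
# coverage falls below an assumed threshold.
from collections import Counter

def call_consensus(pileup, min_coverage=3):
    """pileup: list of per-position base lists (one list per reference
    position). Returns the consensus string, with 'N' for low coverage."""
    consensus = []
    for bases in pileup:
        if len(bases) < min_coverage:
            consensus.append("N")
        else:
            consensus.append(Counter(bases).most_common(1)[0][0])
    return "".join(consensus)

# Toy pileup over four reference positions
pileup = [
    ["A", "A", "A", "G"],   # majority A
    ["C", "C", "C"],        # majority C
    ["T", "G"],             # coverage 2 < 3 -> N
    ["G", "G", "G", "G"],   # unanimous G
]
print(call_consensus(pileup))  # -> ACNG
```

Real reference-based mappers additionally handle indels, base qualities, and strand bias; the sketch only captures the majority-vote idea.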

    International consensus on natural orifice specimen extraction surgery (NOSES) for colorectal cancer

    In recent years, natural orifice specimen extraction surgery (NOSES) in the treatment of colorectal cancer has attracted widespread attention. The potential benefits of NOSES, including reductions in postoperative pain and wound complications, less use of postoperative analgesics, faster recovery of bowel function, shorter length of hospital stay, and better cosmetic and psychological effects, have been described in colorectal surgery. Although a significant decrease in the surgical trauma of NOSES has been observed, the potential pitfalls of this technique have also been demonstrated. In particular, several issues, including bacteriological concerns, oncological outcomes and patient selection, are raised by this new technique. Therefore, it is urgent and necessary to reach a consensus as an industry guideline to standardize the implementation of NOSES in colorectal surgery. After three rounds of discussion by all members of the International Alliance of NOSES, the consensus was finally completed, which is also of great significance to the long-term progress of NOSES worldwide.

    Obtaining the consensus of multiple correspondences between graphs through online learning.

    In structural pattern recognition, it is usual to compare a pair of objects through the generation of a correspondence between the elements of each of their local parts. To do so, one of the most natural ways to represent these objects is through attributed graphs. Several existing graph extraction methods could be applied and thus numerous graphs, which may differ not only in their node and edge structure but also in their attribute domains, could be created from the same object. Afterwards, a matching process is applied to generate the correspondence between two attributed graphs, and depending on the selected graph matching method, a unique correspondence is generated from a given pair of attributed graphs. The combination of these factors leads to a potentially large number of correspondences between the two original objects. This paper presents a method that tackles this problem by combining multiple correspondences into a single one, called a consensus correspondence, eliminating the incongruences introduced by both the graph extraction and the graph matching processes. Additionally, through the application of an online learning algorithm, it is possible to deduce weights that influence the generation of the consensus correspondence. This means that the algorithm automatically learns the quality of both the attribute domain and the correspondence for every initial correspondence proposal to be considered in the consensus, and defines a set of weights based on this quality. It is shown that the method automatically tends to assign larger values to high-quality initial proposals, and is therefore capable of deducing better consensus correspondences.
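The core idea of combining weighted correspondence proposals can be sketched as weighted voting per source node. The data layout and the simple argmax voting rule below are assumptions for illustration; the paper's actual consensus construction and online weight updates are more involved.

```python
# Sketch of a weighted consensus correspondence: each proposal maps source
# nodes to target nodes, each proposal carries a (learned) quality weight,
# and every source node is assigned the target with the largest total weight.
from collections import defaultdict

def consensus_correspondence(proposals, weights):
    """proposals: list of dicts {source_node: target_node}.
    weights: one quality weight per proposal (e.g. learned online)."""
    votes = defaultdict(lambda: defaultdict(float))
    for mapping, w in zip(proposals, weights):
        for src, tgt in mapping.items():
            votes[src][tgt] += w
    return {src: max(tgts, key=tgts.get) for src, tgts in votes.items()}

# Three hypothetical correspondence proposals between the same two objects
p1 = {"a": 1, "b": 2, "c": 3}
p2 = {"a": 1, "b": 4, "c": 3}
p3 = {"a": 5, "b": 2, "c": 3}
result = consensus_correspondence([p1, p2, p3], weights=[0.5, 0.3, 0.2])
print(result)  # -> {'a': 1, 'b': 2, 'c': 3}
```

With these weights, the outlier assignments in p2 and p3 are outvoted, which is exactly the incongruence-elimination effect the abstract describes.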

    Measuring Accuracy of Triples in Knowledge Graphs

    An increasing number of large-scale knowledge graphs have been constructed in recent years. These graphs are often created by text-based extraction, which can be very noisy. So far, cleaning knowledge graphs has mostly been carried out by human experts and is thus very inefficient. It is necessary to explore automatic methods for identifying and eliminating erroneous information. To achieve this, previous approaches primarily rely on internal information, i.e. the knowledge graph itself. In this paper, we introduce an automatic approach, Triples Accuracy Assessment (TAA), for validating RDF triples (source triples) in a knowledge graph by finding a consensus of matched triples (among target triples) from other knowledge graphs. TAA uses knowledge graph interlinks to find identical resources and applies different matching methods between the predicates of source triples and target triples. Then, based on the matched triples, TAA calculates a confidence score to indicate the correctness of a source triple. In addition, we present an evaluation of our approach using the FactBench dataset for fact validation. Our findings show promising results for distinguishing between correct and wrong triples.
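The shape of such a confidence score can be sketched as the agreement rate among matched target triples. The string-normalization comparison and simple fraction below are illustrative assumptions, not TAA's actual matching methods or scoring formula.

```python
# Sketch of a cross-knowledge-graph confidence score: the fraction of matched
# target triples (from other KGs) whose object agrees with the source triple's
# object after a naive normalization. Assumed logic, not TAA's exact method.

def triple_confidence(source_obj, matched_objects):
    """Return agreement fraction in [0, 1]; 0.0 if nothing was matched."""
    if not matched_objects:
        return 0.0
    agree = sum(
        1 for o in matched_objects
        if o.strip().lower() == source_obj.strip().lower()
    )
    return agree / len(matched_objects)

# e.g. validating the object of a hypothetical (Berlin, capitalOf, Germany)
# source triple against objects found in three other knowledge graphs
score = triple_confidence("Germany", ["germany", "Germany ", "France"])
print(score)  # 2 of 3 matched targets agree
```

A threshold on this score would then separate likely-correct from likely-wrong source triples, which is the decision the evaluation on FactBench measures.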

    Optimization of miRNA-seq data preprocessing.

    The past two decades of microRNA (miRNA) research have solidified the role of these small non-coding RNAs as key regulators of many biological processes and promising biomarkers for disease. The concurrent development of high-throughput profiling technology has further advanced our understanding of the impact of their dysregulation on a global scale. Currently, next-generation sequencing is the platform of choice for the discovery and quantification of miRNAs. Despite this, there is no clear consensus on how the data should be preprocessed before conducting downstream analyses. Often overlooked, data preprocessing is an essential step in data analysis: the presence of unreliable features and noise can affect the conclusions drawn from downstream analyses. Using a spike-in dilution study, we evaluated the effects of several general-purpose aligners (BWA, Bowtie, Bowtie 2 and Novoalign) and normalization methods (counts-per-million, total count scaling, upper-quartile scaling, trimmed mean of M-values, DESeq, linear regression, cyclic loess and quantile) with respect to the final miRNA count data distribution, variance, bias and accuracy of differential expression analysis. We make practical recommendations on the optimal preprocessing methods for the extraction and interpretation of miRNA count data from small RNA-sequencing experiments.
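Two of the simpler normalization methods compared above can be written out directly; a real analysis would use the established implementations (e.g. in edgeR or DESeq), so this sketch only shows what the transformations compute, and the percentile convention used is one of several possible choices.

```python
# Illustrative implementations of two count-normalization methods:
# counts-per-million and upper-quartile scaling.

def cpm(counts):
    """Counts-per-million: scale a sample's counts to sum to one million."""
    total = sum(counts)
    return [c / total * 1e6 for c in counts]

def upper_quartile_factor(counts):
    """Upper-quartile scaling factor: the 75th percentile of the nonzero
    counts (nearest-rank convention; implementations differ on this)."""
    nz = sorted(c for c in counts if c > 0)
    return nz[int(0.75 * (len(nz) - 1))]

# Toy miRNA count vector for one sample
sample = [10, 0, 40, 50, 100]
print([round(x) for x in cpm(sample)])  # -> [50000, 0, 200000, 250000, 500000]
print(upper_quartile_factor(sample))    # -> 50
```

Dividing each sample by its upper-quartile factor (instead of its total) makes the normalization robust to a few very highly expressed miRNAs dominating the library size.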

    Algorithms For Extracting Timeliness Graphs

    We consider asynchronous message-passing systems in which some links are timely and processes may crash. Each run defines a timeliness graph among correct processes: (p, q) is an edge of the timeliness graph if the link from p to q is timely (that is, there is a bound on communication delays from p to q). The main goal of this paper is to approximate this timeliness graph by graphs having certain properties (such as being trees, rings, ...). Given a family S of graphs, for runs in which the timeliness graph contains at least one graph in S, an extraction algorithm requires each correct process to converge to the same graph in S, which is, in a precise sense, an approximation of the timeliness graph of the run. For example, if the timeliness graph contains a ring, then using an extraction algorithm, all correct processes eventually converge to the same ring, and in this ring all nodes will be correct processes and all links will be timely. We first present a general extraction algorithm and then a more specific extraction algorithm that is communication-efficient (i.e., eventually all the messages of the extraction algorithm use only links of the extracted graph).
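The convergence requirement can be illustrated by the deterministic selection at the heart of such an algorithm: once every correct process holds the same estimate of the timeliness graph, each one picks the same member of S by a fixed rule (here, the lexicographically smallest). The failure detection and message exchange that build that estimate are omitted; this is a sketch of why processes agree, not the paper's algorithm.

```python
# Sketch of deterministic graph extraction: among candidate graphs from the
# family S whose edges are all timely, every process picks the same one by
# applying an identical tie-breaking rule (lexicographic order on edge lists).

def extract(timeliness_edges, family):
    """Return the lexicographically smallest graph in `family` contained in
    the timeliness graph, or None if none of them is contained."""
    candidates = [g for g in family if set(g) <= set(timeliness_edges)]
    return min((sorted(g) for g in candidates), default=None)

# Timely links observed in a run over processes 1, 2, 3
timely = {(1, 2), (2, 3), (3, 1), (1, 3)}
# Family S: the two directed 3-rings over these processes
rings = [[(1, 2), (2, 3), (3, 1)], [(1, 3), (3, 2), (2, 1)]]
print(extract(timely, rings))  # -> [(1, 2), (2, 3), (3, 1)]
```

Because the rule is a pure function of the (eventually agreed) timeliness estimate, all correct processes converge to the same extracted graph, matching the paper's correctness requirement.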