
    Sampling-Based Query Re-Optimization

    Despite decades of work, query optimizers still make mistakes on "difficult" queries because of bad cardinality estimates, often due to the interaction of multiple predicates and correlations in the data. In this paper, we propose a low-cost post-processing step that can take a plan produced by the optimizer, detect when it is likely to have made such a mistake, and take steps to fix it. Specifically, our solution is a sampling-based iterative procedure that requires almost no changes to the original query optimizer or query evaluation mechanism of the system. We show that this indeed imposes low overhead and catches cases where three widely used optimizers (PostgreSQL and two commercial systems) make large errors. Comment: This is the extended version of a paper with the same title and authors that appears in the Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD 2016).
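    The abstract describes an iterative validate-and-re-plan loop: sample intermediate results to check the optimizer's cardinality estimates and re-plan with corrected estimates when they are badly off. The sketch below only illustrates that loop; the optimizer.plan, subplans, estimated_card, sample_card and signature helpers are hypothetical placeholders, not the paper's or any system's actual interface.

        # Hypothetical sketch of sampling-based query re-optimization (Python).
        def reoptimize(query, optimizer, max_iters=5, error_threshold=10.0):
            """Validate cardinality estimates by sampling; re-plan until they hold."""
            hints = {}                                   # corrected cardinalities fed back in
            plan = optimizer.plan(query, hints)
            for _ in range(max_iters):
                corrections = {}
                for node in plan.subplans():             # intermediate results of the plan
                    est = node.estimated_card()
                    obs = node.sample_card()             # cheap sample-based estimate
                    ratio = max(est, obs) / max(min(est, obs), 1)
                    if ratio > error_threshold:          # estimate is off by a large factor
                        corrections[node.signature()] = obs
                if not corrections:                      # all estimates look plausible
                    return plan
                hints.update(corrections)
                plan = optimizer.plan(query, hints)      # re-optimize with better estimates
            return plan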

    Ultra-Low-Power Superconductor Logic

    We have developed a new superconducting digital technology, Reciprocal Quantum Logic, that uses AC power carried on a transmission line, which also serves as a clock. Using simple experiments we have demonstrated zero static power dissipation, thermally limited dynamic power dissipation, high clock stability, high operating margins, and low bit error rates. These features indicate that the technology is scalable to far more complex circuits at a significant level of integration. At the system level, Reciprocal Quantum Logic combines the high speed and low-power signal levels of Single-Flux-Quantum signals with the design methodology of CMOS, including low static power dissipation, low-latency combinational logic, and efficient device count. Comment: 7 pages, 5 figures

    The influence of age on tooth supported fixed prosthetic restoration longevity. A systematic review

    OBJECTIVE: The purpose of this study was to investigate the possible influence of age on the longevity of tooth supported fixed prosthetic restorations, using a systematic review process. DATA SOURCES: To identify relevant papers, an electronic search was made using various databases (MEDLINE via Pubmed, EMBASE, The Cochrane Register of RCTs, and the database of abstracts of Reviews of Effects-DARE), augmented by hand searching of key prosthodontic journals (International Journal of Prosthodontics, Journal of Prosthetic Dentistry and Journal of Prosthodontics) and reference cross-checking. STUDY SELECTION: Assessment and selection of the studies identified were conducted in a two-phase procedure by two independent reviewers using specific inclusion and exclusion criteria. The minimum mean follow-up time was set at 5 years. RESULTS: The initial database search yielded 513 relevant titles. After the subsequent filtering process, 22 articles were selected for full-text analysis, finally resulting in 11 studies that met the inclusion criteria. All studies were classified as category C according to the strength of evidence. Meta-analysis was not possible due to the non-uniformity of the data available. The final studies presented conflicting results. The majority did not report a statistically significant effect of age on fixed prosthesis survival, whilst one study reported a poorer prognosis for elderly patients and two studies reported a poorer prognosis for middle-aged patients. CONCLUSIONS: The results of this systematic review showed that increased patient age should not be considered a risk factor for the survival of fixed prostheses. Although the majority of studies did not show any effect of age on the survival of fixed prostheses, there was some evidence that middle-aged patients may present with higher failure rates.

    Avoiding the pitfalls of gene set enrichment analysis with SetRank.

    The purpose of gene set enrichment analysis (GSEA) is to find general trends in the huge lists of genes or proteins generated by many functional genomics techniques and bioinformatics analyses. Here we present SetRank, an advanced GSEA algorithm that is able to eliminate many false positive hits. The key principle of the algorithm is that it discards gene sets that were initially flagged as significant if their significance is due only to their overlap with another gene set. The algorithm is explained in detail and its performance is compared to that of other methods using objective benchmarking criteria. Furthermore, we explore how sample source bias can affect the results of a GSEA. The benchmarking results show that SetRank is a highly specific tool for GSEA, and we show that the reliability of results can be improved by taking sample source bias into account. SetRank is available as an R package and through an online web interface.
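    The key principle above (discard a set whose apparent significance is explained entirely by overlap with another, stronger set) can be illustrated with a small sketch. This is a deliberately simplified illustration using a plain hypergeometric test; it is not the SetRank R package's actual algorithm, and the function names are invented for the example.

        # Simplified illustration of overlap-aware pruning of enriched gene sets (Python).
        from scipy.stats import hypergeom

        def enrichment_p(hits, gene_set, background):
            """Hypergeometric enrichment p-value for one gene set."""
            overlap = len(hits & gene_set)
            return hypergeom.sf(overlap - 1, len(background), len(gene_set), len(hits))

        def prune_overlap_driven_sets(hits, gene_sets, background, alpha=0.05):
            """Keep only sets that stay significant after removing genes already
            explained by a more significant set."""
            scored = {name: enrichment_p(hits, gs, background) for name, gs in gene_sets.items()}
            significant = [n for n, p in sorted(scored.items(), key=lambda kv: kv[1]) if p <= alpha]
            kept = []
            for name in significant:
                reduced = set(gene_sets[name])
                for better in kept:                      # strip genes shared with stronger sets
                    reduced -= gene_sets[better]
                if enrichment_p(hits, reduced, background) <= alpha:
                    kept.append(name)                    # still significant on its own genes
            return kept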

    Coev-web: a web platform designed to simulate and evaluate coevolving positions along a phylogenetic tree.

    BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models that simulate each position independently. These approaches are not appropriate for assessing the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogeny-based methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for the generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
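    As a rough illustration of what simulating a pair of dependent positions along a tree involves, the toy sketch below evolves the two positions as a single joint Markov state and favours substitutions that land on a set of co-occurring pairs. The profile, rates and tree encoding are invented for the example and do not reflect the actual Coev model or the Coev-web implementation.

        # Toy simulation of two co-evolving nucleotide positions along a tree (Python).
        import random

        NUC = "ACGT"
        PROFILE = {("A", "T"), ("G", "C")}     # hypothetical favoured (co-evolving) pairs
        RATE_TO_PROFILE, RATE_OTHER = 2.0, 0.5

        def neighbours(pair):
            """All pairs reachable by changing exactly one of the two positions."""
            x, y = pair
            return [(n, y) for n in NUC if n != x] + [(x, n) for n in NUC if n != y]

        def simulate_branch(pair, length):
            """Gillespie-style simulation of the joint pair state along one branch."""
            t = 0.0
            while True:
                rates = [(RATE_TO_PROFILE if q in PROFILE else RATE_OTHER, q)
                         for q in neighbours(pair)]
                total = sum(r for r, _ in rates)
                t += random.expovariate(total)
                if t > length:
                    return pair
                r = random.uniform(0, total)
                for rate, q in rates:              # pick a substitution proportional to rate
                    r -= rate
                    if r <= 0:
                        pair = q
                        break

        def simulate_tree(tree, pair):
            """tree = (branch_length, [children]); leaves have an empty child list."""
            length, children = tree
            pair = simulate_branch(pair, length)
            if not children:
                return [pair]
            return [leaf for child in children for leaf in simulate_tree(child, pair)]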

    Measuring co-authorship and networking-adjusted scientific impact

    Appraisal of the scientific impact of researchers, teams and institutions with productivity and citation metrics has major repercussions. Funding and promotion of individuals and the survival of teams and institutions depend on publications and citations. In this competitive environment, the number of authors per paper is increasing, and apparently some co-authors do not satisfy authorship criteria. Listing of individual contributions is still sporadic and is also open to manipulation. Metrics are needed to measure networking intensity for a single scientist or group of scientists, accounting for patterns of co-authorship. Here, I define I1 for a single scientist as the number of authors who appear in at least I1 papers of the specific scientist. For a group of scientists or an institution, In is defined as the number of authors who appear in at least In papers that bear the affiliation of the group or institution. I1 depends on the number of papers authored, Np. The power exponent R of the relationship between I1 and Np categorizes scientists as solitary (R > 2.5), nuclear (R = 2.25-2.5), networked (R = 2-2.25), extensively networked (R = 1.75-2) or collaborators (R < 1.75). R may be used to adjust the citation impact of a scientist for co-authorship networking. In similarly provides a simple measure of effective networking size for adjusting the citation impact of groups or institutions. Empirical data are provided for single scientists and institutions for the proposed metrics. Cautious adoption of adjustments for co-authorship and networking in scientific appraisals may offer incentives for more accountable co-authorship behaviour in published articles. Comment: 25 pages, 5 figures
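    The definition of I1 is h-index-like: the largest k such that at least k distinct co-authors each appear on at least k of the scientist's papers. A short illustration of computing it from a list of author lists follows; the data are hypothetical, not from the paper.

        # Computing I1 from a scientist's papers (Python).
        from collections import Counter

        def i1(papers, scientist):
            """papers: list of author lists, each containing `scientist`."""
            counts = Counter(a for authors in papers for a in authors if a != scientist)
            freqs = sorted(counts.values(), reverse=True)   # co-author paper counts, descending
            k = 0
            while k < len(freqs) and freqs[k] >= k + 1:
                k += 1
            return k

        # Hypothetical example: 4 papers by scientist "X".
        papers = [["A", "B", "X"], ["A", "C", "X"], ["A", "X"], ["B", "X"]]
        print(i1(papers, "X"))   # A appears 3 times, B twice, C once -> I1 = 2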

    Resource Allocation for Multiple Concurrent In-Network Stream-Processing Applications

    This paper investigates the operator mapping problem for in-network stream-processing applications. In-network stream processing amounts to applying one or more trees of operators, in steady state, to multiple data objects that are continuously updated at different locations in the network. The goal is to compute some final data at some desired rate. Different operator trees may share common subtrees, so it may be possible to reuse some intermediate results across application trees. The first contribution of this work is to provide complexity results for different instances of the basic problem, as well as integer linear program formulations of various problem instances. The second contribution is the design of several polynomial-time heuristics. One of the primary objectives of these heuristics is to reuse intermediate results shared by multiple applications. Our quantitative comparison of these heuristics in simulation demonstrates the importance of choosing appropriate processors for operator mapping. It also allows us to identify a heuristic that achieves good results in practice.
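    To make the reuse objective concrete, here is a small, hypothetical illustration (not one of the paper's heuristics): identical subtrees across application trees are deduplicated by structural hashing so each shared intermediate result is computed once, and the unique operators are then placed greedily on the least-loaded processor.

        # Toy sketch of subtree reuse plus greedy operator placement (Python).
        def collect_unique_ops(tree, seen):
            """tree = (op_name, cost, [children]); returns the subtree's structural key."""
            op, cost, children = tree
            key = (op, tuple(collect_unique_ops(c, seen) for c in children))
            if key not in seen:
                seen[key] = cost          # a shared subtree is recorded (and costed) only once
            return key

        def greedy_mapping(app_trees, num_procs):
            unique_ops = {}
            for tree in app_trees:
                collect_unique_ops(tree, unique_ops)
            load = [0.0] * num_procs
            mapping = {}
            # place the heaviest operators first, each on the currently least-loaded processor
            for key, cost in sorted(unique_ops.items(), key=lambda kv: -kv[1]):
                proc = min(range(num_procs), key=lambda p: load[p])
                load[proc] += cost
                mapping[key] = proc
            return mapping, load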