
    Semantically annotated hypermedia services

    Researchers in hypermedia systems investigate various approaches to the way documents and resources are linked, navigated, and stored in a distributed environment. Unfortunately, those systems fail to provide easily usable discrete services, since it is difficult both to discover and to invoke them. This paper proposes the use of emerging technologies that augment Web resources with semantics in order to provide hypermedia services that can be easily discovered and integrated by third-party developers. In this context, we analyze the benefits to the hypermedia community of adopting Semantic Web technologies for the description of hypermedia services, and we implement an initial corresponding ontology.

    Data-Oriented Transaction Execution

    While hardware technology has undergone major advancements over the past decade, transaction processing systems have remained largely unchanged. The number of cores on a chip grows exponentially, following Moore's Law, allowing an ever-increasing number of transactions to execute in parallel. As the number of concurrently executing transactions increases, contended critical sections become scalability burdens. In typical transaction processing systems the centralized lock manager is often the first contended component and scalability bottleneck. In this paper, we identify the conventional thread-to-transaction assignment policy as the primary cause of contention. We then design DORA, a system that decomposes each transaction into smaller actions and assigns actions to threads based on which data each action is about to access. This allows each thread to access mostly thread-local data structures, minimizing interaction with the contention-prone centralized lock manager. Built on top of a conventional storage engine, DORA's design maintains all the ACID properties. Evaluation of a prototype implementation of DORA on a multicore system demonstrates that DORA attains up to 4.6x higher throughput than the state-of-the-art storage engine when running a variety of OLTP workloads, such as TPC-C, TPC-B, and Nokia's TM1.
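
    A minimal sketch of the thread-to-data idea described above, assuming a hash-partitioned key-value store. The partitioning rule, queue layout, and names are illustrative only; DORA's actual routing, logging, and ACID machinery are not shown:

        import queue
        import threading

        NUM_PARTITIONS = 4

        # Each worker thread owns one partition and a private action queue;
        # only the owning thread touches its partition's data, so these
        # accesses never consult a centralized lock manager.
        queues = [queue.Queue() for _ in range(NUM_PARTITIONS)]
        partitions = [dict() for _ in range(NUM_PARTITIONS)]

        def partition_of(key):
            return hash(key) % NUM_PARTITIONS  # illustrative routing rule

        def worker(pid):
            while (action := queues[pid].get()) is not None:  # None = shutdown
                op, key, value = action
                if op == "put":
                    partitions[pid][key] = value
                elif op == "get":
                    print(key, "->", partitions[pid].get(key))

        def submit_transaction(actions):
            # Decompose the transaction into actions and route each action to
            # the thread owning the data it touches (thread-to-data rather
            # than thread-to-transaction assignment).
            for action in actions:
                queues[partition_of(action[1])].put(action)

        threads = [threading.Thread(target=worker, args=(p,))
                   for p in range(NUM_PARTITIONS)]
        for t in threads:
            t.start()
        submit_transaction([("put", "acct:1", 100),
                            ("put", "acct:2", 200),
                            ("get", "acct:1", None)])
        for q in queues:
            q.put(None)
        for t in threads:
            t.join()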

    Shore-MT: A Scalable Storage Manager for the Multicore Era

    Database storage managers have long been able to efficiently handle multiple concurrent requests. Until recently, however, a computer contained only a few single-core CPUs, and therefore only a few transactions could simultaneously access the storage manager's internal structures. This allowed storage managers to use non-scalable approaches without penalty. With the arrival of multicore chips, however, this situation is rapidly changing. More and more threads can run in parallel, stressing the internal scalability of the storage manager. Systems optimized for high performance at a limited number of cores are not assured similarly high performance at higher core counts, because unanticipated scalability obstacles arise. We benchmark four popular open-source storage managers (Shore, BerkeleyDB, MySQL, and PostgreSQL) on a modern multicore machine and find that they all suffer in terms of scalability. We briefly examine the bottlenecks in the various storage engines. We then present Shore-MT, a multithreaded and highly scalable version of Shore, which we developed by identifying and successively removing internal bottlenecks. Compared with the other systems, Shore-MT exhibits superior scalability and 2-4 times higher absolute throughput. We also show that designers should favor scalability over single-thread performance, and we highlight important principles for writing scalable storage engines, illustrated with real examples from the development of Shore-MT.
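
    One of the principles above, favoring scalability over raw single-thread speed, often amounts to trading one hot global lock for many finer-grained ones. A hedged sketch of that general pattern (lock striping; this is not Shore-MT's actual code):

        import threading

        STRIPES = 16

        # Instead of one global lock serializing every insert, each stripe
        # protects a slice of the structure, so threads touching different
        # keys rarely contend. Per-operation cost is slightly higher (an
        # extra hash and indirection), but throughput scales with cores.
        stripe_locks = [threading.Lock() for _ in range(STRIPES)]
        buckets = [dict() for _ in range(STRIPES)]

        def striped_put(key, value):
            s = hash(key) % STRIPES
            with stripe_locks[s]:
                buckets[s][key] = value

        striped_put("page:42", b"payload")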

    Database Servers on Chip Multiprocessors: Limitations and Opportunities

    Prior research shows that database system performance is dominated by off-chip data stalls, resulting in a concerted effort to bring data into on-chip caches. At the same time, high levels of integration have enabled the advent of chip multiprocessors and increasingly large (and slow) on-chip caches. These two trends pose the imminent technical and research challenge of adapting high-performance data management software to a shifting hardware landscape. In this paper we characterize the performance of a commercial database server running on emerging chip multiprocessor technologies. We find that the major bottleneck of current software is data cache stalls, with L2 hit stalls rising from oblivion to become the dominant execution time component in some cases. We analyze the source of this shift and derive a list of features for future database designs to attain maximum performance.

    An Analysis of Database System Performance on Chip Multiprocessors

    Prior research shows that database system performance is dominated by off-chip data stalls, resulting in a concerted effort to bring data into on-chip caches. At the same time, high levels of integration have enabled the advent of chip multiprocessors and increasingly large (and slow) on-chip caches. These two trends pose the imminent technical and research challenge of adapting high-performance data management software to a shifting hardware landscape. In this paper we characterize the performance of a commercial database server running on emerging chip multiprocessor technologies. We find that the major bottleneck of current software is data cache stalls, with L2 hit stalls rising from oblivion to become the dominant execution time component in some cases. We analyze the source of this shift and derive a list of features for future database designs to attain maximum performance. To this end, we propose the adoption of staged database system designs to achieve high performance on chip multiprocessors. We present the basic principles of staged databases and an initial implementation of such a system, called Cordoba.
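
    The abstract does not reproduce Cordoba's design; the toy pipeline below only illustrates the staged principle it names, with hypothetical stage names. Each stage owns a queue and repeatedly runs the same code over batches of work, which is what improves instruction and data cache locality on a chip multiprocessor:

        import queue
        import threading

        filter_q, agg_q = queue.Queue(), queue.Queue()

        def scan_stage(rows, batch_size=4):
            # Producer: emit rows to the next stage in small batches.
            for i in range(0, len(rows), batch_size):
                filter_q.put(rows[i:i + batch_size])
            filter_q.put(None)  # end-of-stream marker

        def filter_stage():
            # Each stage loops over its queue, executing the same tight
            # code path again and again -- friendly to the L1/L2 caches.
            while (batch := filter_q.get()) is not None:
                agg_q.put([r for r in batch if r % 2 == 0])
            agg_q.put(None)

        def agg_stage(result):
            total = 0
            while (batch := agg_q.get()) is not None:
                total += sum(batch)
            result.append(total)

        result = []
        stages = [
            threading.Thread(target=scan_stage, args=(list(range(100)),)),
            threading.Thread(target=filter_stage),
            threading.Thread(target=agg_stage, args=(result,)),
        ]
        for t in stages:
            t.start()
        for t in stages:
            t.join()
        print(result[0])  # sum of even numbers below 100 -> 2450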

    Comparative efficacy of materials used in patients undergoing pulpotomy or direct pulp capping in carious teeth: A systematic review and meta-analysis.

    OBJECTIVES Different materials have been used for capping the pulp after exposure during caries removal in permanent teeth. The purpose of this study was to collate and analyze all pertinent evidence from randomized controlled trials (RCTs) on different materials used in patients undergoing pulpotomy or direct pulp capping in carious teeth. MATERIALS AND METHODS Trials comparing two or more capping agents used for direct pulp capping (DPC) or pulpotomy were considered eligible. An electronic search of four databases and two clinical trial registries was carried out up to February 28, 2021, using a search strategy adapted to the PICO framework. Screening, data extraction, and risk of bias (RoB) assessment of primary studies were performed independently and in duplicate. The primary outcome was clinical and radiological success; secondary outcomes included continued root formation, tooth discoloration, and dentin bridge formation. RESULTS 21 RCTs were included in the study. The RoB assessment indicated a moderate risk among the studies. Due to significant clinical and statistical heterogeneity among the studies, performing network meta-analysis (NMA) was not possible. An ad hoc subgroup analysis revealed strong evidence of higher success of DPC with Mineral Trioxide Aggregate (MTA) compared to calcium hydroxide (CH) (odds ratio [OR] = 3.10, 95% confidence interval [CI]: 1.66-5.79). MTA performed better than CH in pulp capping (both DPC and pulpotomy) of mature compared to immature teeth (OR = 3.34, 95% CI: 1.81-6.17). The GRADE assessment revealed moderate strength of evidence for DPC and mature teeth, and low to very low strength of evidence for the remaining subgroups. CONCLUSIONS Considerable clinical and statistical heterogeneity among the trials did not allow NMA. The ad hoc subgroup analysis indicated that the clinical and radiographic success of MTA was higher than that of CH, but only in mature teeth and DPC cases, for which the strength of evidence was moderate. PROSPERO registration number: CRD42020127239.
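
    For readers unfamiliar with the statistic reported above, the short worked example below shows how an odds ratio and its 95% confidence interval are derived from a two-by-two success/failure table. The counts are hypothetical and are not data from this review:

        import math

        # Hypothetical counts (NOT the review's data): clinical success and
        # failure for DPC with MTA versus calcium hydroxide (CH).
        mta_success, mta_failure = 90, 10
        ch_success, ch_failure = 75, 25

        # Odds ratio: (a/b) / (c/d).
        or_ = (mta_success / mta_failure) / (ch_success / ch_failure)

        # 95% CI from the standard error of the log odds ratio:
        # SE = sqrt(1/a + 1/b + 1/c + 1/d).
        se = math.sqrt(1 / mta_success + 1 / mta_failure
                       + 1 / ch_success + 1 / ch_failure)
        lo = math.exp(math.log(or_) - 1.96 * se)
        hi = math.exp(math.log(or_) + 1.96 * se)

        # An OR > 1 whose CI excludes 1 favors the first material.
        print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")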

    To Share or Not To Share?

    Intuitively, aggressive work sharing among concurrent queries in a database system should always improve performance by eliminating redundant computation or data accesses. We show that, contrary to common intuition, this is not always the case in practice, especially in the highly parallel world of chip multiprocessors. As the number of cores in the system increases, a trade-off appears between exploiting work-sharing opportunities and the available parallelism. To resolve the trade-off, we develop an analytical approach that predicts the effect of work sharing in multicore systems. Database systems can use the model to determine, statically or at runtime, whether work sharing is beneficial and apply it only when appropriate. The contributions of this paper are as follows. First, we introduce and analyze the effects of the trade-off between work sharing and parallelism on database systems running complex decision-support queries. Second, we propose an intuitive and simple model that can evaluate the trade-off using real-world measurement approximations of the query execution processes. Furthermore, we integrate the model into a prototype database execution engine and demonstrate that selective work sharing according to the model outperforms never-share static schemes by 20% on average and always-share ones by 2.5x.
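
    The paper's actual model is not reproduced in the abstract; the toy calculation below, under stated simplifying assumptions (n identical queries of unit work, a shareable fraction s executed once by a single serial pipeline, perfect parallelism otherwise), is only meant to show why the trade-off flips as core counts grow:

        def time_no_sharing(n, work, k):
            # n independent queries spread evenly across k cores.
            return n * work / k

        def time_with_sharing(n, work, s, k):
            # The shared fraction s is computed once, serially, by one
            # pipeline; the remaining private work runs in parallel.
            return s * work + n * (1 - s) * work / k

        # Illustrative numbers: 16 concurrent queries, 80% shareable work.
        for k in (1, 4, 16, 64):
            t_share = time_with_sharing(16, 1.0, 0.8, k)
            t_solo = time_no_sharing(16, 1.0, k)
            verdict = "share" if t_share < t_solo else "do not share"
            print(f"{k:3d} cores: share={t_share:.2f} "
                  f"no-share={t_solo:.2f} -> {verdict}")

    At low core counts sharing wins by eliminating redundant work; at high core counts the serialized shared portion becomes the bottleneck, which is the trade-off the paper's model is designed to detect.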

    Chromosomal aberrations in breast cancer: a comparison between cytogenetics and comparative genomic hybridization

    The analysis of chromosomal imbalances in solid tumors using comparative genomic hybridization (CGH) has gained much attention. A survey of the literature suggests that CGH is more sensitive in detecting copy number aberrations than is karyotyping, although careful comparisons between CGH and cytogenetics have not been performed. Here, we compared cytogenetics and CGH in 29 invasive breast cancers after converting the karyotypes into net copy number gains and losses. We found 15 tumors (56%) with significant agreement between the two methods and 12 tumors (44%) where the methods were in disagreement (two cases failed CGH analysis). Interestingly, in 13 of the 15 tumors where the two methods were concordant, there was also a strong correlation between chromosome index and DNA index by flow cytometry. In the opposite situation, i.e., when chromosome and DNA indices did not match, there was disagreement between cytogenetics and CGH in 10 of the 12 tumors. Of the discordant cases, all except one had a "simple" abnormal karyotype. Unresolved chromosomal aberrations (marker chromosomes, homogeneously staining regions, double minutes) could not completely explain the differences between CGH and karyotyping. A likely explanation for the discrepancies is that the methods analyzed different cell populations. Gains and losses found by CGH represented the predominant (often aneuploid) clone, whereas the abnormal, near-diploid karyotypes represented minor cell clone(s), which, for unknown reasons, had a growth advantage in vitro.

    Comparative cytogenetic and DNA flow cytometric analysis of 242 primary breast carcinomas

    The cytogenetic and DNA flow cytometric findings in 242 breast carcinomas were compared. The combined use of both techniques improved the detection of abnormal cell populations from 65% by cytogenetic analysis alone and 59% by DNA flow cytometric analysis alone to 84%. Informative and comparable cytogenetic and flow cytometric data were obtained for 155 tumors. Among these 155 tumors, there was good concordance (64%) between the estimates of genomic changes by the two methods. Most discrepancies were among the DNA-diploid cases, where cytogenetic analysis detected small genomic changes. There were, however, also some exceptions in which large genomic changes detected by one method were missed by the other. Of the specific breast cancer-associated cytogenetic aberrations subjected to separate correlation analysis, polysomy for chromosome 20 was significantly associated with a high S-phase fraction, whereas loss of the long arm of chromosome 16 and/or the presence of a der(1;16) were significantly associated with a low S-phase fraction. Our data show that cytogenetic and DNA flow cytometric analyses of breast carcinomas give largely comparable results, and that combining data from both methods significantly improves the information on the genetic abnormalities in these tumors obtained by either technique alone.