10 research outputs found

    Data and performances evaluation of the SPIDIA-DNA Pan-European External Quality Assessment: 2nd SPIDIA-DNA laboratory report.

    Abstract: Within the EU-SPIDIA project (www.spidia.eu), quality parameters for blood genomic DNA were defined (SPIDIA-DNA: an External Quality Assessment for the pre-analytical phase of blood samples used for DNA-based analyses [1]; Influence of pre-analytical procedures on genomic DNA integrity in blood samples: the SPIDIA experience [2]; Combining qualitative and quantitative imaging evaluation for the assessment of genomic DNA integrity: the SPIDIA experience [3]). These DNA quality parameters were used to evaluate laboratory performance within an External Quality Assessment (EQA) (Second SPIDIA-DNA External Quality Assessment (EQA): Influence of the pre-analytical phase of blood samples on genomic DNA quality [4]). The parameters included DNA purity and yield measured by UV spectrophotometry, the presence of PCR interferences assessed with the Kineret software, and genomic DNA integrity analyzed by Pulsed Field Gel Electrophoresis. Here we present the specific laboratory report of the 2nd SPIDIA-DNA EQA as an example of data and performance evaluation.
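
    The UV-spectrophotometric purity and yield measurements mentioned above rely on standard conversions: an A260 of 1.0 corresponds to roughly 50 ng/µL of double-stranded DNA, and the A260/A280 ratio (about 1.8 for pure DNA) is the usual purity indicator. The Python sketch below illustrates only these textbook conversions; the function names, thresholds, and example readings are assumptions, not the SPIDIA-DNA EQA scoring rules.

        # Illustrative sketch of standard UV-spectrophotometric DNA quality checks.
        # The conversion factor (A260 of 1.0 ~ 50 ng/µL dsDNA) and the ~1.8 A260/A280
        # purity target are textbook values; names and thresholds here are assumptions,
        # not the SPIDIA-DNA EQA scoring rules.

        def dna_yield_ng(a260: float, dilution_factor: float, volume_ul: float) -> float:
            """Total genomic DNA yield in ng estimated from an A260 reading."""
            concentration_ng_per_ul = a260 * 50.0 * dilution_factor  # dsDNA conversion
            return concentration_ng_per_ul * volume_ul

        def purity_ratio(a260: float, a280: float) -> float:
            """A260/A280 ratio; ~1.8 is commonly taken as protein-free DNA."""
            return a260 / a280

        if __name__ == "__main__":
            a260, a280 = 0.75, 0.41  # example absorbance readings
            print(f"Yield: {dna_yield_ng(a260, dilution_factor=10, volume_ul=100):.0f} ng")
            print(f"A260/A280: {purity_ratio(a260, a280):.2f}")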

    An Empirical Study of Bots in Software Development -- Characteristics and Challenges from a Practitioner's Perspective

    Software engineering bots - automated tools that handle tedious tasks - are increasingly used by industrial and open source projects to improve developer productivity. Current research in this area is held back by a lack of consensus on what software engineering bots (DevBots) actually are, what characteristics distinguish them from other tools, and what benefits and challenges are associated with DevBot usage. In this paper we report on a mixed-method empirical study of DevBot usage in industrial practice, based on interviews with 21 developers and a survey of a total of 111 developers. We identify three different personas among DevBot users (focusing on autonomy, chat interfaces, and "smartness"), each with a different definition of what a DevBot is, why developers use them, and what they struggle with. We conclude that future DevBot research should situate its work within our framework, to clearly identify what type of bot the work targets and what advantages practitioners can expect. Further, we find that there is currently a lack of general-purpose "smart" bots that go beyond simple automation tools or chat interfaces. This is problematic, as we have seen that such bots, if available, can have a transformative effect on the projects that use them. Comment: To be published at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE).

    A New Technology for Stabilization of Biomolecules in Tissues for Combined Histological and Molecular Analyses

    For accurate diagnosis, prediction of outcome, and selection of appropriate therapies, the molecular characterization of human diseases requires analysis of a broad spectrum of altered biomolecules, in addition to morphological features, in affected tissues such as tumors. In a high-throughput screening approach, we have developed the PAXgene Tissue System as a novel tissue stabilization technology. Comprehensive characterization of this technology in stabilized and paraffin-embedded human tissues and comparison with snap-frozen tissues revealed excellent preservation of morphology and antigenicity, as well as outstanding integrity of nucleic acids (genomic DNA, miRNA, and mRNA) and phosphoproteins. Importantly, PAXgene-fixed, paraffin-embedded tissues provided RNA quantity and quality not only significantly better than that obtained with neutral buffered formalin, but also similar to that from snap-frozen tissue, which currently represents the gold standard for molecular analyses. The PAXgene tissue stabilization system thus opens new opportunities in a variety of molecular diagnostic and research applications in which the collection of snap-frozen tissue is not feasible for medical, logistic, or ethical reasons. Furthermore, this technology allows histopathological analyses to be performed together with molecular studies on a single sample, which markedly facilitates direct correlation of morphological disease phenotypes with alterations of nucleic acids and other biomolecules. (J Mol Diagn 2012, 14:458-466. http://dx.doi.org/10.1016/j.jmoldx.2012.05.002)

    A fine-grained data set and analysis of tangling in bug fixing commits

    Abstract: Context: Tangled commits are changes to software that address multiple concerns at once. For researchers interested in bugs, tangled commits mean that they actually study not only bugs, but also other concerns irrelevant for the study of bugs. Objectives: We want to improve our understanding of the prevalence of tangling and the types of changes that are tangled within bug fixing commits. Methods: We use a crowdsourcing approach to manual labeling to validate which changes contribute to the bug fix for each line in bug fixing commits. Each line is labeled by four participants; if at least three participants agree on the same label, we have consensus. Results: We estimate that between 17% and 32% of all changes in bug fixing commits modify the source code to fix the underlying problem. However, when we only consider changes to the production code files, this ratio increases to 66% to 87%. We find that about 11% of lines are hard to label, leading to active disagreements between participants. Due to confirmed tangling and the uncertainty in our data, we estimate that 3% to 47% of the data is noisy without manual untangling, depending on the use case. Conclusions: Tangled commits have a high prevalence in bug fixes and can lead to a large amount of noise in the data. Prior research indicates that this noise may alter results. As researchers, we should be skeptical and assume that unvalidated data is likely very noisy, until proven otherwise.
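
    The per-line consensus rule described in the Methods (four labels per line, consensus when at least three agree) can be sketched as below; the label names and data layout are illustrative assumptions, not the authors' actual tooling.

        # Sketch of the per-line consensus rule: each line of a bug fixing commit
        # receives four labels, and a label counts as consensus only if at least
        # three participants agree. Label names and data layout are illustrative
        # assumptions, not the study's actual tooling.
        from collections import Counter

        def consensus(labels):
            """Return the consensus label for one line, or None if participants disagree."""
            label, votes = Counter(labels).most_common(1)[0]
            return label if votes >= 3 else None

        # Example: four participants label three changed lines of a bug fixing commit.
        lines = {
            "src/Parser.java:120": ["bugfix", "bugfix", "bugfix", "refactoring"],
            "src/Parser.java:121": ["bugfix", "refactoring", "test", "bugfix"],
            "test/ParserTest.java:40": ["test", "test", "test", "test"],
        }
        for line, labels in lines.items():
            print(line, "->", consensus(labels) or "no consensus (needs manual untangling)")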