
    Methods for Joint Normalization and Comparison of Hi-C data

    The development of chromatin conformation capture technology has opened new avenues of study into the 3D structure and function of the genome. Chromatin structure is known to influence gene regulation, and differences in structure are now emerging as a regulatory mechanism distinguishing, e.g., stages of cell differentiation and disease vs. normal states. Hi-C sequencing technology now provides a way to study the 3D interactions of chromatin across the whole genome. However, like all sequencing technologies, Hi-C suffers from several forms of bias stemming from both the technology and the DNA sequence itself. Several methods have been developed for normalizing individual Hi-C datasets, but little work has been done on joint normalization methods for comparing two or more Hi-C datasets. To make full use of Hi-C data, joint normalization and statistical comparison techniques are needed to identify regions where chromatin structure differs between conditions. We developed methods for the joint normalization and comparison of two Hi-C datasets, which we then extended to more complex experimental designs. Our normalization method is novel in that it makes use of the distance-dependent nature of chromatin interactions. Our modification of the Minus vs. Average (MA) plot to the Minus vs. Distance (MD) plot allows for a nonparametric, data-driven normalization technique using loess smoothing. Additionally, we present a simple statistical method using Z-scores for detecting differentially interacting regions between two datasets. Our initial method was published as the Bioconductor R package HiCcompare [http://bioconductor.org/packages/HiCcompare/](http://bioconductor.org/packages/HiCcompare/). We then further extended our normalization and comparison method for use in complex Hi-C experiments with more than two datasets and optional covariates. We extended the normalization method to jointly normalize any number of Hi-C datasets by using a cyclic loess procedure on the MD plot. The cyclic loess normalization technique can remove between-dataset biases efficiently and effectively, even when several datasets are analyzed at once. Our comparison method implements a generalized linear model-based approach for comparing complex Hi-C experiments, which may have more than two groups and additional covariates. The extended methods are also available as a Bioconductor R package [http://bioconductor.org/packages/multiHiCcompare/](http://bioconductor.org/packages/multiHiCcompare/). Finally, we demonstrate the use of HiCcompare and multiHiCcompare in several test cases on real data and compare them to other similar methods (https://doi.org/10.1002/cpbi.76).
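The MD-plot normalization and Z-score comparison described here can be illustrated with a small sketch. The Python code below is not the HiCcompare implementation (which is an R/Bioconductor package); it is a simplified analogue showing how two matched vectors of Hi-C interaction frequencies might be jointly normalized by a loess fit of the log fold change (M) against genomic distance (D), with per-distance Z-scores flagging candidate differential interactions. The function names, the even splitting of the fitted bias between the two datasets, and the use of statsmodels' lowess are illustrative assumptions.

```python
# Illustrative sketch only: a simplified, hypothetical analogue of the
# MD-plot joint normalization and Z-score comparison described above.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def md_normalize(if1, if2, distance, frac=0.3):
    """Jointly normalize two vectors of Hi-C interaction frequencies.

    if1, if2 : interaction frequencies of matched bin pairs (same order)
    distance : genomic distance between the interacting bins (the D axis)
    Returns loess-adjusted copies of if1 and if2.
    """
    # M = log2 fold change between datasets, plotted against D (MD plot)
    m = np.log2((if2 + 1) / (if1 + 1))
    # Loess fit of M on D captures the distance-dependent between-dataset bias
    fitted = lowess(m, distance, frac=frac, return_sorted=False)
    # Split the fitted bias evenly between the two datasets
    if1_adj = if1 * np.sqrt(2.0 ** fitted)
    if2_adj = if2 / np.sqrt(2.0 ** fitted)
    return if1_adj, if2_adj

def difference_zscores(if1_adj, if2_adj, distance):
    """Z-score of the normalized log fold change within each distance stratum."""
    m = np.log2((if2_adj + 1) / (if1_adj + 1))
    z = np.empty_like(m)
    for d in np.unique(distance):
        idx = distance == d
        sd = m[idx].std()
        z[idx] = (m[idx] - m[idx].mean()) / sd if sd > 0 else 0.0
    return z

# Toy usage on simulated data with an artificial global bias in one dataset
rng = np.random.default_rng(0)
distance = rng.integers(1, 50, size=2000)
if1 = rng.poisson(100 / distance).astype(float)
if2 = rng.poisson(100 / distance).astype(float) * 1.5
a, b = md_normalize(if1, if2, distance.astype(float))
z = difference_zscores(a, b, distance)
print("bin pairs with |Z| > 2:", int(np.sum(np.abs(z) > 2)))
```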

    Evaluating Legacy System Migration Technologies through Empirical Studies

    We present two controlled experiments conducted with master's students and practitioners, and a case study conducted with practitioners, to evaluate the use of MELIS (Migration Environment for Legacy Information Systems) for the migration of legacy COBOL programs to the web. MELIS has been developed as an Eclipse plug-in within a technology transfer project conducted with a small software company [16]. Over the last 30 years, the partner company has developed and marketed several COBOL systems that now need to be migrated to the web due to increasing customer demand. The goal of the technology transfer project was to define a systematic migration strategy and supporting tools to migrate these COBOL systems to the web, and to make the partner company the owner of the developed technology. The goal of the controlled experiments and case study was to evaluate the effectiveness of introducing MELIS in the partner company and to compare it with traditional software development environments. The results of the overall experimentation show that the use of MELIS increases productivity and reduces the gap between novice and expert software engineers

    Get the gist? The effects of processing depth on false recognition in short-term and long-term memory

    Gist-based processing has been proposed to account for robust false memories in the converging-associates task. The deep-encoding processes known to enhance verbatim memory also strengthen gist memory and increase distortions of long-term memory (LTM). Recent research has demonstrated that compelling false memory illusions are relatively delay-invariant, also occurring under canonical short-term memory (STM) conditions. To investigate the contributions of gist to false memory at short and long delays, processing depth was manipulated as participants encoded lists of four semantically related words and were probed either immediately, following a filled 3- to 4-s retention interval, or approximately 20 min later, in a surprise recognition test. In two experiments, the encoding manipulation dissociated STM and LTM on the frequency, but not the phenomenology, of false memory. Deep encoding at STM increases false recognition rates at LTM, but confidence ratings and remember/know judgments are similar across delays and do not differ as a function of processing depth. These results suggest that some shared and some unique processes underlie false memory illusions at short and long delays

    The effects of change decomposition on code review -- a controlled experiment

    Background: Code review is a cognitively demanding and time-consuming process. Previous qualitative studies hinted that decomposing change sets into multiple, internally coherent ones would improve the reviewing process; so far, the literature has provided no quantitative analysis of this hypothesis. Aims: (1) Quantitatively measure the effects of change decomposition on the outcome of code review (in terms of number of found defects, wrongly reported issues, suggested improvements, time, and understanding); (2) Qualitatively analyze how subjects approach the review and navigate the code, building knowledge and addressing existing issues, in large vs. decomposed changes. Method: A controlled experiment using the pull-based development model involving 28 software developers, both professionals and graduate students. Results: Change decomposition leads to fewer wrongly reported issues and influences how subjects approach and conduct the review activity (by increasing context-seeking), yet it impacts neither understanding of the change rationale nor the number of found defects. Conclusions: Change decomposition not only reduces the noise for subsequent data analyses, but also significantly supports the tasks of the developers in charge of reviewing the changes. As such, commits belonging to different concepts should be separated, and this should be adopted as a best practice in software engineering

    Micro-RNA mediated regulation of a cytokine factor: TNF-alpha: an exploration of gene expression control in proliferating and quiescent cells

    Two types of mechanisms control gene expression: cis-regulatory factors and trans-regulatory factors. Cis-acting regulatory elements include those conferring targeted messenger RNA (mRNA) specificity and AU-rich elements (AREs). AU-rich mRNAs are a subcategory of mRNAs that have AREs in their 3'-untranslated regions (UTRs). These ARE-containing genes have been observed to correlate with rapid mRNA decay patterns. They comprise approximately 12% of all transcripts and are known to encode a group of proteins involved in the inflammatory response. Trans-acting regulatory mechanisms include microRNAs (miRNAs) in eukaryotes and small RNAs (sRNAs) in prokaryotes. Misregulation of these mechanisms, such that rapid mRNA decay does not occur, can lead to many disease states, including tumorigenesis and, eventually, different types of cancer. In this project, the TNF-α ARE was studied in both serum-positive and quiescent G0 conditions in order to determine whether translation of the gene differed in any respect due to the binding of a known miRNA, miR-130a. Additionally, both serum-positive and one-day serum-starved quiescent G0 conditions were analyzed for eIF5B and FXR1 levels to determine whether there was a correlation between the two proteins

    Towards a self-evolving software defect detection process

    Software defect detection research typically focuses on individual inspection and testing techniques. However, to apply defect detection techniques effectively, it is important to recognize when to use inspection techniques and when to use testing techniques. In addition, it is important to know when to deliver a product and use maintenance activities, such as troubleshooting and bug fixing, to address the remaining defects in the software. To detect software defects more effectively, not only should individual defect detection techniques be studied and compared, but the entire software defect detection process should be studied to give us a better idea of how it can be conducted, controlled, evaluated and improved. This thesis presents a self-evolving software defect detection process (SEDD) that provides a systematic approach to software defect detection and guides us as to when inspection, testing or maintenance activities are best performed. The approach is self-evolving in that it is continuously improved by assessing the outcome of the defect detection techniques in comparison with historical data. A software architecture and prototype implementation of the approach are also presented, along with a case study that was conducted to validate the approach. Initial results of using the self-evolving defect detection approach are promising
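As a loose, hypothetical illustration of the feedback loop described above (not the SEDD prototype presented in the thesis), the Python sketch below shows one way a process could choose between inspection, testing and maintenance based on each technique's historical defect yield; all class, method and field names are assumptions made for this example.

```python
# Hypothetical sketch only: a toy "self-evolving" selection loop that picks
# the defect detection activity with the best historical yield (defects per
# hour of effort) and updates that history after each outcome.
from dataclasses import dataclass


@dataclass
class TechniqueHistory:
    defects_found: float = 0.0
    effort_hours: float = 0.0

    @property
    def yield_rate(self) -> float:
        # Untried techniques get priority (assumed infinite yield)
        return self.defects_found / self.effort_hours if self.effort_hours else float("inf")


class DefectDetectionProcess:
    def __init__(self) -> None:
        self.history = {"inspection": TechniqueHistory(), "testing": TechniqueHistory()}

    def next_activity(self, expected_remaining_defects: float, threshold: float = 1.0) -> str:
        # Deliver the product and rely on maintenance when few defects are expected to remain
        if expected_remaining_defects < threshold:
            return "maintenance"
        # Otherwise pick the technique with the best historical yield
        return max(self.history, key=lambda t: self.history[t].yield_rate)

    def record_outcome(self, technique: str, defects_found: int, effort_hours: float) -> None:
        # The process "evolves" by folding each new outcome into the historical data
        history = self.history[technique]
        history.defects_found += defects_found
        history.effort_hours += effort_hours


# Toy usage
process = DefectDetectionProcess()
process.record_outcome("inspection", defects_found=12, effort_hours=8)
process.record_outcome("testing", defects_found=9, effort_hours=10)
print(process.next_activity(expected_remaining_defects=5.0))  # -> "inspection"
```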