    A statistical model for microarrays, optimal estimation algorithms, and limits of performance

    DNA microarray technology relies on the hybridization process, which is stochastic in nature. Currently, probabilistic cross-hybridization of nonspecific targets, as well as the shot noise (Poisson noise) originating from the binding of specific targets, are among the main obstacles to achieving high accuracy in DNA microarray analysis. In this paper, statistical techniques are used to model the hybridization and cross-hybridization processes and, based on the model, optimal algorithms are employed to detect the targets and to estimate their quantities. To verify the theory, two sets of microarray experiments are conducted: one with oligonucleotide targets and the other with complementary DNA (cDNA) targets in the presence of biological background. Both experiments indicate that, by appropriately modeling the cross-hybridization interference, significant improvement in accuracy over conventional methods such as direct readout can be obtained. This substantiates the fact that the accuracy of microarrays can become exclusively noise limited, rather than interference (i.e., cross-hybridization) limited. The techniques presented in this paper can considerably increase the signal-to-noise ratio (SNR), dynamic range, and resolution of DNA and protein microarrays, as well as other affinity-based biosensors. A preliminary study of the Cramér-Rao bound for estimating the target concentrations suggests that, in some regimes, cross-hybridization may even be beneficial, a result with potential ramifications for probe design, which is currently focused on minimizing cross-hybridization. Finally, in its current form, the proposed method is best suited to low-density arrays arising in diagnostics, single nucleotide polymorphism (SNP) detection, toxicology, etc. How to scale it to high-density arrays (with many thousands of spots) is an interesting challenge.
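
    As a hedged illustration of the kind of estimation described above (the paper's specific model and algorithm are not reproduced here), the following sketch assumes a Poisson shot-noise measurement model y ~ Poisson(Ax), where the off-diagonal entries of a hypothetical affinity matrix A stand for cross-hybridization, and recovers the target quantities by maximum likelihood rather than by direct readout. All matrices and numbers are invented for illustration.

```python
# Hypothetical sketch, not the paper's code: maximum-likelihood estimation of
# target quantities x from spot intensities y under a Poisson (shot-noise) model
# y_i ~ Poisson((A x)_i). The affinity matrix A (off-diagonals = cross-hybridization)
# and all numbers below are made up for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

A = np.array([[1.00, 0.15, 0.05],   # rows: probe spots, columns: targets
              [0.10, 1.00, 0.20],
              [0.02, 0.08, 1.00]])
x_true = np.array([50.0, 5.0, 20.0])            # unknown target quantities
y = rng.poisson(A @ x_true)                     # observed intensities (shot noise)

def neg_log_likelihood(x):
    lam = A @ np.maximum(x, 1e-9)               # Poisson rates, kept positive
    return np.sum(lam - y * np.log(lam))

# "Direct readout" takes y itself as the estimate; the ML estimate instead
# deconvolves the cross-hybridization interference encoded in A.
res = minimize(neg_log_likelihood, x0=np.ones(3), bounds=[(0.0, None)] * 3)
print("direct readout:", y)
print("ML estimate   :", np.round(res.x, 1))
```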

    On Limits of Performance of DNA Microarrays

    DNA microarray technology relies on the hybridization process, which is stochastic in nature. Probabilistic cross-hybridization of non-specific targets, as well as the shot noise originating from the binding of specific targets, are among the many obstacles to achieving high accuracy in DNA microarray analysis. In this paper, we use a statistical model of the hybridization and cross-hybridization processes to derive a lower bound (viz., the Cramér-Rao bound) on the minimum mean-square error of target concentration estimation. A preliminary study of the Cramér-Rao bound for estimating the target concentrations suggests that, in some regimes, cross-hybridization may, in fact, be beneficial, a result with potential ramifications for probe design, which is currently focused on minimizing cross-hybridization.
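
    In generic form, and assuming for illustration the same Poisson measurement model with affinity matrix A used in the sketch above (the paper's exact statistical model is not reproduced here), the Cramér-Rao bound states that any unbiased estimator of the concentration vector c obeys:

```latex
% Generic Cramér-Rao bound; the Poisson model with affinity matrix A is an assumption.
\operatorname{Cov}(\hat{c}) \;\succeq\; I(c)^{-1},
\qquad
I(c)_{jk} = -\,\mathbb{E}\!\left[\frac{\partial^{2}\log p(y;c)}{\partial c_{j}\,\partial c_{k}}\right],
\qquad
I(c) = A^{\top}\operatorname{diag}\!\big((Ac)^{-1}\big)\,A
\quad \text{for } y_{i}\sim\mathrm{Poisson}\big((Ac)_{i}\big).
```

    Since the off-diagonal (cross-hybridization) entries of A enter the Fisher information I(c) directly, they can in principle either degrade or improve the achievable accuracy, which is consistent with the observation above that cross-hybridization may sometimes be beneficial.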

    Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays

    Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and, thus, a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots directly translate to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and a smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods and can also recover signals that are less sparse.
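
    A minimal sketch of the measurement model described above, with orthogonal matching pursuit standing in for the paper's recovery algorithm (which is not specified here): each row of a sparse 0/1 matrix Phi represents a spot that pools several probes, x is the sparse vector of differentially expressed targets, and y = Phi x are the compressed measurements. Dimensions, matrix density, and sparsity level are arbitrary.

```python
# Illustrative compressed-microarray sketch: recover a sparse x from y = Phi @ x
# using greedy orthogonal matching pursuit (a stand-in, not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(1)
n_targets, n_spots, sparsity = 200, 60, 5

Phi = (rng.random((n_spots, n_targets)) < 0.1).astype(float)   # sparse 0/1 pooling matrix
x = np.zeros(n_targets)
x[rng.choice(n_targets, sparsity, replace=False)] = rng.uniform(1.0, 3.0, sparsity)
y = Phi @ x                                                     # compressed measurements

def omp(Phi, y, k):
    """Greedily pick k columns, refitting the coefficients by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, sparsity)
print("reconstruction error:", np.linalg.norm(x_hat - x))
```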

    Modeling the kinetics of hybridization in microarrays

    Conventional fluorescent-based microarrays acquire data after the hybridization phase. In this phase the target analytes (i.e., DNA fragments) bind to the capturing probes on the array and supposedly reach a steady state. Accordingly, microarray experiments essentially provide only a single, steady-state data point of the hybridization process. On the other hand, a novel technique (i.e., real-time microarrays) capable of recording the kinetics of hybridization in fluorescent-based microarrays has recently been proposed in [5]. The richness of the information obtained therein promises higher signal-to-noise ratio, smaller estimation error, and broader assay detection dynamic range compared to conventional microarrays. In the current paper, we develop a probabilistic model of the kinetics of hybridization and describe a procedure for the estimation of its parameters, which include the binding rate and target concentration. This probabilistic model is an important step towards developing optimal detection algorithms for microarrays which measure the kinetics of hybridization, and towards understanding their fundamental limitations.
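
    As a simplified, hedged illustration of fitting hybridization kinetics (the paper's model is probabilistic, whereas this sketch uses a deterministic Langmuir-type binding curve), one can fit n(t) = n_max (1 - exp(-k_eff t)) to real-time fluorescence readings, where k_eff lumps together the binding rate and the target concentration. All constants are invented.

```python
# Minimal sketch, not the paper's model: fit a Langmuir-type binding curve to
# simulated real-time fluorescence readings to recover kinetic parameters.
import numpy as np
from scipy.optimize import curve_fit

def bound_probes(t, n_max, k_eff):
    # k_eff lumps the binding rate and the target concentration
    return n_max * (1.0 - np.exp(-k_eff * t))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 300.0, 60)                                  # seconds
signal = bound_probes(t, n_max=1000.0, k_eff=0.02) + rng.normal(0.0, 15.0, t.size)

(n_max_hat, k_eff_hat), _ = curve_fit(bound_probes, t, signal, p0=[500.0, 0.01])
print(f"estimated n_max ~ {n_max_hat:.0f}, k_eff ~ {k_eff_hat:.3f} 1/s")
```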

    Modeling and Estimation for Real-Time Microarrays

    Microarrays are used for collecting information about a large number of different genomic particles simultaneously. Conventional fluorescent-based microarrays acquire data after the hybridization phase. During this phase, the target analytes (e.g., DNA fragments) bind to the capturing probes on the array and, by the end of it, supposedly reach a steady state. Therefore, conventional microarrays attempt to detect and quantify the targets with a single data point taken in the steady state. On the other hand, a novel technique, the so-called real-time microarray, capable of recording the kinetics of hybridization in fluorescent-based microarrays has recently been proposed. The richness of the information obtained therein promises higher signal-to-noise ratio, smaller estimation error, and broader assay detection dynamic range compared to conventional microarrays. In this paper, we study the signal processing aspects of real-time microarray system design. In particular, we develop a probabilistic model for real-time microarrays and describe a procedure for the estimation of target amounts therein. Moreover, leveraging system identification ideas, we propose a novel technique for the elimination of cross-hybridization. These are important steps toward developing optimal detection algorithms for real-time microarrays, and toward understanding their fundamental limitations.
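
    The following is a hypothetical illustration only, not the system-identification technique of the paper: if the early-time reaction rate at each spot were approximately linear in the target amounts, r ≈ K c with a calibrated rate matrix K whose off-diagonal entries represent cross-hybridization, then the interference could be removed by solving the linear system for c. The matrix K and all values are invented.

```python
# Hypothetical sketch (not the paper's method): remove cross-hybridization by
# inverting a calibrated linear rate model r = K @ c.
import numpy as np

K = np.array([[2.0, 0.3, 0.1],   # calibrated rates; off-diagonals = cross-hybridization
              [0.2, 1.5, 0.4],
              [0.1, 0.2, 1.8]])
c_true = np.array([10.0, 0.0, 4.0])                              # true target amounts
r_obs = K @ c_true + np.random.default_rng(3).normal(0.0, 0.1, 3)

c_hat, *_ = np.linalg.lstsq(K, r_obs, rcond=None)
print("naive per-spot readout :", np.round(r_obs / np.diag(K), 2))
print("interference-corrected :", np.round(c_hat, 2))
```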

    Towards large scale continuous EDA: a random matrix theory perspective

    Estimation of distribution algorithms (EDA) are a major branch of evolutionary algorithms (EA) with some unique advantages in principle. They are able to take advantage of correlation structure to drive the search more efficiently, and they are able to provide insights about the structure of the search space. However, model building in high dimensions is extremely challenging, and as a result existing EDAs lose their strengths in large-scale problems. Large-scale continuous global optimisation is key to many of today's real-world problems. Scaling up EAs to large-scale problems has become one of the biggest challenges of the field. This paper pins down some fundamental roots of the problem and makes a start at developing a new and generic framework to yield effective EDA-type algorithms for large-scale continuous global optimisation problems. Our concept is to introduce an ensemble of random projections of the set of fittest search points to low dimensions as a basis for developing a new and generic divide-and-conquer methodology. This is rooted in the theory of random projections developed in theoretical computer science, and will exploit recent advances in non-asymptotic random matrix theory.
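
    A rough sketch of the ensemble-of-random-projections idea described above, with the details assumed rather than taken from the paper: the fittest search points are projected to low dimension with random Gaussian matrices, a Gaussian model is fitted and sampled in each low-dimensional subspace, and the back-projected samples are averaged over the ensemble to propose new candidate points.

```python
# Sketch (details assumed): ensemble of random projections for EDA-style sampling.
import numpy as np

rng = np.random.default_rng(4)
D, d, n_ensemble, n_fit, n_new = 100, 5, 8, 50, 20

fittest = rng.normal(size=(n_fit, D))            # placeholder for the selected fittest points

proposals = np.zeros((n_new, D))
for _ in range(n_ensemble):
    R = rng.normal(size=(d, D)) / np.sqrt(d)     # random projection to d dimensions
    low = fittest @ R.T                          # project the fittest points
    mean, cov = low.mean(axis=0), np.cov(low, rowvar=False)
    samples = rng.multivariate_normal(mean, cov, size=n_new)
    proposals += samples @ R                     # back-project and accumulate
proposals /= n_ensemble                          # ensemble average of candidate points

print(proposals.shape)                           # (20, 100) candidate solutions
```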

    Stratification bias in low signal microarray studies

    BACKGROUND: When analysing microarray and other small-sample-size biological datasets, care is needed to avoid various biases. We analyse a form of bias, stratification bias, that can substantially affect analyses using sample-reuse validation techniques and lead to inaccurate results. This bias is due to imperfect stratification of samples in the training and test sets and the dependency between these stratification errors, i.e. the variations in class proportions in the training and test sets are negatively correlated. RESULTS: We show that when estimating the performance of classifiers on low-signal datasets (i.e. those which are difficult to classify), which are typical of many prognostic microarray studies, commonly used performance measures can suffer from a substantial negative bias. For the error rate this bias is only severe in quite restricted situations, but it can be much larger and more frequent when using ranking measures such as the receiver operating characteristic (ROC) curve and the area under the ROC (AUC). Substantial biases are shown in simulations and on the van 't Veer breast cancer dataset. The classification error rate can have large negative biases for balanced datasets, whereas the AUC shows substantial pessimistic biases even for imbalanced datasets. In simulation studies using 10-fold cross-validation, AUC values of less than 0.3 can be observed on random datasets rather than the expected 0.5. Further experiments on the van 't Veer breast cancer dataset show that these biases exist in practice. CONCLUSION: Stratification bias can substantially affect several performance measures. In computing the AUC, the strategy of pooling the test samples from the various folds of cross-validation can lead to large biases; computing it as the average of per-fold estimates avoids this bias and is thus the recommended approach. As a more general solution applicable to other performance measures, we show that stratified repeated holdout and modified versions of k-fold cross-validation (balanced, stratified cross-validation and balanced leave-one-out cross-validation) avoid the bias. Therefore, for model selection and evaluation of microarray and other small biological datasets, these methods should be used and unstratified versions avoided. In particular, the commonly used (unbalanced) leave-one-out cross-validation should not be used to estimate AUC for small datasets.
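
    A minimal sketch of the two AUC computation strategies contrasted in the conclusion, on an arbitrary pure-noise dataset (the classifier, fold count, and data sizes are illustrative assumptions, not taken from the study): pooling the test-fold scores before computing the AUC versus averaging per-fold AUC estimates.

```python
# Sketch: pooled AUC vs. the average of per-fold AUCs under stratified 10-fold CV.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 500))                    # small-n, large-p "low signal" data
y = np.array([0, 1] * 30)                         # labels carry no real signal

pooled_scores, pooled_labels, per_fold_aucs = [], [], []
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train, test in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    scores = clf.decision_function(X[test])
    pooled_scores.extend(scores)
    pooled_labels.extend(y[test])
    per_fold_aucs.append(roc_auc_score(y[test], scores))

print("pooled AUC       :", round(roc_auc_score(pooled_labels, pooled_scores), 3))
print("per-fold average :", round(float(np.mean(per_fold_aucs)), 3))
```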

    Techniques for clustering gene expression data

    Many clustering techniques have been proposed for the analysis of gene expression data obtained from microarray experiments. However, the choice of suitable method(s) for a given experimental dataset is not straightforward. Common approaches do not translate well and fail to take account of the data profile. This review paper surveys state-of-the-art applications that recognise these limitations and implement procedures to overcome them. It provides a framework for the evaluation of clustering in gene expression analyses. The nature of microarray data is discussed briefly. Selected examples are presented for the clustering methods considered.