
    Modeling and Estimation for Real-Time Microarrays

    Microarrays are used for collecting information about a large number of different genomic particles simultaneously. Conventional fluorescent-based microarrays acquire data after the hybridization phase. During this phase, the target analytes (e.g., DNA fragments) bind to the capturing probes on the array and, by the end of it, supposedly reach a steady state. Therefore, conventional microarrays attempt to detect and quantify the targets with a single data point taken in the steady state. On the other hand, a novel technique, the so-called real-time microarray, capable of recording the kinetics of hybridization in fluorescent-based microarrays, has recently been proposed. The richness of the information obtained therein promises a higher signal-to-noise ratio, smaller estimation error, and broader assay detection dynamic range compared to conventional microarrays. In this paper, we study the signal processing aspects of real-time microarray system design. In particular, we develop a probabilistic model for real-time microarrays and describe a procedure for estimating target amounts therein. Moreover, leveraging system identification ideas, we propose a novel technique for the elimination of cross-hybridization. These are important steps toward developing optimal detection algorithms for real-time microarrays and toward understanding their fundamental limitations.
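
    To make the kinetic picture concrete, below is a minimal sketch of one plausible stochastic binding model (the model, function names, and parameters are illustrative assumptions, not the paper's): at each time step every still-free target binds an unoccupied probe with a small probability, so the recorded signal saturates as probes fill up, and the full trace, not just the endpoint, carries information about the target amount.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_binding(n_targets, n_probes, p_bind, n_steps):
    """Toy stochastic hybridization: at each step, every still-free target
    binds an unoccupied capturing probe with probability p_bind scaled by
    the fraction of probes still available."""
    bound = np.zeros(n_steps + 1)
    for t in range(n_steps):
        free_targets = int(n_targets - bound[t])
        p_eff = p_bind * (n_probes - bound[t]) / n_probes
        bound[t + 1] = bound[t] + rng.binomial(free_targets, p_eff)
    return bound

# The whole saturating trace, not just the endpoint, is informative
# about n_targets; real-time microarrays record exactly such traces.
trace = simulate_binding(n_targets=5000, n_probes=20000, p_bind=0.01, n_steps=100)
print(trace[:5], "...", trace[-1])
```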

    On joint maximum-likelihood estimation of PCR efficiency and initial amount of target

    We consider the problem of estimating unknown parameters of the real-time polymerase chain reaction (RT-PCR) from noisy observations. The joint ML estimator of the RT-PCR efficiency and the initial number of DNA target molecules is derived. The mean-square error performance of the estimator is studied via simulations. The simulation results indicate that the proposed estimator significantly outperforms a competing technique.
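
    As a rough illustration of what joint ML estimation looks like here, the sketch below assumes a hypothetical multiplicative-noise observation model (not necessarily the paper's) under which ML reduces to least squares, and finds the efficiency and initial copy number by grid search.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observation model (an assumption, not the paper's):
# y_n = x0 * (1 + p)^n * (1 + eps_n), eps_n ~ N(0, sigma^2), so ML under
# Gaussian multiplicative noise reduces to least squares on the residuals.
x0_true, p_true, sigma = 100.0, 0.85, 0.05
cycles = np.arange(1, 21)
y = x0_true * (1 + p_true) ** cycles * (1 + sigma * rng.standard_normal(cycles.size))

# Joint ML by brute-force grid search over (x0, p).
x0_grid = np.linspace(50, 200, 301)
p_grid = np.linspace(0.5, 1.0, 251)
best = (np.inf, None, None)
for p in p_grid:
    model = (1 + p) ** cycles
    for x0 in x0_grid:
        # Relative residuals match the multiplicative noise model.
        r = y / (x0 * model) - 1.0
        cost = np.sum(r ** 2)
        if cost < best[0]:
            best = (cost, x0, p)

print(f"ML estimates: x0 ~ {best[1]:.1f}, efficiency ~ {best[2]:.3f}")
```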

    ML Estimation of DNA Initial Copy Number in Polymerase Chain Reaction (PCR) Processes

    Estimation of the DNA copy number in a given biological sample is an extremely important problem in genomics. This problem is especially challenging when the number of DNA strands is minuscule, which is often the case in applications such as pathogen and genetic mutation detection. A recently developed technique, real-time polymerase chain reaction (PCR), amplifies the number of initial target molecules by replicating them through a series of thermal cycles. Ideally, the number of target molecules doubles at the end of each cycle. However, in practice, due to biochemical noise, the efficiency of the PCR reaction, defined as the fraction of target molecules that are successfully copied during a cycle, is always less than 1. In this paper, we formulate the problem of joint maximum-likelihood estimation of the PCR efficiency and the initial DNA copy number. As indicated by simulation studies, the proposed estimator is superior to competing statistical approaches. Moreover, we compute the Cramér-Rao lower bound on the mean-square estimation error.
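
    For reference, the Cramér-Rao bound invoked above is the standard one from estimation theory; the paper's stochastic PCR model supplies the specific likelihood f(y; θ):

```latex
% Cram\'er-Rao bound for any unbiased estimator \hat{\theta} of
% \theta = (x_0, p) (initial copy number and efficiency):
\mathbb{E}\!\left[(\hat{\theta}-\theta)(\hat{\theta}-\theta)^{T}\right]
  \succeq I(\theta)^{-1},
\qquad
I(\theta)_{ij}
  = -\,\mathbb{E}\!\left[\frac{\partial^{2}\log f(\mathbf{y};\theta)}
                              {\partial\theta_{i}\,\partial\theta_{j}}\right].
```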

    On Limits of Performance of DNA Microarrays

    DNA microarray technology relies on the hybridization process, which is stochastic in nature. Probabilistic cross-hybridization of non-specific targets, as well as the shot noise originating from specific target binding, are among the many obstacles to achieving high accuracy in DNA microarray analysis. In this paper, we use a statistical model of the hybridization and cross-hybridization processes to derive a lower bound (viz., the Cramér-Rao bound) on the minimum mean-square error of target concentration estimation. A preliminary study of the Cramér-Rao bound for estimating the target concentrations suggests that, in some regimes, cross-hybridization may in fact be beneficial, a result with potential ramifications for probe design, which is currently focused on minimizing cross-hybridization.
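
    To illustrate how such a bound is computed, the sketch below uses a deliberately simplified linear Gaussian stand-in for the hybridization model (an assumption for illustration, not the paper's model), in which off-diagonal entries of the response matrix play the role of cross-hybridization:

```python
import numpy as np

# Simplified stand-in model (not the paper's): spot intensities
# y = A @ c + w, with c the target concentrations, w ~ N(0, sigma^2 I),
# and off-diagonal entries of A modeling cross-hybridization.
sigma = 0.1
A = np.array([[1.0, 0.2],   # probe 1: specific to target 1, 20% cross-talk
              [0.3, 1.0]])  # probe 2: specific to target 2, 30% cross-talk

# For a linear Gaussian model the Fisher information is A^T A / sigma^2,
# so the CRB on each concentration is a diagonal entry of its inverse.
fisher = A.T @ A / sigma**2
crb = np.diag(np.linalg.inv(fisher))
print("CRB per target concentration:", crb)
```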

    Modeling the kinetics of hybridization in microarrays

    Conventional fluorescent-based microarrays acquire data after the hybridization phase. In this phase, the target analytes (e.g., DNA fragments) bind to the capturing probes on the array and supposedly reach a steady state. Accordingly, microarray experiments essentially provide only a single, steady-state data point of the hybridization process. On the other hand, a novel technique (real-time microarrays) capable of recording the kinetics of hybridization in fluorescent-based microarrays has recently been proposed in [5]. The richness of the information obtained therein promises a higher signal-to-noise ratio, smaller estimation error, and broader assay detection dynamic range compared to conventional microarrays. In this paper, we develop a probabilistic model of the kinetics of hybridization and describe a procedure for estimating its parameters, which include the binding rate and the target concentration. This probabilistic model is an important step toward developing optimal detection algorithms for microarrays that measure the kinetics of hybridization, and toward understanding their fundamental limitations.
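
    A minimal sketch of the parameter-estimation step, assuming simple first-order (Langmuir-type) kinetics rather than the paper's full probabilistic model; the function names and sample values are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Simplified first-order kinetics (an assumption, not the paper's exact
# model): bound signal x(t) = c * (1 - exp(-k t)), where c reflects the
# target concentration (in signal units) and k the binding rate.
def kinetics(t, c, k):
    return c * (1.0 - np.exp(-k * t))

t = np.linspace(0, 30, 60)                      # sampling times (a.u.)
y = kinetics(t, c=4.0, k=0.15) + 0.05 * rng.standard_normal(t.size)

# Least-squares fit of (c, k) to the noisy kinetic trace.
(c_hat, k_hat), _ = curve_fit(kinetics, t, y, p0=(1.0, 0.1))
print(f"estimated concentration ~ {c_hat:.2f}, binding rate ~ {k_hat:.3f}")
```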

    Existence of codes with constant PMEPR and related design

    Recently, several coding methods have been proposed to reduce the high peak-to-mean envelope power ratio (PMEPR) of multicarrier signals. It has also been shown that, with probability one, the PMEPR of any random codeword chosen from a symmetric quadrature amplitude modulation/phase-shift keying (QAM/PSK) constellation is log n for large n, where n is the number of subcarriers. Therefore, the question is how much reduction beyond log n one can asymptotically achieve with coding, and what the price is in terms of rate loss. In this paper, by optimally choosing the sign of each subcarrier, we prove the existence of q-ary codes of constant PMEPR for sufficiently large n and with a rate loss of at most log_q 2. We also obtain a Varshamov-Gilbert-type upper bound on the rate of a code with constant PMEPR, given its minimum Hamming distance, for large n. Since ours is an existence result, we also study the problem of designing signs for PMEPR reduction. Motivated by a derandomization algorithm suggested by Spencer, we propose a deterministic and efficient algorithm to design signs such that the PMEPR of the resulting codeword is less than c log n for any n, where c is a constant independent of n. For symmetric q-ary constellations, this algorithm constructs a code with rate 1 - log_q 2 and PMEPR of c log n, with simple encoding and decoding. Simulation results for our algorithm are presented.
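
    The sketch below is not Spencer's conditional-expectation derandomization used in the paper, but a simpler greedy heuristic in the same spirit: fix the subcarrier signs one at a time, keeping whichever choice minimizes the current peak of the partial time-domain sum. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def pmepr(signs, symbols, oversample=8):
    """PMEPR of the multicarrier signal carrying signs * symbols,
    evaluated on an oversampled time grid via a zero-padded IFFT."""
    n = symbols.size
    spectrum = np.zeros(oversample * n, dtype=complex)
    spectrum[:n] = signs * symbols
    s = np.fft.ifft(spectrum)                    # time-domain samples
    return np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2)

def greedy_signs(symbols, oversample=8):
    """Greedy sign design: choose each sign to minimize the running
    peak of the partial time-domain sum (a heuristic, not Spencer's
    derandomization)."""
    n = symbols.size
    N = oversample * n
    m = np.arange(N)
    partial = np.zeros(N, dtype=complex)
    signs = np.ones(n)
    for k in range(n):
        carrier = symbols[k] * np.exp(2j * np.pi * k * m / N)
        plus, minus = partial + carrier, partial - carrier
        if np.max(np.abs(minus)) < np.max(np.abs(plus)):
            signs[k], partial = -1.0, minus
        else:
            partial = plus
    return signs

n = 64
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n) / np.sqrt(2)
print("unmodified PMEPR: ", pmepr(np.ones(n), qpsk))
print("greedy-sign PMEPR:", pmepr(greedy_signs(qpsk), qpsk))
```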

    Maximum-Likelihood Sequence Detection of Multiple Antenna Systems over Dispersive Channels via Sphere Decoding

    Multiple antenna systems are capable of providing high-data-rate transmissions over wireless channels. When the channels are dispersive, the signal at each receive antenna is a combination of both the current and past symbols sent from all transmit antennas, corrupted by noise. The optimal receiver is a maximum-likelihood sequence detector and is often considered practically infeasible due to its high computational complexity (exponential in the number of antennas and the channel memory). Therefore, in practice, one often settles for a less complex suboptimal receiver structure, typically an equalizer meant to suppress both the intersymbol and interuser interference, followed by a decoder. We propose sphere decoding for sequence detection in multiple antenna communication systems over dispersive channels. Sphere decoding provides the maximum-likelihood estimate with computational complexity comparable to that of standard space-time decision-feedback equalization (DFE) algorithms. The performance and complexity of sphere decoding are compared with those of the DFE algorithm by means of simulations.
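
    A textbook depth-first sphere decoder for the flat-channel linear model y = Hx + w with binary symbols is sketched below as a minimal illustration; the paper's sequence detector applies the same idea to the banded block model induced by a dispersive channel. Names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere_decode(H, y):
    """Depth-first sphere decoder: exact ML detection of x in {-1,+1}^n
    for y = H @ x + noise, pruning branches outside the best sphere."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)              # ||y - Hx||^2 = ||Q.T y - R x||^2
    z = Q.T @ y
    best_x, best_cost = None, np.inf

    def search(level, x, cost):
        nonlocal best_x, best_cost
        if cost >= best_cost:           # prune: already outside the sphere
            return
        if level < 0:                   # complete symbol vector reached
            best_x, best_cost = x.copy(), cost
            return
        for s in (-1.0, 1.0):
            x[level] = s
            resid = z[level] - R[level, level:] @ x[level:]
            search(level - 1, x, cost + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)     # search from the last symbol down
    return best_x

n = 8
H = rng.standard_normal((n, n))
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.1 * rng.standard_normal(n)
print("ML estimate matches truth:", np.array_equal(sphere_decode(H, y), x_true))
```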

    Manifold Optimization Over the Set of Doubly Stochastic Matrices: A Second-Order Geometry

    Convex optimization is a well-established research area with applications in almost all fields. Over the decades, multiple approaches have been proposed to solve convex programs. The development of interior-point methods allowed solving a more general set of convex programs known as semidefinite programs and second-order cone programs. However, it has been established that these methods are excessively slow in high dimensions, i.e., they suffer from the curse of dimensionality. On the other hand, optimization algorithms on manifolds have shown great ability in finding solutions to nonconvex problems in reasonable time. This paper is interested in solving a subset of convex programs using a different approach. The main idea behind Riemannian optimization is to view the constrained optimization problem as an unconstrained one over a restricted search space. The paper introduces three manifolds for solving convex programs under particular box constraints. The manifolds, called the doubly stochastic, symmetric, and definite multinomial manifolds, generalize the simplex, also known as the multinomial manifold. The proposed manifolds and algorithms are well adapted to solving convex programs in which the variable of interest is a multidimensional probability distribution function. Theoretical analysis and simulation results testify to the efficiency of the proposed method over state-of-the-art methods. In particular, they reveal that the proposed framework outperforms conventional generic and specialized solvers, especially in high dimensions.
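
    As a minimal illustration of the search space (not of the paper's second-order geometry), the sketch below uses Sinkhorn-Knopp balancing, a normalization commonly used as the projection/retraction step onto the set of doubly stochastic matrices:

```python
import numpy as np

rng = np.random.default_rng(5)

def sinkhorn(A, n_iter=200):
    """Sinkhorn-Knopp balancing: alternately normalize the rows and
    columns of a strictly positive matrix; the iterates converge to a
    doubly stochastic matrix, i.e., a point on the manifold."""
    X = A.copy()
    for _ in range(n_iter):
        X /= X.sum(axis=1, keepdims=True)   # make rows sum to 1
        X /= X.sum(axis=0, keepdims=True)   # make columns sum to 1
    return X

X = sinkhorn(rng.random((4, 4)) + 0.1)      # strictly positive start
print("row sums:", X.sum(axis=1))           # ~ all ones
print("col sums:", X.sum(axis=0))           # ~ all ones
```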

    New Null Space Results and Recovery Thresholds for Matrix Rank Minimization

    Nuclear norm minimization (NNM) has recently gained significant attention for its use in rank minimization problems. Similar to compressed sensing, recovery thresholds for NNM have been studied via null space characterizations in \cite{arxiv,Recht_Xu_Hassibi}. However, simulations show that these thresholds are far from optimal, especially in the low-rank region. In this paper, we apply the recent analysis of Stojnic for compressed sensing \cite{mihailo} to the null space conditions of NNM. The resulting thresholds are significantly better, and in particular our weak threshold appears to match simulation results. Furthermore, our curves suggest that, for any rank growing linearly with the matrix size n, an oversampling of only three times the model complexity is needed for weak recovery. Similar to \cite{arxiv}, we analyze the conditions for weak, sectional, and strong thresholds. Additionally, a separate analysis is given for the special case of positive semidefinite matrices. We conclude by discussing simulation results and future research directions.
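
    A toy version of the weak-recovery experiment, assuming cvxpy with its default solver (illustrative only, not the paper's setup): with a number of Gaussian measurements roughly three times the model complexity r(2n - r), NNM is expected to recover a random rank-r matrix.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)

# Rank-r target and m = 3 * r * (2n - r) random linear measurements,
# i.e., three times the model complexity (cf. the abstract).
n, r = 12, 2
m = 3 * r * (2 * n - r)
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = [rng.standard_normal((n, n)) for _ in range(m)]
y = [float(np.sum(Ai * X_true)) for Ai in A]

# Nuclear norm minimization subject to the affine measurements.
X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == yi for Ai, yi in zip(A, y)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()

err = np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true)
print(f"relative recovery error: {err:.2e}")
```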

    On the existence of codes with constant bounded PMEPR for multicarrier signals

    It has been shown that, with probability one, the peak-to-mean envelope power ratio (PMEPR) of any random codeword chosen from a symmetric QAM/PSK constellation is log n, where n is the number of subcarriers [1]. In this paper, the existence of codes with nonzero rate and PMEPR bounded by a constant is established.