Modeling and Estimation for Real-Time Microarrays
Microarrays are used to simultaneously collect information about a large number of different genomic particles. Conventional fluorescence-based microarrays acquire data after the hybridization phase. During this phase, the target analytes (e.g., DNA fragments) bind to the capturing probes on the array and, by the end of it, supposedly reach a steady state. Conventional microarrays therefore attempt to detect and quantify the targets with a single data point taken in the steady state. Recently, however, a novel technique, the so-called real-time microarray, capable of recording the kinetics of hybridization in fluorescence-based microarrays, has been proposed. The richness of the information obtained thereby promises higher signal-to-noise ratio, smaller estimation error, and broader assay detection dynamic range than conventional microarrays. In this paper, we study the signal processing aspects of real-time microarray system design. In particular, we develop a probabilistic model for real-time microarrays and describe a procedure for estimating target amounts from the acquired data. Moreover, leveraging system identification ideas, we propose a novel technique for the elimination of cross-hybridization. These are important steps toward developing optimal detection algorithms for real-time microarrays and understanding their fundamental limitations.
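As an illustration of the estimation step described above (not the paper's actual probabilistic model), the following sketch assumes a simple first-order binding law x(t) = A(1 - e^{-kt}) with additive Gaussian noise, where the plateau A is proportional to the target amount; the function name `langmuir` and all parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order binding kinetics: fluorescence grows toward a plateau A
# (proportional to the target amount) at rate k.
def langmuir(t, A, k):
    return A * (1.0 - np.exp(-k * t))

rng = np.random.default_rng(0)
t = np.linspace(0, 30, 60)                   # sampling times during hybridization
true_A, true_k, sigma = 5.0, 0.2, 0.1
y = langmuir(t, true_A, true_k) + sigma * rng.standard_normal(t.size)

# Least-squares fit of (A, k); using the full kinetic trace rather than a
# single steady-state point is what gives real-time arrays their advantage.
(A_hat, k_hat), _ = curve_fit(langmuir, t, y, p0=(1.0, 0.1))
print(f"estimated amount A = {A_hat:.3f}, rate k = {k_hat:.3f}")
```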
On the existence of codes with constant bounded PMEPR for multicarrier signals
It has been shown that, with probability one, the peak-to-mean envelope power ratio (PMEPR) of a random codeword chosen from a symmetric QAM/PSK constellation grows like log n, where n is the number of subcarriers [1]. In this paper, the existence of codes with nonzero rate and PMEPR bounded by a constant is established.
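A short worked example of the quantity involved: the sketch below computes the PMEPR of a random QPSK multicarrier codeword by oversampling the envelope with a zero-padded IFFT, and empirically shows the ~log n growth of the unsigned random codeword that the coding result improves upon. The oversampling factor and helper name `pmepr` are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def pmepr(c, oversample=4):
    """PMEPR of the multicarrier signal with subcarrier symbols c."""
    n = c.size
    # Oversampled envelope via zero-padded IFFT; the peak is approximated
    # on a grid 'oversample' times denser than the subcarrier spacing.
    s = np.fft.ifft(c, oversample * n) * (oversample * n)
    mean_power = np.sum(np.abs(c) ** 2)      # average envelope power
    return np.max(np.abs(s) ** 2) / mean_power

for n in (16, 64, 256, 1024):
    qpsk = rng.choice(np.array([1, -1, 1j, -1j]), size=n)  # random QPSK codeword
    print(n, f"PMEPR = {pmepr(qpsk):.2f}", f"log n = {np.log(n):.2f}")
```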
Existence of codes with constant PMEPR and related design
Recently, several coding methods have been proposed to reduce the high peak-to-mean envelope power ratio (PMEPR) of multicarrier signals. It has also been shown that, with probability one, the PMEPR of a random codeword chosen from a symmetric quadrature amplitude modulation/phase-shift keying (QAM/PSK) constellation is log n for large n, where n is the number of subcarriers. Therefore, the question is how much reduction beyond log n one can asymptotically achieve with coding, and what the price is in terms of rate loss. In this paper, by optimally choosing the sign of each subcarrier, we prove the existence of q-ary codes of constant PMEPR for sufficiently large n and with a rate loss of at most log_q 2. We also obtain a Varshamov-Gilbert-type upper bound on the rate of a code with constant PMEPR, given its minimum Hamming distance, for large n. Since ours is an existence result, we also study the problem of designing signs for PMEPR reduction. Motivated by a derandomization algorithm suggested by Spencer, we propose a deterministic and efficient algorithm to design signs such that the PMEPR of the resulting codeword is less than c log n for any n, where c is a constant independent of n. For symmetric q-ary constellations, this algorithm constructs a code with rate 1 - log_q 2 and PMEPR of c log n, with simple encoding and decoding. Simulation results for our algorithm are presented.
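To make the sign-design idea concrete, here is a simplified greedy stand-in for the derandomized algorithm: signs are fixed one subcarrier at a time, keeping whichever choice gives the smaller peak of the partial signal built so far. The paper's Spencer-style derandomization minimizes a conditional expectation of an exponential-moment bound rather than the raw peak, so this is only a sketch of the flavor of the method.

```python
import numpy as np

def pmepr(c, oversample=4):
    s = np.fft.ifft(c, oversample * c.size) * (oversample * c.size)
    return np.max(np.abs(s) ** 2) / np.sum(np.abs(c) ** 2)

def greedy_signs(c, oversample=4):
    """Pick eps_k in {+1, -1} one at a time, keeping the choice that
    minimizes the peak of the partial signal over subcarriers 0..k."""
    eps = np.ones(c.size)
    for k in range(c.size):
        best = None
        for s_k in (+1.0, -1.0):
            eps[k] = s_k
            partial = c[: k + 1] * eps[: k + 1]
            peak = np.max(np.abs(np.fft.ifft(partial, oversample * c.size)))
            if best is None or peak < best[0]:
                best = (peak, s_k)
        eps[k] = best[1]
    return eps

rng = np.random.default_rng(2)
c = rng.choice(np.array([1, -1, 1j, -1j]), size=128)
eps = greedy_signs(c)
print(f"PMEPR before: {pmepr(c):.2f}   after signs: {pmepr(c * eps):.2f}")
```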
Maximum-Likelihood Sequence Detection of Multiple Antenna Systems over Dispersive Channels via Sphere Decoding
Multiple antenna systems are capable of providing high data rate transmissions over wireless channels. When the channels are dispersive, the signal at each receive antenna is a combination of both the current and past symbols sent from all transmit antennas, corrupted by noise. The optimal receiver is a maximum-likelihood sequence detector and is often considered practically infeasible due to its high computational complexity (exponential in the number of antennas and the channel memory). Therefore, in practice, one often settles for a less complex suboptimal receiver structure, typically an equalizer meant to suppress both the intersymbol and interuser interference, followed by the decoder. We propose sphere decoding for sequence detection in multiple antenna communication systems over dispersive channels. Sphere decoding provides the maximum-likelihood estimate with computational complexity comparable to that of standard space-time decision-feedback equalization (DFE) algorithms. The performance and complexity of sphere decoding are compared with the DFE algorithm by means of simulations.
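A minimal sketch of the core technique, assuming a real-valued flat model y = Hx + v with symbols from a finite alphabet (the paper's space-time formulation with channel memory is more elaborate): a depth-first sphere decoder that triangularizes the channel with a QR decomposition and prunes branches exceeding the best distance found so far.

```python
import numpy as np

def sphere_decode(y, H, alphabet):
    """Depth-first sphere decoder for y = H x + noise, x in alphabet^n.
    Returns the exact ML estimate; pruning is what keeps it fast."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)        # ||y - Hx||^2 = ||Q^T y - Rx||^2
    z = Q.T @ y
    best = {"x": None, "d2": np.inf}

    def descend(level, x, d2):
        if d2 >= best["d2"]:
            return                # prune: already outside current sphere
        if level < 0:
            best["x"], best["d2"] = x.copy(), d2
            return
        # Cancel interference from the already-fixed symbols x[level+1:].
        r = z[level] - R[level, level + 1:] @ x[level + 1:]
        for s in alphabet:
            x[level] = s
            descend(level - 1, x, d2 + (r - R[level, level] * s) ** 2)

    descend(n - 1, np.zeros(n), 0.0)
    return best["x"]

rng = np.random.default_rng(3)
n = 6
H = rng.standard_normal((n, n))
x = rng.choice([-3.0, -1.0, 1.0, 3.0], size=n)        # 4-PAM symbols
y = H @ x + 0.1 * rng.standard_normal(n)
print("ML estimate correct:",
      np.array_equal(sphere_decode(y, H, [-3.0, -1.0, 1.0, 3.0]), x))
```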
Manifold Optimization Over the Set of Doubly Stochastic Matrices: A Second-Order Geometry
Convex optimization is a well-established research area with applications in almost all fields. Over the decades, multiple approaches have been proposed to solve convex programs. The development of interior-point methods allowed solving a more general set of convex programs known as semidefinite programs and second-order cone programs. However, it has been established that these methods are excessively slow in high dimensions, i.e., they suffer from the curse of dimensionality. On the other hand, optimization algorithms on manifolds have shown great ability in finding solutions to nonconvex problems in reasonable time. This paper solves a subset of convex programs using a different approach. The main idea behind Riemannian optimization is to view a constrained optimization problem as an unconstrained one over a restricted search space. The paper introduces three manifolds for solving convex programs under particular box constraints. The manifolds, called the doubly stochastic, the symmetric, and the definite multinomial manifolds, generalize the simplex, also known as the multinomial manifold. The proposed manifolds and algorithms are well adapted to solving convex programs in which the variable of interest is a multidimensional probability distribution function. Theoretical analysis and simulation results attest to the efficiency of the proposed method over state-of-the-art methods. In particular, they reveal that the proposed framework outperforms conventional generic and specialized solvers, especially in high dimensions.
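For context on the constraint set involved, the sketch below uses the classical Sinkhorn-Knopp scaling, which maps a positive matrix to a nearby doubly stochastic one; scalings of this kind are a common ingredient of retractions onto the doubly stochastic manifold, though the paper's Riemannian metric and retraction may differ.

```python
import numpy as np

def sinkhorn(A, iters=1000, tol=1e-10):
    """Sinkhorn-Knopp: alternately normalize rows and columns of a
    strictly positive matrix until it is (numerically) doubly stochastic."""
    X = A.copy()
    for _ in range(iters):
        X /= X.sum(axis=1, keepdims=True)   # rows sum to 1
        X /= X.sum(axis=0, keepdims=True)   # columns sum to 1
        if np.allclose(X.sum(axis=1), 1.0, atol=tol):
            break
    return X

rng = np.random.default_rng(4)
A = rng.random((5, 5)) + 0.1                # strictly positive matrix
X = sinkhorn(A)
print(X.sum(axis=0))                        # all ones (up to tol)
print(X.sum(axis=1))
```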
New Null Space Results and Recovery Thresholds for Matrix Rank Minimization
Nuclear norm minimization (NNM) has recently gained significant attention for its use in rank minimization problems. Similar to compressed sensing, recovery thresholds for NNM have been studied via null space characterizations in \cite{arxiv,Recht_Xu_Hassibi}. However, simulations show that these thresholds are far from optimal, especially in the low-rank region. In this paper, we apply the recent compressed sensing analysis of Stojnic \cite{mihailo} to the null space conditions of NNM. The resulting thresholds are significantly better; in particular, our weak threshold appears to match simulation results. Further, our curves suggest that for any rank growing linearly with the matrix size, an oversampling factor of only three (relative to the model complexity) is needed for weak recovery. Similar to \cite{arxiv}, we analyze the conditions for weak, sectional, and strong thresholds. Additionally, a separate analysis is given for the special case of positive semidefinite matrices. We conclude by discussing simulation results and future research directions.
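The paper analyzes recovery thresholds rather than solvers, but a small experiment helps fix ideas: the sketch below solves a regularized form of NNM by proximal gradient with singular value thresholding, measuring a rank-1 matrix with roughly three times its model complexity. The solver, the step size rule, and all parameter values are illustrative assumptions, and the regularized problem only approximates the equality-constrained one.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: the prox operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nnm(A, b, shape, lam=1e-3, iters=2000):
    """Proximal gradient on 0.5*||A vec(X) - b||^2 + lam*||X||_*."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L with L = ||A||_2^2
    X = np.zeros(shape)
    for _ in range(iters):
        grad = (A.T @ (A @ X.ravel() - b)).reshape(shape)
        X = svt(X - step * grad, step * lam)
    return X

rng = np.random.default_rng(5)
n, r = 10, 1
X0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-1 target
m = 3 * (2 * n * r - r * r)        # ~3x the model complexity, per the abstract
A = rng.standard_normal((m, n * n)) / np.sqrt(m)    # Gaussian measurements
b = A @ X0.ravel()
X_hat = nnm(A, b, (n, n))
print("relative error:", np.linalg.norm(X_hat - X0) / np.linalg.norm(X0))
```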
Efficient Compressive Sensing with Deterministic Guarantees Using Expander Graphs
Compressive sensing is an emerging technology that can recover a sparse signal vector of dimension n from far fewer than n measurements. However, existing compressive sensing methods may still suffer from relatively high recovery complexity, such as O(n^3), or can only work efficiently when the signal is super sparse, sometimes without deterministic performance guarantees. In this paper, we propose a compressive sensing scheme with deterministic performance guarantees using expander-graph-based measurement matrices and show that signal recovery can be achieved with complexity O(n) even if the number of nonzero elements k grows linearly with n. We also investigate compressive sensing of approximately sparse signals using this new method. Moreover, explicit constructions of the considered expander graphs exist. Simulation results are given to show the performance and complexity of the new method.
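The sketch below only illustrates the measurement side: it builds the 0/1 measurement matrix as the adjacency matrix of a random left-d-regular bipartite graph, which is an expander with high probability (the paper relies on explicit constructions, and its O(n) recovery algorithm is not reproduced here). All sizes and names are illustrative.

```python
import numpy as np

def expander_matrix(m, n, d, rng):
    """Adjacency matrix of a random left-d-regular bipartite graph:
    each of the n signal coordinates connects to d of the m measurements."""
    A = np.zeros((m, n), dtype=np.int8)
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1
    return A

rng = np.random.default_rng(6)
n, m, d, k = 1000, 200, 8, 10
A = expander_matrix(m, n, d, rng)

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x                      # each measurement is a small unweighted sum
print("ones per measurement row (avg):", A.sum(axis=1).mean())
```

Because every column has only d ones, computing or updating the measurements costs O(d) per coordinate, which is the structural property the fast recovery guarantees build on.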
On the sphere-decoding algorithm II. Generalizations, second-order statistics, and applications to communications
In Part I, we found a closed-form expression for the expected complexity of the sphere-decoding algorithm, both for the infinite and the finite lattice. We continue the discussion in this paper by generalizing the results to the complex version of the problem and using the expected-complexity expressions to determine situations where sphere decoding is practically feasible. In particular, we consider applications of sphere decoding to detection in multiantenna systems. We show that, for a wide range of signal-to-noise ratios (SNRs), rates, and numbers of antennas, the expected complexity is polynomial, in fact, often roughly cubic. Since many communications systems operate at noise levels for which the expected complexity turns out to be polynomial, this suggests that maximum-likelihood decoding, which was hitherto thought to be computationally intractable, can in fact be implemented in real time, a result with many practical implications. To provide complexity information beyond the mean, we derive a closed-form expression for the variance of the complexity of the sphere-decoding algorithm in a finite lattice. Furthermore, we consider the expected complexity of sphere decoding for channels with memory, where the lattice-generating matrix has a special Toeplitz structure. Results indicate that the expected complexity in this case is also polynomial over a wide range of SNRs, rates, data block sizes, and channel impulse response lengths.
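A quick way to see the SNR dependence of the complexity empirically: count the tree nodes a depth-first sphere decoder visits (a standard proxy for its complexity) over random channel realizations at several SNRs. The flat i.i.d. Gaussian channel, BPSK alphabet, and trial counts below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def sd_nodes(y, H, alphabet):
    """Nodes visited by a depth-first sphere decoder whose radius shrinks
    to the best point found so far."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    state = {"d2": np.inf, "nodes": 0}

    def descend(level, x, d2):
        state["nodes"] += 1
        if d2 >= state["d2"]:
            return
        if level < 0:
            state["d2"] = d2
            return
        r = z[level] - R[level, level + 1:] @ x[level + 1:]
        for s in alphabet:
            x[level] = s
            descend(level - 1, x, d2 + (r - R[level, level] * s) ** 2)

    descend(n - 1, np.zeros(n), 0.0)
    return state["nodes"]

rng = np.random.default_rng(7)
n, trials = 8, 200
for snr_db in (5, 15, 25):
    sigma = np.sqrt(n / 10 ** (snr_db / 10))   # E||Hx||^2 per entry is n
    counts = []
    for _ in range(trials):
        H = rng.standard_normal((n, n))
        x = rng.choice([-1.0, 1.0], size=n)    # BPSK
        y = H @ x + sigma * rng.standard_normal(n)
        counts.append(sd_nodes(y, H, [-1.0, 1.0]))
    print(f"SNR {snr_db} dB: mean nodes visited = {np.mean(counts):.0f}")
```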
Delay Considerations for Opportunistic Scheduling in Broadcast Fading Channels
We consider a single-antenna broadcast block-fading channel with n users where the transmission is packet-based. We define the (packet) delay as the minimum number of channel uses that guarantees that all n users successfully receive m packets. This is a more stringent notion of delay than average delay and is the worst-case (access) delay among the users. A delay-optimal scheduling scheme, such as round-robin, achieves a delay of mn. For opportunistic scheduling (which is throughput-optimal), where the transmitter sends the packet to the user with the best channel conditions at each channel use, we derive the mean and variance of the delay for any m and n. For large n in a homogeneous network, it is proved that the expected delay in receiving one packet by all the receivers scales as n log n, as opposed to n for round-robin scheduling. We also show that when m grows faster than (log n)^r for some r > 1, the delay scales as mn. This roughly determines the timescale required for the system to behave fairly in a homogeneous network. We then propose a scheme to significantly reduce the delay at the expense of a small throughput loss. We further look into the advantage of multiple transmit antennas for the delay. For a system with M transmit antennas, where at each channel use packets are sent to M different users, we obtain the expected delay in receiving one packet by all the users.
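The n log n scaling for m = 1 has a familiar interpretation: in a homogeneous network the best-channel user is uniformly distributed among the n users in each slot, so the delay is exactly a coupon-collector time with mean n(log n + gamma). The simulation below checks this; the slot model and parameter values are illustrative.

```python
import numpy as np

def delay_opportunistic(n, m, rng):
    """Channel uses until every one of n users has m packets, when each
    slot serves a uniformly random user (homogeneous fading)."""
    received = np.zeros(n, dtype=int)
    t = 0
    while received.min() < m:
        received[rng.integers(n)] += 1
        t += 1
    return t

rng = np.random.default_rng(8)
n, m, trials = 100, 1, 2000
d = [delay_opportunistic(n, m, rng) for _ in range(trials)]
gamma = 0.5772                                  # Euler-Mascheroni constant
print(f"simulated mean delay = {np.mean(d):.0f}")
print(f"coupon-collector prediction n(log n + gamma) = {n * (np.log(n) + gamma):.0f}")
print(f"round-robin delay = {m * n}")
```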