
    An Optimal Medium Access Control with Partial Observations for Sensor Networks

    We consider medium access control (MAC) in multihop sensor networks, where only partial information about the shared medium is available to the transmitter. We model our setting as a queuing problem in which the service rate of a queue is a function of a partially observed Markov chain representing the available bandwidth, and in which the arrivals are controlled based on the partial observations so as to keep the system in a desirable, mildly unstable regime. The optimal controller for this problem satisfies a separation property: we first compute a probability measure on the state space of the chain, namely the information state, and then use this measure as the new state on which the control decisions are based. We give a formal description of the system considered and of its dynamics, formalize and solve an optimal control problem, and present numerical simulations that illustrate properties of the optimal control law with concrete examples. We show how the ergodic behavior of our queuing model is characterized by an invariant measure over all possible information states, and we construct that measure. Our results can be applied directly to the design of efficient and stable algorithms for medium access control in multiple-access systems, in particular sensor networks.
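
    As a concrete illustration of the separation property described above, here is a minimal sketch of the information-state recursion for a finite-state partially observed Markov chain, with the control decision based on the resulting belief. The two-state "bandwidth" chain, the observation likelihoods, and the threshold rule are all hypothetical stand-ins, not the paper's model.

    ```python
    import numpy as np

    def belief_update(belief, P, obs_likelihood):
        """One step of the information-state recursion: predict with the
        transition matrix, then apply a Bayes correction using the
        likelihood of the partial observation under each hidden state."""
        predicted = belief @ P                  # prior over the next state
        posterior = predicted * obs_likelihood  # elementwise Bayes update
        return posterior / posterior.sum()      # renormalize

    # Hypothetical two-state bandwidth chain: "congested" vs. "free".
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    belief = np.array([0.5, 0.5])

    # Likelihood of the observed outcome under each hidden state
    # (illustrative values only).
    obs_likelihood = np.array([0.3, 0.7])
    belief = belief_update(belief, P, obs_likelihood)

    # A separated controller bases its decision on the belief alone,
    # e.g. a (hypothetical) threshold rule on the "free" probability:
    transmit = belief[1] > 0.6
    ```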

    Active Classification for POMDPs: a Kalman-like State Estimator

    The problem of state tracking with active observation control is considered for a system modeled by a discrete-time, finite-state Markov chain observed through conditionally Gaussian measurement vectors. The measurement model statistics are shaped by the underlying state and an exogenous control input, which influence the observations' quality. Exploiting an innovations approach, an approximate minimum mean-squared error (MMSE) filter is derived to estimate the Markov chain state. To optimize the control strategy, the associated mean-squared error is used as the optimization criterion in a partially observable Markov decision process formulation, and a stochastic dynamic programming algorithm is proposed to compute the optimal solution. To enhance the quality of the state estimates, approximate MMSE smoothing estimators are also derived. Finally, the performance of the proposed framework is illustrated on the problem of physical activity detection in wireless body sensing networks. The power of the framework lies in its ability to accommodate a broad spectrum of active classification applications, including sensor management for object classification and tracking, estimation of sparse signals, and radar scheduling.
    Comment: 38 pages, 6 figures
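
    To make the filtering setup concrete, the sketch below runs a generic exact filter for a finite-state Markov chain observed through conditionally Gaussian scalar measurements; the posterior it propagates is what an MMSE state estimate is built from. The paper derives an approximate MMSE filter via an innovations approach for vector measurements, so this is only an illustration of the problem structure; the three-state activity chain and its measurement statistics are invented.

    ```python
    import numpy as np
    from scipy.stats import norm

    def hmm_gaussian_filter(y, P, means, sigmas, prior):
        """Recursive filter for a finite-state Markov chain observed in
        Gaussian noise whose statistics depend on the hidden state."""
        belief, out = prior.copy(), []
        for yk in y:
            belief = belief @ P                            # time update
            belief = belief * norm.pdf(yk, means, sigmas)  # measurement update
            belief /= belief.sum()
            out.append(belief.copy())
        return np.array(out)

    # Hypothetical 3-state activity chain (rest / walk / run).
    P = np.array([[0.95, 0.04, 0.01],
                  [0.05, 0.90, 0.05],
                  [0.01, 0.09, 0.90]])
    means = np.array([0.0, 1.0, 2.0])    # state-dependent measurement means
    sigmas = np.array([0.3, 0.3, 0.3])
    y = [0.1, 0.2, 1.1, 0.9, 2.2]

    beliefs = hmm_gaussian_filter(y, P, means, sigmas, np.ones(3) / 3)
    mmse_label = beliefs[-1] @ np.arange(3)  # posterior mean over state labels
    ```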

    Event-Driven Optimal Feedback Control for Multi-Antenna Beamforming

    Transmit beamforming is a simple multi-antenna technique for increasing the throughput and transmission range of a wireless communication system. The required feedback of channel state information (CSI) can result in excessive overhead, especially for high mobility or many antennas. This work concerns efficient feedback for transmit beamforming and establishes a new approach to controlling feedback so as to maximize net throughput, defined as throughput minus average feedback cost. The feedback controller, using a stationary policy, turns CSI feedback on and off according to the system state, which comprises the channel state and the transmit beamformer. Assuming the channel is isotropic and Markovian, the controller's state reduces to two scalars, which allows the optimal control policy to be computed efficiently using dynamic programming. Consider first a perfect feedback channel free of errors, where each feedback instant pays a fixed price. The corresponding optimal feedback control policy is proved to be of the threshold type, regardless of whether the controller's state space is discretized or continuous. Under the threshold-type policy, feedback is performed whenever a state variable indicating the accuracy of transmit CSI falls below a threshold that varies with the channel power. The practical finite-rate feedback channel is also considered, and the optimal policy for quantized feedback is likewise proved to be of the threshold type. The effect of CSI quantization is shown to be equivalent to an increment of the feedback price; moreover, the increment is upper bounded by the expected logarithm of one minus the quantization error. Finally, simulations show that feedback control increases the net throughput of conventional periodic feedback by up to 0.5 bit/s/Hz without requiring additional bandwidth or antennas.
    Comment: 29 pages; submitted for publication
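
    The threshold structure of the optimal policy can be reproduced in a toy dynamic program. The sketch below discretizes a single scalar tracking transmit-CSI accuracy that decays each slot, with feedback resetting it at a fixed price, and runs value iteration; the accuracy-decay model, the reward, and all constants are invented for illustration and are much simpler than the two-scalar state in the paper.

    ```python
    import numpy as np

    # Toy model: state = CSI accuracy in [0, 1]; it decays each slot as the
    # channel evolves, and feedback resets it to 1 at a fixed price.
    grid = np.linspace(0.0, 1.0, 101)        # discretized accuracy grid
    decay, price, gamma = 0.9, 0.4, 0.95     # decay factor, feedback cost, discount

    V = np.zeros(grid.size)
    for _ in range(2000):                    # value iteration to a fixed point
        nxt = np.searchsorted(grid, decay * grid)  # next state if no feedback
        q_skip = grid + gamma * V[nxt]       # per-slot throughput ~ accuracy
        q_feed = 1.0 - price + gamma * V[-1] # feed back: pay price, reset to 1
        V = np.maximum(q_skip, q_feed)

    feed = q_feed > q_skip                   # states where feedback is optimal
    threshold = grid[feed].max() if feed.any() else 0.0
    print(f"feed back whenever accuracy <= {threshold:.2f}")  # threshold policy
    ```

    Because the value of skipping feedback grows with the accuracy while the value of feeding back is constant in it, the optimal action switches exactly once along the grid, which is the threshold behavior the paper proves in its more general setting.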

    On the connections between PCTL and Dynamic Programming

    Probabilistic Computation Tree Logic (PCTL) is a well-known modal logic that has become a standard for expressing temporal properties of finite-state Markov chains in the context of automated model checking. In this paper, we give a definition of PCTL for noncountable-space Markov chains and show that there is a substantial affinity between some of its operators and problems of Dynamic Programming. After proving some uniqueness properties of the solutions to the latter, we conclude with two examples showing that certain recovery strategies in practical applications, which are naturally stated as reach-avoid problems, can actually be viewed as particular cases of PCTL formulas.
    Comment: Submitted
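
    For finite-state chains, the reach-avoid quantities mentioned above are exactly the value functions of a simple backward dynamic program. The sketch below computes, for every initial state, the probability of reaching a target set within a finite horizon while avoiding an unsafe set, the finite-state analogue of a bounded-until PCTL formula; the 4-state chain is a hypothetical example.

    ```python
    import numpy as np

    def reach_avoid_prob(P, target, avoid, horizon):
        """Backward dynamic programming for the probability of reaching
        `target` within `horizon` steps while never entering `avoid`,
        i.e. the finite-state analogue of a bounded-until PCTL property.
        The target and avoid sets are assumed disjoint."""
        v = np.zeros(P.shape[0])
        v[target] = 1.0                    # already at the target
        for _ in range(horizon):
            v = P @ v                      # one-step expectation
            v[target] = 1.0                # absorbing on success
            v[avoid] = 0.0                 # absorbing on failure
        return v

    # Hypothetical 4-state chain: state 3 is the target, state 0 is unsafe.
    P = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.3, 0.4, 0.2, 0.1],
                  [0.1, 0.2, 0.3, 0.4],
                  [0.0, 0.0, 0.0, 1.0]])
    print(reach_avoid_prob(P, target=[3], avoid=[0], horizon=10))
    ```

    Checking a PCTL formula such as P>=0.9 [ safe U<=10 target ] then reduces to comparing these computed values against the probability bound, which is the connection to Dynamic Programming that the paper develops for noncountable state spaces.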

    Large deviation asymptotics and control variates for simulating large functions

    Consider the normalized partial sums of a real-valued function $F$ of a Markov chain, $\phi_n := n^{-1}\sum_{k=0}^{n-1} F(\Phi(k))$, $n \ge 1$. The chain $\{\Phi(k) : k \ge 0\}$ takes values in a general state space $\mathsf{X}$, with transition kernel $P$, and it is assumed that the Lyapunov drift condition $PV \le V - W + b\,\mathbb{I}_C$ holds, where $V : \mathsf{X} \to (0,\infty)$, $W : \mathsf{X} \to [1,\infty)$, the set $C$ is small, and $W$ dominates $F$. Under these assumptions, the following conclusions are obtained:
    1. It is known that this drift condition is equivalent to the existence of a unique invariant distribution $\pi$ satisfying $\pi(W) < \infty$, and the law of large numbers holds for any function $F$ dominated by $W$: $\phi_n \to \phi := \pi(F)$ a.s. as $n \to \infty$.
    2. The lower error probability, defined by $\mathsf{P}\{\phi_n \le c\}$ for $c < \phi$ and $n \ge 1$, satisfies a large deviation limit theorem when the function $F$ satisfies a monotonicity condition. Under additional minor conditions, an exact large deviations expansion is obtained.
    3. If $W$ is near-monotone, then control variates are constructed based on the Lyapunov function $V$, providing a pair of estimators that together satisfy nontrivial large deviation asymptotics for the lower and upper error probabilities.
    In an application to the simulation of queues, it is shown that exact large deviation asymptotics are possible even when the estimator does not satisfy a central limit theorem.
    Comment: Published at http://dx.doi.org/10.1214/105051605000000737 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
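
    The control-variate construction in item 3 is easy to illustrate on a simple queue. The sketch below simulates a discrete-time birth-death queue, takes the Lyapunov function $V(x) = x^2$, and uses $\Delta(x) = PV(x) - V(x)$, which has mean zero under the invariant distribution, as a control variate for estimating the mean queue length; the dynamics, the choice of $V$, and the empirically fitted coefficient are illustrative choices, not the paper's exact estimators.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p, q = 0.3, 0.5              # arrival / service probabilities per slot

    def drift(x):
        """Delta(x) = P V(x) - V(x) for V(x) = x^2 under these dynamics.
        Delta has zero mean under the invariant distribution, so it is a
        valid control variate for time-average estimators."""
        return 2.0 * x * (p - q) + (p + q) if x >= 1 else p

    n, x = 200_000, 0
    fs, ds = np.empty(n), np.empty(n)
    for k in range(n):
        fs[k], ds[k] = x, drift(x)   # F(x) = x: estimate the mean queue length
        u = rng.random()
        if u < p:
            x += 1                   # arrival
        elif u < p + q and x > 0:
            x -= 1                   # service completion

    C = np.cov(fs, ds)
    theta = C[0, 1] / C[1, 1]        # variance-minimizing coefficient
    print(f"plain estimate: {fs.mean():.3f}")
    # Exact mean for this toy chain is (p/q) / (1 - p/q) = 1.5.
    print(f"with control variate: {(fs - theta * ds).mean():.3f}")
    ```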