
    Power Allocation in Multiuser Parallel Gaussian Broadcast Channels With Common and Confidential Messages

    We consider a broadcast communication over parallel channels, where the transmitter sends K+1 messages: one common message to all users, and K confidential messages, one to each user, which must be kept secret from all unintended users. We assume partial channel state information at the transmitter, stemming from noisy channel estimation. Our main goal is to design a power allocation algorithm that maximizes the weighted sum rate of common and confidential messages under a total power constraint. The resulting problem for joint encoding across channels is formulated as the cascade of two problems, the inner min problem being discrete and the outer max problem being convex; hence, efficient algorithms for this kind of optimization program can be used to solve our power allocation problem. For the special case K=2, we provide an almost closed-form solution, where only two scalar variables must be optimized, e.g., through dichotomic searches. To reduce computational complexity, we propose three new algorithms that maximize the weighted sum rate achievable by two suboptimal schemes performing per-user and per-channel encoding. Through numerical results, we assess the performance of all proposed algorithms as a function of different system parameters.
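
    As an illustration of the dichotomic-search idea mentioned above, the sketch below runs a bisection (dichotomic) search on a water level so that a water-filling-style power allocation over parallel channels meets a total power constraint. This is only a minimal stand-in, not the paper's algorithm: the channel gains, the power budget, and the rate expression are illustrative assumptions.

        # Minimal sketch (assumption: single-user water-filling, not the paper's scheme).
        import numpy as np

        def waterfilling_bisection(g, P_tot, tol=1e-9, iters=200):
            """Allocate p_k = max(0, level - 1/g_k) so that sum(p) == P_tot."""
            lo, hi = 1e-12, 1.0 / g.min() + P_tot   # bracket for the water level
            for _ in range(iters):
                level = 0.5 * (lo + hi)
                p = np.maximum(0.0, level - 1.0 / g)
                if p.sum() > P_tot:
                    hi = level      # budget exceeded: lower the water level
                else:
                    lo = level      # budget not exhausted: raise the water level
                if hi - lo < tol:
                    break
            return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

        g = np.array([0.8, 2.0, 1.3, 0.4])        # illustrative channel gains
        p = waterfilling_bisection(g, P_tot=5.0)  # per-channel powers
        rates = np.log2(1.0 + g * p)              # per-channel achievable rates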

    Convergence in Distribution of Randomized Algorithms: The Case of Partially Separable Optimization

    We present a Markov-chain analysis of blockwise-stochastic algorithms for solving partially block-separable optimization problems. Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms, and in particular the regularity of the Markov operators and the rates of convergence of the distributions of the corresponding Markov chains. This provides a detailed characterization of the moments of the iterate sequences beyond just their expected behavior. It also serves as a case study of how randomization restores favorable properties that iterating with only partial information destroys. We demonstrate this on stochastic blockwise implementations of the forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a special case, convex), nonsmooth optimization.
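
    A minimal sketch, not the paper's analysis, of the kind of blockwise-stochastic forward-backward (proximal gradient) iteration whose random iterates form a Markov chain: at each step one coordinate block is sampled and only that block is updated. The quadratic loss, the L1 regularizer, and the step size are illustrative assumptions.

        # Minimal sketch: stochastic block-coordinate forward-backward for
        # min_x 0.5*||Ax - b||^2 + lam*||x||_1 (g is block-separable).
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 20))
        b = rng.standard_normal(50)
        lam = 0.1
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for the smooth part

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        blocks = np.array_split(np.arange(20), 4)  # four coordinate blocks
        x = np.zeros(20)
        for k in range(2000):
            idx = blocks[rng.integers(len(blocks))]       # sample one block
            grad_block = A[:, idx].T @ (A @ x - b)        # partial gradient
            # forward (gradient) step, then backward (prox) step on that block only
            x[idx] = soft_threshold(x[idx] - step * grad_block, step * lam)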

    Sequential learning without feedback

    In many security and healthcare systems, a sequence of features/sensors/tests is used for detection and diagnosis. Each test outputs a prediction of the latent state and carries with it inherent costs. Our objective is to learn strategies for selecting tests that optimize accuracy and costs. Unfortunately, it is often impossible to acquire in-situ ground-truth annotations, and we are left with the problem of unsupervised sensor selection (USS). We pose USS as a version of the stochastic partial monitoring problem with an unusual reward structure (even noisy annotations are unavailable). Unsurprisingly, no learner can achieve sublinear regret without further assumptions. To this end, we propose the notion of weak dominance. This is a condition on the joint probability distribution of test outputs and the latent state: whenever a test is accurate on an example, a later test in the sequence is likely to be accurate as well. We empirically verify that weak dominance holds on real datasets and prove that it is a maximal condition for achieving sublinear regret. We reduce USS to a special case of the multi-armed bandit problem with side information and develop polynomial-time algorithms that achieve sublinear regret.
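
    A minimal sketch of how the weak-dominance condition described above could be checked empirically: for each ordered pair of tests, estimate how often a later test is correct given that an earlier test was correct. The synthetic predictions and labels below are illustrative placeholders, not data or code from the paper.

        # Minimal sketch on synthetic data (assumption: later tests are more accurate).
        import numpy as np

        rng = np.random.default_rng(1)
        n_examples, n_tests = 1000, 3
        labels = rng.integers(0, 2, size=n_examples)
        noise = [0.3, 0.2, 0.1]                    # per-test error probabilities
        preds = np.stack([np.where(rng.random(n_examples) < p, 1 - labels, labels)
                          for p in noise], axis=1)

        correct = preds == labels[:, None]         # (n_examples, n_tests) booleans
        for i in range(n_tests):
            for j in range(i + 1, n_tests):
                cond = correct[correct[:, i], j].mean()  # P(test j correct | test i correct)
                print(f"P(test {j} correct | test {i} correct) = {cond:.3f}")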

    Low-delay Wireless Scheduling with Partial Channel-state Information

    We consider a server serving a time-slotted queued system of multiple packet-based flows, where no more than one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe instantaneous service rates for only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in the subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue in the system among all scheduling algorithms using partial information. To accomplish this, we introduce novel analytical techniques for studying the performance of scheduling algorithms using partial state information, which are of independent interest. These include new sample-path large deviations results for processes obtained by nonrandom, predictable sampling of sequences of independent and identically distributed random variables, which show that scheduling with partial state information yields a rate function significantly different from the full-information case. As a special case, Max-Exp reduces to simply serving the flow with the longest queue when the observable subsets are singleton flows, i.e., when there is effectively no a priori channel-state information; thus, our results show that this greedy scheduling policy is large-deviations optimal.
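
    A heavily simplified sketch of the two-stage structure described above, not the paper's exact Max-Exp rule or its analysis: pick an observable subset using only queue lengths, observe instantaneous rates inside it, and then schedule within the subset via an Exponential-rule-style index. The subset-selection heuristic, the weights, and the index form are illustrative assumptions.

        # Minimal sketch of a queue-aware sample-then-schedule slot decision.
        import numpy as np

        def max_exp_step(queues, subsets, observe_rates, a=None):
            """queues: array of queue lengths; subsets: list of index lists;
            observe_rates(S): returns instantaneous service rates for flows in S."""
            a = np.ones(len(queues)) if a is None else a
            # stage 1: queue-length-aware subset choice (largest total backlog here)
            S = max(subsets, key=lambda s: queues[s].sum())
            rates = np.asarray(observe_rates(S))          # stage 2: sample rates in S
            q_bar = queues.mean()
            # stage 3: Exponential-rule-style index within the sampled subset
            index = rates * np.exp(a[S] * queues[S] / (1.0 + np.sqrt(q_bar)))
            return S[int(np.argmax(index))]               # flow to serve this slot

    With singleton observable subsets, stage 1 alone reduces this sketch to serving the longest queue, mirroring the special case noted in the abstract.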