
    Maximal-Capacity Discrete Memoryless Channel Identification

    The problem of identifying the channel with the highest capacity among several discrete memoryless channels (DMCs) is considered. The problem is cast as a pure-exploration multi-armed bandit problem, reflecting the practical use of training sequences to sense the communication channel statistics. A capacity estimator is proposed and tight confidence bounds on the estimation error are derived. Based on this estimator, a gap-elimination algorithm termed BestChanID is proposed, which is oblivious to the capacity-achieving input distribution and is guaranteed to output the DMC with the largest capacity with a desired confidence. Furthermore, two additional algorithms, NaiveChanSel and MedianChanEl, which output with a certain confidence a DMC whose capacity is close to the maximum, are introduced. Each of these algorithms is beneficial in a different regime and can be used as a subroutine in BestChanID. The sample complexity of all algorithms is analyzed as a function of the desired confidence parameter, the number of channels, and the channels' input and output alphabet sizes. The cost of best-channel identification is shown to scale quadratically with the alphabet size, and a fundamental lower bound on the number of channel senses required to identify the best channel with a certain confidence is derived.
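    The abstract does not reproduce the paper's estimator or confidence bounds, but the overall recipe can be sketched. Below is a minimal Python sketch, assuming a plug-in capacity estimate computed by the classic Blahut-Arimoto iteration and an abstract confidence radius; the names estimate_capacity, identify_best, sense, and radius are illustrative stand-ins, not the paper's BestChanID or its bounds.

        import numpy as np

        def _mi_rows(W, q):
            """D(W_i || q) for each input row i, with the 0*log 0 = 0 convention."""
            safe_W = np.where(W > 0, W, 1.0)
            safe_q = np.where(q > 0, q, 1.0)
            return np.where(W > 0, W * np.log(safe_W / safe_q), 0.0).sum(axis=1)

        def estimate_capacity(W, iters=200):
            """Plug-in capacity estimate (in nats) for a DMC with (empirical)
            transition matrix W, rows = inputs, columns = outputs, computed
            by the classic Blahut-Arimoto iteration."""
            p = np.full(W.shape[0], 1.0 / W.shape[0])  # start from the uniform input
            for _ in range(iters):
                d = _mi_rows(W, p @ W)     # KL of each row against the output law
                p = p * np.exp(d)          # Blahut-Arimoto multiplicative update
                p /= p.sum()
            return float(p @ _mi_rows(W, p @ W))  # = I(p; W) near the fixed point

        def identify_best(sense, K, radius, batch=1000, max_rounds=50):
            """Gap-elimination sketch: drop any channel whose capacity upper
            confidence bound falls below the best lower confidence bound.
            sense(k, t) is an assumed callback returning an empirical
            transition matrix from t training symbols on channel k;
            radius(t) is an assumed confidence radius after t senses."""
            active, t = list(range(K)), 0
            for _ in range(max_rounds):
                t += batch
                est = {k: estimate_capacity(sense(k, t)) for k in active}
                best_lcb = max(est[k] - radius(t) for k in active)
                active = [k for k in active if est[k] + radius(t) >= best_lcb]
                if len(active) == 1:
                    break
            return max(active, key=lambda k: est[k])

    The paper's tight confidence bounds would supply radius; here it is left abstract, and the elimination loop is only a schematic of the gap-elimination idea.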

    The (Surprising) Sample Optimality of Greedy Procedures for Large-Scale Ranking and Selection

    Ranking and selection (R&S), which aims to select the best alternative with the largest mean performance from a finite set of alternatives, is a classic research topic in simulation optimization. Recently, considerable attention has turned towards the large-scale variant of the R&S problem, which involves a large number of alternatives. Ideal large-scale R&S procedures should be sample optimal, i.e., the total sample size required to deliver an asymptotically non-zero probability of correct selection (PCS) should grow at the minimal (linear) order in the number of alternatives, but few procedures in the literature are sample optimal. Surprisingly, we discover that the naïve greedy procedure, which keeps sampling the alternative with the largest running average, performs strikingly well and appears sample optimal. To understand this discovery, we develop a new boundary-crossing perspective and prove that the greedy procedure is indeed sample optimal. We further show that the derived PCS lower bound is asymptotically tight for the slippage configuration of means with a common variance. Moreover, we propose the explore-first greedy (EFG) procedure and its enhanced version (the EFG+ procedure) by adding an exploration phase to the naïve greedy procedure. Both procedures are proven to be sample optimal and consistent. Finally, we conduct extensive numerical experiments to empirically understand the performance of our greedy procedures in solving large-scale R&S problems.
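    The greedy rule itself is one line: keep sampling whichever alternative currently looks best. Below is a minimal Python sketch, assuming a user-supplied sampling callback; greedy_rs and its parameters are illustrative, with n0 = 1 corresponding to the naïve greedy procedure and a larger warm-up n0 playing the role of the exploration phase in EFG (the EFG+ refinement is not reproduced here).

        import numpy as np

        def greedy_rs(sample, K, budget, n0=1):
            """Greedy ranking-and-selection sketch: after n0 warm-up samples
            per alternative, always sample the alternative with the largest
            running average, then report the final leader."""
            sums = np.zeros(K)
            counts = np.zeros(K, dtype=int)
            for k in range(K):                      # warm-up / exploration phase
                for _ in range(n0):
                    sums[k] += sample(k)
                    counts[k] += 1
            for _ in range(budget - n0 * K):        # greedy phase
                k = int(np.argmax(sums / counts))   # current leader
                sums[k] += sample(k)
                counts[k] += 1
            return int(np.argmax(sums / counts))    # selected alternative

        # Toy run on a slippage-like configuration: one alternative is best
        # by a margin of 0.1 and all noise shares a common variance
        # (illustrative only, not the paper's experimental setup).
        rng = np.random.default_rng(0)
        mu = np.where(np.arange(1000) == 0, 0.1, 0.0)
        pick = greedy_rs(lambda k: rng.normal(mu[k], 1.0),
                         K=1000, budget=20000, n0=5)

    Note that the total budget here grows linearly in K, which is exactly the sample-optimality order the abstract discusses.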