
    Bounding the Greedy Strategy in Finite-Horizon String Optimization

    We consider an optimization problem where the decision variable is a string of bounded length. For some time there has been an interest in bounding the performance of the greedy strategy for this problem. Here, we provide weakened sufficient conditions for the greedy strategy to be bounded by a factor of $(1-(1-1/K)^K)$, where $K$ is the optimization horizon length. Specifically, we introduce the notions of $K$-submodularity and $K$-GO-concavity, which together are sufficient for this bound to hold. By introducing a notion of curvature $\eta \in (0,1]$, we prove an even tighter bound with the factor $(1/\eta)(1-e^{-\eta})$. Finally, we illustrate the strength of our results by considering two example applications. We show that our results provide weaker conditions on parameter values in these applications than in previous results.
    Comment: This paper has been accepted by 2015 IEEE CD
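
    The short Python sketch below is not from the paper; the values of $K$ and $\eta$ are chosen purely for illustration. It evaluates the two bound factors above, showing how the horizon-length factor $(1-(1-1/K)^K)$ approaches $1-e^{-1}\approx 0.632$ as $K$ grows, and how the curvature factor $(1/\eta)(1-e^{-\eta})$ improves as $\eta$ decreases.

        import math

        def horizon_bound(K: int) -> float:
            # Factor (1 - (1 - 1/K)^K) guaranteed under K-submodularity and K-GO-concavity.
            return 1.0 - (1.0 - 1.0 / K) ** K

        def curvature_bound(eta: float) -> float:
            # Tighter factor (1/eta) * (1 - e^{-eta}) for curvature eta in (0, 1].
            return (1.0 / eta) * (1.0 - math.exp(-eta))

        # Illustrative values only (not taken from the paper's examples).
        for K in (2, 5, 20):
            print(f"K={K}: greedy value >= {horizon_bound(K):.4f} * optimal value")
        for eta in (1.0, 0.5, 0.1):
            print(f"eta={eta}: greedy value >= {curvature_bound(eta):.4f} * optimal value")

    For $\eta = 1$ the curvature factor coincides with $1-e^{-1}$, and it increases toward 1 as $\eta \to 0$, which is why the curvature-based bound is the tighter of the two.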

    Hypothesis Testing in Feedforward Networks with Broadcast Failures

    Consider a countably infinite set of nodes, which sequentially make decisions between two given hypotheses. Each node takes a measurement of the underlying truth, observes the decisions from some immediate predecessors, and makes a decision between the given hypotheses. We consider two classes of broadcast failures: 1) each node broadcasts a decision to the other nodes, subject to random erasure in the form of a binary erasure channel; 2) each node broadcasts a randomly flipped decision to the other nodes in the form of a binary symmetric channel. We are interested in whether there exists a decision strategy consisting of a sequence of likelihood ratio tests such that the node decisions converge in probability to the underlying truth. In both cases, we show that if each node only learns from a bounded number of immediate predecessors, then there does not exist a decision strategy such that the decisions converge in probability to the underlying truth. However, in case 1, we show that if each node learns from an unboundedly growing number of predecessors, then the decisions converge in probability to the underlying truth, even when the erasure probabilities converge to 1. We also derive the convergence rate of the error probability. In case 2, we show that if each node learns from all of its previous predecessors, then the decisions converge in probability to the underlying truth when the flipping probabilities of the binary symmetric channels are bounded away from 1/2. In the case where the flipping probabilities converge to 1/2, we derive a necessary condition on the convergence rate of the flipping probabilities such that the decisions still converge to the underlying truth. We also explicitly characterize the relationship between the convergence rate of the error probability and the convergence rate of the flipping probabilities.
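
    As a rough illustration of case 1, the toy Python simulation below is not the paper's construction: the window size, measurement accuracy, and erasure probability are hypothetical, and a simple majority vote stands in for the likelihood ratio tests analysed in the paper. Each node combines its own binary measurement with whichever broadcasts from its last few predecessors survive the erasure channel.

        import random

        def simulate(num_nodes=200, window=5, p_correct=0.7, p_erase=0.3, truth=1, seed=0):
            # Toy sketch of case 1 (binary erasure channel); parameters are illustrative only.
            rng = random.Random(seed)
            decisions = []
            for _ in range(num_nodes):
                # Private measurement: equals the truth with probability p_correct.
                own = truth if rng.random() < p_correct else -truth
                votes = [own]
                # Observe the last `window` predecessor decisions, each erased independently.
                for d in decisions[-window:]:
                    if rng.random() >= p_erase:  # broadcast survives the erasure channel
                        votes.append(d)
                # Majority vote as a simplified stand-in for a likelihood ratio test.
                decisions.append(1 if sum(votes) >= 0 else -1)
            return decisions

        # Fraction of wrong decisions among the last 50 nodes (one illustrative run).
        dec = simulate()
        print(sum(d != 1 for d in dec[-50:]) / 50)

    With a bounded window, as in this sketch, the paper's negative result says the error probability cannot vanish; convergence to the truth requires the number of observed predecessors to grow without bound.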