    Robust and reliable decision-making systems and algorithms

    We investigate robustness and reliability in decision-making systems and algorithms based on the tradeoff between cost and performance. We propose two abstract frameworks to investigate robustness and reliability concerns, which critically impact the design and analysis of systems and algorithms based on unreliable components.

    We consider robustness in online systems and algorithms under the framework of online optimization subject to adversarial perturbations. Online optimization models a rich class of problems from information theory, machine learning, game theory, optimization, and signal processing. It is a repeated game in which, on each round, a player selects an action from a decision set using a randomized strategy, and then Nature reveals a loss function for this action, for which the player incurs a loss. Modeling the perturbations through a worst-case adversary, we introduce a randomized algorithm that is provably robust even against such adversarial attacks. In particular, we show that this algorithm is Hannan-consistent with respect to a rich class of randomized strategies under mild regularity conditions.

    We next focus on the reliability of decision-making systems and algorithms through the problem of fusing several unreliable computational units that perform the same task under cost and fidelity constraints. In particular, we model the relationship between the fidelity of the outcome and the cost of computing it as an additive perturbation. We analyze the performance of repetition-based strategies that distribute cost across several unreliable units and fuse their outcomes. When the cost is a convex function of fidelity, the optimal repetition-based strategy, which minimizes the total incurred cost while achieving a target mean-square error (MSE), may fuse several computational units. For concave and linear costs, a single more reliable unit incurs lower cost than a fusion of several cheaper, less reliable units achieving the same MSE. We show how these results give insight into problems from theoretical neuroscience, circuits, and crowdsourcing.

    Finally, we study an application of a partial-information extension of the cost-fidelity framework of this dissertation to a stochastic gradient descent problem in which the underlying cost-fidelity function is assumed to be unknown. We present a generic framework for trading off fidelity and cost in computing stochastic gradients when the costs of acquiring stochastic gradients of different quality are not known a priori. We consider a mini-batch oracle that distributes a limited query budget over a number of stochastic gradients and aggregates them to estimate the true gradient. Since the optimal mini-batch size depends on the unknown cost-fidelity function, we propose an algorithm, EE-Grad, that sequentially explores the performance of mini-batch oracles and exploits the accumulated knowledge to identify the one achieving the best performance in terms of cost efficiency. We provide performance guarantees for EE-Grad with respect to the optimal mini-batch oracle and illustrate these results in the case of strongly convex objectives.
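    The randomized repeated-game setup described above can be made concrete with a standard example. The sketch below is not the dissertation's perturbation-robust algorithm; it is the classic exponential-weights (Hedge) strategy, a textbook instance of a Hannan-consistent randomized strategy in the full-information repeated game, shown only to illustrate the player/Nature protocol.

```python
import numpy as np

def hedge_play(loss_matrix, eta=0.1, seed=None):
    """Exponential-weights (Hedge) play against a sequence of loss vectors.

    loss_matrix: shape (T, K), losses in [0, 1]; row t is the loss function
    Nature reveals on round t.  Returns the player's total loss and its regret
    against the best fixed action in hindsight.
    """
    rng = np.random.default_rng(seed)
    T, K = loss_matrix.shape
    weights = np.ones(K)
    total_loss = 0.0
    for t in range(T):
        probs = weights / weights.sum()     # randomized strategy over the K actions
        action = rng.choice(K, p=probs)     # the player samples an action
        loss_t = loss_matrix[t]             # Nature reveals the loss function
        total_loss += loss_t[action]        # the player incurs the corresponding loss
        weights *= np.exp(-eta * loss_t)    # multiplicative update on every action
    best_fixed = loss_matrix.sum(axis=0).min()
    return total_loss, total_loss - best_fixed
```

    With eta on the order of sqrt(ln K / T), the expected regret of this strategy grows as O(sqrt(T log K)), so the per-round regret vanishes as T grows; vanishing average regret against the best fixed action is exactly the Hannan-consistency property referenced in the abstract.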
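    The cost-fidelity tradeoff behind the fusion results can be illustrated with a toy model. The assumptions below (a unit of fidelity phi returns the true value plus zero-mean noise of variance 1/phi, and the three specific cost functions) are illustrative choices, not the dissertation's formulation; they only show why a convex cost in fidelity favors fusing many cheap units while a concave cost favors a single reliable one.

```python
import numpy as np

def repetition_cost(target_mse, fidelity, cost_of_fidelity):
    """Total cost of reaching `target_mse` by averaging i.i.d. unbiased units.

    Toy assumption: a unit of fidelity `phi` returns the truth plus zero-mean
    noise of variance 1/phi, so averaging n such units yields MSE 1/(n * phi).
    """
    n = int(np.ceil(1.0 / (target_mse * fidelity)))   # units needed for the target MSE
    return n * cost_of_fidelity(fidelity), n

target = 0.01
for name, cost_fn in [("convex  (phi^2)    ", lambda p: p ** 2),
                      ("linear  (phi)      ", lambda p: p),
                      ("concave (sqrt(phi))", lambda p: np.sqrt(p))]:
    many_cheap = repetition_cost(target, fidelity=1.0, cost_of_fidelity=cost_fn)
    one_reliable = repetition_cost(target, fidelity=100.0, cost_of_fidelity=cost_fn)
    print(name, "fuse many cheap units:", many_cheap, " single reliable unit:", one_reliable)
```

    In this toy model the convex cost makes fusing 100 cheap units far cheaper than a single high-fidelity unit, the strictly concave cost reverses the ordering, and the purely linear cost ties the two; the dissertation's more general analysis resolves the linear case in favor of the single, more reliable unit.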
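    The EE-Grad exploration/exploitation idea can likewise be sketched at a high level. The code below is not the dissertation's EE-Grad update; it is a simple epsilon-greedy stand-in that scores each candidate mini-batch oracle by the observed decrease in the objective per unit of cost. Both the scoring rule and the interface of `grad_oracle` are assumptions made for illustration.

```python
import numpy as np

def ee_grad_sketch(f, grad_oracle, x0, batch_sizes, budget,
                   lr=0.1, eps=0.2, seed=None):
    """Illustrative explore/exploit loop over mini-batch oracles.

    f(x)              -> objective value (assumed available for scoring).
    grad_oracle(x, b) -> (noisy gradient, cost incurred) for mini-batch size b;
                         its cost-fidelity behaviour is treated as unknown.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    k = len(batch_sizes)
    gains, pulls, spent = np.zeros(k), np.zeros(k), 0.0
    while spent < budget:
        if pulls.min() == 0 or rng.random() < eps:
            i = int(rng.integers(k))                      # explore an oracle
        else:
            i = int(np.argmax(gains / pulls))             # exploit the best average score
        g, cost = grad_oracle(x, batch_sizes[i])
        f_before = f(x)
        x = x - lr * g                                    # SGD step with the noisy estimate
        gains[i] += (f_before - f(x)) / max(cost, 1e-12)  # objective decrease per unit cost
        pulls[i] += 1
        spent += cost
    return x
```

    For example, the strongly convex setting mentioned above could be mimicked by taking f(x) = ||x||^2 / 2 and an oracle that returns x plus Gaussian noise of variance proportional to 1/b at a cost proportional to b.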