16 research outputs found

    Dispatching to Parallel Servers: Solutions of Poisson's Equation for First-Policy Improvement

    Policy iteration techniques for multiple-server dispatching rely on the computation of value functions. In this context, we consider the continuous-space M/G/1-FCFS queue endowed with an arbitrarily designed cost function for the waiting times of the incoming jobs. The associated value function is a solution of Poisson's equation for Markov chains, which in this work we solve in the Laplace transform domain by considering an ancillary, underlying stochastic process extended to (imaginary) negative backlog states. This construction enables us to derive closed-form value functions for polynomial and exponential cost functions and for piecewise compositions of the latter, in turn permitting the derivation of interval bounds for the value function in the form of power series or trigonometric sums. We review several cost approximation schemes and assess the convergence of the interval bounds they induce on the value function, namely Taylor expansions (divergent, except for a narrow class of entire functions of low order of growth) and uniform approximation schemes (polynomial, trigonometric), which achieve optimal convergence rates over finite intervals. This study addresses all the steps of implementing dispatching policies for systems of parallel servers, from the specification of general cost functions to the computation of interval bounds for the value functions and the exact implementation of the first-policy improvement step.
    Comment: 34 pages, including 6 figures and 4 appendices; supplementary material (11 pages) available under 'Ancillary files'. Submitted for publication.
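    For reference, the following is a minimal sketch of Poisson's equation in the generic form usually attached to a value function; the generator notation, cost rate, and mean cost rate below are standard placeholders, not the paper's exact notation.

```latex
% Poisson's equation for the relative value function v of a Markov
% process with generator \mathcal{A}, cost rate c(x), and long-run
% average cost \bar{c}:
\[
  (\mathcal{A} v)(x) = \bar{c} - c(x),
\]
% so that v(x) - v(y) measures the expected excess cumulative cost of
% starting the (backlog) process in state x rather than y.
```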

    Fast Optimization with Zeroth-Order Feedback in Distributed, Multi-User MIMO Systems

    In this paper, we develop a gradient-free optimization methodology for efficient resource allocation in Gaussian MIMO multiple access channels. Our approach combines two main ingredients: (i) an entropic semidefinite optimization method based on matrix exponential learning (MXL); and (ii) a one-shot gradient estimator which achieves low variance through the reuse of past information. This novel algorithm, which we call the gradient-free MXL algorithm with callbacks (MXL0+), retains the convergence speed of gradient-based methods while requiring minimal feedback per iteration: a single scalar. In more detail, in a MIMO multiple access channel with K users and M transmit antennas per user, the MXL0+ algorithm achieves ε-optimality within poly(K, M)/ε² iterations (on average and with high probability), even when implemented in a fully distributed, asynchronous manner. For cross-validation, we also perform a series of numerical experiments in medium- to large-scale MIMO networks under realistic channel conditions. Throughout our experiments, the performance of MXL0+ matches, and sometimes exceeds, that of gradient-based MXL methods, all the while operating with a vastly reduced communication overhead. In view of these findings, the MXL0+ algorithm appears to be uniquely suited for distributed massive MIMO systems where gradient calculations can become prohibitively expensive.
    Comment: Final version; to appear in IEEE Transactions on Signal Processing; 16 pages, 4 figures.
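    For intuition, here is a minimal sketch, under our own illustrative assumptions (toy channel H, trace budget P, step and perturbation schedules, single user), of a matrix-exponential-learning update driven by one scalar utility query per iteration with the previous query reused as a baseline. It is a sketch of the general idea, not the paper's MXL0+ implementation.

```python
# Sketch: entropic / matrix-exponential update of an input covariance Q
# (tr(Q) <= P) driven by a one-point gradient surrogate.  All names
# (sum_rate, project, delta, step) are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = 4                      # transmit antennas
P = 1.0                    # power budget: tr(Q) <= P
H = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))

def sum_rate(Q):
    """Scalar utility: log det(I + H Q H^H)."""
    return np.log(np.linalg.det(np.eye(M) + H @ Q @ H.conj().T)).real

def project(Y):
    """Entropic (matrix-exponential) map onto {Q >= 0, tr(Q) = P}."""
    E = expm(Y)
    return P * E / np.trace(E).real

Y = np.zeros((M, M), dtype=complex)        # score / dual variable
f_prev = sum_rate(project(Y))              # 'callback': remembered past value
for t in range(1, 200):
    delta, step = 0.1 / np.sqrt(t), 0.05 / np.sqrt(t)
    Z = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
    Z = (Z + Z.conj().T) / 2               # random Hermitian direction
    Z /= np.linalg.norm(Z)
    f_new = sum_rate(project(Y + delta * Z))   # the single scalar query
    V = ((f_new - f_prev) / delta) * Z         # one-point gradient surrogate
    Y = Y + step * V                           # mirror / MXL ascent step
    f_prev = f_new                             # reuse as next baseline
print("final sum rate:", sum_rate(project(Y)))
```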

    Derivative-Free Optimization over Multi-User MIMO Networks

    Accepted for presentation at the 10th International Conference on NETwork Games, COntrol and OPtimization (NETGCOOP), Cargèse.
    In wireless communication, the full potential of multiple-input multiple-output (MIMO) arrays can only be realized through the optimization of their transmission parameters. Distributed solutions dedicated to that end include iterative optimization algorithms involving the computation of the gradient of a given objective function and its dissemination among the network users. In the context of large-scale MIMO, however, computing and conveying large arrays of function derivatives across a network carries a cost that is prohibitive for communication standards. In this paper we show that multi-user MIMO networks can be optimized without using any derivative information. Focusing on the throughput maximization problem in a MIMO multiple access channel, we propose a "derivative-free" optimization methodology that relies on very little feedback information: a single function query at each iteration. Our approach integrates two complementary ingredients: exponential learning (a derivative-based expression of the mirror descent algorithm with entropic regularization), and a single-function-query gradient estimation technique derived from a classic approach to derivative-free optimization.
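    For reference, a minimal sketch of a single-function-query gradient estimate in the spirit of classic derivative-free optimization; the objective f, the step and perturbation schedules, and the reuse of the previous query as a baseline are our own illustrative choices, not the paper's exact estimator.

```python
# Sketch: one function evaluation per iteration, gradient surrogate
# built from a random direction and the remembered previous value.
import numpy as np

rng = np.random.default_rng(1)

def f(x):                      # unknown objective, queried as a black box
    return -np.sum((x - 0.3) ** 2)

x = np.zeros(5)
f_prev = f(x)                  # value remembered from the previous iteration
for t in range(1, 500):
    delta, step = 0.5 / t ** 0.25, 0.5 / np.sqrt(t)
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)                  # random unit direction
    f_new = f(x + delta * u)                # the single query of this iteration
    g_hat = ((f_new - f_prev) / delta) * u  # gradient surrogate from one scalar
    x = x + step * g_hat                    # ascent step (maximization)
    f_prev = f_new
print("estimate:", np.round(x, 2))
```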

    Parallel stochastic optimization based on descent algorithms

    This study addresses the stochastic optimization of a function unknown in closed form, which can only be estimated based on measurements or simulations. We consider parallel implementations of a class of stochastic optimization methods that consist of the iterative application of a descent algorithm to a sequence of approximation functions converging in some sense to the function of interest. After discussing classical parallel modes of implementation (Jacobi, Gauss-Seidel, random, Gauss-Southwell), we devise effort-saving implementation modes in which the pace of application of the considered descent algorithm along individual coordinates is coordinated with the evolution of the estimated accuracy of the convergent function sequence. It is shown that this approach can be regarded as a Gauss-Southwell implementation of the initial method in an augmented space. As an example of application, we study the distributed optimization of stochastic networks using a scaled gradient projection algorithm with approximate line search, for which asymptotic properties are derived.
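    As an illustration of the implementation modes mentioned above, the following toy sketch (our own quadratic example, not from the paper) contrasts Jacobi, Gauss-Seidel, and Gauss-Southwell applications of a plain gradient step.

```python
# Jacobi: all coordinates updated from the same iterate.
# Gauss-Seidel: coordinates updated in sequence, reusing fresh values.
# Gauss-Southwell: only the coordinate with the largest gradient entry.
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
step = 0.2

def grad(x):
    """Gradient of the toy quadratic f(x) = 0.5 x'Ax - b'x."""
    return A @ x - b

def jacobi(x):
    return x - step * grad(x)                      # simultaneous update

def gauss_seidel(x):
    x = x.copy()
    for i in range(x.size):                        # sequential, in place
        x[i] -= step * grad(x)[i]
    return x

def gauss_southwell(x):
    g = grad(x)
    i = np.argmax(np.abs(g))                       # most promising coordinate
    x = x.copy()
    x[i] -= step * g[i]
    return x

x = np.ones(2)
for _ in range(50):
    x = gauss_seidel(x)        # swap in jacobi / gauss_southwell to compare
print("approx. minimizer:", np.round(x, 3),
      "exact:", np.round(np.linalg.solve(A, b), 3))
```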

    Distributed Methods for Convex Optimisation: Application to Cooperative Wireless Sensor Networks

    A main issue in wireless sensor networks is the efficient exploitation of the individual energy resources of the sensor nodes, which communicate with each other by means of energy-demanding wireless transmissions. To this end, it is essential to regulate and optimise the traffic of information cooperatively conveyed by the sensors across the network. The central theme of this study is the problem of distributed allocation of information flows (routing) in wireless sensor networks. We are concerned, in particular, with a class of problems falling into the convex optimisation framework. Focus is set on a family of iterative optimisation algorithms based on distributed implementations of the gradient projection method, a well-established optimisation technique known for its simplicity in principle and in realisation. A thorough exploration of the global and asymptotic convergence properties is carried out for several variants of the method, with emphasis on the sequential and random implementations, for which synchronism between the sensors is not required. In a later part of the report, we address the optimisation of wireless sensor networks with time-varying properties and consider this new problem within the stochastic optimisation framework. Our efforts are directed toward questioning the convergence, in time-varying environments, of some accepted optimisation methods for time-invariant networks, and more particularly of the distributed gradient projection algorithms studied in the earlier part of the report.
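    A minimal sketch of a gradient projection step for a routing split, under our own illustrative assumptions (an M/M/1-style link delay cost, one unit of traffic split over three links); it is not the thesis's algorithm, only the basic building block it distributes.

```python
# Sketch: projected gradient descent of a link delay cost over the
# simplex of routing shares {x >= 0, sum x = r}.
import numpy as np

def project_simplex(y, r=1.0):
    """Euclidean projection of y onto {x >= 0, sum(x) = r}."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u) - r
    rho = np.nonzero(u - css / np.arange(1, len(y) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(y - theta, 0.0)

def link_cost_grad(x, cap):
    """Gradient of sum_l x_l / (cap_l - x_l), an M/M/1-style delay cost."""
    return cap / (cap - x) ** 2

cap = np.array([1.0, 2.0, 4.0])      # link capacities (illustrative)
x = np.array([0.3, 0.3, 0.4])        # current split of one unit of traffic
for _ in range(100):
    x = project_simplex(x - 0.05 * link_cost_grad(x, cap), r=1.0)
print("flow split:", np.round(x, 3))
```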

    Fault Detection by Desynchronized Kalman Filtering, Introduction to Robust Estimation

    This paper deals with the state estimation of dynamic systems. A recursive linear MMSE estimator is presented as an alternative to Kalman filtering. This estimator is able to cope with asynchronous measurements and to process the data in sets of arbitrary size. It is particularly suitable for fault detection, because decisions can be based on more data, which opens the door to robust estimation. A mixed estimator robust to various failure scenarios is then derived using the Bayesian approach. This mixed estimator is originally intended for applications requiring high-integrity estimation. It is finally tested on a rail navigation problem.
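    A minimal sketch of how a linear MMSE measurement update can absorb measurement sets of varying size by stacking them into one linear model; the model matrices, noise levels, and function names below are our own illustrative assumptions, not the estimator derived in the paper.

```python
# Sketch: batch linear MMSE (Kalman-type) update accepting a variable
# number of stacked measurements per call.
import numpy as np

def mmse_batch_update(x, P, H, R, z):
    """Update mean x and covariance P with stacked measurements z = H x + v."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # MMSE gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# two sensors reporting at once (batch of 2), then a lone report (batch of 1)
x, P = np.zeros(2), np.eye(2)
H2, R2 = np.eye(2), 0.1 * np.eye(2)
x, P = mmse_batch_update(x, P, H2, R2, z=np.array([0.9, -0.4]))
H1, R1 = np.array([[1.0, 1.0]]), np.array([[0.2]])
x, P = mmse_batch_update(x, P, H1, R1, z=np.array([0.6]))
print("state estimate:", np.round(x, 3))
```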

    Dispatching fixed-sized jobs with multiple deadlines to parallel heterogeneous servers

    We study the M/D/1 queue when jobs have firm deadlines on their waiting (or sojourn) time. If a deadline is not met, a job-specific deadline violation cost is incurred. We derive explicit value functions for this M/D/1 queue that enable the development of efficient cost-aware dispatching policies to parallel servers. The performance of the resulting dispatching policies is evaluated by means of simulations.
    Peer reviewed
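    A minimal sketch of the resulting cost-aware dispatching rule: send the arriving job to the server whose relative value function increases the least when the job is appended to its backlog. The placeholder value function v, the loads RHO, and the deterministic service times S below are illustrative assumptions; the explicit value functions are derived in the paper.

```python
# Sketch: first-policy-improvement style dispatching to parallel
# heterogeneous servers, using a placeholder relative value function.
import numpy as np

RHO = [0.5, 0.7, 0.9]            # loads of the heterogeneous servers (assumed)
S = [1.0, 0.8, 0.6]              # deterministic service times per server (assumed)

def v(u, server):
    """Placeholder relative value function of backlog u at a given server."""
    return (u ** 2) / (2.0 * (1.0 - RHO[server]))   # illustrative only

def dispatch(backlogs):
    """Index of the server with the smallest value-function increment."""
    increments = [v(u + S[i], i) - v(u, i) for i, u in enumerate(backlogs)]
    return int(np.argmin(increments))

print("chosen server:", dispatch(backlogs=[2.0, 1.5, 0.5]))
```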