9 research outputs found

    Efficient model-free Q-factor approximation in value space via log-sum-exp neural networks

    Get PDF
    We propose an efficient technique for performing data-driven optimal control of discrete-time systems. In particular, we show that log-sum-exp (LSE) neural networks, which are smooth and convex universal approximators of convex functions, can be efficiently used to approximate Q-factors arising from finite-horizon optimal control problems with continuous state space. The key advantage of these networks over classical approximation techniques is that they are convex and hence readily amenable to efficient optimization.
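    As a rough illustration of the idea, the sketch below implements a scaled log-sum-exp function of affine terms, the building block such networks are based on. The function name lse_network, the temperature parameter T, and the toy weights are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def lse_network(x, A, b, T=1.0):
    """Scaled log-sum-exp network: f(x) = T * log(sum_k exp((a_k . x + b_k) / T)).

    Convex and smooth in x for any weights (A, b) and temperature T > 0;
    as T -> 0 it approaches the pointwise max of the affine terms a_k . x + b_k.
    """
    z = (A @ x + b) / T                  # affine pre-activations, shape (K,)
    m = np.max(z)                        # shift for numerical stability
    return T * (m + np.log(np.sum(np.exp(z - m))))

# Toy usage: a convex surrogate Q(x, u) over a concatenated (state, input) vector.
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 3))             # 16 affine "neurons", input dimension 3
b = rng.normal(size=16)
xu = np.array([0.5, -1.0, 0.2])          # hypothetical (state, input) point
print(lse_network(xu, A, b, T=0.5))
```

    Because the resulting approximation is convex in its input, minimizing it over the control component can in principle be handled with standard convex optimization tools, which is the advantage the abstract emphasizes.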

    Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting

    Full text link
    Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of the Transformer to increase the prediction capacity. However, there are several severe issues with the Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and the inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient Transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L log L) time complexity and memory usage, and has comparable performance on sequences' dependency alignment; (ii) self-attention distilling, which highlights dominating attention by halving cascading layer input and efficiently handles extremely long input sequences; (iii) a generative-style decoder, which, while conceptually simple, predicts long time-series sequences in one forward operation rather than step by step, drastically improving the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem. Comment: 8 pages (main), 5 pages (appendix); to appear in AAAI 2021.
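    A minimal NumPy sketch of the query-sparsity idea behind ProbSparse self-attention follows. It scores every query for readability, whereas the paper samples only about log L keys per query to estimate the sparsity measure, which is what yields the O(L log L) cost; the function name, the choice of u, and the toy shapes are illustrative assumptions, not the released implementation.

```python
import numpy as np

def probsparse_attention(Q, K, V, u):
    """Simplified sketch of the ProbSparse self-attention idea.

    Only the u queries with the largest sparsity measure get full attention;
    the remaining "lazy" queries are replaced by the mean of V.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                        # (L_Q, L_K)
    # Sparsity measure M(q_i, K): max score minus mean score per query.
    M = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(-M)[:u]                             # the u "active" queries
    out = np.tile(V.mean(axis=0), (Q.shape[0], 1))       # lazy queries -> mean of V
    w = np.exp(scores[top] - scores[top].max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                    # softmax over keys
    out[top] = w @ V                                     # full attention for active queries
    return out

rng = np.random.default_rng(1)
L, d = 64, 8
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
ctx = probsparse_attention(Q, K, V, u=int(np.ceil(np.log(L))))
```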

    Log-sum-exp neural networks and posynomial models for convex and log-log-convex data

    No full text
    We show that a one-layer feedforward neural network with exponential activation functions in the inner layer and logarithmic activation in the output neuron is a universal approximator of convex functions. Such a network represents a family of scaled log-sum exponential functions, here named LSE_T. Under a suitable exponential transformation, the class of LSE_T functions maps to a family of generalized posynomials GPOS_T, which we similarly show to be universal approximators for log-log-convex functions. A key feature of an LSE_T network is that, once it is trained on data, the resulting model is convex in the variables, which makes it readily amenable to efficient design based on convex optimization. Similarly, once a GPOS_T model is trained on data, it yields a posynomial model that can be efficiently optimized with respect to its variables by using geometric programming (GP). The proposed methodology is illustrated by two numerical examples, in which, first, models are constructed from simulation data of the two physical processes (namely, the level of vibration in a vehicle suspension system, and the peak power generated by the combustion of propane), and then optimization-based design is performed on these models.
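    The exponential transformation mentioned in the abstract can be sketched numerically as below. The precise LSE_T parameterization (temperature T, affine weights A, b) is assumed from context, and the random coefficients are purely illustrative.

```python
import numpy as np

def lse_T(x, A, b, T):
    """LSE_T(x) = T * log(sum_k exp((A x + b)_k / T)); convex in x."""
    z = (A @ x + b) / T
    m = z.max()
    return T * (m + np.log(np.exp(z - m).sum()))

def gpos_T(y, A, b, T):
    """Matching generalized posynomial:
    GPOS_T(y) = (sum_k exp(b_k / T) * prod_i y_i^(A_ki / T))^T, log-log convex in y."""
    monomials = np.exp(b / T) * np.prod(y ** (A / T), axis=1)
    return monomials.sum() ** T

# The exponential change of variables y = exp(x) links the two models:
# GPOS_T(y) == exp(LSE_T(log y)).
rng = np.random.default_rng(2)
A, b, T = rng.normal(size=(5, 3)), rng.normal(size=5), 0.7
y = rng.uniform(0.5, 2.0, size=3)
assert np.isclose(gpos_T(y, A, b, T), np.exp(lse_T(np.log(y), A, b, T)))
```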

    Log-Sum-Exp Neural Networks and Posynomial Models for Convex and Log-Log-Convex Data

    No full text
    In this paper, we show that a one-layer feedforward neural network with exponential activation functions in the inner layer and logarithmic activation in the output neuron is a universal approximator of convex functions. Such a network represents a family of scaled log-sum exponential functions, here named log-sum-exp (LSE_T). Under a suitable exponential transformation, the class of LSE_T functions maps to a family of generalized posynomials GPOS_T, which we similarly show to be universal approximators for log-log-convex functions. A key feature of an LSE_T network is that, once it is trained on data, the resulting model is convex in the variables, which makes it readily amenable to efficient design based on convex optimization. Similarly, once a GPOS_T model is trained on data, it yields a posynomial model that can be efficiently optimized with respect to its variables by using geometric programming (GP). The proposed methodology is illustrated by two numerical examples, in which, first, models are constructed from simulation data of the two physical processes (namely, the level of vibration in a vehicle suspension system, and the peak power generated by the combustion of propane), and then optimization-based design is performed on these models.
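    Once a GPOS_T model has been fitted, the resulting posynomial can be optimized as a geometric program. The sketch below uses CVXPY's disciplined geometric programming mode with made-up coefficients standing in for a trained two-variable model; the variable names, constraints, and numeric values are illustrative assumptions, not taken from the paper's examples.

```python
import cvxpy as cp
import numpy as np

# Made-up coefficients standing in for a trained posynomial model
# f(x) = sum_k c_k * x1^{A_k1} * x2^{A_k2} (purely illustrative values).
c = np.array([1.2, 0.7, 3.0])
A = np.array([[1.0, -0.5],
              [-2.0, 1.0],
              [0.3, 0.3]])

x = cp.Variable(2, pos=True)

# Build the posynomial objective one monomial at a time.
monomials = [float(c[k]) * x[0] ** float(A[k, 0]) * x[1] ** float(A[k, 1])
             for k in range(len(c))]
objective_expr = monomials[0]
for m in monomials[1:]:
    objective_expr = objective_expr + m

# Illustrative design constraints, also in monomial/posynomial form.
constraints = [x[0] * x[1] >= 1.0, x[0] <= 10.0, x[1] <= 10.0]

prob = cp.Problem(cp.Minimize(objective_expr), constraints)
prob.solve(gp=True)                       # solve as a geometric program
print(x.value, prob.value)
```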