
    Unconstrained receding-horizon control of nonlinear systems

    It is well known that unconstrained infinite-horizon optimal control may be used to construct a stabilizing controller for a nonlinear system. We show that similar stabilization results may be achieved using unconstrained finite-horizon optimal control. The key idea is to approximate the tail of the infinite-horizon cost-to-go using, as terminal cost, an appropriate control Lyapunov function. Roughly speaking, the terminal control Lyapunov function (CLF) should provide an (incremental) upper bound on the cost. In this fashion, important stability characteristics may be retained without the use of terminal constraints such as those employed by a number of other researchers. The absence of constraints allows a significant speedup in computation. Furthermore, it is shown that in order to guarantee stability, it suffices to satisfy an improvement property, thereby relaxing the requirement that truly optimal trajectories be found. We provide a complete analysis of the stability and region-of-attraction/operation properties of receding-horizon control strategies that utilize finite-horizon approximations in the proposed class. It is shown that the guaranteed region of operation contains that of the CLF controller and may be made as large as desired by increasing the optimization horizon (restricted, of course, to the infinite-horizon domain). Moreover, it is easily seen that both the CLF and infinite-horizon optimal control approaches are limiting cases of our receding-horizon strategy. The key results are illustrated using a familiar example, the inverted pendulum, where significant improvements in guaranteed region of operation and cost are noted.
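The scheme the abstract describes can be sketched in a few lines: minimize a finite-horizon cost whose terminal term is a CLF, apply only the first input, and repeat. This is a minimal illustrative sketch, not the paper's inverted-pendulum example; the scalar plant, horizon length, weights, and CLF below are all assumptions chosen for simplicity.

```python
import numpy as np
from scipy.optimize import minimize

dt, N, c = 0.1, 8, 10.0     # sampling step, horizon length, terminal weight

def f(x, u):
    # illustrative unstable scalar plant (an assumption, not the paper's model)
    return x + dt * (x**3 + u)

def clf(x):
    return x**2             # assumed control Lyapunov function V(x) = x^2

def horizon_cost(us, x0):
    # finite-horizon cost with the CLF as terminal cost -- no terminal
    # constraint, so the optimization stays unconstrained
    x, J = x0, 0.0
    for u in us:
        J += dt * (x**2 + u**2)
        x = f(x, u)
    return J + c * clf(x)   # CLF approximates the tail of the cost-to-go

def rhc_input(x0):
    # solve the unconstrained finite-horizon problem, keep only u[0]
    return minimize(horizon_cost, np.zeros(N), args=(x0,)).x[0]

x = 1.0
for _ in range(30):         # receding-horizon loop
    x = f(x, rhc_input(x))
```

Note the division of labor: the terminal CLF supplies the stability guarantee, so the inner optimization need only improve the cost, not solve it exactly.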

    Iterative learning control: algorithm development and experimental benchmarking

    This thesis concerns the general area of experimental benchmarking of Iterative Learning Control (ILC) algorithms using two experimental facilities. ILC is an approach suitable for applications where the same task is executed repeatedly over a necessarily finite time duration, known as the trial length. The process is reset prior to the commencement of each execution. The basic idea of ILC is to use information from previously executed trials to update the control input to be applied during the next one. The first experimental facility is a non-minimum phase electro-mechanical system and the other is a gantry robot whose basic task is to pick and place objects on a moving conveyor, under synchronization and in a fixed finite time duration, replicating many tasks encountered in the process industries. Novel contributions are made in both the development of new algorithms and, especially, in the analysis of experimental results, both of a single algorithm alone and in the comparison of the relative performance of different algorithms. In the case of non-minimum phase systems, a new algorithm, named Reference Shift ILC (RSILC), is developed that has a two-loop structure. One learning loop addresses the system lag and the other tackles the possibility of a large initial plant input commonly encountered when using basic iterative learning control algorithms. After basic algorithm development and simulation studies, experimental results are given, from which it is concluded that the performance improvement over previously reported algorithms is reasonable. The gantry robot has previously been used to experimentally benchmark a range of simple-structure ILC algorithms, such as those based on the ILC versions of the classical proportional plus derivative error-actuated controllers, and some state-space based optimal ILC algorithms.
Here, these results are extended by the first detailed experimental study of the performance of stochastic ILC algorithms, together with some modifications to their configuration necessary to increase performance. The majority of currently reported ILC algorithms focus mainly on reducing the trial-to-trial error, but it is known that this may come at the cost of poor or unacceptable along-the-trial performance. Control theory for discrete linear repetitive processes is used to design ILC control laws that enable the control of both trial-to-trial error convergence and the along-the-trial dynamics. These algorithms can be computed using Linear Matrix Inequalities (LMIs), and again the results of experimental implementation on the gantry robot are given. These results are the first in this key area and represent a benchmark against which alternatives can be compared. In the concluding chapter, a critical overview of the results presented is given, together with areas for both short- and medium-term further research.
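The trial-to-trial update idea described above can be sketched with the simplest (P-type) ILC law, u_{k+1}(t) = u_k(t) + L·e_k(t+1): the input for the next trial is the previous trial's input corrected by the previous trial's error. This is a toy sketch under assumed parameters; the stable scalar plant, learning gain L, and sinusoidal reference are illustrative, not the thesis's gantry-robot or RSILC configurations.

```python
import numpy as np

T, trials, L = 50, 20, 0.5   # trial length, number of trials, learning gain
r = np.sin(np.linspace(0.0, 2 * np.pi, T))   # same task repeated every trial

def run_trial(u):
    # assumed plant y[t+1] = 0.3*y[t] + u[t], reset to zero before each trial
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = 0.3 * y[t] + u[t]
    return y

u, errs = np.zeros(T), []
for _ in range(trials):      # trial-to-trial (iteration-domain) loop
    e = r - run_trial(u)
    errs.append(np.linalg.norm(e))
    # P-type update from the PREVIOUS trial's error; the one-step shift
    # e[t+1] compensates the plant's unit input-output delay
    u[:-1] += L * e[1:]
```

With these assumed values the tracking error norm shrinks monotonically from trial to trial; the thesis's point is that such trial-to-trial convergence alone says nothing about behavior along each trial, which motivates the repetitive-process designs.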

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": sparsity-driven data sensing and processing; union of low-dimensional subspaces; beyond linear and convex inverse problems; matrix/manifold/graph sensing/processing; blind inverse problems and dictionary learning; sparsity and computational neuroscience; information theory, geometry and randomness; complexity/accuracy tradeoffs in numerical methods; sparsity? what's next?; sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts; iTWIST'14 website: http://sites.google.com/site/itwist1

    Autoregressive time series prediction by means of fuzzy inference systems using nonparametric residual variance estimation

    We propose an automatic methodology framework for short- and long-term prediction of time series by means of fuzzy inference systems. In this methodology, fuzzy techniques and statistical techniques for nonparametric residual variance estimation are combined in order to build autoregressive predictive models implemented as fuzzy inference systems. Nonparametric residual variance estimation plays a key role in driving the identification and learning procedures. Concrete criteria and procedures within the proposed methodology framework are applied to a number of time series prediction problems. The learn-from-examples method introduced by Wang and Mendel (W&M) is used for identification. The Levenberg–Marquardt (L–M) optimization method is then applied for tuning. The W&M method produces compact and potentially accurate inference systems when applied after a proper variable selection stage. The L–M method yields the best compromise between accuracy and interpretability of results among a set of alternatives. Delta test-based residual variance estimates are used in order to select the best subset of inputs to the fuzzy inference systems as well as the number of linguistic labels for the inputs. On a diverse set of time series prediction benchmarks, the proposed methodology is compared against least-squares support vector machines (LS-SVM), the optimally pruned extreme learning machine (OP-ELM), and k-NN based autoregressors. The advantages of the proposed methodology are shown in terms of linguistic interpretability, generalization capability, and computational cost. Furthermore, fuzzy models are shown to be consistently more accurate for prediction in the case of time series coming from real-world applications.
    Funding: Ministerio de Ciencia e Innovación TEC2008-04920; Junta de Andalucía P08-TIC-03674, IAC07-I-0205:33080, IAC08-II-3347:5626
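The Delta test mentioned in the abstract estimates the residual (noise) variance of y = f(x) + ε directly from data, without fitting any model: for each sample, compare its output with that of its nearest neighbour in input space and average half the squared differences. A minimal sketch, assuming the first-order (single nearest neighbour) form of the test and a toy noisy-sine dataset:

```python
import numpy as np

def delta_test(X, y):
    # First-order Delta test: estimate the residual variance of
    # y = f(X) + noise without fitting a model.  X is (N, d), y is (N,).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)      # a point is not its own neighbour
    nn = d2.argmin(axis=1)            # nearest neighbour in input space
    return 0.5 * np.mean((y - y[nn]) ** 2)

# Toy data: noisy sine with noise std 0.1, so true residual variance = 0.01
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2 * np.pi, size=(2000, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.normal(size=2000)
est = delta_test(x, y)                # should come out close to 0.01
```

In the paper's framework this estimate serves as a model-free floor on the achievable training error, which is what lets it drive input selection and the choice of the number of linguistic labels.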

    Senses, brain and spaces workshop


    GPU-Accelerated Algorithms for Compressed Signals Recovery with Application to Astronomical Imagery Deblurring

    Get PDF
    Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting the parallel computation capabilities of GPUs to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
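The memory saving rests on a standard property of circulant matrices: they are diagonalized by the DFT, so matrix-vector products and inversions reduce to elementwise operations on FFTs, and the n×n matrix never needs to be formed. A CPU-side sketch of the idea (the paper's contribution is the GPU parallelization, which is not reproduced here):

```python
import numpy as np

def circulant_matvec(c, x):
    # C @ x where C is the n x n circulant matrix with first column c,
    # computed via the FFT in O(n log n) time and O(n) memory --
    # the full matrix is never formed.
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_solve(c, b):
    # Solve C y = b by dividing in the Fourier domain; requires C to be
    # nonsingular, i.e. fft(c) has no zero entries.
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))
```

The same two operations are the inner kernels of compressed-sensing recovery with a circulant sensing matrix, and each FFT maps naturally onto GPU batch-FFT libraries.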