
    Iterative learning control: algorithm development and experimental benchmarking

    This thesis concerns the experimental benchmarking of Iterative Learning Control (ILC) algorithms using two experimental facilities. ILC is suited to applications where the same task is executed repeatedly over a necessarily finite time duration, known as the trial length, with the process reset prior to each execution. The basic idea of ILC is to use information from previously executed trials to update the control input applied during the next one. The first experimental facility is a non-minimum phase electro-mechanical system; the other is a gantry robot whose basic task is to pick and place objects on a moving conveyor, under synchronization and in a fixed finite time duration, replicating many tasks encountered in the process industries. Novel contributions are made both in the development of new algorithms and, especially, in the analysis of experimental results, for single algorithms in isolation and for the comparison of the relative performance of different algorithms.

    For non-minimum phase systems, a new algorithm named Reference Shift ILC (RSILC) is developed. It has a two-loop structure: one learning loop addresses the system lag and the other tackles the large initial plant input commonly encountered when using basic ILC algorithms. After algorithm development and simulation studies, experimental results are given which show a reasonable performance improvement over previously reported algorithms.

    The gantry robot has previously been used to experimentally benchmark a range of simple-structure ILC algorithms, such as ILC versions of the classical proportional plus derivative error-actuated controllers, and some state-space based optimal ILC algorithms. Here these results are extended by the first detailed experimental study of the performance of stochastic ILC algorithms, together with some modifications to their configuration necessary to increase performance.

    Most currently reported ILC algorithms focus on reducing the trial-to-trial error, but this may come at the cost of poor or unacceptable performance of the along-the-trial dynamics. Control theory for discrete linear repetitive processes is used to design ILC laws that control both trial-to-trial error convergence and the along-the-trial dynamics. These laws can be computed using Linear Matrix Inequalities (LMIs), and results of their experimental implementation on the gantry robot are again given. These are the first results in this key area and represent a benchmark against which alternatives can be compared. The concluding chapter gives a critical overview of the results presented, together with areas for short- and medium-term further research.
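    To make the trial-to-trial idea concrete, here is a minimal sketch of a basic P-type ILC update law; the toy first-order plant, the learning gain, and the one-sample shift are illustrative assumptions, not the RSILC algorithm developed in the thesis.

```python
import numpy as np

T = 100                                        # trial length (samples)
ref = np.sin(np.linspace(0, 2 * np.pi, T))     # reference to track on every trial

def plant(u):
    """Toy first-order discrete-time plant, reset to zero before each trial."""
    y, x = np.zeros_like(u), 0.0
    for t in range(len(u)):
        x = 0.9 * x + 0.1 * u[t]
        y[t] = x
    return y

u = np.zeros(T)                                # control input, refined trial by trial
L = 2.0                                        # assumed learning gain
for trial in range(50):
    e = ref - plant(u)                         # error recorded on the current trial
    u[:-1] += L * e[1:]                        # P-type update with a one-sample lead
print("final RMS tracking error:", np.sqrt(np.mean((ref - plant(u)) ** 2)))
```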

    Visual parameter optimisation for biomedical image processing

    Background: Biomedical image processing methods require users to optimise input parameters to ensure high quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results: We present a visualisation method that transforms users’ ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions: The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches.
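    As a rough illustration of the input-output exploration such a visualisation supports, the sketch below sweeps a small grid of perturbed stain matrices for colour deconvolution and scores each output; the scikit-image sample data, the perturbation scheme, and the two-stain leakage metric are assumptions made for illustration, not the authors' tool or their expert's metrics.

```python
import numpy as np
from skimage import data
from skimage.color import separate_stains, rgb_from_hed

rgb = data.immunohistochemistry()              # sample stained-histology image

def deconvolve(delta):
    """Deconvolve with a stain matrix whose first stain vector is perturbed."""
    M = rgb_from_hed.copy()
    M[0] = (M[0] + delta) / np.linalg.norm(M[0] + delta)
    return separate_stains(rgb, np.linalg.inv(M))

def leakage(stains):
    # Illustrative metric: if only two stains are present, energy in the
    # third channel indicates imperfect separation.
    return float(np.mean(np.abs(stains[..., 2])))

grid = [np.array([dx, dy, 0.0]) for dx in (-0.05, 0.0, 0.05)
                                for dy in (-0.05, 0.0, 0.05)]
best = min(grid, key=lambda d: leakage(deconvolve(d)))
print("best stain-vector perturbation:", best)
```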

    Enhancing Performance of a Deep Neural Network: A Comparative Analysis of Optimization Algorithms

    Adopting the most suitable optimization algorithm (optimizer) for a neural network model is among the most important decisions in deep learning, for all classes of neural networks, and it is typically a case of trial-and-error experimentation. In this paper, we experiment with seven of the most popular optimization algorithms, namely sgd, rmsprop, adagrad, adadelta, adam, adamax and nadam, on four unrelated datasets, to determine which one delivers the best accuracy, efficiency and performance for our deep neural network. This work provides insightful analysis for data scientists choosing the best optimizer when modelling their deep neural networks.
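    A minimal version of such a comparison loop, assuming a small Keras model on MNIST rather than the paper's four datasets and exact architecture, might look like this:

```python
import tensorflow as tf

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

results = {}
for name in ["sgd", "rmsprop", "adagrad", "adadelta", "adam", "adamax", "nadam"]:
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=name,              # Keras accepts string identifiers
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=3, verbose=0)
    results[name] = model.evaluate(x_te, y_te, verbose=0)[1]

for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} test accuracy: {acc:.4f}")
```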

    Parameter Tuning Using Gaussian Processes

    Most machine learning algorithms require us to set up their parameter values before applying these algorithms to solve problems. Appropriate parameter settings will bring good performance, while inappropriate ones generally result in poor modelling. Hence, it is necessary to acquire the “best” parameter values for a particular algorithm before building the model. The “best” model not only reflects the “real” function and is well fitted to existing points, but also gives good performance when making predictions for new points with previously unseen values. A number of methods have been proposed to optimize parameter values. The basic idea of all such methods is a trial-and-error process, whereas the work presented in this thesis employs Gaussian process (GP) regression to optimize the parameter values of a given machine learning algorithm. In this thesis, we consider the optimization of only two-parameter learning algorithms. All possible parameter values are specified on a two-dimensional grid in this work. To avoid brute-force search, Gaussian Process Optimization (GPO) makes use of “expected improvement” to pick useful points rather than validating every point of the grid step by step. The point with the highest expected improvement is evaluated using cross-validation and the resulting data point is added to the training set for the Gaussian process model. This process is repeated until a stopping criterion is met. The final model is built using the learning algorithm based on the best parameter values identified in this process. In order to test the effectiveness of this optimization method on regression and classification problems, we use it to optimize parameters of some well-known machine learning algorithms, such as decision tree learning, support vector machines and boosting with trees. Through the analysis of experimental results obtained on datasets from the UCI repository, we find that the GPO algorithm yields competitive performance compared with a brute-force approach, while exhibiting a distinct advantage in terms of training time and number of cross-validation runs. Overall, GPO is a promising method for the optimization of parameter values in machine learning.
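    A compact sketch of this loop, assuming an SVM with two parameters (C and gamma), scikit-learn's GP regressor, and a small evaluation budget standing in for the thesis's exact setup:

```python
import numpy as np
from scipy.stats import norm
from sklearn.datasets import load_iris
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# 2-D grid over log10(C) and log10(gamma)
grid = np.array([[c, g] for c in np.linspace(-2, 3, 11)
                        for g in np.linspace(-4, 1, 11)])

def cv_score(p):
    return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=5).mean()

tried = [0, len(grid) // 2, len(grid) - 1]       # small seed design
scores = [cv_score(grid[i]) for i in tried]

for _ in range(15):                              # optimisation budget
    gp = GaussianProcessRegressor(normalize_y=True).fit(grid[tried], scores)
    mu, sd = gp.predict(grid, return_std=True)
    z = (mu - max(scores)) / np.maximum(sd, 1e-9)
    ei = (mu - max(scores)) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    ei[tried] = -np.inf                          # never revisit evaluated points
    tried.append(int(np.argmax(ei)))
    scores.append(cv_score(grid[tried[-1]]))

print("best parameters (log10 C, log10 gamma):", grid[tried[int(np.argmax(scores))]])
```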

    JMASM 55: MATLAB Algorithms and Source Codes of 'cbnet' Function for Univariate Time Series Modeling with Neural Networks (MATLAB)

    Artificial Neural Networks (ANNs) can be designed as a nonparametric tool for time series modeling, and MATLAB serves as a powerful environment for ANN modeling. Although the Neural Network Time Series Tool (ntstool) is useful for modeling time series, more detailed functions are needed to obtain more comprehensive analysis results. For these purposes, the cbnet function has been developed, with properties such as an input lag generator, a step-ahead forecaster, a trial-and-error-based network selection strategy, alternative network selection with various performance measures, and a global repetition feature to obtain more alternative networks; its MATLAB algorithms and source code are introduced. A detailed comparison with ntstool is carried out, showing that the cbnet function covers the shortcomings of ntstool.
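    cbnet itself is a MATLAB function; the sketch below is a rough Python analogue of two of the features listed (an input-lag generator and a trial-and-error network selection loop), with the lag orders, hidden sizes, and holdout split chosen purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lag_matrix(series, lags):
    """Turn a univariate series into (X, y) pairs of lagged inputs and targets."""
    X = np.column_stack([series[lags - l - 1 : -l - 1] for l in range(lags)])
    return X, series[lags:]

series = np.sin(np.arange(300) * 0.1) + 0.1 * np.random.default_rng(0).normal(size=300)

best = (np.inf, None)
for lags in (2, 4, 8):                         # candidate input lags
    X, y = lag_matrix(series, lags)
    split = int(0.8 * len(y))
    for hidden in (5, 10, 20):                 # candidate network sizes
        net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000,
                           random_state=0).fit(X[:split], y[:split])
        rmse = np.sqrt(np.mean((net.predict(X[split:]) - y[split:]) ** 2))
        best = min(best, (rmse, (lags, hidden)))

print("selected (lags, hidden units):", best[1], "holdout RMSE:", round(best[0], 4))
```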

    Sparse Iterative Learning Control with Application to a Wafer Stage: Achieving Performance, Resource Efficiency, and Task Flexibility

    Trial-varying disturbances are a key concern in Iterative Learning Control (ILC) and may lead to inefficient and expensive implementations and severe performance deterioration. The aim of this paper is to develop a general framework for optimization-based ILC that allows for enforcing additional structure, including sparsity. The proposed method enforces sparsity in a generalized setting through convex relaxations using ℓ1 norms. The proposed ILC framework is applied to the optimization of sampling sequences for resource-efficient implementation, trial-varying disturbance attenuation, and basis function selection. The framework has large potential in control applications such as mechatronics, as is confirmed through an application on a wafer stage.
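    The core sparsity mechanism can be illustrated with a single convex program: penalise the input update with an ℓ1 norm so that many entries of the correction become exactly zero. The toy lifted-plant matrix and weight below are assumptions; the paper's framework is considerably more general.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
T = 50
J = np.tril(rng.normal(0.0, 0.3, (T, T)))      # toy lifted plant (lower triangular = causal)
e = rng.normal(0.0, 1.0, T)                    # error measured on the current trial

du = cp.Variable(T)                            # input update u_{k+1} - u_k
lam = 0.5                                      # assumed sparsity weight
cost = cp.sum_squares(e - J @ du) + lam * cp.norm(du, 1)
cp.Problem(cp.Minimize(cost)).solve()

print("nonzero entries in the update:", int(np.sum(np.abs(du.value) > 1e-6)), "of", T)
```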

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: this map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
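    A toy sketch of the map-then-adapt loop described above, with a crude local-discount update standing in for the paper's more principled Bayesian machinery, and a synthetic damage model assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
predicted = rng.uniform(0.2, 1.0, n)           # behaviour-performance map built before deployment

def measure(i):
    """Post-damage performance: some behaviour families break badly (assumed model)."""
    return (0.1 if i % 7 == 0 else 0.85) * predicted[i]

estimate = predicted.copy()
for trial in range(1, 21):
    i = int(np.argmax(estimate))               # try the behaviour the map rates best
    perf = measure(i)
    if perf > 0.6:                             # good-enough compensatory behaviour found
        print(f"adapted after {trial} trial(s): behaviour {i}, performance {perf:.2f}")
        break
    lo, hi = max(0, i - 5), min(n, i + 6)      # discount the failing region of the map
    estimate[lo:hi] *= perf / max(predicted[i], 1e-9)
```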