
    Multivariable Iterative Learning Control Design Procedures: from Decentralized to Centralized, Illustrated on an Industrial Printer

    Iterative Learning Control (ILC) enables high control performance through learning from measured data, using only limited model knowledge in the form of a nominal parametric model. Robust stability requires robustness to modeling errors, often due to deliberate undermodeling. The aim of this paper is to develop a range of approaches for multivariable ILC, where specific attention is given to addressing interaction. The proposed methods either address the interaction in the nominal model, or as uncertainty, i.e., through robust stability. The result is a range of techniques, including the use of the structured singular value (SSV) and Gershgorin bounds, that provide a different trade-off between modeling requirements, i.e., modeling effort and cost, and achievable performance. This allows control engineers to select the approach that fits the modeling budget and control requirements. This trade-off is demonstrated in a case study on an industrial flatbed printer.
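    The trial-to-trial learning idea can be sketched in a few lines. The scalar plant, learning gain, and reference below are invented for illustration; the paper's multivariable designs (SSV, Gershgorin bounds) are not reproduced here.

```python
# A minimal first-order ILC sketch on a toy scalar plant (all values
# invented for illustration): u_{k+1}[t] = u_k[t] + L * e_k[t+1].

def simulate(u, a=0.3, b=1.0):
    """Toy first-order plant y[t+1] = a*y[t] + b*u[t], starting at y[0] = 0."""
    y, out = 0.0, []
    for ut in u:
        out.append(y)
        y = a * y + b * ut
    return out

def ilc_trial(u, r, L=0.5):
    """One trial: run the plant, then update u[t] with the error at t+1
    (the plant has a one-sample delay)."""
    y = simulate(u)
    e = [rt - yt for rt, yt in zip(r, y)]
    u_next = [ut + L * et for ut, et in zip(u, e[1:] + [0.0])]
    return u_next, max(abs(et) for et in e)

N = 20
r = [0.0] + [1.0] * (N - 1)   # reference; zero at t=0 to respect the delay
u = [0.0] * N                 # first trial uses zero input
errors = []
for _ in range(30):
    u, emax = ilc_trial(u, r)
    errors.append(emax)
print(f"max error: trial 1 = {errors[0]:.3f}, trial 30 = {errors[-1]:.2e}")
```

    The error shrinks from trial to trial because the update contracts the lifted error dynamics; the multivariable question the paper studies is how to guarantee such contraction when channels interact.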

    Norm Optimal Iterative Learning Control with Application to Problems in Accelerator based Free Electron Lasers and Rehabilitation Robotics

    This paper gives an overview of the theoretical basis of the norm optimal approach to iterative learning control, followed by results from more recent work that has experimentally benchmarked the achievable performance. The remainder of the paper then describes its actual application to a physical process and a novel application in stroke rehabilitation.
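    For a lifted linear system y = G u, the norm-optimal update has a compact closed form: the next input minimizes ||r - G u||^2 + w ||u - u_k||^2. The impulse response and weight below are invented; this illustrates the update law only, not the benchmarked experimental systems.

```python
# Norm-optimal ILC sketch: u_{k+1} = u_k + (G^T G + w I)^{-1} G^T e_k,
# with e_k = r - G u_k, on an invented lifted (Toeplitz) plant.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        for k in range(n):
            if k != c and M[k][c]:
                f = M[k][c] / M[c][c]
                M[k] = [x - f * y for x, y in zip(M[k], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

h = [1.0, 0.5, 0.25, 0.125]          # invented Markov parameters
N = 4
G = [[h[i - j] if 0 <= i - j < len(h) else 0.0 for j in range(N)]
     for i in range(N)]
r = [1.0] * N
u = [0.0] * N
w = 0.1                               # input-change weight
for _ in range(20):
    e = [ri - yi for ri, yi in zip(r, matvec(G, u))]
    GtG = [[sum(G[k][i] * G[k][j] for k in range(N)) + (w if i == j else 0.0)
            for j in range(N)] for i in range(N)]
    Gte = [sum(G[k][i] * e[k] for k in range(N)) for i in range(N)]
    du = solve(GtG, Gte)
    u = [ui + di for ui, di in zip(u, du)]
final_err = max(abs(ri - yi) for ri, yi in zip(r, matvec(G, u)))
print(f"max tracking error after 20 trials: {final_err:.2e}")
```

    With w > 0 the trial-to-trial error map is a strict contraction for an invertible lifted plant, which is the monotone-convergence property that makes the norm-optimal formulation attractive.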

    High Performance, Robust Control of Flexible Space Structures: MSFC Center Director's Discretionary Fund

    Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited by the difficulty of obtaining accurate models for flexible space structures. Achieving performance sufficient to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and to tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers that explicitly trade between nominal performance and robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine the maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than with either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines the parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and the improved performance realized by using the refined model for controller redesign. Together, these developments provide an effective mechanism for achieving high-performance control of flexible space structures.
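    The homotopy idea, tracing a trajectory of gains as the objective is morphed from nominal performance toward robustness, can be sketched on a deliberately simple scalar problem. The two quadratic cost surrogates and all numbers below are invented; this is not the report's mixed H2/H(infinity) synthesis.

```python
# Homotopy continuation sketch: minimize
#   J_lam(g) = (1 - lam) * J_perf(g) + lam * J_rob(g)
# for lam swept 0 -> 1, warm-starting each solve from the previous gain.

def j_perf(g):
    return (g - 2.0) ** 2      # invented "high-performance" cost, min at 2

def j_rob(g):
    return (g - 0.5) ** 2      # invented "robust" cost, min at 0.5

def minimize(f, g0, step=0.1, iters=200):
    """Crude derivative-free descent, warm-started at g0."""
    g = g0
    for _ in range(iters):
        for cand in (g - step, g + step):
            if f(cand) < f(g):
                g = cand
        step *= 0.97
    return g

trajectory = []
g = minimize(j_perf, 0.0)      # lam = 0: pure nominal-performance design
for i in range(11):
    lam = i / 10.0
    f = lambda g_, lam=lam: (1 - lam) * j_perf(g_) + lam * j_rob(g_)
    g = minimize(f, g)         # warm start from the previous minimizer
    trajectory.append(g)
print([round(gi, 2) for gi in trajectory])
```

    The printed list is the "trajectory of gains": each entry is a design on the performance/robustness trade-off curve, and warm-starting is what makes sweeping the homotopy parameter cheap.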

    Adaptive control of large space structures using recursive lattice filters

    The use of recursive lattice filters for identification and adaptive control of large space structures is studied. Lattice filters are used to identify the structural dynamics model of the flexible structures, and the identified model is then used for adaptive control. Before the identified model and control laws are integrated, the identified model is passed through a series of validation procedures, and only when the model passes them is control engaged. This validation scheme prevents instability when the overall loop is closed. Another important area of research, robust controller synthesis, was investigated using frequency-domain multivariable controller synthesis methods. The method uses the Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR) approach to ensure stability against unmodeled higher-frequency modes while achieving the desired performance.
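    The identify-validate-then-engage workflow can be sketched on a scalar toy problem, with plain recursive least squares standing in for the lattice filters; the plant, excitation, validation threshold, and control law below are all invented for illustration.

```python
# Toy identify -> validate -> engage-control loop (scalar RLS instead of
# lattice filters; every number here is invented).

def rls_identify(xs, us):
    """Estimate a in x[k+1] = a*x[k] + u[k] by recursive least squares."""
    a_hat, P = 0.0, 1e6
    for k in range(len(xs) - 1):
        phi = xs[k]
        K = P * phi / (1.0 + phi * P * phi)
        e = xs[k + 1] - us[k] - a_hat * phi   # one-step prediction error
        a_hat += K * e
        P *= 1.0 - K * phi
    return a_hat

a_true = 0.95
# Excitation phase: probe the open-loop plant with a known input.
us = [((-1) ** k) * 0.5 for k in range(20)]
xs = [0.0]
for u in us:
    xs.append(a_true * xs[-1] + u)

a_hat = rls_identify(xs, us)

# Validation gate: engage feedback only if the model predicts well.
pred_err = max(abs(xs[k + 1] - (a_hat * xs[k] + us[k]))
               for k in range(len(us)))
if pred_err < 1e-3:
    x = xs[-1]
    for _ in range(30):        # cancellation control u = -a_hat * x
        x = a_true * x + (-a_hat * x)
    print(f"a_hat = {a_hat:.4f}, final |x| = {abs(x):.2e}")
```

    The gate is the point of the abstract: the loop is closed only after the identified model has demonstrably explained the measured response, which is what prevents engaging an adaptive controller on a bad model.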

    Machine learning approaches for assessing moderate-to-severe diarrhea in children < 5 years of age, rural western Kenya 2008-2012

    Worldwide, diarrheal disease is a leading cause of morbidity and mortality in children less than five years of age. Incidence and disease severity remain highest in sub-Saharan Africa. Kenya has an estimated 400,000 severe diarrhea episodes and 9,500 diarrhea-related deaths per year in children. Current statistical methods for estimating etiological and exposure risk factors for moderate-to-severe diarrhea (MSD) in children are constrained by the inability to assess a large number of parameters, owing to limitations of sample size, complex relationships, correlated predictors, and model assumptions of linearity. This dissertation examines machine learning methods that address weaknesses of traditional logistic regression models. The studies presented here investigate data from a 4-year, prospective, matched case-control study of MSD among children less than five years of age in rural Kenya, from the Global Enteric Multicenter Study. Three machine learning approaches were used to examine associations with MSD: the least absolute shrinkage and selection operator, classification trees, and random forest. A principal finding of all three studies was that machine learning approaches are useful and feasible to implement in epidemiological studies. All provided additional information and understanding of the data beyond what logistic regression models alone offered, and their results were supported by comparable logistic regression results, indicating their usefulness as epidemiological tools. This dissertation offers an exploration of methodological alternatives that should be considered more frequently in diarrheal disease epidemiology, and in public health in general.
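    The shrinkage-and-selection behavior of the LASSO can be illustrated with a tiny coordinate-descent implementation based on soft-thresholding; the data and penalty below are invented and unrelated to the GEMS study.

```python
# Minimal LASSO sketch (coordinate descent with soft-thresholding) on
# invented toy data: the outcome depends on feature 0 only, and the L1
# penalty shrinks the noise feature's coefficient to exactly zero.

def soft_threshold(rho, lam):
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0

def lasso(X, y, lam, iters=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual excluding feature j.
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

X = [[1.0, 0.3], [2.0, -0.1], [3.0, 0.2], [4.0, -0.3], [5.0, 0.1]]
y = [2.1, 3.9, 6.2, 7.8, 10.1]     # roughly 2 * feature 0
beta = lasso(X, y, lam=2.0)
print(beta)
```

    Driving irrelevant coefficients exactly to zero is what makes the LASSO a variable-selection tool rather than only a shrinkage estimator, which is why it suits studies with many candidate exposures.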

    Software development process of Neotree - a data capture and decision support system to improve newborn healthcare in low-resource settings

    The global priority of improving neonatal survival could be tackled through the universal implementation of cost-effective maternal and newborn health interventions. Despite 90% of neonatal deaths occurring in low-resource settings, very few evidence-based digital health interventions exist to assist healthcare professionals in clinical decision-making in these settings. To bridge this gap, Neotree was co-developed through an iterative, user-centered design approach in collaboration with healthcare professionals in the UK, Bangladesh, Malawi, and Zimbabwe. It addresses a broad range of neonatal clinical diagnoses and healthcare indicators, as opposed to being limited to specific conditions, and follows national and international guidelines for newborn care. This digital health intervention includes a mobile application (app) designed to be used by healthcare professionals at the bedside. The app enables real-time data capture and provides education in newborn care and clinical decision support via integrated clinical management algorithms. Comprehensive routine patient data are prospectively collected for each newborn, as well as maternal data and blood test results, which are used to inform clinical decision-making at the bedside. Data dashboards provide healthcare professionals and hospital management with a near real-time overview of patient statistics that can be used for healthcare quality improvement purposes. To enable this workflow, the Neotree web editor allows fine-grained customization of the mobile app, and the data pipeline manages data flow from the app to secure databases and then to the dashboard. Implemented in three hospitals in two countries so far, Neotree has captured routine data and supported the care of over 21,000 babies and has been used by over 450 healthcare professionals. All code and documentation are open source, allowing adoption and adaptation by clinicians, researchers, and developers.

    A New Scalable, Portable, and Memory-Efficient Predictive Analytics Framework for Predicting Time-to-Event Outcomes in Healthcare

    Time-to-event outcomes are prevalent in medical research. To handle these outcomes, as well as censored observations, statistical and survival regression methods are widely used under assumptions of linear association; however, clinicopathological features often exhibit nonlinear correlations. Machine learning (ML) algorithms have recently been adapted to handle nonlinear correlations effectively. One drawback of ML models is that they can learn idiosyncratic features of a training dataset: because of this overlearning, they perform well on training data but generalize poorly to test data. The features we choose indirectly influence the performance of ML prediction models, and with the expansion of big data in biomedical informatics, appropriate feature engineering and feature selection are vital to ML success. Ensemble learning algorithms also help decrease bias and variance by combining the predictions of multiple models. In this study, we constructed a new scalable, portable, and memory-efficient predictive analytics framework that fits four components together: feature engineering, survival analysis, feature selection, and ensemble learning. The framework first applies feature engineering techniques, such as binarization, discretization, transformation, and normalization, to the raw dataset. The normalized feature set is passed to Cox survival regression, which identifies features highly correlated with the outcome. The resulting feature set is then used with the eXtreme Gradient Boosting (XGBoost) ensemble learning and Recursive Feature Elimination algorithms. XGBoost uses a gradient-boosted decision tree algorithm in which new models are created sequentially to predict the residuals of prior models and are then added together to make the final prediction. In our experiments, we analyzed a cohort of cardiac surgery patients drawn from a multi-hospital academic health system. The model evaluated 72 perioperative variables associated with readmission within 30 days of discharge, derived 48 significant features, and demonstrated optimal predictive ability with feature sets ranging from 16 to 24 features. For the 16-feature set, the area under the receiver operating characteristic curve was 0.8816 at the 35th iteration and 0.9307 at the 151st. Our model showed improved performance compared to state-of-the-art models and could be useful for decision support in clinical settings.
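    The residual-fitting idea behind gradient boosting can be illustrated with regression stumps standing in for XGBoost's trees; the toy data below are invented and unrelated to the clinical cohort.

```python
# Gradient boosting sketch: each new model is fit to the residuals of the
# current ensemble, and predictions are summed (stumps in place of trees).

def fit_stump(x, y):
    """Best single-split (threshold) regressor on a 1-D feature."""
    best = None
    for s in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= s]
        right = [yi for xi, yi in zip(x, y) if xi > s]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((yi - ml) ** 2 for yi in left) + \
              sum((yi - mr) ** 2 for yi in right)
        if best is None or err < best[0]:
            best = (err, s, ml, mr)
    _, s, ml, mr = best
    return lambda xi: ml if xi <= s else mr

def boost(x, y, rounds=50, lr=0.3):
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]   # fit the residuals
        h = fit_stump(x, resid)
        stumps.append(h)
        pred = [pi + lr * h(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * h(xi) for h in stumps)

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.0, 1.2, 0.9, 1.1, 3.0, 3.2, 2.9, 3.1]   # invented step-shaped data
model = boost(x, y)
print(model(2), model(7))
```

    Because every round targets what the current ensemble still gets wrong, training error decreases monotonically; the learning rate `lr` tempers each correction, which is the same bias-variance lever XGBoost exposes.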

    To develop an efficient variable speed compressor motor system

    This research presents a proposed new method for improving the energy efficiency of a Variable Speed Drive (VSD) for induction motors. The principles of VSDs are reviewed, with emphasis on the efficiency and power losses associated with the operation of a variable speed compressor motor drive, particularly at low-speed operation.

    The efficiency of an induction motor operated at rated speed and load torque is high. At low-load operation, however, running the motor at rated flux causes the iron losses to increase excessively, so its efficiency drops dramatically. To improve the efficiency, it is essential to find the flux level that minimizes the total motor losses; this technique is known as efficiency or energy optimization control. In practice, a typical compressor load does not require a high dynamic response, so the efficiency optimization control proposed in this research is based on a scalar control model.

    This research develops a new neural network controller for efficiency optimization control. The controller is designed to generate both voltage and frequency reference signals simultaneously. To make the controller robust to variation of the motor parameters, a real-time (on-line) learning algorithm based on the second-order Levenberg-Marquardt optimization method is employed. Simulations of the proposed controller for a variable speed compressor are presented. The results clearly show that the efficiency at low speed is significantly increased while the motor speed is maintained, and that the controller is robust to variation of the motor parameters. The simulation results are verified by experiment.
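    The Levenberg-Marquardt step used for the on-line learning can be sketched on a two-parameter curve fit (not the neural-network controller itself; the model, data, and damping schedule below are invented for illustration): theta <- theta - (J^T J + mu I)^(-1) J^T r, with the damping mu adapted after each step.

```python
import math

# Levenberg-Marquardt sketch on an invented model y = a*exp(b*x):
# solve (J^T J + mu I) d = J^T r in closed form for the 2x2 case and
# adapt the damping mu depending on whether the step reduced the error.

def lm_fit(xs, ys, a=1.0, b=0.0, mu=1e-2, iters=200):
    def sse(a_, b_):
        try:
            return sum((a_ * math.exp(b_ * x) - y) ** 2
                       for x, y in zip(xs, ys))
        except OverflowError:          # wild trial step: treat as bad
            return float("inf")
    err = sse(a, b)
    for _ in range(iters):
        r = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        g11 = sum(j0 * j0 for j0, _ in J) + mu
        g12 = sum(j0 * j1 for j0, j1 in J)
        g22 = sum(j1 * j1 for _, j1 in J) + mu
        c1 = sum(j0 * ri for (j0, _), ri in zip(J, r))
        c2 = sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da, db = (g22 * c1 - g12 * c2) / det, (g11 * c2 - g12 * c1) / det
        new_err = sse(a - da, b - db)
        if new_err < err:              # accept the step, relax damping
            a, b, err, mu = a - da, b - db, new_err, mu / 10.0
        else:                          # reject the step, damp harder
            mu *= 10.0
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.8 * x) for x in xs]   # synthetic data: a=2, b=0.8
a, b = lm_fit(xs, ys)
print(f"a = {a:.3f}, b = {b:.3f}")
```

    Large mu makes the step a cautious gradient-descent move while small mu approaches Gauss-Newton; that blend of robustness and fast second-order convergence is why Levenberg-Marquardt suits on-line parameter learning.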