44 research outputs found

    Explicit Model Predictive Control of a Magnetic Flexible Endoscope

    In this letter, explicit model predictive control is applied in conjunction with nonlinear optimization to a magnetically actuated flexible endoscope for the first time. The approach aims to compute the motion of the external permanent magnet, given the desired forces and torques. The strategy described here takes advantage of the nonlinear nature of the magnetic actuation and explicitly considers the workspace boundaries, as well as the actuation constraints. Initially, a simplified dynamic model of the tethered capsule, based on the Euler-Lagrange equations, is developed. Subsequently, the explicit model predictive control is described and a novel approach for positioning the external magnet, based on a single-step nonlinear optimization routine, is proposed. Finally, the strategy is implemented on the experimental platform, where bench-top trials are performed on a realistic colon phantom, showing the effectiveness of the technique. The work presented here constitutes an initial exploration of model-based control techniques applied to magnetically manipulated payloads; the techniques described here may be applied to a wide range of devices, including flexible endoscopes and wireless capsules. To our knowledge, this is the first example of advanced closed-loop control of magnetic capsules.

    Tradespace and Affordability – Phase 2

    MOTIVATION AND CONTEXT: One of the key elements of the SERC’s research strategy is transforming the practice of systems engineering – “SE Transformation.” The Grand Challenge goal for SE Transformation is to transform the DoD community’s current systems engineering and management methods, processes, and tools (MPTs) and practices away from sequential, single-stovepipe-system, hardware-first, outside-in, document-driven, point-solution, acquisition-oriented approaches, and toward concurrent, portfolio- and enterprise-oriented, hardware-software-human engineered, balanced outside-in and inside-out, model-driven, set-based, full-life-cycle approaches. This material is based upon work supported, in whole or in part, by the U.S. Department of Defense through the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)) under Contract H98230-08-D-0171 (Task Order 0031, RT 046).

    Risk Management using Model Predictive Control

    Forward planning and risk management are crucial for the success of any system or business dealing with the uncertainties of the real world. Previous approaches have largely assumed that the future will be similar to the past, or used simple forecasting techniques based on ad hoc models. Improving solutions requires better projection of future events and necessitates robust forward planning techniques that consider forecasting inaccuracies. This work advocates risk management through optimal control theory, and proposes several techniques to combine it with time-series forecasting. Focusing on applications in foreign exchange (FX) and battery energy storage systems (BESS), the contributions of this thesis are three-fold. First, a short-term risk management system for FX dealers is formulated as a stochastic model predictive control (SMPC) problem in which the optimal risk-cost profiles are obtained through dynamic control of the dealers’ positions on the spot market. Second, grammatical evolution (GE) is used to automate non-linear time-series model selection, validation, and forecasting. Third, a novel measure for evaluating forecasting models, as a part of the predictive model in finite-horizon optimal control applications, is proposed. Using both synthetic and historical data, the proposed techniques were validated and benchmarked. It was shown that the stochastic FX risk management system exhibits better risk management on a risk-cost Pareto frontier compared to rule-based hedging strategies, with up to 44.7% lower cost for the same level of risk. Similarly, for a real-world BESS application, it was demonstrated that the GE-optimised forecasting models outperformed other prediction models by at least 9%, improving the overall peak shaving capacity of the system to 57.6%.
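The receding-horizon idea behind such an SMPC risk manager can be sketched with a scenario-based objective: choose a sequence of hedge trades that balances transaction cost against P&L dispersion across simulated rate paths, then execute only the first trade. The scenario model, cost parameters, and solver below are illustrative assumptions, not the thesis formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def smpc_hedge(position, horizon=5, n_scenarios=200, cost_rate=1e-4, risk_weight=0.5):
    """One scenario-based stochastic MPC step for a dealer's FX exposure.
    Decision variables: hedge trades over the horizon; the objective trades
    off transaction cost against P&L dispersion across rate scenarios."""
    # Assumed scenario model: i.i.d. normal rate changes with 1% daily volatility.
    dr = rng.normal(0.0, 0.01, size=(n_scenarios, horizon))

    def objective(h):
        pos = position + np.cumsum(h)       # open position after each trade
        cost = cost_rate * np.abs(h).sum()  # proportional transaction costs
        pnl = dr @ pos                      # scenario P&L from holding the positions
        return cost + risk_weight * pnl.std()

    res = minimize(objective, np.zeros(horizon), method="Nelder-Mead")
    return res.x[0]  # receding horizon: only the first trade is executed

# With a long position of 1 unit and cheap hedging, the risk term dominates
# and the optimiser hedges the exposure away almost immediately.
first_trade = smpc_hedge(position=1.0)
```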

    Separation of distributed coordination and control for programming reliable robotics

    A robot's code needs to sense the environment, control the hardware, and communicate with other robots. Current programming languages do not provide the necessary hardware-platform-independent abstractions, and therefore developing robot applications requires detailed knowledge of signal processing, control, path planning, network protocols, and various platform-specific details. Further, porting applications across hardware platforms becomes tedious. With the aim of separating these hardware-dependent and hardware-independent concerns, we have developed Koord: a domain-specific language for distributed robotics. Koord abstracts platform-specific functions for sensing, communication, and low-level control. Koord makes the platform-independent control and coordination code portable and modularly verifiable. It raises the level of abstraction in programming by providing distributed shared memory for coordination and port interfaces for sensing and control. We have developed the formal executable semantics of Koord in the K framework. With this symbolic execution engine, we can identify proof obligations for gaining high assurance from Koord applications. Koord is deployed on CyPhyHouse, a toolchain that aims to provide programming, debugging, and deployment benefits for distributed mobile robotic applications. The modular, platform-independent middleware of CyPhyHouse implements these functionalities using standard algorithms for path planning (RRT), control (MPC), mutual exclusion, etc. A high-fidelity, scalable, multi-threaded simulator for Koord applications is developed to simulate the same application code for dozens of heterogeneous agents. The same compiled code can also be deployed on heterogeneous mobile platforms. This thesis outlines the design, implementation, and formalization of the Koord language and the main components of CyPhyHouse that it is deployed on.
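The separation of concerns can be illustrated with a toy sketch (plain Python, not Koord syntax): coordination logic touches only a shared-memory abstraction, while actuation goes through a port interface that any platform backend can implement. All class and method names here are hypothetical.

```python
# Hypothetical sketch of the separation: coordination uses only distributed
# shared memory; platform-specific control goes through port interfaces.

class SharedMemory:
    """Toy stand-in for Koord's distributed shared memory: each agent writes
    its own entry, and every agent may read all entries."""
    def __init__(self, n_agents):
        self.tasks_done = [False] * n_agents

class SimMotionPort:
    """One possible backing for a motion port (a simulator here; a real
    platform driver would expose the same goto interface)."""
    def __init__(self):
        self.targets = []
    def goto(self, waypoint):
        self.targets.append(waypoint)

class Agent:
    def __init__(self, pid, shared, motion_port):
        self.pid = pid
        self.shared = shared       # coordination: platform-independent
        self.motion = motion_port  # control: platform-specific port

    def step(self, waypoint):
        # Coordination logic reads and writes shared memory only.
        if all(self.shared.tasks_done):
            return "idle"
        # Actuation is delegated through the port interface.
        self.motion.goto(waypoint)
        self.shared.tasks_done[self.pid] = True
        return "moving"

shared = SharedMemory(n_agents=2)
a0 = Agent(0, shared, SimMotionPort())
status = a0.step((1.0, 2.0))
```

Swapping `SimMotionPort` for a hardware driver with the same interface is what makes the coordination code portable across platforms.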

    Risk-aware and Robust Approaches for Machine Learning-supported Model Predictive Control for Iterative Processes

    The recent advances in machine learning have catalyzed a renewed interest in machine-learning-supported model predictive control. Machine learning promises to facilitate modelling and improve process performance. Nevertheless, it brings some challenges: for instance, as the connection with physical laws is (partially) lost, machine learning models can provide wildly inaccurate results. It is therefore necessary to provide control methods that take the model uncertainty of these models into account. Uncertainties are even more important for iterative processes, i.e. processes that do not operate at a steady state, due to the large changes in the process conditions during operation. In this work, two methods for data-driven uncertainty modelling are proposed. The first method uses Gaussian processes to learn the model uncertainty and neural networks to learn the nominal model. It provides a simple way to summarize the uncertainty of the model into a single parameter, which can be used by a model predictive controller to make risk-aware decisions. This method, while simple, does not guarantee constraint satisfaction. The second method is based on tube-based model predictive control and can guarantee constraint satisfaction. It is built around the concept of the "safe set": a set where a tube-based MPC has a feasible solution. We show that, under some assumptions, the safe set enlarges at every iteration of the process, potentially allowing increased performance. Finally, a novel Python library for machine-learning-based model predictive control, called HILO-MPC, is presented. This library interfaces with TensorFlow and PyTorch and provides easily accessible tools for defining control and estimation problems using machine learning models.
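The first method's idea, learning a model's uncertainty with a Gaussian process and summarising it as a single predictive standard deviation, can be sketched as follows. The nominal model, plant, and data here are made up for illustration (and a plain function stands in for the neural network); the thesis's HILO-MPC tooling is not used.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Assumed toy setup: the "nominal model" stands in for the neural network,
# and the GP learns the residual between the plant and the nominal model.
nominal = lambda x: 0.5 * x
plant = lambda x: 0.5 * x + 0.2 * np.sin(3 * x)  # true dynamics (unknown to us)

X = rng.uniform(0.0, 4.0, size=(30, 1))
y = plant(X[:, 0]) - nominal(X[:, 0])  # residuals = model-error target

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4).fit(X, y)

# The predictive std collapses the model uncertainty into a single parameter:
# small inside the training data (x = 2), large far outside it (x = 10),
# which is where a risk-aware MPC would back off.
mean, std = gp.predict(np.array([[2.0], [10.0]]), return_std=True)
```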

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Advanced control systems for fast orbit feedback of synchrotron electron beams

    Diamond Light Source is the UK’s national synchrotron facility that produces synchrotron radiation for research. At source points of synchrotron radiation, the electron beam stability relative to the beam size is critical for the optimal performance of synchrotrons. The current requirement at Diamond is that variations in the beam position should not exceed 10% of the beam size for frequencies up to 140Hz. This is guaranteed by the fast orbit feedback that actuates hundreds of corrector magnets at a sampling rate of 10kHz to reduce beam vibrations down to sub-micron levels. For the next-generation upgrade, Diamond-II, the beam stability requirements will be raised to 3% up to 1kHz. Consequently, the sampling rate will be increased to 100kHz and an additional array of fast correctors will be introduced, which precludes the use of the existing controller. This thesis develops two different control approaches to accommodate the additional array of fast correctors at Diamond-II: internal model control based on the generalised singular value decomposition (GSVD) and model predictive control (MPC). In contrast to existing controllers, the proposed approaches treat the control problem as a whole and consider both arrays simultaneously. To achieve the sampling rate of 100kHz, this thesis proposes to reduce the computational complexity of the controllers in several ways, such as by exploiting symmetries of the magnetic lattice. To validate the controllers for Diamond-II, a real-time control system is implemented on high-performance hardware and integrated in the existing synchrotron. As a first-of-its-kind application to electron beam stabilisation in synchrotrons, this thesis presents real-world results from both MPC and GSVD-based controllers, demonstrating that the proposed approaches meet theoretical expectations with respect to performance and robustness in practice. 
The results from this thesis, and in particular the novel GSVD-based method, were successfully adopted for the Diamond-II upgrade. This may enable the use of more advanced control systems in similar large-scale, high-speed applications in the future.
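A common baseline for model-based orbit correction, which the GSVD and MPC approaches above generalise, is inversion of the orbit response matrix via a truncated SVD. The sketch below uses a random response matrix and illustrative dimensions; it is not the Diamond-II controller.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative dimensions and a random response matrix (not the real lattice).
n_bpms, n_correctors = 12, 8
R = rng.normal(size=(n_bpms, n_correctors))  # orbit response: y = R @ u

def correct(y, R, n_modes):
    """Compute corrector increments du = -R^+ y via a truncated SVD, keeping
    only the strongest modes so that noise in weakly observed directions is
    not amplified through tiny singular values."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_modes] = 1.0 / s[:n_modes]
    return -(Vt.T * s_inv) @ (U.T @ y)

u_dist = rng.normal(size=n_correctors)  # disturbance mapped onto the correctors
y = R @ u_dist                          # resulting orbit error at the BPMs
du = correct(y, R, n_modes=n_correctors)
residual = np.linalg.norm(y + R @ du)   # ~0 when all modes are kept
```

Truncating `n_modes` trades correction accuracy for robustness to measurement noise; exploiting lattice symmetries, as the thesis proposes, reduces the cost of applying this inverse at high sampling rates.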

    Innovative solar energy technologies and control algorithms for enhancing demand-side management in buildings

    The present thesis investigates innovative energy technologies and control algorithms for enhancing demand-side management in buildings. The work focuses on an innovative low-temperature solar thermal system for supplying the space heating demand of buildings. This technology is used as a case study to explore possible solutions to the mismatch between energy production and its exploitation in buildings, which represents the primary issue of renewable energy sources. Technologies enhancing the energy storage capacity, together with active demand-side management or demand-response strategies, must be implemented in buildings. For these purposes, it is possible to employ hardware or software solutions. The hardware solutions for thermal demand response of buildings are those technologies that allow the energy loads to be permanently shifted or mitigated. The software solutions for demand response are those that integrate an intelligent supervisory layer in the building automation (or management) systems. The present thesis approaches the problem from both the hardware technology side and the software solution side. This approach enables the mutual relationships and interactions between the strategies to be appropriately measured. The thesis can be roughly divided into two parts. The first part focuses on an innovative solar thermal system exploiting a novel heat transfer fluid and storage medium based on a micro-encapsulated Phase Change Material slurry. This material allows the system to enhance latent heat exchange processes and increase the overall performance. The features of the Phase Change Material slurry are investigated experimentally and theoretically. A full-scale prototype of this innovative solar system enhancing latent heat exchange is conceived, designed, and realised. An experimental campaign on the prototype is used to calibrate and validate a numerical model of the solar thermal system. 
This model is developed in this thesis to define the thermo-energetic behaviour of the technology. It consists of two mathematical sub-models able to describe the power/energy balances of the flat-plate solar thermal collector and the thermal energy storage unit, respectively. In closed-loop configuration, all the Key Performance Indicators used to assess the reliability of the model indicate excellent agreement between the monitored system outputs and the simulation results. Simulations are performed both by varying the boundary conditions parametrically and by investigating the long-term system performance in different climatic locations. Compared to a traditional water-based system used as a reference baseline, the simulation results show that the innovative system could improve the production of useful heat by up to 7 % throughout the year and 19 % during the heating season. Once the hardware technology has been defined, the implementation of an innovative control method is necessary to enhance the operational efficiency of the system. This is the primary focus of the second part of the thesis. A specific solution is considered particularly promising for this purpose: the adoption of Model Predictive Control (MPC) formulations for improving the thermal and energy management of the system. Firstly, this thesis provides a robust and complete framework of the steps required to correctly define an MPC problem for the regulation of building processes. This goal is reached by means of an extended review of the scientific literature and of practical applications concerning MPC for building management. Secondly, an MPC algorithm is formulated to regulate the full-scale solar thermal prototype. A testbed virtual environment is developed to perform closed-loop simulations. The existing rule-based control logic is employed as the reference baseline. Compared to the baseline, the MPC algorithm produces energy savings of up to 19.2 % with lower unmet energy demand.
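The kind of lumped energy-balance model that such an MPC uses as its prediction model can be sketched as a one-node tank balance. The parameters below (tank mass, loss coefficient, gain and load profiles) are illustrative placeholders, not the calibrated values from the experimental campaign.

```python
# Lumped single-node energy balance of the storage tank (forward Euler):
#   m * cp * dT/dt = Q_solar - Q_load - UA * (T - T_amb)
# All parameter values are illustrative placeholders.

def simulate_tank(T0, solar_gain, load, dt=3600.0,
                  m=500.0, cp=4186.0, UA=5.0, T_amb=20.0):
    """Return the tank temperature trace for hourly solar gain and load (W)."""
    T = T0
    temps = []
    for q_sol, q_load in zip(solar_gain, load):
        T += (q_sol - q_load - UA * (T - T_amb)) * dt / (m * cp)
        temps.append(T)
    return temps

# Six hours: solar charging first, then a heating demand discharges the tank.
temps = simulate_tank(T0=40.0,
                      solar_gain=[3000.0, 3000.0, 3000.0, 0.0, 0.0, 0.0],
                      load=[0.0, 0.0, 0.0, 2000.0, 2000.0, 2000.0])
```

An MPC built on such a model would optimise the charge/discharge schedule over a forecast horizon of solar gain and demand, rather than replaying a fixed rule-based profile.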

    Anisotropic galaxy clustering measurements in Fourier space and cosmological implications from the BOSS DR12 sample

    Get PDF
    Galaxy surveys cover a large fraction of the celestial sphere using modern multi-fibre spectrographs. 
Thanks to ever-increasing datasets, the analysis of the large-scale structure (LSS) of the Universe has become a prolific source of cosmological information. Together with the observations of the cosmic microwave background (CMB) and type Ia supernova (SN) samples, these analyses helped to establish the standard cosmological paradigm, the ΛCDM model. From the analysis of redshift-space galaxy clustering, the expansion history of the Universe can be inferred using the feature of Baryon Acoustic Oscillations (BAO) as a standard ruler to measure cosmic distances. The growth rate of cosmic structure can also be determined using redshift-space distortions (RSD). These measurements provide insight into competing alternatives to the ΛCDM model. The nature of the Dark Energy (DE), a strange component that is believed to be responsible for the current phase of accelerating expansion of the Universe, can be unravelled from BAO measurements of the late-time expansion. Modified theories of gravity can be constrained from the growth rate extracted from RSD, which can deviate from the prediction of general relativity. The redshift measurements of the Baryon Acoustic Oscillation Survey (BOSS) program that was completed in 2014 yielded a galaxy sample that covers an unprecedented volume. In this thesis, the standard model and its most important extensions are analysed using the cosmological information in the full shape of the redshift-space two-point statistics measured from the final BOSS galaxy sample. So far, anisotropic clustering analyses in Fourier space relied on power spectrum multipole measurements. 
For this work, the concept of clustering wedges was extended to Fourier space to establish a complementary approach to measure clustering anisotropies: we introduce the optimal-variance estimator for clustering wedges, which is designed to account for systematic weights that correct the observational incompleteness of the BOSS sample, and also develop the window function formalism for the wedges. Our modelling of the anisotropic galaxy clustering is based on novel approaches for the description of non-linear gravitational dynamics and redshift-space distortions. This improved modelling allows us to include smaller scales in our full-shape fits than in previous BAO+RSD studies, resulting in tighter cosmological constraints. The galaxy clustering model is verified using synthetic catalogues based on large-volume N-body simulations. As this test requires a theoretical description for the anisotropic clustering covariance matrix, a Gaussian formalism was developed for that purpose. As a side project, this formalism is extended to describe clustering wedges and multipoles in Fourier and configuration space. The Fourier-space clustering measurements presented in this thesis are part of the joint analysis of the final BOSS sample. Using two non-overlapping redshift bins, we measure an angular diameter distance of D_M(z_eff = 0.38) (rfid_d / r_d) = 1525 ± 24 h^-1 Mpc and D_M(z_eff = 0.61) (rfid_d / r_d) = 2281 +42 -43 h^-1 Mpc, as well as a Hubble parameter of H(z_eff = 0.38) (r_d / rfid_d) = 81.2 +2.2 -2.3 km s^-1 Mpc^-1 and H(z_eff = 0.61) (r_d / rfid_d) = 94.9 ± 2.5 km s^-1 Mpc^-1 (all limits correspond to the 68% confidence level). The growth rate is constrained to fσ_8(z_eff = 0.38) = 0.498 +0.044 -0.045 and fσ_8(z_eff = 0.61) = 0.409 ± 0.040. These measurements will be combined with the complementary results from other galaxy clustering methods in configuration and Fourier space in order to determine the final BOSS consensus measurements. 
From our analysis alone, in combination with CMB and SN Ia data, we obtain a matter density parameter of Ω_M = 0.311 +0.009 -0.010 and a local Hubble parameter of H_0 = 67.6 +0.7 -0.6 km s^-1 Mpc^-1 assuming a ΛCDM cosmology. Allowing for a non-standard DE model, we find an equation-of-state parameter of w_DE = -1.019 +0.048 -0.039. Modifications of the growth rate, parametrized as f(z) = [Ω_M(z)]^γ, are constrained to γ = 0.52 ± 0.10. These two results, along with those obtained using a more general DE model that allows a time evolution of w_DE, are in perfect agreement with the ΛCDM predictions. Thus, the standard paradigm is further consolidated by our analysis. The sum of neutrino masses is found to be Σm_ν < 0.143 eV. As this limit is close to the lower bound from particle physics, a detection of the cosmological signature of massive neutrinos from LSS analyses can be expected in the near future.
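The growth-rate parametrisation f(z) = [Ω_M(z)]^γ quoted above can be evaluated numerically as a quick consistency sketch, using the best-fit Ω_M and γ from this analysis and assuming a flat ΛCDM background for Ω_M(z).

```python
# Evaluate f(z) = [Ω_M(z)]^γ with the best-fit values quoted in the abstract
# (Ω_M = 0.311, γ = 0.52), assuming flat ΛCDM for Ω_M(z).

def omega_m_z(z, omega_m0=0.311):
    """Matter density parameter at redshift z in a flat ΛCDM background."""
    a3 = (1.0 + z) ** 3
    return omega_m0 * a3 / (omega_m0 * a3 + 1.0 - omega_m0)

def growth_rate(z, gamma=0.52, omega_m0=0.311):
    return omega_m_z(z, omega_m0) ** gamma

f_038 = growth_rate(0.38)  # growth rate in the lower redshift bin
f_061 = growth_rate(0.61)  # growth rate in the upper redshift bin
```

Note that the quoted fσ_8 values additionally carry the σ_8 normalisation, so f alone is not directly comparable to them.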