Adaptive Horizon Model Predictive Control and Al'brekht's Method
A standard way of finding a feedback law that stabilizes a control system to an operating point is to recast the problem as an infinite horizon optimal control problem. If the optimal cost and the optimal feedback can be found on a large domain around the operating point, then a Lyapunov argument can be used to verify the asymptotic stability of the closed loop dynamics. The problem with this approach is that it is usually very difficult to find the optimal cost and the optimal feedback on a large domain for nonlinear problems, with or without constraints. Hence the increasing interest in Model Predictive Control (MPC). In standard MPC a finite horizon optimal control problem is solved in real time, but just at the current state; the first control action is implemented, the system evolves one time step, and the process is repeated. A terminal cost and terminal feedback found by Al'brekht's method, defined in a neighborhood of the operating point, are used to shorten the horizon and thereby make the nonlinear programs easier to solve because they have fewer decision variables. Adaptive Horizon Model Predictive Control (AHMPC) is a scheme for varying the horizon length of MPC as needed. Its goal is to achieve stabilization with horizons as small as possible so that MPC methods can be used on faster and/or more complicated dynamic processes.
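As a concrete illustration of the receding-horizon loop described above, the following Python fragment sketches a bare-bones adaptive-horizon MPC step for a toy pendulum model. The system f, the weights Q, R, P, the gain K, and the terminal radius are placeholders standing in for the quantities that Al'brekht's method would supply near the operating point; none of them come from the paper.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative discrete-time nonlinear system (a damped pendulum), not the paper's example.
    def f(x, u, dt=0.05):
        theta, omega = x
        return np.array([theta + dt * omega,
                         omega + dt * (-np.sin(theta) - 0.1 * omega + u)])

    # Quadratic running cost plus an assumed terminal cost/feedback pair standing in
    # for what Al'brekht's method would provide near the operating point.
    Q, R = np.diag([1.0, 0.1]), 0.1
    P = np.diag([10.0, 1.0])          # assumed terminal cost weight
    K = np.array([[-3.0, -1.5]])      # assumed terminal feedback gain
    TERMINAL_RADIUS = 0.2             # region where the terminal pair is trusted

    def solve_horizon(x0, N):
        """Direct single-shooting solution of the N-step finite-horizon problem."""
        def cost(u_seq):
            x, J = x0.copy(), 0.0
            for u in u_seq:
                J += x @ Q @ x + R * u**2
                x = f(x, u)
            return J + x @ P @ x                  # terminal cost shortens the horizon
        res = minimize(cost, np.zeros(N), method="BFGS")
        # roll out to check whether the predicted end state reaches the terminal region
        x = x0.copy()
        for u in res.x:
            x = f(x, u)
        return res.x, np.linalg.norm(x) < TERMINAL_RADIUS

    def ahmpc_step(x, N, N_max=30):
        """Adapt the horizon: grow it until the terminal region is reached, then shrink."""
        while N <= N_max:
            u_seq, ok = solve_horizon(x, N)
            if ok:
                return u_seq[0], max(N - 1, 1)    # success: try a shorter horizon next time
            N += 1                                # failure: lengthen the horizon
        return (K @ x).item(), N_max              # fall back to the terminal feedback

    x, N = np.array([1.0, 0.0]), 5
    for _ in range(40):
        u, N = ahmpc_step(x, N)
        x = f(x, u)

The horizon grows until the predicted terminal state enters the region where the terminal cost and feedback are trusted, and shrinks again once stabilization becomes easier, which is the mechanism AHMPC uses to keep the nonlinear programs small.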
A particle swarm optimization approach using adaptive entropy-based fitness quantification of expert knowledge for high-level, real-time cognitive robotic control
High-level, real-time mission control of semi-autonomous robots, deployed in remote and dynamic environments, remains a challenge. Control models, learnt from a knowledgebase, quickly become obsolete when the environment or the knowledgebase changes. This research study introduces a cognitive reasoning process to select the optimal action, using the most relevant knowledge from the knowledgebase, subject to observed evidence. The approach in this study introduces an adaptive entropy-based set-based particle swarm optimization (AE-SPSO) algorithm and a novel adaptive entropy-based fitness quantification (AEFQ) algorithm for evidence-based optimization of the knowledge. The performance of the AE-SPSO and AEFQ algorithms is experimentally evaluated with two unmanned aerial vehicle (UAV) benchmark missions: (1) relocating the UAV to a charging station and (2) collecting and delivering a package. Performance is measured by inspecting the success and completeness of the mission and the accuracy of autonomous flight control. The results show that the AE-SPSO/AEFQ approach successfully finds the optimal state-transition for each mission task and that autonomous flight control is successfully achieved.
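For readers unfamiliar with the underlying optimizer, the sketch below shows a generic, continuous, single-objective particle swarm optimization loop in Python. It deliberately omits the set-based formulation and the adaptive entropy-based fitness quantification (AEFQ) that are the paper's contribution; the bounds, coefficients, and the quadratic test fitness are illustrative assumptions only.

    import numpy as np

    # A minimal, generic particle swarm optimizer illustrating the mechanics that
    # AE-SPSO builds on; the set-based variant and the entropy-based fitness
    # quantification of the paper are not reproduced here.
    def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_particles, dim))       # particle positions
        v = np.zeros_like(x)                             # particle velocities
        pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)]
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            vals = np.array([fitness(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)]
        return gbest, pbest_val.min()

    # Example: minimize a simple quadratic as a stand-in fitness function.
    best, best_val = pso(lambda p: np.sum(p**2), dim=3)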
Construction and Modelling of an Inducible Positive Feedback Loop Stably Integrated in a Mammalian Cell-Line
Understanding the relationship between topology and dynamics of transcriptional regulatory networks in mammalian cells is essential to elucidate the biology of complex regulatory and signaling pathways. Here, we characterised, via a synthetic biology approach, a transcriptional positive feedback loop (PFL) by generating a clonal population of mammalian cells (CHO) carrying a stable integration of the construct. The PFL network consists of the Tetracycline-controlled transactivator (tTA), whose expression is regulated by a tTA-responsive promoter (CMV-TET), thus giving rise to a positive feedback. The same CMV-TET promoter also drives the expression of a destabilised yellow fluorescent protein (d2EYFP), so the dynamic behaviour can be followed by time-lapse microscopy. The PFL network was compared to an engineered version of the network lacking the positive feedback loop (NOPFL), obtained by expressing the tTA mRNA from a constitutive promoter. Doxycycline was used to repress tTA activation (switch off), and the resulting changes in fluorescence intensity for both the PFL and NOPFL networks were followed for up to 43 h. We observed a striking difference in the dynamics of the PFL and NOPFL networks. Using non-linear dynamical models able to recapitulate the experimental observations, we demonstrated a link between network topology and network dynamics. Namely, transcriptional positive autoregulation can significantly slow down the "switch off" times, as compared to the non-autoregulated system. Doxycycline concentration can modulate the response times of the PFL, whereas the NOPFL always switches off with the same dynamics. Moreover, the PFL can exhibit bistability for a range of Doxycycline concentrations. Since the PFL motif is often found in naturally occurring transcriptional and signaling pathways, we believe our work can be instrumental in characterising their behaviour.
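A caricature of the kind of non-linear dynamical model used to compare the two topologies can be written in a few lines of Python. The Hill-type promoter activation, the way doxycycline (Dox) is assumed to sequester tTA, and every rate constant below are illustrative assumptions, not the fitted model from the study; the sketch only shows how the PFL and NOPFL variants differ by a single term.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative ODE caricature of the PFL vs. NOPFL networks; all terms and
    # parameter values are assumptions for the sketch, not the paper's model.
    def rhs(t, y, dox, positive_feedback):
        tta, yfp = y
        active = tta / (1.0 + dox)                 # Dox sequesters tTA (crude model)
        promoter = active**2 / (0.5 + active**2)   # CMV-TET activation (Hill, n = 2)
        prod_tta = promoter if positive_feedback else 0.3   # PFL vs. constitutive tTA
        return [prod_tta - 0.1 * tta,              # tTA synthesis and dilution/decay
                promoter - 0.5 * yfp]              # d2EYFP synthesis and fast decay

    # "Switch off": add Dox at t = 0 and follow the fluorescence decay for 43 h.
    t_span, y0 = (0, 43), [2.0, 1.0]
    pfl   = solve_ivp(rhs, t_span, y0, args=(1.0, True),  dense_output=True)
    nopfl = solve_ivp(rhs, t_span, y0, args=(1.0, False), dense_output=True)

In a model of this shape the positive-feedback term keeps resupplying tTA while Dox is washing it out, which is the qualitative reason the PFL switches off more slowly than the NOPFL variant.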
Observer-based control
An observer-based controller is a dynamic feedback controller with a two-stage structure. First, the controller generates an estimate of the state variable of the system to be controlled, using the measured output and known input of the system. This estimate is generated by a state observer for the system. Next, the state estimate is treated as if it were equal to the exact state of the system, and it is used by a static state feedback controller. Dynamic feedback controllers with this two-stage structure appear in various control synthesis problems for linear systems. In this entry, we explain observer-based control in the context of internal stabilization by dynamic measurement feedback.
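A minimal numerical sketch of this two-stage structure, for a discrete-time linear system with an assumed toy model and arbitrarily chosen closed-loop and observer poles, looks as follows in Python.

    import numpy as np
    from scipy.signal import place_poles

    # Minimal sketch of observer-based control for a discrete-time linear system
    # x+ = A x + B u, y = C x; the model and pole locations are illustrative.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])

    K = place_poles(A, B, [0.8, 0.85]).gain_matrix          # state-feedback gain
    L = place_poles(A.T, C.T, [0.5, 0.55]).gain_matrix.T    # observer gain (by duality)

    x, x_hat = np.array([[1.0], [0.0]]), np.zeros((2, 1))
    for _ in range(50):
        y = C @ x                                           # measured output
        u = -K @ x_hat                                      # feedback uses the estimate, not x
        x = A @ x + B @ u                                   # true plant
        x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)     # state observer update

Because the controller only ever sees y and u, the estimate x_hat converges to x at a rate set by the observer poles, and the separation principle for linear systems guarantees that placing the state-feedback and observer poles independently stabilizes the interconnection.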