3 research outputs found

    Physics-constrained Deep Learning of Multi-zone Building Thermal Dynamics

    We present a physics-constrained, control-oriented deep learning method for modeling building thermal dynamics. The proposed method is based on the systematic encoding of physics-based prior knowledge into a structured recurrent neural architecture. Specifically, our method incorporates structural priors from traditional physics-based building modeling into the neural network thermal dynamics model structure. Further, we leverage penalty methods to impose inequality constraints, thereby bounding predictions within physically realistic and safe operating ranges. Observing that stable eigenvalues accurately characterize the dissipativeness of the system, we additionally use a constrained matrix parameterization based on the Perron-Frobenius theorem to bound the dominant eigenvalues of the building thermal model parameter matrices. We demonstrate the proposed data-driven modeling approach's effectiveness and physical interpretability on a dataset obtained from a real-world office building with 20 thermal zones. Using only 10 days of measurements for training, we demonstrate generalization over 20 consecutive days, significantly improving accuracy compared to prior state-of-the-art results reported in the literature.
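
    A minimal PyTorch sketch of how such a Perron-Frobenius-based parameterization could look is given below. The module name, the row-wise softmax construction, and the eigenvalue bounds lam_min/lam_max are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

class PerronFrobeniusLinear(nn.Module):
    """Sketch of a state-transition matrix with a bounded dominant eigenvalue.

    By the Perron-Frobenius theorem, the dominant eigenvalue of a nonnegative
    matrix lies between its smallest and largest row sums, so constraining
    every row sum to [lam_min, lam_max] bounds the dominant eigenvalue."""

    def __init__(self, n, lam_min=0.8, lam_max=1.0):
        super().__init__()
        self.scores = nn.Parameter(torch.randn(n, n))   # unconstrained logits
        self.damping = nn.Parameter(torch.randn(n, 1))  # per-row eigenvalue budget
        self.lam_min, self.lam_max = lam_min, lam_max

    def matrix(self):
        # Row-wise softmax yields a nonnegative matrix with unit row sums.
        row_stochastic = torch.softmax(self.scores, dim=1)
        # Scale each row so its sum lies in [lam_min, lam_max].
        lam = self.lam_min + (self.lam_max - self.lam_min) * torch.sigmoid(self.damping)
        return lam * row_stochastic

    def forward(self, x):
        return x @ self.matrix().T
```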

    Deep Learning Alternative to Explicit Model Predictive Control for Unknown Nonlinear Systems

    We present differentiable predictive control (DPC) as a deep learning-based alternative to explicit model predictive control (MPC) for unknown nonlinear systems. In the DPC framework, a neural state-space model is learned from time-series measurements of the system dynamics. The neural control policy is then optimized via stochastic gradient descent by differentiating the MPC loss function through the closed-loop system dynamics model. The proposed DPC method learns model-based control policies with state and input constraints, while supporting time-varying references and constraints. In an embedded implementation on a Raspberry Pi platform, we experimentally demonstrate that it is possible to train constrained control policies purely from measurements of the unknown nonlinear system. We compare the control performance of the DPC method against explicit MPC and report efficiency gains in online computational demands, memory requirements, policy complexity, and construction time. In particular, we show that our method scales linearly, in contrast to the exponential scaling of explicit MPC solved via multiparametric programming.
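
    The closed-loop training idea can be sketched as follows, assuming a PyTorch setup; the dynamics model, policy architecture, penalty weights, and dimensions below are placeholders for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions and modules; names are illustrative only.
nx, nu, horizon = 4, 1, 20

dynamics = nn.GRUCell(nu, nx)  # stands in for a learned neural state-space model
policy = nn.Sequential(nn.Linear(nx + nx, 32), nn.ReLU(), nn.Linear(32, nu))

# The learned dynamics model is frozen; only the policy is trained.
for p in dynamics.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def dpc_loss(x0, x_ref, u_max=1.0):
    """Roll the policy out through the learned model and accumulate an
    MPC-style loss: reference tracking plus penalties on input constraints."""
    x, loss = x0, 0.0
    for _ in range(horizon):
        u = policy(torch.cat([x, x_ref], dim=-1))
        x = dynamics(u, x)                                # closed-loop rollout
        loss = loss + ((x - x_ref) ** 2).mean()           # tracking term
        loss = loss + torch.relu(u.abs() - u_max).mean()  # constraint penalty
    return loss

# One stochastic-gradient step on a batch of sampled initial states and references.
x0, x_ref = torch.randn(64, nx), torch.zeros(64, nx)
opt.zero_grad()
dpc_loss(x0, x_ref).backward()
opt.step()
```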

    Constraint Learning for Control Tasks with Limited Duration Barrier Functions

    When deploying autonomous agents in unstructured environments over sustained periods of time, adaptability and robustness often outweigh optimality as the primary consideration. In other words, safety and survivability constraints play a key role. In this paper, we present a novel constraint-learning framework for control tasks built on the idea of constraint-driven control. Since control policies that keep a dynamical agent within state constraints over infinite horizons are not always available, this work instead considers constraints that can be satisfied over a sufficiently long time horizon T > 0, which we refer to as limited-duration safety. Consequently, value function learning can be used as a tool to find limited-duration safe policies. We show that, in some applications, the existence of limited-duration safe policies is in fact sufficient for long-duration autonomy. This idea is illustrated on a swarm of simulated robots that are tasked with covering a given area, but that sporadically need to abandon this task to charge batteries. We show how the battery-charging behavior naturally emerges as a result of the constraints. Additionally, using a cart-pole simulation environment, we show how a control policy can be efficiently transferred from the source task, balancing the pole, to the target task, moving the cart in one direction without letting the pole fall down.
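
    A toy illustration of the limited-duration safety notion, assuming a gridded 1-D system with discretized inputs (all values hypothetical and not from the paper), is a finite-horizon backward recursion over a value-like indicator: a state is limited-duration safe if some input sequence keeps it inside the constraint set for T steps.

```python
import numpy as np

# Minimal sketch on a 1-D toy system; grid, dynamics, and safe set are assumptions.
T = 50                                   # required safety horizon
xs = np.linspace(-2.0, 2.0, 201)         # gridded state space
us = np.linspace(-0.5, 0.5, 11)          # discretized inputs
safe = np.abs(xs) <= 1.5                 # state constraint: x in [-1.5, 1.5]

def step(x, u):
    return x + 0.1 * (x + u)             # unstable toy dynamics

# V[k, i] is True if, starting from xs[i], some input sequence keeps the state
# safe for the next k steps (finite-horizon backward recursion).
V = np.zeros((T + 1, xs.size), dtype=bool)
V[0] = safe
for k in range(1, T + 1):
    for i, x in enumerate(xs):
        if not safe[i]:
            continue
        # Map successor states to grid indices (sketch-level nearest-grid lookup).
        nxt = np.clip(np.searchsorted(xs, step(x, us)), 0, xs.size - 1)
        V[k, i] = V[k - 1, nxt].any()

limited_duration_safe_set = xs[V[T]]     # states from which T-step safety is achievable
```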