116 research outputs found
Centralized and Decentralized Optimal Control of Variable Speed Heat Pumps
Utility service providers are often challenged with the synchronization of thermostatically controlled loads. Load synchronization, as a result of naturally occurring and demand-response events, has the potential to damage power distribution equipment. Because thermostatically controlled loads constitute most of the power consumed by the grid at any given time, the proper control of such devices can lead to significant energy savings and improved grid stability. The contribution of this paper is the development of an optimal control algorithm for commonly used variable speed heat pumps. By means of selective peer-to-peer communication, our control architecture allows for the regulation of home temperatures while simultaneously minimizing aggregate power consumption and aggregate load volatility. An optimal centralized controller is also explored and compared against its decentralized counterpart.
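The abstract's core idea can be sketched in a few lines: each heat pump sets its compressor speed from its own temperature error, then damps that choice using power readings shared by a few peers, which discourages synchronized peaks. All function names, gains, and the control law below are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch of the decentralized idea: track the local temperature
# setpoint, but back off when peers report heavy aggregate power draw.
# Gains k_temp and k_peer are made-up illustrative constants.

def local_control(temp_error, neighbor_powers, k_temp=0.5, k_peer=0.1,
                  max_speed=1.0):
    """Return a compressor speed in [0, max_speed]."""
    # Proportional response to the local temperature error.
    speed = k_temp * temp_error
    # Reduce demand when peers are already drawing heavy power.
    if neighbor_powers:
        speed -= k_peer * (sum(neighbor_powers) / len(neighbor_powers))
    return min(max(speed, 0.0), max_speed)

# A home 2 degrees below setpoint, with lightly loaded peers: runs faster.
print(local_control(2.0, [0.2, 0.3]))
# The same home when peers are near full power: runs slower.
print(local_control(2.0, [1.0, 1.0]))
```

The peer term is what distinguishes this sketch from a plain thermostat: it trades a little temperature-tracking performance for lower aggregate volatility, which mirrors the paper's stated objective.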
Data-driven control of micro-climate in buildings: an event-triggered reinforcement learning approach
Smart buildings have great potential for shaping an energy-efficient,
sustainable, and more economic future for our planet, as buildings account for
approximately 40% of global energy consumption. The future of smart
buildings lies in using sensory data for adaptive decision making and control,
an effort currently hampered by the key challenge of learning a good control
policy in a short period of time in an online and continuing fashion. To tackle
this challenge, an event-triggered -- as opposed to classic time-triggered --
paradigm is proposed in which learning and control decisions are made when
events occur and enough information is collected. Events are characterized by
certain design conditions and they occur when the conditions are met, for
instance, when a certain state threshold is reached. By systematically
adjusting the time of learning and control decisions, the proposed framework
can potentially reduce the variance in learning, and consequently, improve the
control process. We formulate the micro-climate control problem based on
semi-Markov decision processes that allow for variable-time state transitions
and decision making. Using extended policy gradient theorems and temporal
difference methods in a reinforcement learning set-up, we propose two learning
algorithms for event-triggered control of micro-climate in buildings. We show
the efficacy of our proposed approach via designing a smart learning thermostat
that simultaneously optimizes energy consumption and occupants' comfort in a
test building.
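The event-triggered loop the abstract describes can be sketched as follows: the controller holds its last action until a state threshold is crossed, and because the time between events varies, each reward is discounted by the elapsed sojourn time, as in a semi-Markov decision process. The thermal dynamics, threshold, and bang-bang decision rule below are illustrative assumptions, not the paper's learning algorithm.

```python
import random

# Illustrative event-triggered control loop over an SMDP-style model:
# decisions occur only when |temperature deviation| crosses a threshold,
# and returns are discounted by the variable time between events.

def run_episode(threshold=0.5, gamma=0.99, steps=200, seed=0):
    rng = random.Random(seed)
    temp = 0.0            # deviation from the comfort setpoint
    action = 0.0          # heating/cooling command, held between events
    events, ret, t_since_event = 0, 0.0, 0
    for _ in range(steps):
        # Toy dynamics: random disturbance plus the held action's effect.
        temp += 0.1 * rng.uniform(-1, 1) - 0.05 * action
        t_since_event += 1
        if abs(temp) > threshold:          # event: design condition met
            events += 1
            # SMDP-style discounting over the variable sojourn time.
            ret += (gamma ** t_since_event) * (-abs(temp))
            # New decision made only now, then held until the next event.
            action = 1.0 if temp > 0 else -1.0
            t_since_event = 0
    return events, ret
```

Compared with a time-triggered loop that decides at every step, this sketch makes far fewer decisions, which is the mechanism the abstract credits with reducing variance in learning.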
- …