Efficient HVAC Control with Deep Reinforcement Learning and EnergyPlus (Papers Track)
Jared Markowitz (Johns Hopkins University Applied Physics Laboratory); Nathan Drenkow (Johns Hopkins University Applied Physics Laboratory)
Abstract
Heating and cooling account for a significant fraction of the energy consumed by buildings, which in turn are responsible for a large share of society’s overall energy use. Most building heating, ventilation, and air conditioning (HVAC) systems use standard control schemes that meet basic operating constraints and comfort requirements, but with suboptimal efficiency. Deep reinforcement learning (DRL) has shown immense potential for high-performing control in a variety of simulated settings, yet it has not been widely deployed for real-world control. Here we provide two contributions toward increasing the viability of real-world, DRL-based HVAC control, both leveraging the EnergyPlus building simulator. First, we use the new EnergyPlus Python API to implement a first-of-its-kind, purely Python-based EnergyPlus DRL learning framework capable of generalizing to a wide variety of building configurations and weather scenarios. Second, we demonstrate an approach to constrained learning in this setting, removing the need to tune reward functions in order to maximize energy efficiency subject to temperature constraints. We tested our framework on realistic building models of a data center, an office building, and a secondary school. In each case, the trained agents maintained temperature control while achieving energy savings relative to standard approaches.
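To make the abstract's first contribution concrete, the sketch below shows one way a purely Python-based control loop can drive EnergyPlus through its official Python API (the pyenergyplus package shipped with the EnergyPlus install): a callback registered with the runtime reads a zone temperature each system timestep and writes a setpoint actuator chosen by a policy. This is an illustrative sketch under stated assumptions, not the authors' released framework; the building and weather file names, the zone key "CORE_ZN", and the select_action stub are placeholders, and EnergyPlus 9.4+ is assumed (its API passes an explicit state object).

```python
from pyenergyplus.api import EnergyPlusAPI  # requires the EnergyPlus install dir on sys.path

api = EnergyPlusAPI()


def select_action(zone_temp_c: float) -> float:
    # Placeholder for a trained DRL policy; here, a fixed cooling setpoint in degrees C.
    return 24.0


def timestep_callback(state) -> None:
    # Called by EnergyPlus each system timestep: read sensors, then write actuators.
    if not api.exchange.api_data_fully_ready(state) or api.exchange.warmup_flag(state):
        return
    temp_handle = api.exchange.get_variable_handle(
        state, "Zone Mean Air Temperature", "CORE_ZN"  # placeholder zone name
    )
    setpoint_handle = api.exchange.get_actuator_handle(
        state, "Zone Temperature Control", "Cooling Setpoint", "CORE_ZN"
    )
    obs = api.exchange.get_variable_value(state, temp_handle)
    api.exchange.set_actuator_value(state, setpoint_handle, select_action(obs))
    # A constrained-learning agent (the paper's second contribution) would also log
    # energy use (reward) and temperature-bound violations (cost) here for the learner.


def run_episode() -> None:
    # One EnergyPlus run corresponds to one training episode.
    state = api.state_manager.new_state()
    api.exchange.request_variable(state, "Zone Mean Air Temperature", "CORE_ZN")
    api.runtime.callback_begin_system_timestep_before_predictor(state, timestep_callback)
    api.runtime.run_energyplus(
        state, ["-w", "weather.epw", "-d", "eplus_out", "building.idf"]  # placeholder files
    )
    api.state_manager.delete_state(state)


if __name__ == "__main__":
    run_episode()
```

In a full training framework, the callback would typically hand observations to the learner and block on its actions via queues or a thread-safe buffer so that the EnergyPlus run can be wrapped as a gym-style environment; the handle lookups could also be cached after the first timestep rather than repeated each call.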