Safe Reinforcement Learning for Remote Microgrid Optimization with Industrial Constraints (Papers Track)

Hadi Nekoei (Mila); Alexandre Blondin Massé (Hydro-Québec); Rachid Hassani (Hydro-Québec); Sarath Chandar (Mila / École Polytechnique de Montréal); Vincent Mai (Hydro-Québec)

Paper PDF NeurIPS 2024 Recorded Talk Cite
Power & Energy Reinforcement Learning

Abstract

In remote microgrids, power must be autonomously dispatched between fuel generators, renewable energy sources, and batteries to meet demand. These decisions must aim to reduce fossil fuel consumption and battery degradation while accounting for the complex dynamics of the generators and the uncertainty in demand and renewable production forecasts. Such an optimization could significantly reduce fuel consumption, potentially saving millions of liters of diesel per year. Traditional optimization techniques struggle to scale with problem complexity and to handle uncertainty; on the other hand, reinforcement learning algorithms often lack the industrial constraint guarantees needed for real-world deployment. In this project, we provide a realistic shielded microgrid environment designed to ensure safe control under real-world industry standards. We then train deep reinforcement learning agents to control the fuel generators and batteries so as to minimize fuel consumption and battery degradation. Our agents outperform heuristic baselines and exhibit a Pareto frontier pattern between the two objectives.
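To illustrate the idea of a shield in this setting, the sketch below shows a minimal action-projection layer that post-processes an RL agent's dispatch action so that it always satisfies basic operational constraints before being applied to the environment. All class names, limits, and the specific constraint set (power balance, battery power and state-of-charge bounds, generator minimum stable load) are illustrative assumptions, not the paper's actual environment or constraint list.

```python
import numpy as np

class DispatchShield:
    """Hypothetical shield projecting (generator, battery) setpoints
    onto a feasible set before they reach the microgrid.

    Sign convention (assumed): batt_kw > 0 means the battery discharges
    (supplies power); batt_kw < 0 means it charges.
    """

    def __init__(self, gen_min_kw=50.0, gen_max_kw=500.0,
                 batt_max_kw=200.0, soc_min=0.2, soc_max=0.9):
        self.gen_min_kw = gen_min_kw    # generator minimum stable load
        self.gen_max_kw = gen_max_kw    # generator rated power
        self.batt_max_kw = batt_max_kw  # battery power limit (both directions)
        self.soc_min, self.soc_max = soc_min, soc_max

    def project(self, gen_kw, batt_kw, demand_kw, renewable_kw, soc):
        """Return a feasible (gen_kw, batt_kw) close to the agent's action."""
        # 1. Clip the battery to its power limit.
        batt_kw = float(np.clip(batt_kw, -self.batt_max_kw, self.batt_max_kw))
        # 2. Forbid discharging when nearly empty, charging when nearly full.
        if soc <= self.soc_min:
            batt_kw = min(batt_kw, 0.0)
        elif soc >= self.soc_max:
            batt_kw = max(batt_kw, 0.0)
        # 3. The generator covers whatever residual demand remains.
        residual = demand_kw - renewable_kw - batt_kw
        gen_kw = float(np.clip(residual, 0.0, self.gen_max_kw))
        # 4. Enforce the minimum stable load: below it, run at min load.
        if 0.0 < gen_kw < self.gen_min_kw:
            gen_kw = self.gen_min_kw
        return gen_kw, batt_kw
```

A shield of this kind keeps the learned policy's exploration within safe operating limits, which is what makes training and deployment on real industrial hardware plausible; the agent still chooses the dispatch, but infeasible choices are corrected before execution.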