
Optimizing Microgrid Energy Management using Reinforcement Double Deep Q-Networks with Prioritized Experience Replay

Messlem, Abdelkader; 2024

Abstract

Driven by recent advances and applications in smart grid technologies, the electric power grid is undergoing radical modernization. Microgrids (MG) represent a transformative approach to power delivery, shifting from centralized to distributed systems with localized generation from various sources. This decentralization, however, necessitates efficient energy management systems (EMS) to ensure optimal operation, stability, and integration with the main grid. In addition, energy storage provides electrical energy savings and a degree of independence from the local utility. Focusing on storage system management, this paper proposes a reinforcement learning agent based on Double Deep Q-Networks (DDQN) with Prioritized Experience Replay for managing the storage system within a microgrid. Unlike traditional model-based approaches, which fall short when faced with the dynamic nature of microgrids, the proposed solution builds a robust agent that manages the charging and discharging of batteries while accounting for the dynamic interactions of the various energy sources and the demand. The approach aims to keep the battery state of charge (SoC) within a healthy range, optimize resource utilization, and ensure a continuous power supply. A simulated microgrid environment, developed using Simulink and Python, serves as the testbed for evaluating and demonstrating the agent's effectiveness across various energy management scenarios.
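The two ingredients named in the abstract — Double DQN targets and proportional prioritized experience replay — can be illustrated with a minimal sketch. This is not the paper's implementation: the buffer layout, the linear stand-ins for the online/target Q-networks, and all parameter values (`alpha`, `beta`, `gamma`) are illustrative assumptions.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized replay (sketch): transitions with larger
    TD error are sampled more often; importance-sampling weights correct
    the resulting bias."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities, self.pos = [], [], 0

    def add(self, transition, td_error=1.0):
        p = (abs(td_error) + 1e-6) ** self.alpha  # small epsilon avoids zero priority
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:  # overwrite oldest entry (ring buffer)
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
            self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        probs = np.array(self.priorities)
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()  # normalize for stability
        return [self.data[i] for i in idx], idx, weights

def ddqn_target(r, s_next, done, q_online, q_target, gamma=0.99):
    """Double DQN target: the online network selects the greedy action,
    the target network evaluates it, reducing overestimation bias."""
    a_star = int(np.argmax(q_online(s_next)))
    return r + (0.0 if done else gamma * q_target(s_next)[a_star])
```

In a battery-management setting, a transition would be `(SoC and power readings, charge/discharge action, reward, next readings, done)`, and `q_online`/`q_target` would be the agent's neural networks rather than the callables used here.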
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4935995
Citations
  • Scopus: 1