Reinforcement Learning for Climate Change Simulations: A DQN-Based Approach

Authors

DOI:

https://doi.org/10.22555/pjets.v13i2.1421

Keywords:

Reinforcement Learning, Deep Q-Network, Climate Simulations, CMIP6 SSP1-2.6, Climate Interventions

Abstract

This research presents a novel reinforcement learning approach that applies a Deep Q-Network (DQN) to discover optimal policies for climate change adaptation and mitigation using CMIP6 SSP1-2.6 scenario data. The approach addresses key limitations of traditional General Circulation Models (GCMs), which are computationally expensive and cannot adapt to real-time scenarios. The DQN framework operates over 1032 timesteps, focusing on surface air temperature (tas), vertical velocity (wap), and precipitation (pr). After 50 episodes, the cumulative reward of the DQN-optimized actions (carbon capture, reforestation, or inaction) was -12281.33, a 55.6% improvement over the baseline of -27666.14 and statistically significant (t = 45.72, p < 0.0001). The DQN achieved partial success, stabilizing surface air temperature between 6-8°C (a mean deviation of 5.9604°C from the 1.5°C target); further refinement is envisaged to bring outcomes closer to the 1.5°C target. Compared to GCMs, the DQN was computationally efficient, with overall training completed in 11.24 minutes. Future work is envisaged to incorporate CO₂ concentrations, sea-level rise, and real-time data integration. This work thus demonstrates the promise of RL and offers insights to support high-value stakeholder governance and sustainable climate policy-making.
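To illustrate the kind of learning loop the abstract describes, the sketch below trains a tabular Q-learning agent (the precursor to DQN, with a lookup table in place of a neural network) on a toy climate environment. Everything here is an assumption for illustration: the state is a discretized temperature anomaly rather than the paper's CMIP6 tas/wap/pr variables, the per-action cooling effects and warming drift are invented constants, and only the action space (carbon capture, reforestation, inaction) and the goal of minimizing deviation from the 1.5°C target mirror the paper.

```python
import random

# Toy climate environment -- illustrative only, not the paper's CMIP6 setup.
# State: discretized surface air temperature anomaly (degrees C).
# Actions: 0 = inaction, 1 = reforestation, 2 = carbon capture.
# Reward: negative absolute deviation from the 1.5 C target.

ACTIONS = [0, 1, 2]
COOLING = {0: 0.0, 1: 0.05, 2: 0.10}  # assumed per-step cooling effect
TARGET = 1.5

def step(temp, action):
    # Assumed warming drift of 0.08 C/step, offset by the chosen
    # intervention's cooling effect, plus small random variability.
    new_temp = temp + 0.08 - COOLING[action] + random.uniform(-0.02, 0.02)
    reward = -abs(new_temp - TARGET)
    return new_temp, reward

def discretize(temp):
    # 0.5 C bins, clipped to 20 discrete states.
    return max(0, min(19, int(temp * 2)))

def train(episodes=50, steps=200, alpha=0.1, gamma=0.95, eps=0.1):
    q = [[0.0] * len(ACTIONS) for _ in range(20)]  # Q-table: 20 states x 3 actions
    returns = []
    for _ in range(episodes):
        temp, total = 3.0, 0.0  # start well above the 1.5 C target
        for _ in range(steps):
            s = discretize(temp)
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            temp, r = step(temp, a)
            s2 = discretize(temp)
            # One-step Q-learning update.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            total += r
        returns.append(total)
    return q, returns

random.seed(0)
q_table, returns = train()
print(f"episode 1 return: {returns[0]:.1f}, episode 50 return: {returns[-1]:.1f}")
```

The paper's DQN replaces the Q-table with a neural network so that continuous, high-dimensional climate states (tas, wap, pr) can be handled without discretization; the epsilon-greedy loop and cumulative-reward bookkeeping are otherwise analogous.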

Published

2025-12-31

How to Cite

Reinforcement Learning for Climate Change Simulations: A DQN-Based Approach. (2025). Pakistan Journal of Engineering, Technology and Science, 13(2), 107-115. https://doi.org/10.22555/pjets.v13i2.1421
