Abstract: The scheduling of regional integrated energy systems must fully exploit their active regulation capability to cope with renewable energy fluctuations and diverse load conditions. Traditional methods rely heavily on precise modeling, struggle with high uncertainty, and lack both dynamic analysis of active regulation capability and interpretability of scheduling strategies. To address these challenges, this paper proposes a method that combines active regulation flexibility rule extraction with explainable reinforcement learning. First, based on equipment regulation boundaries, response rates, and coupling relationships, flexibility metrics, such as the power regulation capacities of electrical and thermal subsystem components, are quantitatively analyzed. Second, a reward function integrating the physical rules of active regulation flexibility is designed and embedded into an improved deep deterministic policy gradient (DDPG) framework, so that device operation constraints and flexibility incentives are incorporated during policy updates. Dynamic constraint construction, adaptive learning rate adjustment, and policy visualization are adopted to enhance the physical consistency and interpretability of the learning process. Simulation results show that the proposed method improves the regulation capability by 11.08% and 15.86% compared with quadratic programming and particle swarm optimization, respectively. Moreover, the extracted flexibility rules enable interpretable day-ahead regulation capability analysis, providing traceable physical insights and supporting human-AI collaborative decision-making in scheduling.
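As an illustrative sketch only (the exact formulation is given in the body of the paper; every symbol and weight below is an assumption introduced for exposition), a flexibility-integrated reward of the kind described above might take the form

\[
r_t \;=\; -\,C_t^{\mathrm{op}} \;+\; \lambda\, F_t \;-\; \mu \sum_{i} \max\!\bigl(0,\; g_i(s_t, a_t)\bigr),
\]

where \(C_t^{\mathrm{op}}\) is the operating cost at step \(t\), \(F_t\) aggregates the quantified upward and downward power regulation margins of the electrical and thermal subsystem components, each \(g_i(s_t, a_t) \le 0\) encodes a device operation constraint evaluated at the current state-action pair, and \(\lambda, \mu > 0\) weight the flexibility incentive against constraint violations within the DDPG policy update.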