Abstract: Formulating emergency control measures for transient voltage instability events is a crucial aspect of power system simulation analysis. Traditionally, emergency load shedding decisions are determined offline in advance and then matched and executed in real time. However, this process relies heavily on expert analysis of massive amounts of simulation data, which is both time-consuming and labor-intensive. To improve the efficiency of offline emergency load shedding decision-making, this paper presents a power system emergency load shedding decision method that integrates power grid topology information into a branching dueling Q-network (BDN) agent. First, an event-driven Markov decision process (MDP) is established to effectively guide the training of deep reinforcement learning agents. Second, a BDN agent is designed that exhibits higher training efficiency and stronger decision-making capability than traditional non-branching networks. Third, to further improve the agent's training efficiency and decision-making performance, power grid topology information is incorporated into the agent's training process through graph convolutional networks (GCN). Finally, the proposed method is validated on the 8-machine 36-node system of the China Electric Power Research Institute. Compared with non-branching networks and deep reinforcement learning agents without topology information, the proposed method achieves higher training efficiency and better decision-making performance.
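To make the branching architecture concrete, the sketch below shows the usual branching dueling Q-value decomposition: a shared state value plus one advantage head per action dimension (here, one branch per load bus eligible for shedding). This is a minimal illustrative forward pass with random weights, not the paper's actual implementation; all dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_branches = 3   # hypothetical: three load buses eligible for shedding
n_actions = 4    # hypothetical: shed 0%, 5%, 10%, or 15% at each bus
state_dim = 8    # hypothetical: size of the observed grid state vector

def branching_dueling_q(state, W_v, W_a):
    """Combine a shared state value with per-branch advantages.

    For each branch d: Q_d(s, a) = V(s) + A_d(s, a) - mean_a A_d(s, a).
    Subtracting the per-branch mean keeps the decomposition identifiable.
    """
    v = state @ W_v                          # scalar state value V(s)
    adv = (state @ W_a).reshape(n_branches, n_actions)
    return v + adv - adv.mean(axis=1, keepdims=True)

# Linear "networks" stand in for the learned value and advantage heads.
state = rng.normal(size=state_dim)
W_v = rng.normal(size=(state_dim,))
W_a = rng.normal(size=(state_dim, n_branches * n_actions))

q = branching_dueling_q(state, W_v, W_a)     # shape (n_branches, n_actions)
# The joint load shedding action decomposes into one greedy choice per branch,
# so the action space grows linearly, not exponentially, in the bus count.
joint_action = q.argmax(axis=1)              # shape (n_branches,)
```

The per-branch argmax is the key efficiency gain over a non-branching network, which would need one output per combination of shedding levels across all buses.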