Fast task allocation for heterogeneous UAVs employing improved deep Q-network

  • Abstract: With the rapid development of unmanned aerial vehicle (UAV) technology, multi-UAV systems have shown great potential in executing complex tasks, and efficient task-allocation strategies are crucial to the overall performance of such systems. However, conventional methods such as centralized optimization, auction algorithms, and the pigeon-inspired optimization algorithm often fail to generate effective allocation strategies under complex environmental disturbances. This paper therefore accounts for environmental uncertainties such as varying wind speed and rainfall, and focuses on applying an improved reinforcement learning algorithm to UAV task allocation, enabling multi-UAV systems to respond rapidly and utilize resources efficiently. First, the task-allocation problem is modeled as a Markov decision process, with a neural network approximating the policy so that the high-dimensional, complex state space can be handled efficiently; a prioritized experience replay mechanism is also introduced, which effectively reduces the online computational burden. Simulation results show that, compared with other reinforcement learning methods, the proposed algorithm converges reliably and is markedly more robust in complex environments. In addition, it completes a suitable UAV assignment for a given task in only 0.24 s and can rapidly generate task-allocation schemes for large-scale UAV swarms.

     

    Abstract: The rapid advancement of unmanned aerial vehicle (UAV) technology has underscored the significant potential of multi-UAV systems in managing complex tasks. Efficient task-allocation strategies are crucial for enhancing the overall performance of these systems. Although conventional methods perform adequately in simple environments, they often struggle in more complex scenarios where environmental disturbances and resource constraints hinder their effectiveness, resulting in suboptimal task-allocation outcomes. By contrast, reinforcement learning (RL), as a powerful optimization technique, is particularly well suited to the challenges inherent in multi-UAV task allocation. Unlike conventional approaches, RL does not rely on predefined models or external knowledge; instead, the system learns optimal strategies through continuous interaction with the environment, allowing it to adapt to dynamic conditions and improve its decision making over time. This study proposes a deep reinforcement learning approach to multi-UAV task allocation that explicitly accounts for the uncertainties prevalent in real-world battlefield scenarios, including variable wind conditions, precipitation, and other environmental factors that can degrade UAV performance. The primary objective is to ensure that multi-UAV systems can respond rapidly to multiple simultaneous tasks while optimizing resource utilization. Traditional task-allocation methods, which are often heuristic or rule-based, lack the flexibility required to handle environmental complexity or dynamic change; they are typically rigid, struggle to adapt to unanticipated situations, and consequently suffer inefficiencies and delays in task allocation. To address these challenges, this study models the task-allocation problem as a Markov decision process.
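The Markov decision process formulation described above can be sketched as a minimal environment. The state encoding (pending-task flags plus a wind level), the reward shape, and the per-UAV capability factors below are illustrative assumptions for exposition, not the paper's actual design.

```python
import random

class UAVTaskAllocationMDP:
    """Minimal sketch of the task-allocation MDP (illustrative only).

    State : tuple of pending-task flags plus a wind-disturbance level.
    Action: index of the UAV assigned to the next pending task.
    Reward: UAV capability discounted by the current disturbance.
    """

    def __init__(self, n_uavs=4, n_tasks=6, seed=0):
        self.n_uavs, self.n_tasks = n_uavs, n_tasks
        self.rng = random.Random(seed)
        # Heterogeneous fleet: each UAV gets an assumed capability factor.
        self.capability = [self.rng.uniform(0.8, 1.2) for _ in range(n_uavs)]
        self.reset()

    def reset(self):
        self.pending = [1] * self.n_tasks        # 1 = task still unassigned
        self.wind = self.rng.uniform(0.0, 1.0)   # environmental uncertainty
        return self._state()

    def _state(self):
        return tuple(self.pending) + (round(self.wind, 2),)

    def step(self, uav):
        # Assign the chosen UAV to the first pending task.
        task = self.pending.index(1)
        self.pending[task] = 0
        # Higher wind lowers the reward, modeling disturbance (assumed form).
        reward = self.capability[uav] * (1.0 - 0.5 * self.wind)
        done = sum(self.pending) == 0
        # Wind drifts between steps, bounded to [0, 1].
        self.wind = min(1.0, max(0.0, self.wind + self.rng.uniform(-0.1, 0.1)))
        return self._state(), reward, done

# Roll out one episode with a random allocation policy.
env = UAVTaskAllocationMDP()
state, done, total = env.reset(), False, 0.0
while not done:
    state, r, done = env.step(random.randrange(env.n_uavs))
    total += r
```

A Q-network trained on this interface would replace `random.randrange` with a greedy argmax over predicted action values.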
In this framework, the system selects the most appropriate task-allocation strategy based on the current state of the environment, ensuring flexibility and timeliness in decision making. To enhance stability and robustness, an evaluation network and a target network were designed in tandem to ensure reliable learning. By separating the state value from the action advantage, the model reduces the noise introduced by action selection, yielding more accurate predictions and better decisions. In addition, this study introduces a prioritized experience replay module that ranks each experience sample by its temporal-difference error, so that the most informative samples are replayed first. This focus accelerates learning and improves algorithmic efficiency; by avoiding the repeated reuse of low-value samples that limits uniform experience replay, it makes more efficient use of the available training time. Moreover, neural-network function approximation is employed to reduce the computational demands of online learning, which is particularly important in real-time applications with limited processing power. Experimental results demonstrate that the proposed method substantially reduces resource waste in UAV task scheduling: on average, each UAV assignment is completed in just 0.24 s, a substantial improvement in task-allocation efficiency. Owing to the prioritized experience replay module, the proposed algorithm outperforms traditional methods in efficiency as well as in convergence speed and stability. Furthermore, the scalability of the algorithm was validated in simulations involving larger UAV fleets, where performance remained robust without degradation.
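A TD-error-ranked replay buffer of the kind described here can be sketched as follows. The class name, the hyperparameters `alpha` and `eps`, and the linear-scan storage are illustrative simplifications; production implementations typically use a sum tree for O(log n) sampling and apply importance-sampling weights, both omitted for brevity.

```python
import random

class PrioritizedReplayBuffer:
    """Sketch of proportional prioritized experience replay (illustrative).

    Each transition is sampled with probability proportional to
    (|TD error| + eps) ** alpha, so informative samples recur more often.
    """

    def __init__(self, capacity=10000, alpha=0.6, eps=1e-5):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:     # overwrite the oldest entry
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size):
        # Draw indices with probability proportional to stored priorities.
        idx = random.choices(range(len(self.data)),
                             weights=self.priorities, k=batch_size)
        return idx, [self.data[i] for i in idx]

    def update(self, idx, td_errors):
        # Refresh priorities after the learner recomputes TD errors.
        for i, e in zip(idx, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```

In a training loop, the learner would call `sample`, compute fresh TD errors for the returned batch, and pass them back via `update` so that priorities track the current network.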
Additional simulation tests confirmed that the proposed method can optimize resource allocation, reduce system interference, and accelerate convergence. In conclusion, the proposed method offers significant improvements in multi-UAV system task allocation, particularly in terms of task allocation efficiency and system adaptability.

     

